DPDK patches and discussions
* [dpdk-dev] [RFC 0/3] ethdev: datapath-focused flow rules management
@ 2021-10-06  4:48 Alexander Kozyrev
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
                   ` (3 more replies)
  0 siblings, 4 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2021-10-06  4:48 UTC (permalink / raw)
  To: dev
  Cc: thomas, orika, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

The current flow rules insertion mechanism assumes applications
manage flow rules lifecycle in the control path. The flow rules
creation/destruction is performed synchronously and under a lock.
But for applications doing this job as part of the datapath, any
blocking operations are not desirable as they cause delay in the
packet processing.

These patches introduce a datapath-focused flow rules management
approach based on four main concepts:

1. Pre-configuration hints.
In order to reduce the overhead of flow rules management, the
application may provide hints at the initialization phase about the
characteristics of the flow rules to be used. The configuration function
pre-allocates all the needed resources inside a PMD/HW beforehand, and
these resources are used at a later stage without costly allocations.

2. Flow grouping using templates.
Unlike the current approach, where each flow rule is treated as an
independent entity, the new approach can leverage application knowledge
about common patterns shared by most flows. Similar flows are grouped
together using templates to enable better resource management inside
the PMD/HW.

3. Queue-based flow management.
Flow rules creation/destruction is done using lockless flow queues.
The application configures the number of queues during the initialization
stage. Create/destroy operations are then enqueued without any lock.

4. Asynchronous operations.
There is a way to spare the datapath from waiting for the flow rule
creation/destruction. By adopting an asynchronous queue-based approach,
packet processing can continue with handling the next packets while
a flow rule is inserted/deleted in the hardware. The application is
expected to poll for results later to see whether the flow rule was
successfully inserted/deleted.

An example of how to use this approach follows. The initialization stage
consists of resource pre-allocation, item and action template definition,
and the corresponding table creation. All these steps should be done
before the device is started:

rte_eth_dev_configure();
rte_flow_configure(port_id, number_of_flow_queues, max_num_of_counters);
rte_flow_item_template_create(port_id, items("eth/ipv4/udp"));
rte_flow_action_template_create(port_id, actions("counter/set_tp_src"));
rte_flow_table_create(port_id, item_template, action_template);
rte_eth_dev_start();

The packet processing can start once all the resources are preallocated.
Flow rules creation/destruction jobs are enqueued as a part of the packet
handling logic. These jobs are then flushed to the PMD/HW, and their status
is requested via the dequeue API to verify that the flow rules were
successfully created/destroyed.

rte_eth_rx_burst();
for (every received packet in the burst) {
  if (flow rule needs to be created) {
    rte_flow_q_flow_create(port_id, flow_queue_id, table_id,
        item_template_id, items("eth/ipv4 is 1.1.1.1/udp"),
        action_template_id, actions("counter/set_tp_src is 5555"));
  } else if (flow rule needs to be destroyed) {
    rte_flow_q_flow_destroy(port_id, flow_queue_id, flow_rule_id);
  }
  rte_flow_q_flush(port_id, flow_queue_id);
  rte_flow_q_dequeue(port_id, flow_queue_id, &result);
}

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Suggested-by: Ori Kam <orika@nvidia.com>

Alexander Kozyrev (3):
  ethdev: introduce flow pre-configuration hints
  ethdev: add flow item/action templates
  ethdev: add async queue-based flow rules operations

 lib/ethdev/rte_flow.h | 626 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 626 insertions(+)

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints
  2021-10-06  4:48 [dpdk-dev] [RFC 0/3] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2021-10-06  4:48 ` Alexander Kozyrev
  2021-10-13  4:11   ` Ajit Khaparde
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates Alexander Kozyrev
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2021-10-06  4:48 UTC (permalink / raw)
  To: dev
  Cc: thomas, orika, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, a PMD may use hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows the application to pre-allocate all the needed resources
beforehand. These resources can then be used at a later stage without
costly allocations. Every PMD may use only a subset of the hints and
ignore unused ones.
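A minimal sketch of how an application might fill the attributes and call
the new API, based on the struct and prototype proposed below. The field
values and port_id are illustrative assumptions, not recommendations:

```c
/* Illustrative sketch only: pre-allocate resources for 128 counter
 * actions and 64 meter actions before the device is started.
 * All numbers here are assumptions for the example. */
uint16_t port_id = 0; /* assumed port */
struct rte_flow_error error;
struct rte_flow_port_attr port_attr = {
	.version = 0,
	.mem_size = 0,            /* let the PMD allocate memory dynamically */
	.nb_counters = 128,       /* pre-configure COUNT actions */
	.nb_aging = 0,            /* no AGE actions pre-configured */
	.nb_meters = 64,          /* pre-configure METER actions */
	.fixed_resource_size = 0, /* allow on-demand growth */
};

if (rte_flow_configure(port_id, &port_attr, &error) < 0)
	printf("flow configure failed: %s\n",
	       error.message ? error.message : "unknown");
```

The call must precede rte_eth_dev_start() and any flow rule creation, as
stated in the function documentation.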

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Suggested-by: Ori Kam <orika@nvidia.com>
---
 lib/ethdev/rte_flow.h | 70 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7b1ed7f110..c69d503b90 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4288,6 +4288,76 @@ rte_flow_tunnel_item_release(uint16_t port_id,
 			     struct rte_flow_item *items,
 			     uint32_t num_of_items,
 			     struct rte_flow_error *error);
+
+/**
+ * Flow engine configuration.
+ */
+__extension__
+struct rte_flow_port_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Memory size allocated for the flow rules management.
+	 * If set to 0, memory is allocated dynamically.
+	 */
+	uint32_t mem_size;
+	/**
+	 * Number of counter actions pre-configured.
+	 * If set to 0, PMD will allocate counters dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging actions pre-configured.
+	 * If set to 0, PMD will allocate aging dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * If set to 0, PMD will allocate meters dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+	/**
+	 * Resources reallocation strategy.
+	 * If set to 1, PMD is not allowed to allocate more resources on demand.
+	 * An application can only allocate more resources by calling the
+	 * configure API again with new values (may not be supported by PMD).
+	 */
+	uint32_t fixed_resource_size:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure flow rules module.
+ * To pre-allocate resources as per the flow port attributes,
+ * this configuration function must be called before any flow rule is created.
+ * No other rte_flow function should be called while this function is invoked.
+ * This function can be called again to change the configuration.
+ * Some PMDs may not support re-configuration at all,
+ * or may only allow increasing the number of resources allocated.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
-- 
2.18.2



* [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates
  2021-10-06  4:48 [dpdk-dev] [RFC 0/3] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2021-10-06  4:48 ` Alexander Kozyrev
  2021-10-06 17:24   ` Ivan Malov
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations Alexander Kozyrev
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  3 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2021-10-06  4:48 UTC (permalink / raw)
  To: dev
  Cc: thomas, orika, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The item template defines common matching fields (the item mask) without
values. The action template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines item and action templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at table creation time.

The flow rule creation is done by selecting a table, an item template
and an action template (which are bound to the table), and setting unique
values for the items and actions.
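Under the signatures proposed in this patch, the template and table setup
could look roughly as follows. The masks, table attributes and sizes are
illustrative assumptions for the sketch, not prescribed values:

```c
struct rte_flow_error error;
uint16_t port_id = 0; /* assumed port */

/* Item template: any ETH header, fully-masked IPv4 src/dst addresses.
 * Values for the addresses are supplied later, at rule creation. */
struct rte_flow_item_template_attr it_attr = { .version = 0 };
struct rte_flow_item_ipv4 ipv4_mask = {
	.hdr = {
		.src_addr = RTE_BE32(UINT32_MAX),
		.dst_addr = RTE_BE32(UINT32_MAX),
	},
};
struct rte_flow_item items[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .mask = &ipv4_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_item_template *it =
	rte_flow_item_template_create(port_id, &it_attr, items, &error);

/* Action template: COUNT + QUEUE. The queue index is left variable
 * (mask conf is NULL, i.e. not constant) so each rule sets its own. */
struct rte_flow_action_template_attr at_attr = { .version = 0 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_COUNT },
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_action masks[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_COUNT },
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE }, /* conf == NULL: per-rule value */
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_action_template *at =
	rte_flow_action_template_create(port_id, &at_attr,
					actions, masks, &error);

/* Table: bind the templates to shared flow attributes and a capacity. */
struct rte_flow_table_attr tbl_attr = {
	.version = 0,
	.attr = { .group = 1, .priority = 0, .ingress = 1 },
	.max_rules = 1 << 16,
	.mode = RTE_FLOW_TABLE_MODE_FIXED,
};
const struct rte_flow_item_template *its[] = { it };
const struct rte_flow_action_template *ats[] = { at };
struct rte_flow_table *table =
	rte_flow_table_create(port_id, &tbl_attr, its, 1, ats, 1, &error);
```

Rules created against this table then only carry the variable parts: the
IPv4 addresses and the target Rx queue.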

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Suggested-by: Ori Kam <orika@nvidia.com>
---
 lib/ethdev/rte_flow.h | 268 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 268 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index c69d503b90..ba3204b17e 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4358,6 +4358,274 @@ int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of item template.
+ * This handle can be used to manage the created item template.
+ */
+struct rte_flow_item_template;
+
+__extension__
+struct rte_flow_item_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/* No attributes so far. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create item template.
+ * The item template defines common matching fields (item mask) without values.
+ * For example, matching on 5 tuple TCP flow, the template will be
+ * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ * The order of items in the template must be the same at rule insertion.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] attr
+ *   Item template attributes.
+ * @param[in] items
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec member of an item is not used unless the end member is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_item_template *
+rte_flow_item_template_create(uint16_t port_id,
+			      const struct rte_flow_item_template_attr *attr,
+			      const struct rte_flow_item items[],
+			      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy item template.
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_item_template_destroy(uint16_t port_id,
+			       struct rte_flow_item_template *template,
+			       struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of action template.
+ * This handle can be used to manage the created action template.
+ */
+struct rte_flow_action_template;
+
+__extension__
+struct rte_flow_action_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/* No attributes so far. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create action template.
+ * The action template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ *
+ * The order of the actions in the template must be kept when inserting rules.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if the mask is 1.
+ * @param[in] masks
+ *   List of actions that marks which of the action's member is constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be the part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type should be present in @p mask.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_template *
+rte_flow_action_template_create(uint16_t port_id,
+			const struct rte_flow_action_template_attr *attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy action template.
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_action_template_destroy(uint16_t port_id,
+			const struct rte_flow_action_template *template,
+			struct rte_flow_error *error);
+
+
+/**
+ * Opaque type returned after successful creation of table.
+ * This handle can be used to manage the created table.
+ */
+struct rte_flow_table;
+
+enum rte_flow_table_mode {
+	/**
+	 * Fixed size, the number of flow rules will be limited.
+	 * It is possible that some of the rules will not be inserted
+	 * due to conflicts/lack of space.
+	 * When rule insertion fails with try again error,
+	 * the application may use one of the following ways
+	 * to address this state:
+	 * 1. Keep this rule processing in the software.
+	 * 2. Try to offload this rule at a later time,
+	 *    after some rules have been removed from the hardware.
+	 * 3. Create a new table and add this rule to the new table.
+	 */
+	RTE_FLOW_TABLE_MODE_FIXED,
+	/**
+	 * Resizable, the PMD/HW will insert all rules.
+	 * No try again error will be received in this mode.
+	 */
+	RTE_FLOW_TABLE_MODE_RESIZABLE,
+};
+
+/**
+ * Table attributes.
+ */
+struct rte_flow_table_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Flow attributes that will be used in the table.
+	 */
+	struct rte_flow_attr attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 * It can be hard or soft limit depending on the mode.
+	 */
+	uint32_t max_rules;
+	/**
+	 * Table mode.
+	 */
+	enum rte_flow_table_mode mode;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create table.
+ * Table is a group of flow rules with the same flow attributes
+ * (group ID, priority and traffic direction) defined for it.
+ * The table holds multiple item and action templates to build a flow rule.
+ * Each rule is free to use any combination of item and action templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] attr
+ *   Table attributes.
+ * @param[in] item_templates
+ *   Array of item templates to be used in this table.
+ * @param[in] nb_item_templates
+ *   The number of item templates in the item_templates array.
+ * @param[in] action_templates
+ *   Array of action templates to be used in this table.
+ * @param[in] nb_action_templates
+ *   The number of action templates in the action_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_table *
+rte_flow_table_create(uint16_t port_id, struct rte_flow_table_attr *attr,
+		      const struct rte_flow_item_template *item_templates[],
+		      uint8_t nb_item_templates,
+		      const struct rte_flow_action_template *action_templates[],
+		      uint8_t nb_action_templates,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy table.
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_table_destroy(uint16_t port_id, struct rte_flow_table *table,
+		       struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
-- 
2.18.2



* [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations
  2021-10-06  4:48 [dpdk-dev] [RFC 0/3] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates Alexander Kozyrev
@ 2021-10-06  4:48 ` Alexander Kozyrev
  2021-10-06 16:24   ` Ivan Malov
  2021-10-13  4:57   ` Ajit Khaparde
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  3 siblings, 2 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2021-10-06  4:48 UTC (permalink / raw)
  To: dev
  Cc: thomas, orika, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and queue-based
operations can be safely invoked without any locks from a single thread.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_q_dequeue() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.
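A sketch of how the datapath loop might drive these functions, using only
the prototypes added by this patch. The queue_id, template indices,
pkt_ctx and handle_failure() are hypothetical placeholders:

```c
struct rte_flow_error error;
struct rte_flow_q_ops_attr ops_attr = {
	.version = 0,
	.user_data = pkt_ctx, /* hypothetical context, echoed on completion */
	.flush = 0,           /* batch: push to HW explicitly below */
};

struct rte_flow *flow =
	rte_flow_q_flow_create(port_id, queue_id, &ops_attr, table,
			       items, 0 /* item template index */,
			       actions, 0 /* action template index */,
			       &error);
/* The returned handle is usable immediately, but the rule is known to
 * be offloaded only once its completion is dequeued. */

/* Push all batched operations to the HW in one shot. */
rte_flow_q_flush(port_id, queue_id, &error);

/* Poll completions and check per-operation status. */
struct rte_flow_q_op_res res[32];
int n = rte_flow_q_dequeue(port_id, queue_id, res, 32, &error);
for (int i = 0; i < n; i++)
	if (res[i].status != RTE_FLOW_Q_OP_SUCCESS)
		handle_failure(res[i].user_data); /* hypothetical handler */
```

Setting ops_attr.flush to 1 instead would send each operation to the HW
immediately, trading batching efficiency for latency.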

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Suggested-by: Ori Kam <orika@nvidia.com>
---
 lib/ethdev/rte_flow.h | 288 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 288 insertions(+)

diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index ba3204b17e..8cdffd8d2e 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4298,6 +4298,13 @@ struct rte_flow_port_attr {
 	 * Version of the struct layout, should be 0.
 	 */
 	uint32_t version;
+	/**
+	 * Number of flow queues to be configured.
+	 * Flow queues are used for asynchronous flow rule creation/destruction.
+	 * The order of operations is not guaranteed inside a queue.
+	 * Flow queues are not thread-safe.
+	 */
+	uint16_t nb_queues;
 	/**
 	 * Memory size allocated for the flow rules management.
 	 * If set to 0, memory is allocated dynamically.
@@ -4330,6 +4337,21 @@ struct rte_flow_port_attr {
 	uint32_t fixed_resource_size:1;
 };
 
+/**
+ * Flow engine queue configuration.
+ */
+__extension__
+struct rte_flow_queue_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4346,6 +4368,8 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4357,6 +4381,7 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -4626,6 +4651,269 @@ __rte_experimental
 int
 rte_flow_table_destroy(uint16_t port_id, struct rte_flow_table *table,
 		       struct rte_flow_error *error);
+
+/**
+ * Queue operation attributes
+ */
+__extension__
+struct rte_flow_q_ops_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * The user data that will be returned on the completion.
+	 */
+	void *user_data;
+	/**
+	 * When set, the requested action must be sent to the HW without
+	 * any delay. Any prior requests must be also sent to the HW.
+	 * If this bit is cleared, the application must call the
+	 * rte_flow_q_flush() API to actually send the request to the HW.
+	 */
+	uint32_t flush:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue used to insert the rule.
+ * @param[in] attr
+ *   Rule creation operation attributes.
+ * @param[in] table
+ *   Table to select templates from.
+ * @param[in] items
+ *   List of pattern items to be used.
+ *   The list order should match the order in the item template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] item_template_index
+ *   Item template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the action template.
+ * @param[in] action_template_index
+ *   Action template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle doesn't mean that the rule was offloaded.
+ *   Only completion result indicates that the rule was offloaded.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id, uint32_t queue,
+		       const struct rte_flow_q_ops_attr *attr,
+		       const struct rte_flow_table *table,
+			   const struct rte_flow_item items[],
+		       uint8_t item_template_index,
+			   const struct rte_flow_action actions[],
+		       uint8_t action_template_index,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id, uint32_t queue,
+			struct rte_flow_q_ops_attr *attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue
+ *   Flow queue which is used to create the rule.
+ * @param[in] attr
+ *   Queue operation attributes.
+ * @param[in] conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id, uint32_t queue,
+				const struct rte_flow_q_ops_attr *attr,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action,
+				struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue
+ *   Flow queue which is used to destroy the rule.
+ * @param[in] attr
+ *   Queue operation attributes.
+ * @param[in] handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action* handle still used by some rules
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id, uint32_t queue,
+				struct rte_flow_q_ops_attr *attr,
+				struct rte_flow_action_handle *handle,
+				struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flush all internally stored rules to the HW.
+ * Non-flushed rules are rules that were inserted without the flush flag set.
+ * Can be used to notify the HW about batch of rules prepared by the SW to
+ * reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue to be flushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *    0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flush(uint16_t port_id, uint32_t queue,
+		 struct rte_flow_error *error);
+
+/**
+ * Dequeue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * Dequeue operation result.
+ */
+struct rte_flow_q_op_res {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * User data that was supplied during operation submission.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue a rte flow operation.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to receive the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue
+ *   Flow queue which is used to dequeue the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were dequeued,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_dequeue(uint16_t port_id, uint32_t queue,
+		   struct rte_flow_q_op_res res[], uint16_t n_res,
+		   struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
-- 
2.18.2



* Re: [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations Alexander Kozyrev
@ 2021-10-06 16:24   ` Ivan Malov
  2021-10-13  1:10     ` Alexander Kozyrev
  2021-10-13  4:57   ` Ajit Khaparde
  1 sibling, 1 reply; 220+ messages in thread
From: Ivan Malov @ 2021-10-06 16:24 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: thomas, orika, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Hi,

On 06/10/2021 07:48, Alexander Kozyrev wrote:
> A new, faster, queue-based flow rules management mechanism is needed for
> applications offloading rules inside the datapath. This asynchronous
> and lockless mechanism frees the CPU for further packet processing and
> reduces the performance impact of the flow rules creation/destruction
> on the datapath. Note that queues are not thread-safe and queue-based
> operations can be safely invoked without any locks from a single thread.
> 
> The rte_flow_q_flow_create() function enqueues a flow creation to the
> requested queue. It benefits from already configured resources and sets
> unique values on top of item and action templates. A flow rule is enqueued
> on the specified flow queue and offloaded asynchronously to the hardware.
> The function returns immediately to spare CPU for further packet
> processing. The application must invoke the rte_flow_q_dequeue() function
> to complete the flow rule operation offloading, to clear the queue, and to
> receive the operation status. The rte_flow_q_flow_destroy() function
> enqueues a flow destruction to the requested queue.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Suggested-by: Ori Kam <orika@nvidia.com>
> ---
>   lib/ethdev/rte_flow.h | 288 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 288 insertions(+)
> 
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index ba3204b17e..8cdffd8d2e 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4298,6 +4298,13 @@ struct rte_flow_port_attr {
>   	 * Version of the struct layout, should be 0.
>   	 */
>   	uint32_t version;
> +	/**
> +	 * Number of flow queues to be configured.
> +	 * Flow queues are used for asyncronous flow rule creation/destruction.

Typo: asyncronous --> asynchronous

> +	 * The order of operations is not guaranteed inside a queue.
> +	 * Flow queues are not thread-safe.
> +	 */
> +	uint16_t nb_queues;
>   	/**
>   	 * Memory size allocated for the flow rules management.
>   	 * If set to 0, memory is allocated dynamically.
> @@ -4330,6 +4337,21 @@ struct rte_flow_port_attr {
>   	uint32_t fixed_resource_size:1;
>   };
>   
> +/**
> + * Flow engine queue configuration.
> + */
> +__extension__
> +struct rte_flow_queue_attr {

Perhaps "struct rte_flow_queue_mode" or "struct rte_flow_queue_conf". I 
don't insist.

> +	/**
> +	 * Version of the struct layout, should be 0.
> +	 */
> +	uint32_t version;
> +	/**
> +	 * Number of flow rule operations a queue can hold.
> +	 */
> +	uint32_t size;
> +};
> +
>   /**
>    * @warning
>    * @b EXPERIMENTAL: this API may change without prior notice.
> @@ -4346,6 +4368,8 @@ struct rte_flow_port_attr {
>    *   Port identifier of Ethernet device.
>    * @param[in] port_attr
>    *   Port configuration attributes.
> + * @param[in] queue_attr
> + *   Array that holds attributes for each queue.

This should probably say that the number of queues / array size is taken 
from port_attr->nb_queues.

Also, consider "... for each flow queue".

>    * @param[out] error
>    *   Perform verbose error reporting if not NULL.
>    *   PMDs initialize this structure in case of error only.
> @@ -4357,6 +4381,7 @@ __rte_experimental
>   int
>   rte_flow_configure(uint16_t port_id,
>   		   const struct rte_flow_port_attr *port_attr,
> +		   const struct rte_flow_queue_attr *queue_attr[],
>   		   struct rte_flow_error *error);
>   
>   /**
> @@ -4626,6 +4651,269 @@ __rte_experimental
>   int
>   rte_flow_table_destroy(uint16_t port_id, struct rte_flow_table *table,
>   		       struct rte_flow_error *error);
> +
> +/**
> + * Queue operation attributes
> + */
> +__extension__
> +struct rte_flow_q_ops_attr {
> +	/**
> +	 * Version of the struct layout, should be 0.
> +	 */
> +	uint32_t version;
> +	/**
> +	 * The user data that will be returned on the completion.

Maybe "on completion events"?

> +	 */
> +	void *user_data;
> +	/**
> +	 * When set, the requested action must be sent to the HW without
> +	 * any delay. Any prior requests must be also sent to the HW.
> +	 * If this bit is cleared, the application must call the
> +	 * rte_flow_queue_flush API to actually send the request to the HW.

Not sure that I understand the "Any prior requests ..." part. If this 
structure configures operation mode for the whole queue and not for each 
enqueue request, then no "prior requests" can exist in the first place 
because each submission is meant to be immediately sent to the HW.

But if this structure can vary across enqueue requests, then this 
documentation should be improved to say this clearly.


> +	 */
> +	uint32_t flush:1;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue rule creation operation.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue
> + *   Flow queue used to insert the rule.
> + * @param[in] attr
> + *   Rule creation operation attributes.

At a bare minimum, please consider renaming this to "queue_attr". More 
variants (see above): "queue_mode", "queue_conf". I suggest doing so to 
avoid confusion with "struct rte_flow_attr" which sits in "struct 
rte_flow_table_attr" in fact.

If this structure must be exactly the same as the one used in 
rte_flow_configure(), please say so. If, however, this structure can be 
completely different on enqueue operations, then the argument name 
should indicate it somehow. Maybe, "override_queue_conf". Otherwise, 
it's unclear.

There are similar occurrences below.

> + * @param[in] table
> + *   Table to select templates from.

Perhaps "template_table"?

> + * @param[in] items
> + *   List of pattern items to be used.
> + *   The list order should match the order in the item template.
> + *   The spec is the only relevant member of the item that is being used.
> + * @param[in] item_template_index
> + *   Item template index in the table.
> + * @param[in] actions
> + *   List of actions to be used.
> + *   The list order should match the order in the action template.
> + * @param[in] action_template_index
> + *   Action template index in the table.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + *   The rule handle doesn't mean that the rule was offloaded.
> + *   Only completion result indicates that the rule was offloaded.
> + */
> +__rte_experimental
> +struct rte_flow *
> +rte_flow_q_flow_create(uint16_t port_id, uint32_t queue,
> +		       const struct rte_flow_q_ops_attr *attr,
> +		       const struct rte_flow_table *table,
> +			   const struct rte_flow_item items[],
> +		       uint8_t item_template_index,
> +			   const struct rte_flow_action actions[],
> +		       uint8_t action_template_index,
> +		       struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue rule destruction operation.
> + *
> + * This function enqueues a destruction operation on the queue.
> + * Application should assume that after calling this function
> + * the rule handle is not valid anymore.
> + * Completion indicates the full removal of the rule from the HW.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue
> + *   Flow queue which is used to destroy the rule.
> + *   This must match the queue on which the rule was created.
> + * @param[in] attr
> + *   Rule destroy operation attributes.
> + * @param[in] flow
> + *   Flow handle to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_flow_destroy(uint16_t port_id, uint32_t queue,
> +			struct rte_flow_q_ops_attr *attr,
> +			struct rte_flow *flow,
> +			struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue indirect rule creation operation.
> + * @see rte_flow_action_handle_create

Why "indirect rule"? This API seems to enqueue an "indirect action"...

> + *
> + * @param[in] port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] queue
> + *   Flow queue which is used to create the rule.
> + * @param[in] attr
> + *   Queue operation attributes.
> + * @param[in] conf
> + *   Action configuration for the indirect action object creation.

Perhaps "indir_conf" or "indir_action_conf"?

> + * @param[in] action
> + *   Specific configuration of the indirect action object.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   - (0) if success.
> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-ENOSYS) if underlying device does not support this functionality.
> + *   - (-EIO) if underlying device is removed.
> + *   - (-ENOENT) if action pointed by *action* handle was not found.
> + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> + *   rte_errno is also set.
> + */
> +__rte_experimental
> +struct rte_flow_action_handle *
> +rte_flow_q_action_handle_create(uint16_t port_id, uint32_t queue,
> +				const struct rte_flow_q_ops_attr *attr,
> +				const struct rte_flow_indir_action_conf *conf,
> +				const struct rte_flow_action *action,
> +				struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue indirect rule destruction operation.

Please see above. Did you mean "indirect action"?

> + * The destroy queue must be the same
> + * as the queue on which the action was created.
> + *
> + * @param[in] port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] queue
> + *   Flow queue which is used to destroy the rule.
> + * @param[in] attr
> + *   Queue operation attributes.
> + * @param[in] handle
> + *   Handle for the indirect action object to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   - (0) if success.
> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-ENOSYS) if underlying device does not support this functionality.
> + *   - (-EIO) if underlying device is removed.
> + *   - (-ENOENT) if action pointed by *action* handle was not found.
> + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> + *   rte_errno is also set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_action_handle_destroy(uint16_t port_id, uint32_t queue,
> +				struct rte_flow_q_ops_attr *attr,
> +				struct rte_flow_action_handle *handle,
> +				struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Flush all internally stored rules to the HW.
> + * Non-flushed rules are rules that were inserted without the flush flag set.
> + * Can be used to notify the HW about batch of rules prepared by the SW to
> + * reduce the number of communications between the HW and SW.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue
> + *   Flow queue to be flushed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *    0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_flush(uint16_t port_id, uint32_t queue,
> +		 struct rte_flow_error *error);
> +
> +/**
> + * Dequeue operation status.
> + */
> +enum rte_flow_q_op_status {
> +	/**
> +	 * The operation was completed successfully.
> +	 */
> +	RTE_FLOW_Q_OP_SUCCESS,
> +	/**
> +	 * The operation was not completed successfully.
> +	 */
> +	RTE_FLOW_Q_OP_ERROR,
> +};
> +
> +/**
> + * Dequeue operation result.
> + */
> +struct rte_flow_q_op_res {
> +	/**
> +	 * Version of the struct layout, should be 0.
> +	 */
> +	uint32_t version;
> +	/**
> +	 * Returns the status of the operation that this completion signals.
> +	 */
> +	enum rte_flow_q_op_status status;
> +	/**
> +	 * User data that was supplied during operation submission.
> +	 */
> +	void *user_data;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Dequeue a rte flow operation.
> + * The application must invoke this function in order to complete
> + * the flow rule offloading and to receive the flow rule operation status.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue
> + *   Flow queue which is used to dequeue the operation.
> + * @param[out] res
> + *   Array of results that will be set.
> + * @param[in] n_res
> + *   Maximum number of results that can be returned.
> + *   This value is equal to the size of the res array.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Number of results that were dequeued,
> + *   a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_dequeue(uint16_t port_id, uint32_t queue,
> +		   struct rte_flow_q_op_res res[], uint16_t n_res,
> +		   struct rte_flow_error *error);
>   #ifdef __cplusplus
>   }
>   #endif
> 

-- 
Ivan M


* Re: [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates Alexander Kozyrev
@ 2021-10-06 17:24   ` Ivan Malov
  2021-10-13  1:25     ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Ivan Malov @ 2021-10-06 17:24 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: thomas, orika, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Hi,

On 06/10/2021 07:48, Alexander Kozyrev wrote:
> Treating every single flow rule as a completely independent and separate
> entity negatively impacts the flow rules insertion rate. Oftentimes in an
> application, many flow rules share a common structure (the same item mask
> and/or action list) so they can be grouped and classified together.
> This knowledge may be used as a source of optimization by a PMD/HW.
> 
> The item template defines common matching fields (the item mask) without
> values. The action template holds a list of action types that will be used
> together in the same rule. The specific values for items and actions will
> be given only during the rule creation.
> 
> A table combines item and action templates along with shared flow rule
> attributes (group ID, priority and traffic direction). This way a PMD/HW
> can prepare all the resources needed for efficient flow rules creation in
> the datapath. To avoid any hiccups due to memory reallocation, the maximum
> number of flow rules is defined at table creation time.
> 
> The flow rule creation is done by selecting a table, an item template
> and an action template (which are bound to the table), and setting unique
> values for the items and actions.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Suggested-by: Ori Kam <orika@nvidia.com>
> ---
>   lib/ethdev/rte_flow.h | 268 ++++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 268 insertions(+)
> 
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index c69d503b90..ba3204b17e 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4358,6 +4358,274 @@ int
>   rte_flow_configure(uint16_t port_id,
>   		   const struct rte_flow_port_attr *port_attr,
>   		   struct rte_flow_error *error);
> +
> +/**
> + * Opaque type returned after successfull creation of item template.

Typo: "successfull" --> "successful".

> + * This handle can be used to manage the created item template.
> + */
> +struct rte_flow_item_template;
> +
> +__extension__
> +struct rte_flow_item_template_attr {
> +	/**
> +	 * Version of the struct layout, should be 0.
> +	 */
> +	uint32_t version;
> +	/* No attributes so far. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create item template.
> + * The item template defines common matching fields (item mask) without values.
> + * For example, matching on 5 tuple TCP flow, the template will be
> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> + * while values for each rule will be set during the flow rule creation.
> + * The order of items in the template must be the same at rule insertion.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] attr
> + *   Item template attributes.

Please consider adding meaningful prefixes to "attr" here and below. 
This is needed to avoid confusion with "struct rte_flow_attr".

Example: "template_attr".

> + * @param[in] items
> + *   Pattern specification (list terminated by the END pattern item).
> + *   The spec member of an item is not used unless the end member is used.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +struct rte_flow_item_template *
> +rte_flow_item_template_create(uint16_t port_id,
> +			      const struct rte_flow_item_template_attr *attr,
> +			      const struct rte_flow_item items[],
> +			      struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Destroy item template.
> + * This function may be called only when
> + * there are no more tables referencing this template.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] template
> + *   Handle to the template to be destroyed.

Perhaps "handle OF the template"?

> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_item_template_destroy(uint16_t port_id,
> +			       struct rte_flow_item_template *template,
> +			       struct rte_flow_error *error);
> +
> +/**
> + * Opaque type returned after successfull creation of action template.

Single "l" in "successful".

> + * This handle can be used to manage the created action template.
> + */
> +struct rte_flow_action_template;
> +
> +__extension__
> +struct rte_flow_action_template_attr {
> +	/**
> +	 * Version of the struct layout, should be 0.
> +	 */
> +	uint32_t version;
> +	/* No attributes so far. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create action template.
> + * The action template holds a list of action types without values.
> + * For example, the template to change TCP ports is TCP(s_port + d_port),
> + * while values for each rule will be set during the flow rule creation.
> + *
> + * The order of the action in the template must be kept when inserting rules.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] attr
> + *   Template attributes.

Perhaps add a meaningful prefix to "attr".

> + * @param[in] actions
> + *   Associated actions (list terminated by the END action).
> + *   The spec member is only used if the mask is 1.

Maybe "its mask is all ones"?

> + * @param[in] masks
> + *   List of actions that marks which of the action's member is constant.

Consider the following action example:

struct rte_flow_action_vxlan_encap {
         struct rte_flow_item *definition;
};

So, if "definition" is not NULL, the whole header definition is supposed 
to be constant, right? Or am I missing something?
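For fixed-size actions the semantics look clear enough to me — e.g. a QUEUE action whose destination index varies per rule. A sketch against the proposed API (my reading is that a zero mask field marks a per-rule member; the values are illustrative):

```c
/* Sketch against the proposed API; field values are illustrative. */
struct rte_flow_action_queue q_spec = { .index = 0 };  /* ignored: set per rule */
struct rte_flow_action_queue q_mask = { .index = 0 };  /* zero mask = per-rule */
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q_spec },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_action masks[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q_mask },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_action_template_attr attr = { .version = 0 };
struct rte_flow_error error;
struct rte_flow_action_template *at;

at = rte_flow_action_template_create(port_id, &attr, actions, masks, &error);
```

It is the pointer-carrying configurations like the VXLAN encap one that leave the constant/per-rule split ambiguous.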

> + *   A mask has the same format as the corresponding action.
> + *   If the action field in @p masks is not 0,
> + *   the corresponding value in an action from @p actions will be the part
> + *   of the template and used in all flow rules.
> + *   The order of actions in @p masks is the same as in @p actions.
> + *   In case of indirect actions present in @p actions,
> + *   the actual action type should be present in @p mask.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   handle on success, NULL otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +struct rte_flow_action_template *
> +rte_flow_action_template_create(uint16_t port_id,
> +			const struct rte_flow_action_template_attr *attr,
> +			const struct rte_flow_action actions[],
> +			const struct rte_flow_action masks[],
> +			struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Destroy action template.
> + * This function may be called only when
> + * there are no more tables referencing this template.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] template
> + *   Handle to the template to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_action_template_destroy(uint16_t port_id,
> +			const struct rte_flow_action_template *template,
> +			struct rte_flow_error *error);
> +
> +
> +/**
> + * Opaque type returned after successfull creation of table.

Redundant "l" in "successful".

> + * This handle can be used to manage the created table.
> + */
> +struct rte_flow_table;
> +
> +enum rte_flow_table_mode {
> +	/**
> +	 * Fixed size, the number of flow rules will be limited.
> +	 * It is possible that some of the rules will not be inserted
> +	 * due to conflicts/lack of space.
> +	 * When rule insertion fails with try again error,
> +	 * the application may use one of the following ways
> +	 * to address this state:
> +	 * 1. Keep this rule processing in the software.
> +	 * 2. Try to offload this rule at a later time,
> +	 *    after some rules have been removed from the hardware.
> +	 * 3. Create a new table and add this rule to the new table.
> +	 */
> +	RTE_FLOW_TABLE_MODE_FIXED,
> +	/**
> +	 * Resizable, the PMD/HW will insert all rules.
> +	 * No try again error will be received in this mode.
> +	 */
> +	RTE_FLOW_TABLE_MODE_RESIZABLE,
> +};
> +
> +/**
> + * Table attributes.
> + */
> +struct rte_flow_table_attr {
> +	/**
> +	 * Version of the struct layout, should be 0.
> +	 */
> +	uint32_t version;
> +	/**
> +	 * Flow attributes that will be used in the table.
> +	 */
> +	struct rte_flow_attr attr;

Perhaps, "flow_attr" then?

> +	/**
> +	 * Maximum number of flow rules that this table holds.
> +	 * It can be hard or soft limit depending on the mode.
> +	 */
> +	uint32_t max_rules;

How about "nb_flows_max"?
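On this structure generally — a sketch of how I would expect an application to fill it and create a table with the proposed API (group, priority and size are illustrative values):

```c
/* Sketch against the proposed API; subject to change. */
struct rte_flow_table_attr table_attr = {
	.version = 0,
	.attr = { .group = 1, .priority = 0, .ingress = 1 }, /* shared rule attrs */
	.max_rules = 1 << 16,               /* hard limit in FIXED mode */
	.mode = RTE_FLOW_TABLE_MODE_FIXED,
};
struct rte_flow_error error;
struct rte_flow_table *table;

table = rte_flow_table_create(port_id, &table_attr,
			      item_templates, nb_item_templates,
			      action_templates, nb_action_templates,
			      &error);
```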

> +	/**
> +	 * Table mode.
> +	 */
> +	enum rte_flow_table_mode mode;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create table.
> + * Table is a group of flow rules with the same flow attributes
> + * (group ID, priority and traffic direction) defined for it.
> + * The table holds multiple item and action templates to build a flow rule.
> + * Each rule is free to use any combination of item and action templates
> + * and specify particular values for items and actions it would like to change.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] attr
> + *   Table attributes.
> + * @param[in] item_templates
> + *   Array of item templates to be used in this table.
> + * @param[in] nb_item_templates
> + *   The number of item templates in the item_templates array.
> + * @param[in] action_templates
> + *   Array of action templates to be used in this table.
> + * @param[in] nb_action_templates
> + *   The number of action templates in the action_templates array.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +struct rte_flow_table *
> +rte_flow_table_create(uint16_t port_id, struct rte_flow_table_attr *attr,
> +		      const struct rte_flow_item_template *item_templates[],
> +		      uint8_t nb_item_templates,
> +		      const struct rte_flow_action_template *action_templates[],
> +		      uint8_t nb_action_templates,
> +		      struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Destroy table.
> + * This function may be called only when
> + * there are no more flow rules referencing this table.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] table
> + *   Handle to the table to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_table_destroy(uint16_t port_id, struct rte_flow_table *table,
> +		       struct rte_flow_error *error);
>   #ifdef __cplusplus
>   }
>   #endif
> 

-- 
Ivan M


* Re: [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations
  2021-10-06 16:24   ` Ivan Malov
@ 2021-10-13  1:10     ` Alexander Kozyrev
  0 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2021-10-13  1:10 UTC (permalink / raw)
  To: Ivan Malov, dev
  Cc: NBU-Contact-Thomas Monjalon, Ori Kam, andrew.rybchenko,
	ferruh.yigit, mohammad.abdul.awal, qi.z.zhang, jerinj,
	ajit.khaparde

> From: Ivan Malov <Ivan.Malov@oktetlabs.ru> on Wednesday, October 6, 2021 12:25
> On 06/10/2021 07:48, Alexander Kozyrev wrote:
> > A new, faster, queue-based flow rules management mechanism is needed for
> > applications offloading rules inside the datapath. This asynchronous
> > and lockless mechanism frees the CPU for further packet processing and
> > reduces the performance impact of the flow rules creation/destruction
> > on the datapath. Note that queues are not thread-safe and queue-based
> > operations can be safely invoked without any locks from a single thread.
> >
> > The rte_flow_q_flow_create() function enqueues a flow creation to the
> > requested queue. It benefits from already configured resources and sets
> > unique values on top of item and action templates. A flow rule is enqueued
> > on the specified flow queue and offloaded asynchronously to the hardware.
> > The function returns immediately to spare CPU for further packet
> > processing. The application must invoke the rte_flow_q_dequeue() function
> > to complete the flow rule operation offloading, to clear the queue, and to
> > receive the operation status. The rte_flow_q_flow_destroy() function
> > enqueues a flow destruction to the requested queue.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Suggested-by: Ori Kam <orika@nvidia.com>
> > ---
> >   lib/ethdev/rte_flow.h | 288
> ++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 288 insertions(+)
> >
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index ba3204b17e..8cdffd8d2e 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4298,6 +4298,13 @@ struct rte_flow_port_attr {
> >   	 * Version of the struct layout, should be 0.
> >   	 */
> >   	uint32_t version;
> > +	/**
> > +	 * Number of flow queues to be configured.
> > +	 * Flow queues are used for asyncronous flow rule creation/destruction.
> 
> Typo: asyncronous --> asynchronous
Thanks for noticing, will correct all the typos.

> > +	 * The order of operations is not guaranteed inside a queue.
> > +	 * Flow queues are not thread-safe.
> > +	 */
> > +	uint16_t nb_queues;
> >   	/**
> >   	 * Memory size allocated for the flow rules management.
> >   	 * If set to 0, memory is allocated dynamically.
> > @@ -4330,6 +4337,21 @@ struct rte_flow_port_attr {
> >   	uint32_t fixed_resource_size:1;
> >   };
> >
> > +/**
> > + * Flow engine queue configuration.
> > + */
> > +__extension__
> > +struct rte_flow_queue_attr {
> 
> Perhaps "struct rte_flow_queue_mode" or "struct rte_flow_queue_conf". I
> don't insist.
I would prefer sticking to attributes for consistency. We have them elsewhere.

> > +	/**
> > +	 * Version of the struct layout, should be 0.
> > +	 */
> > +	uint32_t version;
> > +	/**
> > +	 * Number of flow rule operations a queue can hold.
> > +	 */
> > +	uint32_t size;
> > +};
> > +
> >   /**
> >    * @warning
> >    * @b EXPERIMENTAL: this API may change without prior notice.
> > @@ -4346,6 +4368,8 @@ struct rte_flow_port_attr {
> >    *   Port identifier of Ethernet device.
> >    * @param[in] port_attr
> >    *   Port configuration attributes.
> > + * @param[in] queue_attr
> > + *   Array that holds attributes for each queue.
> 
> This should probably say that the number of queues / array size is taken
> from port_attr->nb_queues.
Good idea, will mention this.
> Also, consider "... for each flow queue".
Sounds good.
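Something like this is what I have in mind for the documentation example (a sketch against the proposed API; queue count and sizes are illustrative):

```c
/* Sketch: pre-configure two lockless flow queues at init time. */
struct rte_flow_port_attr port_attr = {
	.version = 0,
	.nb_queues = 2,
};
struct rte_flow_queue_attr q0 = { .version = 0, .size = 1024 };
struct rte_flow_queue_attr q1 = { .version = 0, .size = 1024 };
const struct rte_flow_queue_attr *queue_attr[] = { &q0, &q1 };
struct rte_flow_error error;

/* queue_attr[] must hold port_attr.nb_queues entries. */
if (rte_flow_configure(port_id, &port_attr, queue_attr, &error) != 0)
	rte_exit(EXIT_FAILURE, "flow pre-configuration failed\n");
```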

> >    * @param[out] error
> >    *   Perform verbose error reporting if not NULL.
> >    *   PMDs initialize this structure in case of error only.
> > @@ -4357,6 +4381,7 @@ __rte_experimental
> >   int
> >   rte_flow_configure(uint16_t port_id,
> >   		   const struct rte_flow_port_attr *port_attr,
> > +		   const struct rte_flow_queue_attr *queue_attr[],
> >   		   struct rte_flow_error *error);
> >
> >   /**
> > @@ -4626,6 +4651,269 @@ __rte_experimental
> >   int
> >   rte_flow_table_destroy(uint16_t port_id, struct rte_flow_table *table,
> >   		       struct rte_flow_error *error);
> > +
> > +/**
> > + * Queue operation attributes
> > + */
> > +__extension__
> > +struct rte_flow_q_ops_attr {
> > +	/**
> > +	 * Version of the struct layout, should be 0.
> > +	 */
> > +	uint32_t version;
> > +	/**
> > +	 * The user data that will be returned on the completion.
> 
> Maybe "on completion events"?
Agree.

> > +	 */
> > +	void *user_data;
> > +	/**
> > +	 * When set, the requested action must be sent to the HW without
> > +	 * any delay. Any prior requests must be also sent to the HW.
> > +	 * If this bit is cleared, the application must call the
> > +	 * rte_flow_queue_flush API to actually send the request to the HW.
> 
> Not sure that I understand the "Any prior requests ..." part. If this
> structure configures operation mode for the whole queue and not for each
> enqueue request, then no "prior requests" can exist in the first place
> because each submission is meant to be immediately sent to the HW.
This structure configures attributes for a single operation.
The user can enqueue multiple operations with the "flush" attribute cleared to keep them in the queue.
Then it is possible to issue one operation with the "flush" attribute set to purge the whole queue.
> But if this structure can vary across enqueue requests, then this
> documentation should be improved to say this clearly.
I'll try to make it clear that it is per-operation attributes indeed.
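To make the per-operation semantics concrete, here is a minimal usage sketch (all names below come from this RFC's proposed API and are provisional; error handling is omitted):

```c
struct rte_flow_q_ops_attr ops_attr = {
	.version = 0,
	.user_data = app_ctx,	/* returned on the completion event */
	.flush = 0,		/* keep the operation queued, do not push to HW */
};

/* Enqueue a batch of rules; nothing is sent to the HW yet. */
for (i = 0; i < n - 1; i++)
	rte_flow_q_flow_create(port_id, queue, &ops_attr, table,
			       rule_items[i], 0, rule_actions[i], 0, &error);

/* The last operation sets the flush bit and purges the whole queue. */
ops_attr.flush = 1;
rte_flow_q_flow_create(port_id, queue, &ops_attr, table,
		       rule_items[n - 1], 0, rule_actions[n - 1], 0, &error);
```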


> > +	 */
> > +	uint32_t flush:1;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue rule creation operation.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param queue
> > + *   Flow queue used to insert the rule.
> > + * @param[in] attr
> > + *   Rule creation operation attributes.
> 
> At a bare minimum, please consider renaming this to "queue_attr". More
> variants (see above): "queue_mode", "queue_conf". I suggest doing so to
> avoid confusion with "struct rte_flow_attr" which sits in "struct
> rte_flow_table_attr" in fact.
> If this structure must be exactly the same as the one used in
> rte_flow_configure(), please say so. If, however, this structure can be
> completely different on enqueue operations, then the argument name
> should indicate it somehow. Maybe, "override_queue_conf". Otherwise,
> it's unclear.
Sounds reasonable, will do. Although these are operation attributes, not the queue's.

> There are similar occurrences below.
> > + * @param[in] table
> > + *   Table to select templates from.
> 
> Perhaps "template_table"?
Noted.

> > + * @param[in] items
> > + *   List of pattern items to be used.
> > + *   The list order should match the order in the item template.
> > + *   The spec is the only relevant member of the item that is being used.
> > + * @param[in] item_template_index
> > + *   Item template index in the table.
> > + * @param[in] actions
> > + *   List of actions to be used.
> > + *   The list order should match the order in the action template.
> > + * @param[in] action_template_index
> > + *   Action template index in the table.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> > + *   The rule handle doesn't mean that the rule was offloaded.
> > + *   Only completion result indicates that the rule was offloaded.
> > + */
> > +__rte_experimental
> > +struct rte_flow *
> > +rte_flow_q_flow_create(uint16_t port_id, uint32_t queue,
> > +		       const struct rte_flow_q_ops_attr *attr,
> > +		       const struct rte_flow_table *table,
> > +			   const struct rte_flow_item items[],
> > +		       uint8_t item_template_index,
> > +			   const struct rte_flow_action actions[],
> > +		       uint8_t action_template_index,
> > +		       struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue rule destruction operation.
> > + *
> > + * This function enqueues a destruction operation on the queue.
> > + * Application should assume that after calling this function
> > + * the rule handle is not valid anymore.
> > + * Completion indicates the full removal of the rule from the HW.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param queue
> > + *   Flow queue which is used to destroy the rule.
> > + *   This must match the queue on which the rule was created.
> > + * @param[in] attr
> > + *   Rule destroy operation attributes.
> > + * @param[in] flow
> > + *   Flow handle to be destroyed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_q_flow_destroy(uint16_t port_id, uint32_t queue,
> > +			struct rte_flow_q_ops_attr *attr,
> > +			struct rte_flow *flow,
> > +			struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue indirect rule creation operation.
> > + * @see rte_flow_action_handle_create
> 
> Why "indirect rule"? This API seems to enqueue an "indirect action"...
Thanks, will change to action.

> > + *
> > + * @param[in] port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] queue
> > + *   Flow queue which is used to create the rule.
> > + * @param[in] attr
> > + *   Queue operation attributes.
> > + * @param[in] conf
> > + *   Action configuration for the indirect action object creation.
> 
> Perhaps "indir_conf" or "indir_action_conf"?
Ok.

> > + * @param[in] action
> > + *   Specific configuration of the indirect action object.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   - (0) if success.
> > + *   - (-ENODEV) if *port_id* invalid.
> > + *   - (-ENOSYS) if underlying device does not support this functionality.
> > + *   - (-EIO) if underlying device is removed.
> > + *   - (-ENOENT) if action pointed by *action* handle was not found.
> > + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> > + *   rte_errno is also set.
> > + */
> > +__rte_experimental
> > +struct rte_flow_action_handle *
> > +rte_flow_q_action_handle_create(uint16_t port_id, uint32_t queue,
> > +				const struct rte_flow_q_ops_attr *attr,
> > +				const struct rte_flow_indir_action_conf *conf,
> > +				const struct rte_flow_action *action,
> > +				struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue indirect rule destruction operation.
> 
> Please see above. Did you mean "indirect action"?
Again, you are right, action is a better word here.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates
  2021-10-06 17:24   ` Ivan Malov
@ 2021-10-13  1:25     ` Alexander Kozyrev
  2021-10-13  2:26       ` Ajit Khaparde
  2021-10-13 11:25       ` Ivan Malov
  0 siblings, 2 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2021-10-13  1:25 UTC (permalink / raw)
  To: Ivan Malov, dev
  Cc: NBU-Contact-Thomas Monjalon, Ori Kam, andrew.rybchenko,
	ferruh.yigit, mohammad.abdul.awal, qi.z.zhang, jerinj,
	ajit.khaparde

> From: Ivan Malov <Ivan.Malov@oktetlabs.ru> On Wednesday, October 6, 2021 13:25
> On 06/10/2021 07:48, Alexander Kozyrev wrote:
> > Treating every single flow rule as a completely independent and separate
> > entity negatively impacts the flow rules insertion rate. Oftentimes in an
> > application, many flow rules share a common structure (the same item
> mask
> > and/or action list) so they can be grouped and classified together.
> > This knowledge may be used as a source of optimization by a PMD/HW.
> >
> > The item template defines common matching fields (the item mask)
> without
> > values. The action template holds a list of action types that will be used
> > together in the same rule. The specific values for items and actions will
> > be given only during the rule creation.
> >
> > A table combines item and action templates along with shared flow rule
> > attributes (group ID, priority and traffic direction). This way a PMD/HW
> > can prepare all the resources needed for efficient flow rules creation in
> > the datapath. To avoid any hiccups due to memory reallocation, the
> maximum
> > number of flow rules is defined at table creation time.
> >
> > The flow rule creation is done by selecting a table, an item template
> > and an action template (which are bound to the table), and setting unique
> > values for the items and actions.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Suggested-by: Ori Kam <orika@nvidia.com>
> > ---
> >   lib/ethdev/rte_flow.h | 268
> ++++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 268 insertions(+)
> >
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index c69d503b90..ba3204b17e 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4358,6 +4358,274 @@ int
> >   rte_flow_configure(uint16_t port_id,
> >   		   const struct rte_flow_port_attr *port_attr,
> >   		   struct rte_flow_error *error);
> > +
> > +/**
> > + * Opaque type returned after successfull creation of item template.
> 
> Typo: "successfull" --> "successful".
Thanks for noticing, will correct.

> > + * This handle can be used to manage the created item template.
> > + */
> > +struct rte_flow_item_template;
> > +
> > +__extension__
> > +struct rte_flow_item_template_attr {
> > +	/**
> > +	 * Version of the struct layout, should be 0.
> > +	 */
> > +	uint32_t version;
> > +	/* No attributes so far. */
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Create item template.
> > + * The item template defines common matching fields (item mask)
> without values.
> > + * For example, matching on 5 tuple TCP flow, the template will be
> > + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> > + * while values for each rule will be set during the flow rule creation.
> > + * The order of items in the template must be the same at rule insertion.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] attr
> > + *   Item template attributes.
> 
> Please consider adding meaningful prefixes to "attr" here and below.
> This is needed to avoid confusion with "struct rte_flow_attr".
> 
> Example: "template_attr".
No problem.

> > + * @param[in] items
> > + *   Pattern specification (list terminated by the END pattern item).
> > + *   The spec member of an item is not used unless the end member is
> used.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +struct rte_flow_item_template *
> > +rte_flow_item_template_create(uint16_t port_id,
> > +			      const struct rte_flow_item_template_attr *attr,
> > +			      const struct rte_flow_item items[],
> > +			      struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Destroy item template.
> > + * This function may be called only when
> > + * there are no more tables referencing this template.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] template
> > + *   Handle to the template to be destroyed.
> 
> Perhaps "handle OF the template"?
You are right.

> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_item_template_destroy(uint16_t port_id,
> > +			       struct rte_flow_item_template *template,
> > +			       struct rte_flow_error *error);
> > +
> > +/**
> > + * Opaque type returned after successfull creation of action template.
> 
> Single "l" in "successful".
Ditto.

> > + * This handle can be used to manage the created action template.
> > + */
> > +struct rte_flow_action_template;
> > +
> > +__extension__
> > +struct rte_flow_action_template_attr {
> > +	/**
> > +	 * Version of the struct layout, should be 0.
> > +	 */
> > +	uint32_t version;
> > +	/* No attributes so far. */
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Create action template.
> > + * The action template holds a list of action types without values.
> > + * For example, the template to change TCP ports is TCP(s_port + d_port),
> > + * while values for each rule will be set during the flow rule creation.
> > + *
> > + * The order of the action in the template must be kept when inserting
> rules.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] attr
> > + *   Template attributes.
> 
> Perhaps add a meaningful prefix to "attr".
Sure thing, will rename all the "attr" to "thing_attr".

> > + * @param[in] actions
> > + *   Associated actions (list terminated by the END action).
> > + *   The spec member is only used if the mask is 1.
> 
> Maybe "its mask is all ones"?
Not necessarily; any non-zero value would do. Will make it clearer.

> > + * @param[in] masks
> > + *   List of actions that marks which of the action's member is constant.
> 
> Consider the following action example:
> 
> struct rte_flow_action_vxlan_encap {
>          struct rte_flow_item *definition;
> };
> 
> So, if "definition" is not NULL, the whole header definition is supposed
> to be constant, right? Or am I missing something?
If "definition" has a non-zero value, then the action spec will be used in every rule created with this template.
In this particular example, yes, this definition is going to be a constant header for all the rules.
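To illustrate the spec/mask convention with existing action types (the template API itself is still only this RFC's proposal):

```c
/* Per-rule member: the mask is zero, so the destination port
 * will be supplied at flow rule creation time. */
struct rte_flow_action_set_tp tp_spec = { .port = 0 };
struct rte_flow_action_set_tp tp_mask = { .port = 0 };
/* Constant member: a non-zero mask means TTL 64 is baked into
 * the template and shared by all rules created with it. */
struct rte_flow_action_set_ttl ttl_spec = { .ttl_value = 64 };
struct rte_flow_action_set_ttl ttl_mask = { .ttl_value = 0xff };

struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_SET_TP_DST, .conf = &tp_spec },
	{ .type = RTE_FLOW_ACTION_TYPE_SET_TTL, .conf = &ttl_spec },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_action masks[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_SET_TP_DST, .conf = &tp_mask },
	{ .type = RTE_FLOW_ACTION_TYPE_SET_TTL, .conf = &ttl_mask },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};

at = rte_flow_action_template_create(port_id, &at_attr,
				     actions, masks, &error);
```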


> > + *   A mask has the same format as the corresponding action.
> > + *   If the action field in @p masks is not 0,
> > + *   the corresponding value in an action from @p actions will be the part
> > + *   of the template and used in all flow rules.
> > + *   The order of actions in @p masks is the same as in @p actions.
> > + *   In case of indirect actions present in @p actions,
> > + *   the actual action type should be present in @p mask.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   handle on success, NULL otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +struct rte_flow_action_template *
> > +rte_flow_action_template_create(uint16_t port_id,
> > +			const struct rte_flow_action_template_attr *attr,
> > +			const struct rte_flow_action actions[],
> > +			const struct rte_flow_action masks[],
> > +			struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Destroy action template.
> > + * This function may be called only when
> > + * there are no more tables referencing this template.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] template
> > + *   Handle to the template to be destroyed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_action_template_destroy(uint16_t port_id,
> > +			const struct rte_flow_action_template *template,
> > +			struct rte_flow_error *error);
> > +
> > +
> > +/**
> > + * Opaque type returned after successfull creation of table.
> 
> Redundant "l" in "successful".
Consider this fixed.

> > + * This handle can be used to manage the created table.
> > + */
> > +struct rte_flow_table;
> > +
> > +enum rte_flow_table_mode {
> > +	/**
> > +	 * Fixed size, the number of flow rules will be limited.
> > +	 * It is possible that some of the rules will not be inserted
> > +	 * due to conflicts/lack of space.
> > +	 * When rule insertion fails with try again error,
> > +	 * the application may use one of the following ways
> > +	 * to address this state:
> > +	 * 1. Keep this rule processing in the software.
> > +	 * 2. Try to offload this rule at a later time,
> > +	 *    after some rules have been removed from the hardware.
> > +	 * 3. Create a new table and add this rule to the new table.
> > +	 */
> > +	RTE_FLOW_TABLE_MODE_FIXED,
> > +	/**
> > +	 * Resizable, the PMD/HW will insert all rules.
> > +	 * No try again error will be received in this mode.
> > +	 */
> > +	RTE_FLOW_TABLE_MODE_RESIZABLE,
> > +};
> > +
> > +/**
> > + * Table attributes.
> > + */
> > +struct rte_flow_table_attr {
> > +	/**
> > +	 * Version of the struct layout, should be 0.
> > +	 */
> > +	uint32_t version;
> > +	/**
> > +	 * Flow attributes that will be used in the table.
> > +	 */
> > +	struct rte_flow_attr attr;
> 
> Perhaps, "flow_attr" then?
As we agreed.

> > +	/**
> > +	 * Maximum number of flow rules that this table holds.
> > +	 * It can be hard or soft limit depending on the mode.
> > +	 */
> > +	uint32_t max_rules;
> 
> How about "nb_flows_max"?
Just nb_flows maybe?


* Re: [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates
  2021-10-13  1:25     ` Alexander Kozyrev
@ 2021-10-13  2:26       ` Ajit Khaparde
  2021-10-13  2:38         ` Alexander Kozyrev
  2021-10-13 11:25       ` Ivan Malov
  1 sibling, 1 reply; 220+ messages in thread
From: Ajit Khaparde @ 2021-10-13  2:26 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: Ivan Malov, dev, NBU-Contact-Thomas Monjalon, Ori Kam,
	andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj


On Tue, Oct 12, 2021 at 6:25 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> > From: Ivan Malov <Ivan.Malov@oktetlabs.ru> On Wednesday, October 6, 2021 13:25
> > On 06/10/2021 07:48, Alexander Kozyrev wrote:
> > > Treating every single flow rule as a completely independent and separate
> > > entity negatively impacts the flow rules insertion rate. Oftentimes in an
> > > application, many flow rules share a common structure (the same item
> > mask
> > > and/or action list) so they can be grouped and classified together.
> > > This knowledge may be used as a source of optimization by a PMD/HW.
> > >
> > > The item template defines common matching fields (the item mask)
> > without
> > > values. The action template holds a list of action types that will be used
> > > together in the same rule. The specific values for items and actions will
> > > be given only during the rule creation.
> > >
> > > A table combines item and action templates along with shared flow rule
> > > attributes (group ID, priority and traffic direction). This way a PMD/HW
> > > can prepare all the resources needed for efficient flow rules creation in
> > > the datapath. To avoid any hiccups due to memory reallocation, the
> > maximum
> > > number of flow rules is defined at table creation time.
> > >
> > > The flow rule creation is done by selecting a table, an item template
> > > and an action template (which are bound to the table), and setting unique
> > > values for the items and actions.

For the life cycle of the template -
Is a template supposed to be destroyed immediately after its use?
Can there be multiple templates active at a time?
In which case will the application maintain the templates?
And how to identify one template from another? Or that will not be needed?


> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > Suggested-by: Ori Kam <orika@nvidia.com>
> > > ---


* Re: [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates
  2021-10-13  2:26       ` Ajit Khaparde
@ 2021-10-13  2:38         ` Alexander Kozyrev
  0 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2021-10-13  2:38 UTC (permalink / raw)
  To: Ajit Khaparde
  Cc: Ivan Malov, dev, NBU-Contact-Thomas Monjalon, Ori Kam,
	andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj

> From: Ajit Khaparde <ajit.khaparde@broadcom.com> On Tuesday, October 12, 2021 22:27
> To: Alexander Kozyrev <akozyrev@nvidia.com>
> >
> > > From: Ivan Malov <Ivan.Malov@oktetlabs.ru> On Wednesday, October 6,
> 2021 13:25
> > > On 06/10/2021 07:48, Alexander Kozyrev wrote:
> > > > Treating every single flow rule as a completely independent and
> separate
> > > > entity negatively impacts the flow rules insertion rate. Oftentimes in an
> > > > application, many flow rules share a common structure (the same item
> > > mask
> > > > and/or action list) so they can be grouped and classified together.
> > > > This knowledge may be used as a source of optimization by a PMD/HW.
> > > >
> > > > The item template defines common matching fields (the item mask)
> > > without
> > > > values. The action template holds a list of action types that will be used
> > > > together in the same rule. The specific values for items and actions will
> > > > be given only during the rule creation.
> > > >
> > > > A table combines item and action templates along with shared flow rule
> > > > attributes (group ID, priority and traffic direction). This way a PMD/HW
> > > > can prepare all the resources needed for efficient flow rules creation in
> > > > the datapath. To avoid any hiccups due to memory reallocation, the
> > > maximum
> > > > number of flow rules is defined at table creation time.
> > > >
> > > > The flow rule creation is done by selecting a table, an item template
> > > > and an action template (which are bound to the table), and setting
> unique
> > > > values for the items and actions.
> 
> For the life cycle of the template -
> Is a template supposed to be destroyed immediately after its use?
> Can there be multiple templates active at a time?
> In which case will the application maintain the templates?
> And how to identify one template from another? Or that will not be needed?


A template must be active until there are no more tables referencing it.
This, in turn, means that all the rules using it must be destroyed before that as well.
The application gets a template handle and stores it in a table for future usage.
There can be many templates stored in a single/multiple tables as needed.
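Roughly, the expected lifecycle would be (a sketch using this RFC's proposed names; the table creation signature is illustrative, since it is defined in patch 2 and may change):

```c
/* Creation order: templates -> table -> rules. */
it = rte_flow_item_template_create(port_id, &it_attr, items, &error);
at = rte_flow_action_template_create(port_id, &at_attr, actions, masks, &error);
tbl = rte_flow_table_create(port_id, &tbl_attr, &it, 1, &at, 1, &error);
flow = rte_flow_q_flow_create(port_id, queue, &ops_attr, tbl,
			      rule_items, 0, rule_actions, 0, &error);

/* Destruction happens in the reverse order: a template may only be
 * destroyed once no table references it, and a table only once no
 * rule references it. */
rte_flow_q_flow_destroy(port_id, queue, &ops_attr, flow, &error);
rte_flow_table_destroy(port_id, tbl, &error);
rte_flow_action_template_destroy(port_id, at, &error);
rte_flow_item_template_destroy(port_id, it, &error);
```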


* Re: [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2021-10-13  4:11   ` Ajit Khaparde
  2021-10-13 13:15     ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Ajit Khaparde @ 2021-10-13  4:11 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Thomas Monjalon, Ori Kam, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob Kollanukkaran

On Tue, Oct 5, 2021 at 9:48 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
>
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> function allows to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones.
This could get tricky. An application could avoid attempting to create
flows for certain items/actions if it got a hint that the PMD cannot
satisfy some of the hints provided.
Also, what if the application tries to configure a higher count than
the PMD/HW can support?
It would be good if the hints could be negotiated.
Something like this?
The application starts with a set of hints.
The PMD checks what can be supported and returns an updated set.
The application stays within those values until it needs to resize.
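A rough sketch of what such a negotiation could look like (this whole loop is a proposal, not part of the RFC; the retry/back-off policy shown is purely hypothetical):

```c
struct rte_flow_port_attr attr = {
	.version = 0,
	.nb_counters = 1 << 20,	/* initial request */
};

/* Hypothetical: if the PMD rejects a hint it cannot satisfy,
 * back off until a supported value is found, then stay within it. */
while (rte_flow_configure(port_id, &attr, &error) != 0 &&
       attr.nb_counters > 0)
	attr.nb_counters /= 2;
```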

>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Suggested-by: Ori Kam <orika@nvidia.com>
> ---
>  lib/ethdev/rte_flow.h | 70 +++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 70 insertions(+)
>
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 7b1ed7f110..c69d503b90 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4288,6 +4288,76 @@ rte_flow_tunnel_item_release(uint16_t port_id,
>                              struct rte_flow_item *items,
>                              uint32_t num_of_items,
>                              struct rte_flow_error *error);
> +
> +/**
> + * Flow engine configuration.
> + */
> +__extension__
> +struct rte_flow_port_attr {
> +       /**
> +        * Version of the struct layout, should be 0.
> +        */
> +       uint32_t version;
> +       /**
> +        * Memory size allocated for the flow rules management.
> +        * If set to 0, memory is allocated dynamically.
> +        */
> +       uint32_t mem_size;
> +       /**
> +        * Number of counter actions pre-configured.
> +        * If set to 0, PMD will allocate counters dynamically.
> +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> +        */
> +       uint32_t nb_counters;
> +       /**
> +        * Number of aging actions pre-configured.
> +        * If set to 0, PMD will allocate aging dynamically.
> +        * @see RTE_FLOW_ACTION_TYPE_AGE
> +        */
> +       uint32_t nb_aging;
> +       /**
> +        * Number of traffic metering actions pre-configured.
> +        * If set to 0, PMD will allocate meters dynamically.
> +        * @see RTE_FLOW_ACTION_TYPE_METER
> +        */
> +       uint32_t nb_meters;
> +       /**
> +        * Resources reallocation strategy.
> +        * If set to 1, PMD is not allowed to allocate more resources on demand.
> +        * An application can only allocate more resources by calling the
> +        * configure API again with new values (may not be supported by PMD).
> +        */
> +       uint32_t fixed_resource_size:1;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Configure flow rules module.
> + * To pre-allocate resources as per the flow port attributes,
> + * this configuration function must be called before any flow rule is created.
> + * No other rte_flow function should be called while this function is invoked.
> + * This function can be called again to change the configuration.
> + * Some PMDs may not support re-configuration at all,
> + * or may only allow increasing the number of resources allocated.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] port_attr
> + *   Port configuration attributes.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_configure(uint16_t port_id,
> +                  const struct rte_flow_port_attr *port_attr,
> +                  struct rte_flow_error *error);
>  #ifdef __cplusplus
>  }
>  #endif
> --
> 2.18.2
>


* Re: [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations Alexander Kozyrev
  2021-10-06 16:24   ` Ivan Malov
@ 2021-10-13  4:57   ` Ajit Khaparde
  2021-10-13 13:17     ` Ori Kam
  1 sibling, 1 reply; 220+ messages in thread
From: Ajit Khaparde @ 2021-10-13  4:57 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Thomas Monjalon, Ori Kam, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob Kollanukkaran

On Tue, Oct 5, 2021 at 9:49 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> A new, faster, queue-based flow rules management mechanism is needed for
> applications offloading rules inside the datapath. This asynchronous
> and lockless mechanism frees the CPU for further packet processing and
> reduces the performance impact of the flow rules creation/destruction
> on the datapath. Note that queues are not thread-safe and queue-based
> operations can be safely invoked without any locks from a single thread.
>
> The rte_flow_q_flow_create() function enqueues a flow creation to the
> requested queue. It benefits from already configured resources and sets
> unique values on top of item and action templates. A flow rule is enqueued
> on the specified flow queue and offloaded asynchronously to the hardware.
> The function returns immediately to spare CPU for further packet
> processing. The application must invoke the rte_flow_q_dequeue() function
> to complete the flow rule operation offloading, to clear the queue, and to
> receive the operation status. The rte_flow_q_flow_destroy() function
> enqueues a flow destruction to the requested queue.
>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Suggested-by: Ori Kam <orika@nvidia.com>
> ---
>  lib/ethdev/rte_flow.h | 288 ++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 288 insertions(+)
>
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index ba3204b17e..8cdffd8d2e 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4298,6 +4298,13 @@ struct rte_flow_port_attr {
>          * Version of the struct layout, should be 0.
>          */
>         uint32_t version;
> +       /**
> +        * Number of flow queues to be configured.
> +        * Flow queues are used for asyncronous flow rule creation/destruction.
> +        * The order of operations is not guaranteed inside a queue.
> +        * Flow queues are not thread-safe.
> +        */
> +       uint16_t nb_queues;
Will it matter if the PMD can create a smaller set of queues? Or maybe just one?
Should the application set this based on get_infos_get() or some other
mechanism?



* Re: [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates
  2021-10-13  1:25     ` Alexander Kozyrev
  2021-10-13  2:26       ` Ajit Khaparde
@ 2021-10-13 11:25       ` Ivan Malov
  1 sibling, 0 replies; 220+ messages in thread
From: Ivan Malov @ 2021-10-13 11:25 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon, Ori Kam, andrew.rybchenko,
	ferruh.yigit, mohammad.abdul.awal, qi.z.zhang, jerinj,
	ajit.khaparde

Hi,

On 13/10/2021 04:25, Alexander Kozyrev wrote:
>> From: Ivan Malov <Ivan.Malov@oktetlabs.ru> On Wednesday, October 6, 2021 13:25
>> On 06/10/2021 07:48, Alexander Kozyrev wrote:
>>> Treating every single flow rule as a completely independent and separate
>>> entity negatively impacts the flow rules insertion rate. Oftentimes in an
>>> application, many flow rules share a common structure (the same item
>> mask
>>> and/or action list) so they can be grouped and classified together.
>>> This knowledge may be used as a source of optimization by a PMD/HW.
>>>
>>> The item template defines common matching fields (the item mask)
>> without
>>> values. The action template holds a list of action types that will be used
>>> together in the same rule. The specific values for items and actions will
>>> be given only during the rule creation.
>>>
>>> A table combines item and action templates along with shared flow rule
>>> attributes (group ID, priority and traffic direction). This way a PMD/HW
>>> can prepare all the resources needed for efficient flow rules creation in
>>> the datapath. To avoid any hiccups due to memory reallocation, the
>> maximum
>>> number of flow rules is defined at table creation time.
>>>
>>> The flow rule creation is done by selecting a table, an item template
>>> and an action template (which are bound to the table), and setting unique
>>> values for the items and actions.
>>>
>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>> Suggested-by: Ori Kam <orika@nvidia.com>
>>> ---
>>>    lib/ethdev/rte_flow.h | 268
>> ++++++++++++++++++++++++++++++++++++++++++
>>>    1 file changed, 268 insertions(+)
>>>
>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>>> index c69d503b90..ba3204b17e 100644
>>> --- a/lib/ethdev/rte_flow.h
>>> +++ b/lib/ethdev/rte_flow.h
>>> @@ -4358,6 +4358,274 @@ int
>>>    rte_flow_configure(uint16_t port_id,
>>>    		   const struct rte_flow_port_attr *port_attr,
>>>    		   struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * Opaque type returned after successfull creation of item template.
>>
>> Typo: "successfull" --> "successful".
> Thanks for noticing, will correct.
> 
>>> + * This handle can be used to manage the created item template.
>>> + */
>>> +struct rte_flow_item_template;
>>> +
>>> +__extension__
>>> +struct rte_flow_item_template_attr {
>>> +	/**
>>> +	 * Version of the struct layout, should be 0.
>>> +	 */
>>> +	uint32_t version;
>>> +	/* No attributes so far. */
>>> +};
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Create item template.
>>> + * The item template defines common matching fields (item mask)
>> without values.
>>> + * For example, matching on 5 tuple TCP flow, the template will be
>>> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
>>> + * while values for each rule will be set during the flow rule creation.
>>> + * The order of items in the template must be the same at rule insertion.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] attr
>>> + *   Item template attributes.
>>
>> Please consider adding meaningful prefixes to "attr" here and below.
>> This is needed to avoid confusion with "struct rte_flow_attr".
>>
>> Example: "template_attr".
> No problem.
> 
>>> + * @param[in] items
>>> + *   Pattern specification (list terminated by the END pattern item).
>>> + *   The spec member of an item is not used unless the end member is
>> used.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   Handle on success, NULL otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +struct rte_flow_item_template *
>>> +rte_flow_item_template_create(uint16_t port_id,
>>> +			      const struct rte_flow_item_template_attr *attr,
>>> +			      const struct rte_flow_item items[],
>>> +			      struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Destroy item template.
>>> + * This function may be called only when
>>> + * there are no more tables referencing this template.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] template
>>> + *   Handle to the template to be destroyed.
>>
>> Perhaps "handle OF the template"?
> You are right.
> 
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +int
>>> +rte_flow_item_template_destroy(uint16_t port_id,
>>> +			       struct rte_flow_item_template *template,
>>> +			       struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * Opaque type returned after successfull creation of action template.
>>
>> Single "l" in "successful".
> Ditto.
> 
>>> + * This handle can be used to manage the created action template.
>>> + */
>>> +struct rte_flow_action_template;
>>> +
>>> +__extension__
>>> +struct rte_flow_action_template_attr {
>>> +	/**
>>> +	 * Version of the struct layout, should be 0.
>>> +	 */
>>> +	uint32_t version;
>>> +	/* No attributes so far. */
>>> +};
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Create action template.
>>> + * The action template holds a list of action types without values.
>>> + * For example, the template to change TCP ports is TCP(s_port + d_port),
>>> + * while values for each rule will be set during the flow rule creation.
>>> + *
>>> + * The order of the action in the template must be kept when inserting
>> rules.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] attr
>>> + *   Template attributes.
>>
>> Perhaps add a meaningful prefix to "attr".
> Sure thing, will rename all the "attr" to "thing_attr".
> 
>>> + * @param[in] actions
>>> + *   Associated actions (list terminated by the END action).
>>> + *   The spec member is only used if the mask is 1.
>>
>> Maybe "its mask is all ones"?
> Not necessarily, just a non-zero value would do. Will make it clearer.
> 
>>> + * @param[in] masks
>>> + *   List of actions that marks which of the action's member is constant.
>>
>> Consider the following action example:
>>
>> struct rte_flow_action_vxlan_encap {
>>           struct rte_flow_item *definition;
>> };
>>
>> So, if "definition" is not NULL, the whole header definition is supposed
>> to be constant, right? Or am I missing something?
> If definition has non-zero value then the action spec will be used in every rule created with this template.
> In this particular example, yes, this definition is going to be a constant header for all the rules.
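The mask convention discussed here can be modeled with a short self-contained sketch (the helper and struct names are hypothetical, not part of the proposed rte_flow API): a field whose mask is non-zero takes a constant value captured at template creation, while a zero mask leaves the field to be supplied per rule.

```c
#include <stdint.h>

/* Toy model of the action-template mask semantics: a non-zero
 * mask pins the field to the template value for every rule;
 * a zero mask means the value is given at rule creation. */
struct field_tmpl {
	uint16_t mask;     /* non-zero => field fixed by the template */
	uint16_t tmpl_val; /* value captured at template creation */
};

static uint16_t
resolve_field(const struct field_tmpl *f, uint16_t rule_val)
{
	return f->mask ? f->tmpl_val : rule_val;
}
```

For instance, a template that always rewrites the TCP destination port to 443 would carry mask 0xffff and a template value of 443, while the source port field would use a zero mask and be filled in per rule.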
> 
> 
>>> + *   A mask has the same format as the corresponding action.
>>> + *   If the action field in @p masks is not 0,
>>> + *   the corresponding value in an action from @p actions will be the part
>>> + *   of the template and used in all flow rules.
>>> + *   The order of actions in @p masks is the same as in @p actions.
>>> + *   In case of indirect actions present in @p actions,
>>> + *   the actual action type should be present in @p mask.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   handle on success, NULL otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +struct rte_flow_action_template *
>>> +rte_flow_action_template_create(uint16_t port_id,
>>> +			const struct rte_flow_action_template_attr *attr,
>>> +			const struct rte_flow_action actions[],
>>> +			const struct rte_flow_action masks[],
>>> +			struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Destroy action template.
>>> + * This function may be called only when
>>> + * there are no more tables referencing this template.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] template
>>> + *   Handle to the template to be destroyed.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +int
>>> +rte_flow_action_template_destroy(uint16_t port_id,
>>> +			const struct rte_flow_action_template *template,
>>> +			struct rte_flow_error *error);
>>> +
>>> +
>>> +/**
>>> + * Opaque type returned after successfull creation of table.
>>
>> Redundant "l" in "successful".
> Consider this fixed.
> 
>>> + * This handle can be used to manage the created table.
>>> + */
>>> +struct rte_flow_table;
>>> +
>>> +enum rte_flow_table_mode {
>>> +	/**
>>> +	 * Fixed size, the number of flow rules will be limited.
>>> +	 * It is possible that some of the rules will not be inserted
>>> +	 * due to conflicts/lack of space.
>>> +	 * When rule insertion fails with try again error,
>>> +	 * the application may use one of the following ways
>>> +	 * to address this state:
>>> +	 * 1. Keep this rule processing in the software.
>>> +	 * 2. Try to offload this rule at a later time,
>>> +	 *    after some rules have been removed from the hardware.
>>> +	 * 3. Create a new table and add this rule to the new table.
>>> +	 */
>>> +	RTE_FLOW_TABLE_MODE_FIXED,
>>> +	/**
>>> +	 * Resizable, the PMD/HW will insert all rules.
>>> +	 * No try again error will be received in this mode.
>>> +	 */
>>> +	RTE_FLOW_TABLE_MODE_RESIZABLE,
>>> +};
>>> +
>>> +/**
>>> + * Table attributes.
>>> + */
>>> +struct rte_flow_table_attr {
>>> +	/**
>>> +	 * Version of the struct layout, should be 0.
>>> +	 */
>>> +	uint32_t version;
>>> +	/**
>>> +	 * Flow attributes that will be used in the table.
>>> +	 */
>>> +	struct rte_flow_attr attr;
>>
>> Perhaps, "flow_attr" then?
> As we agreed.
> 
>>> +	/**
>>> +	 * Maximum number of flow rules that this table holds.
>>> +	 * It can be hard or soft limit depending on the mode.
>>> +	 */
>>> +	uint32_t max_rules;
>>
>> How about "nb_flows_max"?
> Just nb_flows maybe?
> 

Probably OK, too, but I don't have a strong opinion.
One more option: "table_size".

-- 
Ivan M

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints
  2021-10-13  4:11   ` Ajit Khaparde
@ 2021-10-13 13:15     ` Ori Kam
  2021-10-31 17:27       ` Ajit Khaparde
  0 siblings, 1 reply; 220+ messages in thread
From: Ori Kam @ 2021-10-13 13:15 UTC (permalink / raw)
  To: Ajit Khaparde, Alexander Kozyrev
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Ivan Malov,
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob Kollanukkaran

Hi Ajit,

> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Wednesday, October 13, 2021 7:11 AM
> Subject: Re: [PATCH 1/3] ethdev: introduce flow pre-configuration hints
> 
> On Tue, Oct 5, 2021 at 9:48 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> >
> > The flow rules creation/destruction at a large scale incurs a
> > performance penalty and may negatively impact the packet processing
> > when used as part of the datapath logic. This is mainly because
> > software/hardware resources are allocated and prepared during the flow rule creation.
> >
> > In order to optimize the insertion rate, PMD may use some hints
> > provided by the application at the initialization phase. The
> > rte_flow_configure() function allows to pre-allocate all the needed resources beforehand.
> > These resources can be used at a later stage without costly allocations.
> > Every PMD may use only the subset of hints and ignore unused ones.
> This could get tricky. An application can avoid attempts to create flows for items/actions if it can get a
> hint that PMD cannot satisfy some of the hints provided.
> Also what if the application tries to configure a higher count than what the PMD/HW can support?
> It will be good if the hints can be negotiated.
> Something like this?
> Application could start with a set of hints.
> PMD can check what can be supported and return an updated set.
> Application stays within those values till it needs to resize.
> 

I don't like the negotiation approach since it will soon become impossible to maintain.
I don't know how many possible options there will be and what each PMD will implement.
Also, negotiation means that the PMD thinks it knows what is best, which is the opposite of
this feature's goal of giving the application as much control as possible.

Just like configuring an ethdev may fail, the application just gets the error message.
What the application should do with the hints is PMD dependent.

For example, the application requests 1M counters; this can even be system specific (too much memory).
If the PMD can't allocate such a number of counters, it should fail with an error message that,
if possible, lets the application know the maximum number of counters.

Best,
Ori
 
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Suggested-by: Ori Kam <orika@nvidia.com>
> > ---
> >  lib/ethdev/rte_flow.h | 70
> > +++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 70 insertions(+)
> >
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > 7b1ed7f110..c69d503b90 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4288,6 +4288,76 @@ rte_flow_tunnel_item_release(uint16_t port_id,
> >                              struct rte_flow_item *items,
> >                              uint32_t num_of_items,
> >                              struct rte_flow_error *error);
> > +
> > +/**
> > + * Flow engine configuration.
> > + */
> > +__extension__
> > +struct rte_flow_port_attr {
> > +       /**
> > +        * Version of the struct layout, should be 0.
> > +        */
> > +       uint32_t version;
> > +       /**
> > +        * Memory size allocated for the flow rules management.
> > +        * If set to 0, memory is allocated dynamically.
> > +        */
> > +       uint32_t mem_size;
> > +       /**
> > +        * Number of counter actions pre-configured.
> > +        * If set to 0, PMD will allocate counters dynamically.
> > +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> > +        */
> > +       uint32_t nb_counters;
> > +       /**
> > +        * Number of aging actions pre-configured.
> > +        * If set to 0, PMD will allocate aging dynamically.
> > +        * @see RTE_FLOW_ACTION_TYPE_AGE
> > +        */
> > +       uint32_t nb_aging;
> > +       /**
> > +        * Number of traffic metering actions pre-configured.
> > +        * If set to 0, PMD will allocate meters dynamically.
> > +        * @see RTE_FLOW_ACTION_TYPE_METER
> > +        */
> > +       uint32_t nb_meters;
> > +       /**
> > +        * Resources reallocation strategy.
> > +        * If set to 1, PMD is not allowed to allocate more resources on demand.
> > +        * An application can only allocate more resources by calling the
> > +        * configure API again with new values (may not be supported by PMD).
> > +        */
> > +       uint32_t fixed_resource_size:1; };
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Configure flow rules module.
> > + * To pre-allocate resources as per the flow port attributes,
> > + * this configuration function must be called before any flow rule is created.
> > + * No other rte_flow function should be called while this function is invoked.
> > + * This function can be called again to change the configuration.
> > + * Some PMDs may not support re-configuration at all,
> > + * or may only allow increasing the number of resources allocated.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] port_attr
> > + *   Port configuration attributes.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_configure(uint16_t port_id,
> > +                  const struct rte_flow_port_attr *port_attr,
> > +                  struct rte_flow_error *error);
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > --
> > 2.18.2
> >

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations
  2021-10-13  4:57   ` Ajit Khaparde
@ 2021-10-13 13:17     ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2021-10-13 13:17 UTC (permalink / raw)
  To: Ajit Khaparde, Alexander Kozyrev
  Cc: dpdk-dev, NBU-Contact-Thomas Monjalon, Ivan Malov,
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob Kollanukkaran

Hi Ajit,

> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Wednesday, October 13, 2021 7:58 AM
> Subject: Re: [PATCH 3/3] ethdev: add async queue-based flow rules operations
> 
> On Tue, Oct 5, 2021 at 9:49 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> >
> > A new, faster, queue-based flow rules management mechanism is needed
> > for applications offloading rules inside the datapath. This
> > asynchronous and lockless mechanism frees the CPU for further packet
> > processing and reduces the performance impact of the flow rules
> > creation/destruction on the datapath. Note that queues are not
> > thread-safe and queue-based operations can be safely invoked without any locks from a single
> thread.
> >
> > The rte_flow_q_flow_create() function enqueues a flow creation to the
> > requested queue. It benefits from already configured resources and
> > sets unique values on top of item and action templates. A flow rule is
> > enqueued on the specified flow queue and offloaded asynchronously to the hardware.
> > The function returns immediately to spare CPU for further packet
> > processing. The application must invoke the rte_flow_q_dequeue()
> > function to complete the flow rule operation offloading, to clear the
> > queue, and to receive the operation status. The
> > rte_flow_q_flow_destroy() function enqueues a flow destruction to the requested queue.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Suggested-by: Ori Kam <orika@nvidia.com>
> > ---
> >  lib/ethdev/rte_flow.h | 288
> > ++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 288 insertions(+)
> >
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > ba3204b17e..8cdffd8d2e 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4298,6 +4298,13 @@ struct rte_flow_port_attr {
> >          * Version of the struct layout, should be 0.
> >          */
> >         uint32_t version;
> > +       /**
> > +        * Number of flow queues to be configured.
> > +        * Flow queues are used for asyncronous flow rule creation/destruction.
> > +        * The order of operations is not guaranteed inside a queue.
> > +        * Flow queues are not thread-safe.
> > +        */
> > +       uint16_t nb_queues;
> Will it matter if PMD can create a smaller set of queues? Or may be just one?
> Should the application set this based on get_infos_get() or some other mechanism?
> 
This is the number of queues from the application's point of view.
The PMD can implement just one queue using locks.
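That point can be sketched with a purely illustrative model (not the proposed API): the application is given nb_queues logical queues, while the PMD funnels every submission into one internal queue under a lock.

```c
#include <pthread.h>
#include <stdint.h>

/* PMD backend: one internal queue guarded by a mutex,
 * shared by all application-visible flow queues. */
struct pmd_backend {
	pthread_mutex_t lock;
	uint32_t pending; /* operations queued for the hardware */
};

/* Called for any application queue; queue_id only identifies the
 * app-visible queue, the backing store is common to all of them. */
static int
submit(struct pmd_backend *be, uint16_t queue_id, uint32_t rule_id)
{
	(void)queue_id;
	(void)rule_id;
	pthread_mutex_lock(&be->lock);
	be->pending++;
	pthread_mutex_unlock(&be->lock);
	return 0;
}
```

From the application's side each queue remains lock-free to use from its own thread; the serialization, if any, is hidden inside the PMD.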

Best,
Ori
> ::::

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints
  2021-10-13 13:15     ` Ori Kam
@ 2021-10-31 17:27       ` Ajit Khaparde
  2021-11-01 10:40         ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Ajit Khaparde @ 2021-10-31 17:27 UTC (permalink / raw)
  To: Ori Kam
  Cc: Alexander Kozyrev, dpdk-dev, NBU-Contact-Thomas Monjalon,
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob Kollanukkaran

On Wed, Oct 13, 2021 at 6:15 AM Ori Kam <orika@nvidia.com> wrote:
>
> Hi Ajit,
>
> > -----Original Message-----
> > From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > Sent: Wednesday, October 13, 2021 7:11 AM
> > Subject: Re: [PATCH 1/3] ethdev: introduce flow pre-configuration hints
> >
> > On Tue, Oct 5, 2021 at 9:48 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > >
> > > The flow rules creation/destruction at a large scale incurs a
> > > performance penalty and may negatively impact the packet processing
> > > when used as part of the datapath logic. This is mainly because
> > > software/hardware resources are allocated and prepared during the flow rule creation.
> > >
> > > In order to optimize the insertion rate, PMD may use some hints
> > > provided by the application at the initialization phase. The
> > > rte_flow_configure() function allows to pre-allocate all the needed resources beforehand.
> > > These resources can be used at a later stage without costly allocations.
> > > Every PMD may use only the subset of hints and ignore unused ones.
> > This could get tricky. An application can avoid attempts to create flows for items/actions if it can get a
> > hint that PMD cannot satisfy some of the hints provided.
> > Also what if the application tries to configure a higher count than what the PMD/HW can support?
> > It will be good if the hints can be negotiated.
> > Something like this?
> > Application could start with a set of hints.
> > PMD can check what can be supported and return an updated set.
> > Application stays within those values till it needs to resize.
> >
>
> I don't like the negotaion approach since soon it will be impossible to maintain.
> I don't know how many possible option there will and what each PMD will implement.
> Also negotiation means that the PMD think he knows what is best and this is the opposite of
> this feature as giving the application as much control as possible.
>
> Just like configure ethdev may fail and the application just get the error message.
> What application should do with the hits is PMD dependent.
>
> For example application request 1M counter, this can even be system specific (to much memory)
> if PMD can't allocate such number of counter it should fail with error message that lets the application
> know if possible the max number of counters.
How will the application know that the failure was because of counters
and not because of meters or something else?

>
> Best,
> Ori
>
> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > Suggested-by: Ori Kam <orika@nvidia.com>
> > > ---
> > >  lib/ethdev/rte_flow.h | 70
> > > +++++++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 70 insertions(+)
> > >
> > > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > > 7b1ed7f110..c69d503b90 100644
> > > --- a/lib/ethdev/rte_flow.h
> > > +++ b/lib/ethdev/rte_flow.h
> > > @@ -4288,6 +4288,76 @@ rte_flow_tunnel_item_release(uint16_t port_id,
> > >                              struct rte_flow_item *items,
> > >                              uint32_t num_of_items,
> > >                              struct rte_flow_error *error);
> > > +
> > > +/**
> > > + * Flow engine configuration.
> > > + */
> > > +__extension__
> > > +struct rte_flow_port_attr {
> > > +       /**
> > > +        * Version of the struct layout, should be 0.
> > > +        */
> > > +       uint32_t version;
> > > +       /**
> > > +        * Memory size allocated for the flow rules management.
> > > +        * If set to 0, memory is allocated dynamically.
> > > +        */
> > > +       uint32_t mem_size;
> > > +       /**
> > > +        * Number of counter actions pre-configured.
> > > +        * If set to 0, PMD will allocate counters dynamically.
> > > +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> > > +        */
> > > +       uint32_t nb_counters;
> > > +       /**
> > > +        * Number of aging actions pre-configured.
> > > +        * If set to 0, PMD will allocate aging dynamically.
> > > +        * @see RTE_FLOW_ACTION_TYPE_AGE
> > > +        */
> > > +       uint32_t nb_aging;
> > > +       /**
> > > +        * Number of traffic metering actions pre-configured.
> > > +        * If set to 0, PMD will allocate meters dynamically.
> > > +        * @see RTE_FLOW_ACTION_TYPE_METER
> > > +        */
> > > +       uint32_t nb_meters;
> > > +       /**
> > > +        * Resources reallocation strategy.
> > > +        * If set to 1, PMD is not allowed to allocate more resources on demand.
> > > +        * An application can only allocate more resources by calling the
> > > +        * configure API again with new values (may not be supported by PMD).
> > > +        */
> > > +       uint32_t fixed_resource_size:1; };
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Configure flow rules module.
> > > + * To pre-allocate resources as per the flow port attributes,
> > > + * this configuration function must be called before any flow rule is created.
> > > + * No other rte_flow function should be called while this function is invoked.
> > > + * This function can be called again to change the configuration.
> > > + * Some PMDs may not support re-configuration at all,
> > > + * or may only allow increasing the number of resources allocated.
> > > + *
> > > + * @param port_id
> > > + *   Port identifier of Ethernet device.
> > > + * @param[in] port_attr
> > > + *   Port configuration attributes.
> > > + * @param[out] error
> > > + *   Perform verbose error reporting if not NULL.
> > > + *   PMDs initialize this structure in case of error only.
> > > + *
> > > + * @return
> > > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > > + */
> > > +__rte_experimental
> > > +int
> > > +rte_flow_configure(uint16_t port_id,
> > > +                  const struct rte_flow_port_attr *port_attr,
> > > +                  struct rte_flow_error *error);
> > >  #ifdef __cplusplus
> > >  }
> > >  #endif
> > > --
> > > 2.18.2
> > >

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints
  2021-10-31 17:27       ` Ajit Khaparde
@ 2021-11-01 10:40         ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2021-11-01 10:40 UTC (permalink / raw)
  To: Ajit Khaparde
  Cc: Alexander Kozyrev, dpdk-dev, NBU-Contact-Thomas Monjalon,
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob Kollanukkaran

Hi Ajit,

> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Sunday, October 31, 2021 7:27 PM
> Subject: Re: [PATCH 1/3] ethdev: introduce flow pre-configuration hints
> 
> On Wed, Oct 13, 2021 at 6:15 AM Ori Kam <orika@nvidia.com> wrote:
> >
> > Hi Ajit,
> >
> > > -----Original Message-----
> > > From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > > Sent: Wednesday, October 13, 2021 7:11 AM
> > > Subject: Re: [PATCH 1/3] ethdev: introduce flow pre-configuration hints
> > >
> > > On Tue, Oct 5, 2021 at 9:48 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > >
> > > > The flow rules creation/destruction at a large scale incurs a
> > > > performance penalty and may negatively impact the packet processing
> > > > when used as part of the datapath logic. This is mainly because
> > > > software/hardware resources are allocated and prepared during the flow rule creation.
> > > >
> > > > In order to optimize the insertion rate, PMD may use some hints
> > > > provided by the application at the initialization phase. The
> > > > rte_flow_configure() function allows to pre-allocate all the needed resources beforehand.
> > > > These resources can be used at a later stage without costly allocations.
> > > > Every PMD may use only the subset of hints and ignore unused ones.
> > > This could get tricky. An application can avoid attempts to create flows for items/actions if it can get
> a
> > > hint that PMD cannot satisfy some of the hints provided.
> > > Also what if the application tries to configure a higher count than what the PMD/HW can support?
> > > It will be good if the hints can be negotiated.
> > > Something like this?
> > > Application could start with a set of hints.
> > > PMD can check what can be supported and return an updated set.
> > > Application stays within those values till it needs to resize.
> > >
> >
> > I don't like the negotaion approach since soon it will be impossible to maintain.
> > I don't know how many possible option there will and what each PMD will implement.
> > Also negotiation means that the PMD think he knows what is best and this is the opposite of
> > this feature as giving the application as much control as possible.
> >
> > Just like configure ethdev may fail and the application just get the error message.
> > What application should do with the hits is PMD dependent.
> >
> > For example application request 1M counter, this can even be system specific (to much memory)
> > if PMD can't allocate such number of counter it should fail with error message that lets the
> application
> > know if possible the max number of counters.
> How will the application know that the failure was because of counters
> and not because of meters or something else?
> 
It can check the error description.

Even now, when you insert a rule, you don't know why it failed.
I assume that if only some of the parameters are not supported, the
error returned will be EINVAL.
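As a sketch of this error-reporting convention (the limits and the helper are hypothetical; only the cause/message pattern mirrors struct rte_flow_error): the callee points the cause at the offending field and sets a message, so the application can tell a counter failure from a meter failure.

```c
#include <errno.h>
#include <stdint.h>

/* Modeled loosely on struct rte_flow_error: on failure the callee
 * records which field caused the problem and a human-readable hint. */
struct flow_error {
	const void *cause;   /* object responsible for the error */
	const char *message; /* human-readable description */
};

struct port_attr {
	uint32_t nb_counters;
	uint32_t nb_meters;
};

/* Hypothetical PMD limits, for illustration only. */
#define PMD_MAX_COUNTERS 1000000u
#define PMD_MAX_METERS   4096u

static int
configure(const struct port_attr *attr, struct flow_error *err)
{
	if (attr->nb_counters > PMD_MAX_COUNTERS) {
		err->cause = &attr->nb_counters;
		err->message = "too many counters requested";
		return -EINVAL;
	}
	if (attr->nb_meters > PMD_MAX_METERS) {
		err->cause = &attr->nb_meters;
		err->message = "too many meters requested";
		return -EINVAL;
	}
	return 0;
}
```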

> >
> > Best,
> > Ori
> >
> > > >
> > > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > > Suggested-by: Ori Kam <orika@nvidia.com>
> > > > ---
> > > >  lib/ethdev/rte_flow.h | 70
> > > > +++++++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 70 insertions(+)
> > > >
> > > > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > > > 7b1ed7f110..c69d503b90 100644
> > > > --- a/lib/ethdev/rte_flow.h
> > > > +++ b/lib/ethdev/rte_flow.h
> > > > @@ -4288,6 +4288,76 @@ rte_flow_tunnel_item_release(uint16_t port_id,
> > > >                              struct rte_flow_item *items,
> > > >                              uint32_t num_of_items,
> > > >                              struct rte_flow_error *error);
> > > > +
> > > > +/**
> > > > + * Flow engine configuration.
> > > > + */
> > > > +__extension__
> > > > +struct rte_flow_port_attr {
> > > > +       /**
> > > > +        * Version of the struct layout, should be 0.
> > > > +        */
> > > > +       uint32_t version;
> > > > +       /**
> > > > +        * Memory size allocated for the flow rules management.
> > > > +        * If set to 0, memory is allocated dynamically.
> > > > +        */
> > > > +       uint32_t mem_size;
> > > > +       /**
> > > > +        * Number of counter actions pre-configured.
> > > > +        * If set to 0, PMD will allocate counters dynamically.
> > > > +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> > > > +        */
> > > > +       uint32_t nb_counters;
> > > > +       /**
> > > > +        * Number of aging actions pre-configured.
> > > > +        * If set to 0, PMD will allocate aging dynamically.
> > > > +        * @see RTE_FLOW_ACTION_TYPE_AGE
> > > > +        */
> > > > +       uint32_t nb_aging;
> > > > +       /**
> > > > +        * Number of traffic metering actions pre-configured.
> > > > +        * If set to 0, PMD will allocate meters dynamically.
> > > > +        * @see RTE_FLOW_ACTION_TYPE_METER
> > > > +        */
> > > > +       uint32_t nb_meters;
> > > > +       /**
> > > > +        * Resources reallocation strategy.
> > > > +        * If set to 1, PMD is not allowed to allocate more resources on demand.
> > > > +        * An application can only allocate more resources by calling the
> > > > +        * configure API again with new values (may not be supported by PMD).
> > > > +        */
> > > > +       uint32_t fixed_resource_size:1; };
> > > > +
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > > + *
> > > > + * Configure flow rules module.
> > > > + * To pre-allocate resources as per the flow port attributes,
> > > > + * this configuration function must be called before any flow rule is created.
> > > > + * No other rte_flow function should be called while this function is invoked.
> > > > + * This function can be called again to change the configuration.
> > > > + * Some PMDs may not support re-configuration at all,
> > > > + * or may only allow increasing the number of resources allocated.
> > > > + *
> > > > + * @param port_id
> > > > + *   Port identifier of Ethernet device.
> > > > + * @param[in] port_attr
> > > > + *   Port configuration attributes.
> > > > + * @param[out] error
> > > > + *   Perform verbose error reporting if not NULL.
> > > > + *   PMDs initialize this structure in case of error only.
> > > > + *
> > > > + * @return
> > > > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > > > + */
> > > > +__rte_experimental
> > > > +int
> > > > +rte_flow_configure(uint16_t port_id,
> > > > +                  const struct rte_flow_port_attr *port_attr,
> > > > +                  struct rte_flow_error *error);
> > > >  #ifdef __cplusplus
> > > >  }
> > > >  #endif
> > > > --
> > > > 2.18.2
> > > >

^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v2 00/10] ethdev: datapath-focused flow rules management
  2021-10-06  4:48 [dpdk-dev] [RFC 0/3] ethdev: datapath-focused flow rules management Alexander Kozyrev
                   ` (2 preceding siblings ...)
  2021-10-06  4:48 ` [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations Alexander Kozyrev
@ 2022-01-18 15:30 ` Alexander Kozyrev
  2022-01-18 15:30   ` [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
                     ` (11 more replies)
  3 siblings, 12 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:30 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Three major changes to a generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
The application may give the PMD hints about what types of resources it needs.
Introduce the configuration routine to prepare all the needed resources
inside a PMD/HW at the init stage, before any flow rules are created.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on the RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>

---
v2: fixed patch series thread

Alexander Kozyrev (10):
  ethdev: introduce flow pre-configuration hints
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  app/testpmd: implement rte flow configure
  app/testpmd: implement rte flow item/action template
  app/testpmd: implement rte flow table
  app/testpmd: implement rte flow queue create flow
  app/testpmd: implement rte flow queue drain
  app/testpmd: implement rte flow queue dequeue
  app/testpmd: implement rte flow queue indirect action

 app/test-pmd/cmdline_flow.c                   | 1484 ++++++++++++++++-
 app/test-pmd/config.c                         |  731 ++++++++
 app/test-pmd/testpmd.h                        |   61 +
 doc/guides/prog_guide/img/rte_flow_q_init.svg |   71 +
 .../prog_guide/img/rte_flow_q_usage.svg       |   60 +
 doc/guides/prog_guide/rte_flow.rst            |  319 ++++
 doc/guides/rel_notes/release_22_03.rst        |   19 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  350 +++-
 lib/ethdev/rte_flow.c                         |  332 ++++
 lib/ethdev/rte_flow.h                         |  680 ++++++++
 lib/ethdev/rte_flow_driver.h                  |  103 ++
 lib/ethdev/version.map                        |   16 +
 12 files changed, 4203 insertions(+), 23 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-01-18 15:30   ` Alexander Kozyrev
  2022-01-24 14:36     ` Jerin Jacob
  2022-01-18 15:30   ` [PATCH v2 02/10] ethdev: add flow item/action templates Alexander Kozyrev
                     ` (10 subsequent siblings)
  11 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:30 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, the PMD may use some hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows the application to pre-allocate all the needed resources
beforehand. These resources can be used at a later stage without costly
allocations. Every PMD may use only a subset of the hints, ignore unused
ones, or fail in case the requested configuration is not supported.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 37 +++++++++++++++
 doc/guides/rel_notes/release_22_03.rst |  2 +
 lib/ethdev/rte_flow.c                  | 20 ++++++++
 lib/ethdev/rte_flow.h                  | 63 ++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  5 ++
 lib/ethdev/version.map                 |  3 ++
 6 files changed, 130 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..86f8c8bda2 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Rules management configuration
+------------------------------
+
+Configure flow rules management.
+
+An application may provide some hints at the initialization phase about
+rules management configuration and/or expected flow rules characteristics.
+These hints may be used by the PMD to pre-allocate resources and configure the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow rules management configuration and
+pre-allocates needed resources beforehand to avoid costly allocations later.
+Hints about the expected number of counters or meters in an application,
+for example, allow the PMD to prepare and optimize NIC memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                     const struct rte_flow_port_attr *port_attr,
+                     struct rte_flow_error *error);
+
+Arguments:
+
+- ``port_id``: port identifier of Ethernet device.
+- ``port_attr``: port attributes for flow management library.
+- ``error``: perform verbose error reporting if not NULL. PMDs initialize
+  this structure in case of error only.
+
+Return values:
+
+- 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 16c66c0641..71b3f0a651 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,8 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* ethdev: Added ``rte_flow_configure`` API to configure the Flow Management
+  library, allowing applications to pre-allocate some resources for better performance.
 
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..5b78780ef9 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,23 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..e145e68525 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,69 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine port configuration attributes.
+ */
+__extension__
+struct rte_flow_port_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of counter actions pre-configured.
+	 * If set to 0, PMD will allocate counters dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging actions pre-configured.
+	 * If set to 0, PMD will allocate aging dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * If set to 0, PMD will allocate meters dynamically.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure flow rules module.
+ * To pre-allocate resources as per the flow port attributes,
+ * this configuration function must be called before any flow rule is created.
+ * Must be called only after Ethernet device is configured, but may be called
+ * before or after the device is started as long as there are no flow rules.
+ * No other rte_flow function should be called while this function is invoked.
+ * This function can be called again to change the configuration.
+ * Some PMDs may not support re-configuration at all,
+ * or may only allow increasing the number of resources allocated.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..5f722f1a39 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,11 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..7645796739 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,9 @@ EXPERIMENTAL {
 	rte_flow_flex_item_create;
 	rte_flow_flex_item_release;
 	rte_flow_pick_transfer_proxy;
+
+	# added in 22.03
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v2 02/10] ethdev: add flow item/action templates
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-01-18 15:30   ` [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-01-18 15:30   ` Alexander Kozyrev
  2022-01-18 15:30   ` [PATCH v2 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:30 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The item template defines common matching fields (the item mask) without
values. The action template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines item and action templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at table creation time.

The flow rule creation is done by selecting a table, an item template
and an action template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 124 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 141 +++++++++++++
 lib/ethdev/rte_flow.h                  | 269 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 585 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 86f8c8bda2..aa9d4e9573 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,130 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, an item template
+and an action template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Item templates
+^^^^^^^^^^^^^^
+
+The item template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on, spec/last are ignored.
+The item template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_item_template *
+	rte_flow_item_template_create(uint16_t port_id,
+				const struct rte_flow_item_template_attr *it_attr,
+				const struct rte_flow_item items[],
+				struct rte_flow_error *error);
+
+For example, to create an item template to match on the destination MAC:
+
+.. code-block:: c
+
+	struct rte_flow_item items[2] = {{0}};
+	struct rte_flow_item_eth eth_m = {0};
+	items[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+	memset(eth_m.dst.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
+	items[0].mask = &eth_m;
+	items[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+	struct rte_flow_item_template *it =
+		rte_flow_item_template_create(port, &itr, items, &error);
+
+The concrete value to match on will be provided at the rule creation.
+
+Action templates
+^^^^^^^^^^^^^^^^
+
+The action template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The action template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_action_template *
+	rte_flow_action_template_create(uint16_t port_id,
+				const struct rte_flow_action_template_attr *at_attr,
+				const struct rte_flow_action actions[],
+				const struct rte_flow_action masks[],
+				struct rte_flow_error *error);
+
+For example, to create an action template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_action actions[] = {
+		/* Mark ID is constant (4) for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action masks[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+
+	struct rte_flow_action_template *at =
+		rte_flow_action_template_create(port, &atr, &actions, &masks, &error);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Flow table
+^^^^^^^^^^
+
+A table combines a number of item and action templates along with shared flow
+rule attributes (group ID, priority and traffic direction). This way a PMD/HW
+can prepare all the resources needed for efficient flow rules creation in
+the datapath. To avoid any hiccups due to memory reallocation, the maximum
+number of flow rules is defined at table creation time. Any flow rule
+creation beyond the maximum table size is rejected. The application may create
+another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+	struct rte_flow_table *
+	rte_flow_table_create(uint16_t port_id,
+				const struct rte_flow_table_attr *table_attr,
+				struct rte_flow_item_template *item_templates[],
+				uint8_t nb_item_templates,
+				struct rte_flow_action_template *action_templates[],
+				uint8_t nb_action_templates,
+				struct rte_flow_error *error);
+
+A table can be created only after the Flow Rules management is configured
+and item and action templates are created.
+
+.. code-block:: c
+
+	rte_flow_configure(port, &port_attr, &error);
+
+	struct rte_flow_item_template *it[1];
+	it[0] = rte_flow_item_template_create(port, &itr, items, &error);
+	struct rte_flow_action_template *at[1];
+	at[0] = rte_flow_action_template_create(port, &atr, actions, masks, &error);
+
+	struct rte_flow_table *table =
+		rte_flow_table_create(port, &table_attr,
+				it, 1,
+				at, 1,
+				&error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 71b3f0a651..af56f54bc4 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -58,6 +58,14 @@ New Features
 * ethdev: Added ``rte_flow_configure`` API to configure the Flow Management
   library, allowing applications to pre-allocate some resources for better performance.
 
+* ethdev: Added ``rte_flow_table_create`` API to group flow rules with
+  the same flow attributes and common matching patterns and actions
+  defined by ``rte_flow_item_template_create`` and
+  ``rte_flow_action_template_create`` respectively.
+  Corresponding functions to destroy these entities are:
+  ``rte_flow_table_destroy``, ``rte_flow_item_template_destroy``
+  and ``rte_flow_action_template_destroy`` respectively.
+
 Removed Items
 -------------
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 5b78780ef9..20613f6bed 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1411,3 +1411,144 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_item_template *
+rte_flow_item_template_create(uint16_t port_id,
+			const struct rte_flow_item_template_attr *it_attr,
+			const struct rte_flow_item items[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_item_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->item_template_create)) {
+		template = ops->item_template_create(dev, it_attr,
+						     items, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_item_template_destroy(uint16_t port_id,
+			struct rte_flow_item_template *it,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->item_template_destroy)) {
+		return flow_err(port_id,
+				ops->item_template_destroy(dev, it, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_template *
+rte_flow_action_template_create(uint16_t port_id,
+			const struct rte_flow_action_template_attr *at_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->action_template_create)) {
+		template = ops->action_template_create(dev, at_attr,
+						       actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_action_template_destroy(uint16_t port_id,
+			struct rte_flow_action_template *at,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->action_template_destroy)) {
+		return flow_err(port_id,
+				ops->action_template_destroy(dev, at, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_table *
+rte_flow_table_create(uint16_t port_id,
+		      const struct rte_flow_table_attr *table_attr,
+		      struct rte_flow_item_template *item_templates[],
+		      uint8_t nb_item_templates,
+		      struct rte_flow_action_template *action_templates[],
+		      uint8_t nb_action_templates,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->table_create)) {
+		table = ops->table_create(dev, table_attr,
+					  item_templates, nb_item_templates,
+					  action_templates, nb_action_templates,
+					  error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_table_destroy(uint16_t port_id,
+		       struct rte_flow_table *table,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->table_destroy)) {
+		return flow_err(port_id,
+				ops->table_destroy(dev, table, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e145e68525..2e54e9d0e3 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4916,6 +4916,275 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of item template.
+ * This handle can be used to manage the created item template.
+ */
+struct rte_flow_item_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow item template attributes.
+ */
+__extension__
+struct rte_flow_item_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Relaxed matching policy, PMD may match only on items
+	 * with mask member set and skip matching on protocol
+	 * layers specified without any masks.
+	 * If not set, PMD will match on protocol layers
+	 * specified without any masks as well.
+	 * Packet data must be stacked in the same order as the
+	 * protocol layers to match inside packets,
+	 * starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create item template.
+ * The item template defines common matching fields (item mask) without values.
+ * For example, to match a 5-tuple TCP flow, the template will be
+ * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ * The number and order of items in the template must be the same
+ * at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] it_attr
+ *   Item template attributes.
+ * @param[in] items
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec and last members of an item are ignored; only the mask is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_item_template *
+rte_flow_item_template_create(uint16_t port_id,
+			const struct rte_flow_item_template_attr *it_attr,
+			const struct rte_flow_item items[],
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy item template.
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] it
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_item_template_destroy(uint16_t port_id,
+			struct rte_flow_item_template *it,
+			struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of action template.
+ * This handle can be used to manage the created action template.
+ */
+struct rte_flow_action_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow action template attributes.
+ */
+struct rte_flow_action_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/* No attributes so far. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create action template.
+ * The action template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ * The number and order of actions in the template must be the same
+ * at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] at_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action's member is constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be the part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type should be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_template *
+rte_flow_action_template_create(uint16_t port_id,
+			const struct rte_flow_action_template_attr *at_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy action template.
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] at
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_action_template_destroy(uint16_t port_id,
+			struct rte_flow_action_template *at,
+			struct rte_flow_error *error);
+
+
+/**
+ * Opaque type returned after successful creation of table.
+ * This handle can be used to manage the created table.
+ */
+struct rte_flow_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_table_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Flow attributes that will be used in the table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create table.
+ * Table is a group of flow rules with the same flow attributes
+ * (group ID, priority and traffic direction) defined for it.
+ * The table holds multiple item and action templates to build a flow rule.
+ * Each rule is free to use any combination of item and action templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Table attributes.
+ * @param[in] item_templates
+ *   Array of item templates to be used in this table.
+ * @param[in] nb_item_templates
+ *   The number of item templates in the item_templates array.
+ * @param[in] action_templates
+ *   Array of action templates to be used in this table.
+ * @param[in] nb_action_templates
+ *   The number of action templates in the action_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_table *
+rte_flow_table_create(uint16_t port_id,
+		      const struct rte_flow_table_attr *table_attr,
+		      struct rte_flow_item_template *item_templates[],
+		      uint8_t nb_item_templates,
+		      struct rte_flow_action_template *action_templates[],
+		      uint8_t nb_action_templates,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy table.
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_table_destroy(uint16_t port_id,
+		       struct rte_flow_table *table,
+		       struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 5f722f1a39..cda021c302 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -157,6 +157,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_item_template_create() */
+	struct rte_flow_item_template *(*item_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_item_template_attr *it_attr,
+		 const struct rte_flow_item items[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_item_template_destroy() */
+	int (*item_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_item_template *it,
+		 struct rte_flow_error *err);
+	/** See rte_flow_action_template_create() */
+	struct rte_flow_action_template *(*action_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_action_template_attr *at_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_action_template_destroy() */
+	int (*action_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action_template *at,
+		 struct rte_flow_error *err);
+	/** See rte_flow_table_create() */
+	struct rte_flow_table *(*table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_table_attr *table_attr,
+		 struct rte_flow_item_template *item_templates[],
+		 uint8_t nb_item_templates,
+		 struct rte_flow_action_template *action_templates[],
+		 uint8_t nb_action_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_table_destroy() */
+	int (*table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_table *table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 7645796739..cfd5e2a3e4 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -259,6 +259,12 @@ EXPERIMENTAL {
 
 	# added in 22.03
 	rte_flow_configure;
+	rte_flow_item_template_create;
+	rte_flow_item_template_destroy;
+	rte_flow_action_template_create;
+	rte_flow_action_template_destroy;
+	rte_flow_table_create;
+	rte_flow_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v2 03/10] ethdev: bring in async queue-based flow rules operations
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-01-18 15:30   ` [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
  2022-01-18 15:30   ` [PATCH v2 02/10] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-01-18 15:30   ` Alexander Kozyrev
  2022-01-18 15:30   ` [PATCH v2 04/10] app/testpmd: implement rte flow configure Alexander Kozyrev
                     ` (8 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:30 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe: all operations on
a given queue must be issued from the same thread. It is the application's
responsibility to synchronize access if multiple threads share a queue.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_q_dequeue() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.
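The user-data correlation between enqueue and dequeue described above can be sketched with a small self-contained mock (illustrative names only; this is not the DPDK implementation, just a model of how an application matches dequeued results to the operations it enqueued):

```c
#include <assert.h>
#include <stdint.h>

enum op_status { OP_PENDING, OP_SUCCESS };

struct op_res {            /* stand-in for rte_flow_q_op_res */
	enum op_status status;
	void *user_data;   /* echoed back to identify the operation */
};

#define QUEUE_SZ 8
static struct op_res queue[QUEUE_SZ];
static uint16_t q_head, q_tail;

/* Enqueue a "flow create": returns immediately, result arrives later. */
static void mock_enqueue(void *user_data)
{
	queue[q_tail % QUEUE_SZ] = (struct op_res){ OP_SUCCESS, user_data };
	q_tail++;
}

/* Dequeue up to n_res completed operations, as rte_flow_q_dequeue() would. */
static uint16_t mock_dequeue(struct op_res res[], uint16_t n_res)
{
	uint16_t n = 0;
	while (q_head != q_tail && n < n_res)
		res[n++] = queue[(q_head++) % QUEUE_SZ];
	return n;
}
```

The application typically passes a pointer to its own rule bookkeeping structure as user data, so a dequeued result directly identifies which rule completed.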

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
 .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
 doc/guides/prog_guide/rte_flow.rst            | 158 ++++++++
 doc/guides/rel_notes/release_22_03.rst        |   9 +
 lib/ethdev/rte_flow.c                         | 173 ++++++++-
 lib/ethdev/rte_flow.h                         | 348 ++++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  61 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 886 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..994e85521c
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
@@ -0,0 +1,71 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485" height="535"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:xlink="http://www.w3.org/1999/xlink"
+   overflow="hidden">
+   <defs>
+      <clipPath id="clip0">
+         <rect x="0" y="0" width="485" height="535"/>
+      </clipPath>
+   </defs>
+   <g clip-path="url(#clip0)">
+      <rect x="0" y="0" width="485" height="535" fill="#FFFFFF"/>
+      <rect x="0.500053" y="79.5001" width="482" height="59"
+         stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8"
+         fill="#A6A6A6"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(121.6 116)">
+         rte_eth_dev_configure
+         <tspan font-size="24" x="224.007" y="0">()</tspan>
+      </text>
+      <rect x="0.500053" y="158.5" width="482" height="59" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(140.273 195)">
+         rte_flow_configure()
+      </text>
+      <rect x="0.500053" y="236.5" width="482" height="60" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(77.4259 274)">
+         rte_flow_item_template_create()
+      </text>
+      <rect x="0.500053" y="316.5" width="482" height="59" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(69.3792 353)">
+         rte_flow_action_template_create
+         <tspan font-size="24" x="328.447" y="0">(</tspan>)
+      </text>
+      <rect x="0.500053" y="0.500053" width="482" height="60" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#A6A6A6"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(177.233 37)">
+         rte_eal_init
+         <tspan font-size="24" x="112.74" y="0">()</tspan>
+      </text>
+      <path d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z" transform="matrix(-1 0 0 1 241 60)"/>
+      <path d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z" transform="matrix(-1 0 0 1 241 138)"/>
+      <path d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z" transform="matrix(-1 0 0 1 241 217)"/>
+      <rect x="0.500053" y="395.5" width="482" height="59" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(124.989 432)">
+         rte_flow_table_create
+         <tspan font-size="24" x="217.227" y="0">(</tspan>
+         <tspan font-size="24" x="224.56" y="0">)</tspan>
+      </text>
+      <path d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z" transform="matrix(-1 0 0 1 241 296)"/>
+      <path d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"/>
+      <rect x="0.500053" y="473.5" width="482" height="60" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#A6A6A6"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(145.303 511)">
+         rte_eth_dev_start()</text>
+      <path d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"/>
+   </g>
+</svg>
\ No newline at end of file
diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..14447ef8eb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
@@ -0,0 +1,60 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880" height="610"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:xlink="http://www.w3.org/1999/xlink"
+   overflow="hidden">
+   <defs>
+      <clipPath id="clip0">
+         <rect x="0" y="0" width="880" height="610"/>
+      </clipPath>
+   </defs>
+   <g clip-path="url(#clip0)">
+      <rect x="0" y="0" width="880" height="610" fill="#FFFFFF"/>
+      <rect x="333.5" y="0.500053" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#A6A6A6"/>
+      <text font-family="Consolas,Consolas_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(357.196 29)">rte_eth_rx_burst()</text>
+      <rect x="333.5" y="63.5001" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(394.666 91)">analyze <tspan font-size="19" x="60.9267" y="0">packet </tspan></text>
+      <rect x="572.5" y="279.5" width="234" height="46" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(591.429 308)">rte_flow_q_flow_create()</text>
+      <path d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9" fill-rule="evenodd"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(430.069 378)">more <tspan font-size="19" x="-12.94" y="23">packets?</tspan></text>
+      <path d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"/>
+      <path d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"/>
+      <path d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"/>
+      <path d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"/>
+      <path d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z" transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"/>
+      <path d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9" fill-rule="evenodd"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(417.576 155)">add new <tspan font-size="19" x="13.2867" y="23">rule?</tspan></text>
+      <path d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"/>
+      <rect x="602.5" y="127.5" width="46" height="30" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(611.34 148)">yes</text>
+      <rect x="254.5" y="126.5" width="46" height="31" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(267.182 147)">no</text>
+      <path d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z" transform="matrix(1 0 0 -1 567.5 383.495)"/>
+      <path d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9" fill-rule="evenodd"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(159.155 208)">destroy the <tspan font-size="19" x="24.0333" y="23">rule?</tspan></text>
+      <path d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z" transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"/>
+      <rect x="81.5001" y="280.5" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(96.2282 308)">rte_flow_q_flow_destroy()</text>
+      <path d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z" transform="matrix(-1 0 0 1 319.915 213.5)"/>
+      <path d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"/>
+      <path d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z" transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"/>
+      <rect x="334.5" y="540.5" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(365.083 569)">rte_flow_q_dequeue<tspan font-size="19" x="160.227" y="0">()</tspan></text>
+      <rect x="334.5" y="462.5" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(379.19 491)">rte_flow_q<tspan font-size="19" x="83.56" y="0">_drain</tspan>()</text>
+      <path d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"/>
+      <rect x="0.500053" y="287.5" width="46" height="30" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(12.8617 308)">no</text>
+      <rect x="357.5" y="223.5" width="47" height="31" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(367.001 244)">yes</text>
+      <rect x="469.5" y="421.5" width="46" height="30" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(481.872 442)">no</text>
+      <rect x="832.5" y="223.5" width="46" height="31" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(841.777 244)">yes</text>
+   </g>
+</svg>
\ No newline at end of file
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index aa9d4e9573..b004811a20 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,18 +3607,22 @@ Hints about the expected number of counters or meters in an application,
 for example, allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rules operations via
+queue-based API, see `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                      const struct rte_flow_port_attr *port_attr,
+                     const struct rte_flow_queue_attr *queue_attr[],
                      struct rte_flow_error *error);
 
 Arguments:
 
 - ``port_id``: port identifier of Ethernet device.
 - ``port_attr``: port attributes for flow management library.
+- ``queue_attr``: queue attributes for asynchronous operations.
 - ``error``: perform verbose error reporting if not NULL. PMDs initialize
   this structure in case of error only.
 
@@ -3750,6 +3754,160 @@ and item and action templates are created.
 				*at, nb_action_templates,
 				*error);
 
+Asynchronous operations
+-----------------------
+
+Flow rule creation/destruction can be done by using lockless flow queues.
+An application configures the number of queues during the initialization stage.
+Create/destroy operations are then enqueued asynchronously without any locks:
+packet processing can continue with the next packets while insertion or
+destruction of a flow rule is processed inside the hardware.
+The application is expected to poll for results later to learn whether the
+flow rule was successfully inserted or destroyed.
+User data is returned as part of the result to identify the enqueued operation.
+Polling must be done periodically, before the queue overflows.
+Operations can be reordered inside a queue, so the result of a rule creation
+must be polled before the destroy operation for that rule is enqueued.
+A flow handle is valid once the create operation is enqueued, and it must be
+destroyed even if the operation fails and the rule is not inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:
+
+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_q_flow_create(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow_table *table,
+				const struct rte_flow_item items[],
+				uint8_t item_template_index,
+				const struct rte_flow_action actions[],
+				uint8_t action_template_index,
+				struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_flow_destroy(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow *flow,
+				struct rte_flow_error *error);
+
+Drain a queue
+~~~~~~~~~~~~~
+
+Function to drain the queue and push all internally stored rules to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_drain(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_error *error);
+
+The queue operation attributes include a drain flag.
+When it is set, the requested operation must be sent to the HW without delay.
+Otherwise, multiple operations can be bulked together and not sent to the HW
+right away, to save SW/HW interactions and prioritize throughput over latency.
+In the latter case, the application must invoke this function to actually
+push all outstanding operations to the HW.
+
+Dequeue operations
+~~~~~~~~~~~~~~~~~~
+
+Dequeues completed rte_flow operations.
+
+The application must invoke this function in order to complete an asynchronous
+flow rule operation and to receive its status.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_dequeue(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_q_op_res res[],
+			uint16_t n_res,
+			struct rte_flow_error *error);
+
+Multiple outstanding operations can be dequeued simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
+
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_q_action_handle_create(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			const struct rte_flow_indir_action_conf *indir_action_conf,
+			const struct rte_flow_action *action,
+			struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_update(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			const void *update,
+			struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
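The drain/bulking trade-off documented above can be modeled with a minimal standalone mock (plain C with made-up names, not DPDK code): operations are buffered to amortize SW/HW interactions and are pushed either when a burst fills up, when the per-operation drain attribute is set, or on an explicit queue drain.

```c
#include <assert.h>

#define BURST 4
static int pending;       /* ops buffered, not yet visible to HW */
static int pushed_to_hw;  /* ops the mock "hardware" has received */

static void flush(void)
{
	pushed_to_hw += pending;
	pending = 0;
}

/* Enqueue one op; `drain` mimics the per-operation drain attribute. */
static void enqueue_op(int drain)
{
	pending++;
	if (drain || pending == BURST)
		flush();
}

/* Explicit flush, as rte_flow_q_drain() would do for the whole queue. */
static void queue_drain(void)
{
	flush();
}
```

Batching several operations per doorbell favors throughput; setting the drain attribute on a latency-sensitive operation forces it out immediately.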
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index af56f54bc4..7ccac912a3 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -66,6 +66,15 @@ New Features
   ``rte_flow_table_destroy``, ``rte_flow_item_template_destroy``
   and ``rte_flow_action_template_destroy`` respectively.
 
+* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API
+  to enqueue flow creation/destruction operations asynchronously, as well as
+  ``rte_flow_q_dequeue`` to poll results of these operations and
+  ``rte_flow_q_drain`` to drain the flow queue and pass all operations to NIC.
+  Introduced asynchronous API for indirect actions management as well:
+  ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` and
+  ``rte_flow_q_action_handle_update``.
+
+
 Removed Items
 -------------
 
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 20613f6bed..6da899c5df 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1395,6 +1395,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1404,7 +1405,8 @@ rte_flow_configure(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
 		return flow_err(port_id,
-				ops->configure(dev, port_attr, error),
+				ops->configure(dev, port_attr,
+					       queue_attr, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1552,3 +1554,172 @@ rte_flow_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_table *table,
+		       const struct rte_flow_item items[],
+		       uint8_t item_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t action_template_index,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->q_flow_create)) {
+		flow = ops->q_flow_create(dev, queue_id, q_ops_attr, table,
+					  items, item_template_index,
+					  actions, action_template_index,
+					  error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_flow_destroy)) {
+		return flow_err(port_id,
+				ops->q_flow_destroy(dev, queue_id,
+						    q_ops_attr, flow, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->q_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
+					     indir_action_conf, action, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
+					   action_handle, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
+					  action_handle, update, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_drain(uint16_t port_id,
+		 uint32_t queue_id,
+		 struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_drain)) {
+		return flow_err(port_id,
+				ops->q_drain(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_q_dequeue(uint16_t port_id,
+		   uint32_t queue_id,
+		   struct rte_flow_q_op_res res[],
+		   uint16_t n_res,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_dequeue)) {
+		ret = ops->q_dequeue(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 2e54e9d0e3..07193090f2 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4865,6 +4865,13 @@ struct rte_flow_port_attr {
 	 * Version of the struct layout, should be 0.
 	 */
 	uint32_t version;
+	/**
+	 * Number of flow queues to be configured.
+	 * Flow queues are used for asynchronous flow rule operations.
+	 * The order of operations is not guaranteed inside a queue.
+	 * Flow queues are not thread-safe.
+	 */
+	uint16_t nb_queues;
 	/**
 	 * Number of counter actions pre-configured.
 	 * If set to 0, PMD will allocate counters dynamically.
@@ -4885,6 +4892,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meters;
 };
 
+/**
+ * Flow engine queue configuration.
+ */
+__extension__
+struct rte_flow_queue_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4903,6 +4925,9 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements is set in @p port_attr.nb_queues.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4914,6 +4939,7 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5185,6 +5211,328 @@ rte_flow_table_destroy(uint16_t port_id,
 		       struct rte_flow_table *table,
 		       struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+struct rte_flow_q_ops_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+	/**
+	 * When set, the requested operation must be sent to the HW without
+	 * any delay. Any prior requests must also be sent to the HW.
+	 * If this bit is cleared, the application must call
+	 * rte_flow_q_drain() to actually send the request to the HW.
+	 */
+	uint32_t drain:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] table
+ *   Table to select templates from.
+ * @param[in] items
+ *   List of pattern items to be used.
+ *   The list order should match the order in the item template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] item_template_index
+ *   Item template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the action template.
+ * @param[in] action_template_index
+ *   Action template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   A valid handle doesn't mean that the rule was offloaded.
+ *   Only the completion result indicates that the rule was offloaded.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_table *table,
+		       const struct rte_flow_item items[],
+		       uint8_t item_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t action_template_index,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by *action_handle* was not found.
+ *   - (-EBUSY) if the action pointed to by *action_handle* is still in use
+ *     by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *action_handle*. It can be of the same type as the immediate action used
+ *   when the handle was created, or a wrapper structure that includes the
+ *   action configuration to be updated and bit fields indicating which
+ *   fields inside the action to update.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by *action_handle* was not found.
+ *   - (-EBUSY) if the action pointed to by *action_handle* is still in use
+ *     by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Drain the queue and push all internally stored rules to the HW.
+ * Non-drained rules are rules that were inserted without the drain flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW
+ * to reduce the number of communications between the SW and the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be drained.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *    0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_drain(uint16_t port_id,
+		 uint32_t queue_id,
+		 struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue operation result.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that was supplied with the enqueued operation.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue rte flow operation results.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to receive the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to dequeue the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were dequeued,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_dequeue(uint16_t port_id,
+		   uint32_t queue_id,
+		   struct rte_flow_q_op_res res[],
+		   uint16_t n_res,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index cda021c302..d1cfdd2d75 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,6 +156,7 @@ struct rte_flow_ops {
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_item_template_create() */
 	struct rte_flow_item_template *(*item_template_create)
@@ -194,6 +195,66 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_table *table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_create() */
+	struct rte_flow *(*q_flow_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_table *table,
+		 const struct rte_flow_item items[],
+		 uint8_t item_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t action_template_index,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_destroy() */
+	int (*q_flow_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_update() */
+	int (*q_flow_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_create() */
+	struct rte_flow_action_handle *(*q_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_destroy() */
+	int (*q_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_action_handle_update() */
+	int (*q_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_drain() */
+	int (*q_drain)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_dequeue() */
+	int (*q_dequeue)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index cfd5e2a3e4..d705e36c90 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -265,6 +265,13 @@ EXPERIMENTAL {
 	rte_flow_action_template_destroy;
 	rte_flow_table_create;
 	rte_flow_table_destroy;
+	rte_flow_q_flow_create;
+	rte_flow_q_flow_destroy;
+	rte_flow_q_action_handle_create;
+	rte_flow_q_action_handle_destroy;
+	rte_flow_q_action_handle_update;
+	rte_flow_q_drain;
+	rte_flow_q_dequeue;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v2 04/10] app/testpmd: implement rte flow configure
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (2 preceding siblings ...)
  2022-01-18 15:30   ` [PATCH v2 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-01-18 15:30   ` Alexander Kozyrev
  2022-01-18 15:33   ` [v2,05/10] app/testpmd: implement rte flow item/action template Alexander Kozyrev
                     ` (7 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:30 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for the Flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 109 +++++++++++++++++++-
 app/test-pmd/config.c                       |  29 ++++++
 app/test-pmd/testpmd.h                      |   5 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  34 +++++-
 4 files changed, 174 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5c2bba48ad..ea4af8dd45 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,7 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +123,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -846,6 +854,10 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -927,6 +939,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1962,6 +1984,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2187,7 +2212,8 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2202,6 +2228,56 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow rules",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_queues)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_COUNTERS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7465,6 +7541,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8691,6 +8794,10 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case CONFIGURE:
+		port_flow_configure(in->port, &in->args.configure.port_attr,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 1722d6c8f8..85d31de7f7 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1595,6 +1595,35 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[port_attr->nb_queues];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = port_attr->nb_queues;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < port_attr->nb_queues; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, port_attr->nb_queues, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..ce80a00193 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,9 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 94792d88cc..8af28bd3b3 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3285,8 +3285,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3309,6 +3309,14 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Configure flow management::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3368,6 +3376,28 @@ following sections.
 
    flow tunnel list {port_id}
 
+Configuring flow management library
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the needed resources in the underlying
+device to be used later at flow creation. Flow queues are allocated as well
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [v2,05/10] app/testpmd: implement rte flow item/action template
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (3 preceding siblings ...)
  2022-01-18 15:30   ` [PATCH v2 04/10] app/testpmd: implement rte flow configure Alexander Kozyrev
@ 2022-01-18 15:33   ` Alexander Kozyrev
  2022-01-18 15:34   ` [v2,06/10] app/testpmd: implement rte flow table Alexander Kozyrev
                     ` (6 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:33 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_item_template and
rte_flow_action_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
  testpmd> flow item_template 0 create item_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow action_template 0 create action_template_id 4
           template drop / end mask drop / end
  testpmd> flow action_template 0 destroy action_template 4
  testpmd> flow item_template 0 destroy item_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 376 +++++++++++++++++++-
 app/test-pmd/config.c                       | 204 +++++++++++
 app/test-pmd/testpmd.h                      |  22 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  97 +++++
 4 files changed, 697 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index ea4af8dd45..fb27a97855 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_ITEM_TEMPLATE_ID,
+	COMMON_ACTION_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -73,6 +75,8 @@ enum index {
 	FLOW,
 	/* Sub-level commands. */
 	CONFIGURE,
+	ITEM_TEMPLATE,
+	ACTION_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -91,6 +95,22 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Item template arguments. */
+	ITEM_TEMPLATE_CREATE,
+	ITEM_TEMPLATE_DESTROY,
+	ITEM_TEMPLATE_CREATE_ID,
+	ITEM_TEMPLATE_DESTROY_ID,
+	ITEM_TEMPLATE_RELAXED_MATCHING,
+	ITEM_TEMPLATE_SPEC,
+
+	/* Action template arguments. */
+	ACTION_TEMPLATE_CREATE,
+	ACTION_TEMPLATE_DESTROY,
+	ACTION_TEMPLATE_CREATE_ID,
+	ACTION_TEMPLATE_DESTROY_ID,
+	ACTION_TEMPLATE_SPEC,
+	ACTION_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -858,6 +878,10 @@ struct buffer {
 			struct rte_flow_port_attr port_attr;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -866,10 +890,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t it_id;
+			uint32_t at_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -949,6 +976,43 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_it_subcmd[] = {
+	ITEM_TEMPLATE_CREATE,
+	ITEM_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_it_attr[] = {
+	ITEM_TEMPLATE_CREATE_ID,
+	ITEM_TEMPLATE_RELAXED_MATCHING,
+	ITEM_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_it_destroy_attr[] = {
+	ITEM_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTION_TEMPLATE_CREATE,
+	ACTION_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTION_TEMPLATE_CREATE_ID,
+	ACTION_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTION_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1987,6 +2051,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2056,6 +2126,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_item_template_id(struct context *, const struct token *,
+				 unsigned int, char *, unsigned int);
+static int comp_action_template_id(struct context *, const struct token *,
+				   unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2206,6 +2280,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_ITEM_TEMPLATE_ID] = {
+		.name = "{item_template_id}",
+		.type = "ITEM_TEMPLATE_ID",
+		.help = "item template id",
+		.call = parse_int,
+		.comp = comp_item_template_id,
+	},
+	[COMMON_ACTION_TEMPLATE_ID] = {
+		.name = "{action_template_id}",
+		.type = "ACTION_TEMPLATE_ID",
+		.help = "action template id",
+		.call = parse_int,
+		.comp = comp_action_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2213,6 +2301,8 @@ static const struct token token_list[] = {
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
 			     (CONFIGURE,
+			      ITEM_TEMPLATE,
+			      ACTION_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2278,6 +2368,112 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[ITEM_TEMPLATE] = {
+		.name = "item_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage item templates",
+		.next = NEXT(next_it_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ITEM_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create item template",
+		.next = NEXT(next_it_attr),
+		.call = parse_template,
+	},
+	[ITEM_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy item template",
+		.next = NEXT(NEXT_ENTRY(ITEM_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Item arguments. */
+	[ITEM_TEMPLATE_CREATE_ID] = {
+		.name = "item_template_id",
+		.help = "specify an item template id to create",
+		.next = NEXT(next_it_attr,
+			     NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.it_id)),
+	},
+	[ITEM_TEMPLATE_DESTROY_ID] = {
+		.name = "item_template",
+		.help = "specify an item template id to destroy",
+		.next = NEXT(next_it_destroy_attr,
+			     NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ITEM_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "specify whether matching is relaxed",
+		.next = NEXT(next_it_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[ITEM_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create item template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTION_TEMPLATE] = {
+		.name = "action_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage action templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTION_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create action template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTION_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy action template",
+		.next = NEXT(NEXT_ENTRY(ACTION_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Action arguments. */
+	[ACTION_TEMPLATE_CREATE_ID] = {
+		.name = "action_template_id",
+		.help = "specify an action template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTION_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTION_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.at_id)),
+	},
+	[ACTION_TEMPLATE_DESTROY_ID] = {
+		.name = "action_template",
+		.help = "specify an action template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTION_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create action template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTION_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create action template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2600,7 +2796,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5704,7 +5900,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != ITEM_TEMPLATE_CREATE &&
+		    ctx->curr != ACTION_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7568,6 +7766,114 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != ITEM_TEMPLATE &&
+		    ctx->curr != ACTION_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case ITEM_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.it_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTION_TEMPLATE_CREATE:
+		out->args.vc.at_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTION_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTION_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == ITEM_TEMPLATE ||
+		out->command == ACTION_TEMPLATE) {
+		if (ctx->curr != ITEM_TEMPLATE_DESTROY &&
+			ctx->curr != ACTION_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
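The destroy parser above accumulates a variable number of template IDs in the spare space that follows the fixed-size command buffer, aligned to `sizeof(double)`. A minimal standalone sketch of that trailing-array pattern (hypothetical `cmd`/`append_id` names, not the testpmd types, and a slightly stricter bounds check than the code above):

```c
#include <assert.h>
#include <stdint.h>

#define ALIGN_CEIL(v, a) ((((v) + (a) - 1) / (a)) * (a))

struct cmd {
	uint32_t *id;  /* points into the trailing area after the struct */
	uint32_t id_n; /* number of IDs appended so far */
};

/*
 * Append one ID into the spare space after the structure, the way the
 * testpmd parsers reuse the tail of their command buffer.
 * Returns 0 on success, -1 when the buffer is full.
 */
static int
append_id(struct cmd *out, unsigned int size, uint32_t id)
{
	uint32_t *slot;

	if (!out->id)
		out->id = (void *)ALIGN_CEIL((uintptr_t)(out + 1),
					     sizeof(double));
	slot = out->id + out->id_n;
	if ((uint8_t *)(slot + 1) > (uint8_t *)out + size)
		return -1;
	*slot = id;
	out->id_n++;
	return 0;
}
```

This lets one allocation hold both the command header and an open-ended ID list, at the cost of the caller passing the total buffer size on every append.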
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8535,6 +8841,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available item template IDs. */
+static int
+comp_item_template_id(struct context *ctx, const struct token *token,
+		      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->item_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available action template IDs. */
+static int
+comp_action_template_id(struct context *ctx, const struct token *token,
+			unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->action_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -8798,6 +9152,24 @@ cmd_flow_parsed(const struct buffer *in)
 		port_flow_configure(in->port, &in->args.configure.port_attr,
 				    &in->args.configure.queue_attr);
 		break;
+	case ITEM_TEMPLATE_CREATE:
+		port_flow_item_template_create(in->port, in->args.vc.it_id,
+				in->args.vc.attr.reserved, in->args.vc.pattern);
+		break;
+	case ITEM_TEMPLATE_DESTROY:
+		port_flow_item_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTION_TEMPLATE_CREATE:
+		port_flow_action_template_create(in->port, in->args.vc.at_id,
+				in->args.vc.actions, in->args.vc.masks);
+		break;
+	case ACTION_TEMPLATE_DESTROY:
+		port_flow_action_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 85d31de7f7..80678d851f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1595,6 +1595,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
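The helper above keeps each template list sorted by descending ID, so automatic allocation can take the head's ID plus one in O(1). A standalone sketch of the same scheme (hypothetical `node`/`id_alloc` names, not the testpmd code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct node {
	struct node *next;
	uint32_t id;
};

/*
 * Insert a node with the given ID, or with the first free ID when
 * id == UINT32_MAX, into a list kept sorted by descending ID so the
 * head always holds the highest ID in use.
 * Returns the assigned ID, or UINT32_MAX on conflict or failure.
 */
static uint32_t
id_alloc(struct node **list, uint32_t id)
{
	struct node **pp = list;
	struct node *n;

	if (id == UINT32_MAX) {
		if (*list && (*list)->id == UINT32_MAX - 1)
			return UINT32_MAX; /* highest ID already taken */
		id = *list ? (*list)->id + 1 : 0;
	}
	while (*pp && (*pp)->id > id)
		pp = &(*pp)->next;
	if (*pp && (*pp)->id == id)
		return UINT32_MAX; /* duplicate ID */
	n = calloc(1, sizeof(*n));
	if (!n)
		return UINT32_MAX;
	n->id = id;
	n->next = *pp;
	*pp = n;
	return id;
}
```

The descending order makes the duplicate check and explicit-ID insertion a single linear walk, while the common auto-ID case stays a constant-time prepend at the head.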
+
 /** Configure flow management resources. */
 int
 port_flow_configure(portid_t port_id,
@@ -2039,6 +2082,167 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create item template. */
+int
+port_flow_item_template_create(portid_t port_id, uint32_t id, bool relaxed,
+			       const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_item_template_attr attr = {
+					.relaxed_matching = relaxed };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->item_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.itempl = rte_flow_item_template_create(port_id,
+						&attr, pattern, &error);
+	if (!pit->template.itempl) {
+		uint32_t destroy_id = pit->id;
+		port_flow_item_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Item template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy item template. */
+int
+port_flow_item_template_destroy(portid_t port_id, uint32_t n,
+				const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->item_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.itempl &&
+			    rte_flow_item_template_destroy(port_id,
+							   pit->template.itempl,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Item template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create action template. */
+int
+port_flow_action_template_create(portid_t port_id, uint32_t id,
+				 const struct rte_flow_action *actions,
+				 const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_action_template_attr attr = { 0 };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->action_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.atempl = rte_flow_action_template_create(port_id,
+						&attr, actions, masks, &error);
+	if (!pat->template.atempl) {
+		uint32_t destroy_id = pat->id;
+		port_flow_action_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Action template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy action template. */
+int
+port_flow_action_template_destroy(portid_t port_id, uint32_t n,
+				  const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->action_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.atempl &&
+			    rte_flow_action_template_destroy(port_id,
+					pat->template.atempl, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Action template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ce80a00193..4befa6d7a4 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_item_template *itempl;
+		struct rte_flow_action_template *atempl;
+	} template; /**< PMD opaque template object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *item_templ_list; /**< Item templates. */
+	struct port_template    *action_templ_list; /**< Action templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -890,6 +903,15 @@ int port_action_handle_update(portid_t port_id, uint32_t id,
 int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_item_template_create(portid_t port_id, uint32_t id, bool relaxed,
+				   const struct rte_flow_item *pattern);
+int port_flow_item_template_destroy(portid_t port_id, uint32_t n,
+				    const uint32_t *template);
+int port_flow_action_template_create(portid_t port_id, uint32_t id,
+				     const struct rte_flow_action *actions,
+				     const struct rte_flow_action *masks);
+int port_flow_action_template_destroy(portid_t port_id, uint32_t n,
+				      const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 8af28bd3b3..d23cfa6572 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3317,6 +3317,24 @@ following sections.
        [aging_counters_number {number}]
        [meters_number {number}]
 
+- Create an item template::
+
+   flow item_template {port_id} create [item_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+- Destroy an item template::
+
+   flow item_template {port_id} destroy item_template {id} [...]
+
+- Create an action template::
+
+   flow action_template {port_id} create [action_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an action template::
+
+   flow action_template {port_id} destroy action_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3398,6 +3416,85 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating item templates
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow item_template create`` creates the specified item template.
+It is bound to ``rte_flow_item_template_create()``::
+
+   flow item_template {port_id} create [item_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Item template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``;
+their format is described in `Creating flow rules`_.
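As an illustration, a hypothetical session creating a template that matches Ethernet over IPv4 (IDs and items chosen arbitrarily) could look like:

```
testpmd> flow item_template 0 create item_template_id 2 template eth / ipv4 / end
Item template #2 created
```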
+
+Destroying item templates
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow item_template destroy`` destroys one or more item templates
+from their template ID (as returned by ``flow item_template create``),
+this command calls ``rte_flow_item_template_destroy()`` as many
+times as necessary::
+
+   flow item_template {port_id} destroy item_template {id} [...]
+
+If successful, it will show::
+
+   Item template #[...] destroyed
+
+It does not report anything for item template IDs that do not exist.
+The usual error message is shown when an item template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating action templates
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow action_template create`` creates the specified action template.
+It is bound to ``rte_flow_action_template_create()``::
+
+   flow action_template {port_id} create [action_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Action template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``;
+their format is described in `Creating flow rules`_.
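Similarly, a hypothetical session creating an action template whose queue index is meant to be filled in per flow rule (mask bits set, IDs chosen arbitrarily) might be:

```
testpmd> flow action_template 0 create action_template_id 4 template queue index 0 / end mask queue index 0xffff / end
Action template #4 created
```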
+
+Destroying action templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow action_template destroy`` destroys one or more action templates
+from their template ID (as returned by ``flow action_template create``),
+this command calls ``rte_flow_action_template_destroy()`` as many
+times as necessary::
+
+   flow action_template {port_id} destroy action_template {id} [...]
+
+If successful, it will show::
+
+   Action template #[...] destroyed
+
+It does not report anything for action template IDs that do not exist.
+The usual error message is shown when an action template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [v2,06/10] app/testpmd: implement rte flow table
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (4 preceding siblings ...)
  2022-01-18 15:33   ` [v2,05/10] app/testpmd: implement rte flow item/action template Alexander Kozyrev
@ 2022-01-18 15:34   ` Alexander Kozyrev
  2022-01-18 15:35   ` [v2,07/10] app/testpmd: implement rte flow queue create flow Alexander Kozyrev
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:34 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_table API.
Provide a command-line interface for flow
table creation and destruction. Usage example:
  testpmd> flow table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 item_template 2 action_template 4
  testpmd> flow table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 168 +++++++++++
 app/test-pmd/testpmd.h                      |  15 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 551 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb27a97855..4dc2a2aaeb 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_ITEM_TEMPLATE_ID,
 	COMMON_ACTION_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -77,6 +78,7 @@ enum index {
 	CONFIGURE,
 	ITEM_TEMPLATE,
 	ACTION_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -111,6 +113,20 @@ enum index {
 	ACTION_TEMPLATE_SPEC,
 	ACTION_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_ITEM_TEMPLATE,
+	TABLE_ACTION_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -882,6 +898,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_table_attr attr;
+			uint32_t *item_id;
+			uint32_t item_id_n;
+			uint32_t *action_id;
+			uint32_t action_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1013,6 +1041,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_ITEM_TEMPLATE,
+	TABLE_ACTION_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2057,6 +2111,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2130,6 +2189,8 @@ static int comp_item_template_id(struct context *, const struct token *,
 				 unsigned int, char *, unsigned int);
 static int comp_action_template_id(struct context *, const struct token *,
 				   unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2294,6 +2355,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_action_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2303,6 +2371,7 @@ static const struct token token_list[] = {
 			     (CONFIGURE,
 			      ITEM_TEMPLATE,
 			      ACTION_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2474,6 +2543,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_ITEM_TEMPLATE] = {
+		.name = "item_template",
+		.help = "specify item template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.item_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTION_TEMPLATE] = {
+		.name = "action_template",
+		.help = "specify action template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.action_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7874,6 +8041,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table commands. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_ITEM_TEMPLATE:
+		out->args.table.item_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.item_id
+				+ out->args.table.item_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTION_TEMPLATE:
+		out->args.table.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.item_id +
+						out->args.table.item_id_n),
+					       sizeof(double));
+		template_id = out->args.table.action_id
+				+ out->args.table.action_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8889,6 +9169,30 @@ comp_action_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9170,6 +9474,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.item_id_n,
+			in->args.table.item_id, in->args.table.action_id_n,
+			in->args.table.action_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 80678d851f..07582fa552 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1638,6 +1638,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Configure flow management resources. */
 int
 port_flow_configure(portid_t port_id,
@@ -2243,6 +2286,131 @@ port_flow_action_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_table_create(portid_t port_id, uint32_t id,
+		       const struct rte_flow_table_attr *table_attr,
+		       uint32_t nb_item_templates, uint32_t *item_templates,
+		       uint32_t nb_action_templates, uint32_t *action_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_item_template
+			*flow_item_templates[nb_item_templates];
+	struct rte_flow_action_template
+			*flow_action_templates[nb_action_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_item_templates; ++i) {
+		bool found = false;
+		temp = port->item_templ_list;
+		while (temp) {
+			if (item_templates[i] == temp->id) {
+				flow_item_templates[i] = temp->template.itempl;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Item template #%u is invalid\n",
+			       item_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_action_templates; ++i) {
+		bool found = false;
+		temp = port->action_templ_list;
+		while (temp) {
+			if (action_templates[i] == temp->id) {
+				flow_action_templates[i] =
+					temp->template.atempl;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Action template #%u is invalid\n",
+			       action_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_table_create(port_id, table_attr,
+		      flow_item_templates, nb_item_templates,
+		      flow_action_templates, nb_action_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_table_destroy(portid_t port_id,
+			uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_table_destroy(port_id,
+						   pt->table,
+						   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 4befa6d7a4..b8655b9987 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,14 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	struct rte_flow_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +267,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *item_templ_list; /**< Item templates. */
 	struct port_template    *action_templ_list; /**< Action templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -912,6 +921,12 @@ int port_flow_action_template_create(portid_t port_id, uint32_t id,
 				     const struct rte_flow_action *masks);
 int port_flow_action_template_destroy(portid_t port_id, uint32_t n,
 				      const uint32_t *template);
+int port_flow_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_table_attr *table_attr,
+		   uint32_t nb_item_templates, uint32_t *item_templates,
+		   uint32_t nb_action_templates, uint32_t *action_templates);
+int port_flow_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index d23cfa6572..f8a87564be 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3335,6 +3335,19 @@ following sections.
 
    flow action_template {port_id} destroy action_template {id} [...]
 
+- Create a table::
+
+   flow table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       item_template {item_template_id}
+       action_template {action_template_id}
+
+- Destroy a table::
+
+   flow table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3495,6 +3508,46 @@ The usual error message is shown when an item template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating flow table
+~~~~~~~~~~~~~~~~~~~
+
+``flow table create`` creates the specified flow table.
+It is bound to ``rte_flow_table_create()``::
+
+   flow table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       item_template {item_template_id}
+       action_template {action_template_id}
+
+If successful, it will show::
+
+   Table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying flow table
+~~~~~~~~~~~~~~~~~~~~~
+
+``flow table destroy`` destroys one or more flow tables
+given their table IDs (as returned by ``flow table create``).
+This command calls ``rte_flow_table_destroy()`` as many
+times as necessary::
+
+   flow table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [v2,07/10] app/testpmd: implement rte flow queue create flow
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (5 preceding siblings ...)
  2022-01-18 15:34   ` [v2,06/10] app/testpmd: implement rte flow table Alexander Kozyrev
@ 2022-01-18 15:35   ` Alexander Kozyrev
  2022-01-18 15:35   ` [v2,08/10] app/testpmd: implement rte flow queue drain Alexander Kozyrev
                     ` (4 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:35 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow creation/destruction
operations. Usage example:
  testpmd> flow queue 0 create 0 drain yes table 6
           item_template 0 action_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 drain yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 266 +++++++++++++++++++-
 app/test-pmd/config.c                       | 153 +++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  55 ++++
 4 files changed, 480 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4dc2a2aaeb..6a8e6fc683 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_ITEM_TEMPLATE_ID,
 	COMMON_ACTION_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -91,6 +92,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -113,6 +115,22 @@ enum index {
 	ACTION_TEMPLATE_SPEC,
 	ACTION_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_DRAIN,
+	QUEUE_TABLE,
+	QUEUE_ITEM_TEMPLATE,
+	QUEUE_ACTION_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_DRAIN,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -889,6 +907,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool drain; /**< Drain the queue on async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -918,6 +938,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t it_id;
 			uint32_t at_id;
 			struct rte_flow_attr attr;
@@ -1067,6 +1088,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2116,6 +2149,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2191,6 +2230,8 @@ static int comp_action_template_id(struct context *, const struct token *,
 				   unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2362,6 +2403,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2383,7 +2431,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2641,6 +2690,83 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TABLE), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TABLE] = {
+		.name = "table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ITEM_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ITEM_TEMPLATE] = {
+		.name = "item_template",
+		.help = "specify item template id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTION_TEMPLATE),
+			     NEXT_ENTRY(COMMON_ITEM_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.it_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTION_TEMPLATE] = {
+		.name = "action_template",
+		.help = "specify action template id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_DRAIN),
+			     NEXT_ENTRY(COMMON_ACTION_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.at_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_DRAIN] = {
+		.name = "drain",
+		.help = "drain queue immediately",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, drain)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_DRAIN] = {
+		.name = "drain",
+		.help = "drain queue immediately",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, drain)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8154,6 +8280,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TABLE:
+	case QUEUE_ITEM_TEMPLATE:
+	case QUEUE_ACTION_TEMPLATE:
+	case QUEUE_CREATE_DRAIN:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_DRAIN:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9193,6 +9424,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9485,6 +9738,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->drain,
+				       in->args.vc.table_id, in->args.vc.it_id,
+				       in->args.vc.at_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->drain,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 07582fa552..31164d6bf6 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2411,6 +2411,159 @@ port_flow_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue create flow rule operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool drain, uint32_t table_id,
+		       uint32_t item_id, uint32_t action_id,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_q_ops_attr ops_attr = { .drain = drain };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error;
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr,
+		pt->table, pattern, item_id, actions, action_id, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_q_dequeue(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to poll queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue number of destroy flow rules operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool drain, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_q_ops_attr op_attr = { .drain = drain };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMD
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr,
+						    pf->flow, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMD
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_q_dequeue(port_id, queue_id,
+							 &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to poll queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index b8655b9987..99845b9e2f 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -927,6 +927,13 @@ int port_flow_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_action_templates, uint32_t *action_templates);
 int port_flow_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool drain, uint32_t table_id,
+			   uint32_t item_id, uint32_t action_id,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool drain, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index f8a87564be..eb9dff7221 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3355,6 +3355,19 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id} [drain {boolean}]
+       table {table_id} item_template {item_template_id}
+       action_template {action_template_id}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [drain {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3654,6 +3667,29 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_q_flow_create()``::
+
+   flow queue {port_id} create {queue_id} [drain {boolean}]
+       table {table_id} item_template {item_template_id}
+       action_template {action_template_id}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will show a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4368,6 +4404,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` enqueues destruction operations for one or more rules
+given their rule IDs (as returned by ``flow queue create``).
+This command calls ``rte_flow_q_flow_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [drain {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [v2,08/10] app/testpmd: implement rte flow queue drain
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (6 preceding siblings ...)
  2022-01-18 15:35   ` [v2,07/10] app/testpmd: implement rte flow queue create flow Alexander Kozyrev
@ 2022-01-18 15:35   ` Alexander Kozyrev
  2022-01-18 15:36   ` [v2,09/10] app/testpmd: implement rte flow queue dequeue Alexander Kozyrev
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:35 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_drain API.
Provide the command line interface for draining a queue.
Usage example: flow queue 0 drain 0
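
The drain semantics proposed in this series (operations stay buffered in the
queue until a drained enqueue or an explicit ``rte_flow_q_drain()`` call
pushes the whole batch to the device) can be sketched in plain C. This is an
illustrative model only; ``op_queue``, ``enqueue_op`` and ``drain_queue`` are
hypothetical names, not DPDK API:

```c
/*
 * Illustrative model of the proposed drain semantics: operations
 * enqueued with drain == 0 are buffered; a drained enqueue or an
 * explicit drain call pushes the whole batch to the device at once.
 * All names here are hypothetical stand-ins, not DPDK API.
 */
#define QUEUE_SZ 8

struct op_queue {
	int pending;   /* operations buffered, not yet pushed to HW */
	int submitted; /* operations pushed to HW */
};

/* Enqueue one operation; push the whole batch if `drain` is set. */
static int enqueue_op(struct op_queue *q, int drain)
{
	if (q->pending >= QUEUE_SZ)
		return -1; /* queue full: drain or dequeue results first */
	q->pending++;
	if (drain) {
		q->submitted += q->pending;
		q->pending = 0;
	}
	return 0;
}

/* Model of an explicit drain: flush everything buffered so far. */
static void drain_queue(struct op_queue *q)
{
	q->submitted += q->pending;
	q->pending = 0;
}
```

Batching several operations and draining once amortizes the per-operation
doorbell cost, which is the motivation for the drain flag in this series.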

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 6a8e6fc683..e94c01cf75 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -93,6 +93,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	DRAIN,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -131,6 +132,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_DRAIN,
 
+	/* Drain arguments. */
+	DRAIN_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2155,6 +2159,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_drain(struct context *, const struct token *,
+		       const char *, unsigned int,
+		       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2432,7 +2439,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      DRAIN)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2767,6 +2775,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[DRAIN] = {
+		.name = "drain",
+		.help = "drain a flow queue",
+		.next = NEXT(NEXT_ENTRY(DRAIN_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_drain,
+	},
+	/* Sub-level commands. */
+	[DRAIN_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8385,6 +8408,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for drain queue command. */
+static int
+parse_drain(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != DRAIN)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9749,6 +9800,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case DRAIN:
+		port_queue_flow_drain(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 31164d6bf6..c6469dd06f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2564,6 +2564,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Drain all the queue operations down the queue. */
+int
+port_queue_flow_drain(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_q_drain(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to drain queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u drained\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 99845b9e2f..bf4597e7ba 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -934,6 +934,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool drain, uint32_t n, const uint32_t *rule);
+int port_queue_flow_drain(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index eb9dff7221..2ff4e4aef1 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3368,6 +3368,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [drain {boolean}] rule {rule_id} [...]
 
+- Drain a queue::
+
+   flow drain {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3561,6 +3565,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Draining a flow queue
+~~~~~~~~~~~~~~~~~~~~~
+
+``flow drain`` drains the specified queue, immediately pushing all
+outstanding queued operations to the underlying device.
+It is bound to ``rte_flow_q_drain()``::
+
+   flow drain {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] drained
+
+The usual error message is shown when a queue cannot be drained::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [v2,09/10] app/testpmd: implement rte flow queue dequeue
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (7 preceding siblings ...)
  2022-01-18 15:35   ` [v2,08/10] app/testpmd: implement rte flow queue drain Alexander Kozyrev
@ 2022-01-18 15:36   ` Alexander Kozyrev
  2022-01-18 15:37   ` [v2,10/10] app/testpmd: implement rte flow queue indirect action Alexander Kozyrev
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:36 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_dequeue API.
Provide the command line interface for dequeueing operation results.
Usage example: flow dequeue 0 queue 0
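
The success/failure accounting that this patch adds to
``port_queue_flow_dequeue()`` can be reduced to a small standalone sketch.
``op_res`` and ``count_success`` are hypothetical stand-ins for
``rte_flow_q_op_res`` and the inline counting loop:

```c
#include <stddef.h>

/*
 * Sketch of the accounting done after rte_flow_q_dequeue(): the PMD
 * fills an array of per-operation results and the caller tallies the
 * outcomes. Names here are illustrative stand-ins, not DPDK API.
 */
enum op_status {
	OP_SUCCESS,
	OP_ERROR,
};

struct op_res {
	enum op_status status; /* outcome of one dequeued operation */
};

/* Count how many of the `n` dequeued results succeeded. */
static int count_success(const struct op_res *res, size_t n)
{
	int success = 0;
	size_t i;

	for (i = 0; i < n; i++)
		if (res[i].status == OP_SUCCESS)
			success++;
	return success;
}
```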

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 54 +++++++++++++++
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 126 insertions(+), 28 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index e94c01cf75..507eb87984 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -93,6 +93,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	DEQUEUE,
 	DRAIN,
 
 	/* Flex arguments */
@@ -132,6 +133,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_DRAIN,
 
+	/* Dequeue arguments. */
+	DEQUEUE_QUEUE,
+
 	/* Drain arguments. */
 	DRAIN_QUEUE,
 
@@ -2159,6 +2163,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_dequeue(struct context *, const struct token *,
+			 const char *, unsigned int,
+			 void *, unsigned int);
 static int parse_drain(struct context *, const struct token *,
 		       const char *, unsigned int,
 		       void *, unsigned int);
@@ -2440,6 +2447,7 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
+			      DEQUEUE,
 			      DRAIN)),
 		.call = parse_init,
 	},
@@ -2775,6 +2783,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[DEQUEUE] = {
+		.name = "dequeue",
+		.help = "dequeue flow operations",
+		.next = NEXT(NEXT_ENTRY(DEQUEUE_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_dequeue,
+	},
+	/* Sub-level commands. */
+	[DEQUEUE_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[DRAIN] = {
 		.name = "drain",
 		.help = "drain a flow queue",
@@ -8408,6 +8431,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for dequeue command. */
+static int
+parse_dequeue(struct context *ctx, const struct token *token,
+	      const char *str, unsigned int len,
+	      void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != DEQUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 /** Parse tokens for drain queue command. */
 static int
 parse_drain(struct context *ctx, const struct token *token,
@@ -9800,6 +9851,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case DEQUEUE:
+		port_queue_flow_dequeue(in->port, in->queue);
+		break;
 	case DRAIN:
 		port_queue_flow_drain(in->port, in->queue);
 		break;
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index c6469dd06f..5d23edf562 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2420,14 +2420,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_q_ops_attr ops_attr = { .drain = drain };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error;
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2477,16 +2475,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_q_dequeue(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to poll queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2501,7 +2489,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool drain, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_q_ops_attr op_attr = { .drain = drain };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2537,21 +2524,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_q_dequeue(port_id, queue_id,
-							 &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to poll queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2592,6 +2564,52 @@ port_queue_flow_drain(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Dequeue a queue operation from the queue. */
+int
+port_queue_flow_dequeue(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = malloc(sizeof(struct rte_flow_q_op_res) * port->queue_sz);
+	if (!res) {
+		printf("Failed to allocate memory for dequeue results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_q_dequeue(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to dequeue a queue\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u dequeued %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index bf4597e7ba..3cf336dbae 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -935,6 +935,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool drain, uint32_t n, const uint32_t *rule);
 int port_queue_flow_drain(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_dequeue(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2ff4e4aef1..fff4de8f00 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3372,6 +3372,10 @@ following sections.
 
    flow drain {port_id} queue {queue_id}
 
+- Dequeue all operations from a queue::
+
+   flow dequeue {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3582,6 +3586,23 @@ The usual error message is shown when a queue cannot be drained::
 
    Caught error type [...] ([...]): [...]
 
+Dequeueing flow operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow dequeue`` queries the underlying device for the results of flow
+queue operations and returns all processed (successful or not) operations.
+It is bound to ``rte_flow_q_dequeue()``::
+
+   flow dequeue {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] dequeued [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations cannot be dequeued::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3711,6 +3732,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``;
 their format is described in `Creating flow rules`_.
 
+``flow queue dequeue`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4444,6 +4467,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow queue dequeue`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [v2,10/10] app/testpmd: implement rte flow queue indirect action
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (8 preceding siblings ...)
  2022-01-18 15:36   ` [v2,09/10] app/testpmd: implement rte flow queue dequeue Alexander Kozyrev
@ 2022-01-18 15:37   ` Alexander Kozyrev
  2022-01-19  7:16   ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Suanming Mou
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-18 15:37 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_action_handle API.
Provide the command line interface for enqueueing indirect
action creation, update and destruction operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress drain yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9
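
The destroy path in this patch walks the per-port action list with a
pointer-to-pointer, so matching nodes can be unlinked without tracking a
separate "previous" pointer. A minimal standalone sketch of that pattern
(``ia`` and ``destroy_by_id`` are hypothetical stand-ins for
``port_indirect_action`` and ``port_queue_action_handle_destroy()``):

```c
#include <stdlib.h>

/*
 * Sketch of the pointer-to-pointer list walk used by the destroy
 * handler: nodes whose id matches are unlinked in place. Names are
 * illustrative stand-ins, not testpmd API.
 */
struct ia {
	unsigned int id;
	struct ia *next;
};

/* Unlink and free every node whose id appears in ids[]; return the count. */
static int destroy_by_id(struct ia **head, const unsigned int *ids, int n)
{
	int removed = 0;
	struct ia **tmp = head;

	while (*tmp) {
		int i;

		for (i = 0; i < n; i++) {
			if ((*tmp)->id == ids[i]) {
				struct ia *victim = *tmp;

				*tmp = victim->next; /* unlink in place */
				free(victim);
				removed++;
				break;
			}
		}
		if (i == n) /* no match: advance to the next link */
			tmp = &(*tmp)->next;
	}
	return removed;
}
```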

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 507eb87984..50b6424933 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -120,6 +120,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -133,6 +134,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_DRAIN,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_DRAIN,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_DRAIN,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_DRAIN,
+
 	/* Dequeue arguments. */
 	DEQUEUE_QUEUE,
 
@@ -1099,6 +1120,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1108,6 +1130,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_DRAIN,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_DRAIN,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_DRAIN,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2163,6 +2215,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_dequeue(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
@@ -2729,6 +2787,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TABLE] = {
 		.name = "table",
@@ -2782,6 +2847,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_DRAIN] = {
+		.name = "drain",
+		.help = "drain operation immediately",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, drain)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_DRAIN] = {
+		.name = "drain",
+		.help = "drain operation immediately",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, drain)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_DRAIN] = {
+		.name = "drain",
+		.help = "drain operation immediately",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, drain)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[DEQUEUE] = {
 		.name = "dequeue",
@@ -6181,6 +6330,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_DRAIN:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_DRAIN:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -9857,6 +10110,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case DRAIN:
 		port_queue_flow_drain(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->drain,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->drain,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->drain,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 5d23edf562..634174eec6 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2536,6 +2536,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool drain, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .drain = drain};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
+						      conf, action, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 drain, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool drain,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .drain = drain};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_q_action_handle_destroy(port_id, queue_id,
+						&attr, pia->handle, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool drain, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .drain = drain};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_q_action_handle_update(port_id, queue_id, &attr,
+					    action_handle, action, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Drain all the queue operations down the queue. */
 int
 port_queue_flow_drain(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 3cf336dbae..eeaf1864cd 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -934,6 +934,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool drain, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool drain, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool drain,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool drain, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_drain(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_dequeue(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index fff4de8f00..dfb81d56d8 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4728,6 +4728,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to ``rte_flow_q_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+      [drain {boolean}]
+      action_id {indirect_action_id}
+      ingress | egress | transfer
+      action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue dequeue`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4757,6 +4782,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+indirect action to a queue. It is bound to ``rte_flow_q_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [drain {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue dequeue`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4780,6 +4824,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds a destruction operation for
+one or more indirect actions to a queue, given their indirect action IDs
+(as returned by ``flow queue {port_id} indirect_action {queue_id} create``).
+It is bound to ``rte_flow_q_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [drain {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue dequeue`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 00/10] ethdev: datapath-focused flow rules management
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (9 preceding siblings ...)
  2022-01-18 15:37   ` [v2,10/10] app/testpmd: implement rte flow queue indirect action Alexander Kozyrev
@ 2022-01-19  7:16   ` Suanming Mou
  2022-01-24 15:10     ` Ori Kam
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
  11 siblings, 1 reply; 220+ messages in thread
From: Suanming Mou @ 2022-01-19  7:16 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde



> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Tuesday, January 18, 2022 11:30 PM
> To: dev@dpdk.org
> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
> andrew.rybchenko@oktetlabs.ru; ferruh.yigit@intel.com;
> mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com; jerinj@marvell.com;
> ajit.khaparde@broadcom.com
> Subject: [PATCH v2 00/10] ethdev: datapath-focused flow rules management
> 
> Three major changes to a generic RTE Flow API were implemented in order to
> speed up flow rule insertion/destruction and adapt the API to the needs of a
> datapath-focused flow rules management applications:
> 
> 1. Pre-configuration hints.
> Application may give us some hints on what type of resources are needed.
> Introduce the configuration routine to prepare all the needed resources inside a
> PMD/HW before any flow rules are created at the init stage.
> 
> 2. Flow grouping using templates.
> Use the knowledge about which flow rules are to be used in an application and
> prepare item and action templates for them in advance. Group flow rules with
> common patterns and actions together for better resource management.
> 
> 3. Queue-based flow management.
> Perform flow rule insertion/destruction asynchronously to spare the datapath
> from blocking on RTE Flow API and allow it to continue with packet processing.
> Enqueue flow rules operations and poll for the results later.
> 
> testpmd examples are part of the patch series. PMD changes will follow.
> 
> RFC:
> https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-
> akozyrev@nvidia.com/
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
> 
> ---
> v2: fixed patch series thread
> 
> Alexander Kozyrev (10):
>   ethdev: introduce flow pre-configuration hints
>   ethdev: add flow item/action templates
>   ethdev: bring in async queue-based flow rules operations
>   app/testpmd: implement rte flow configure
>   app/testpmd: implement rte flow item/action template
>   app/testpmd: implement rte flow table
>   app/testpmd: implement rte flow queue create flow
>   app/testpmd: implement rte flow queue drain
>   app/testpmd: implement rte flow queue dequeue
>   app/testpmd: implement rte flow queue indirect action
> 
>  app/test-pmd/cmdline_flow.c                   | 1484 ++++++++++++++++-
>  app/test-pmd/config.c                         |  731 ++++++++
>  app/test-pmd/testpmd.h                        |   61 +
>  doc/guides/prog_guide/img/rte_flow_q_init.svg |   71 +
>  .../prog_guide/img/rte_flow_q_usage.svg       |   60 +
>  doc/guides/prog_guide/rte_flow.rst            |  319 ++++
>  doc/guides/rel_notes/release_22_03.rst        |   19 +
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  350 +++-
>  lib/ethdev/rte_flow.c                         |  332 ++++
>  lib/ethdev/rte_flow.h                         |  680 ++++++++
>  lib/ethdev/rte_flow_driver.h                  |  103 ++
>  lib/ethdev/version.map                        |   16 +
>  12 files changed, 4203 insertions(+), 23 deletions(-)  create mode 100644
> doc/guides/prog_guide/img/rte_flow_q_init.svg
>  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
> 
> --
> 2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-18 15:30   ` [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-01-24 14:36     ` Jerin Jacob
  2022-01-24 17:35       ` Thomas Monjalon
  2022-01-24 17:40       ` Ajit Khaparde
  0 siblings, 2 replies; 220+ messages in thread
From: Jerin Jacob @ 2022-01-24 14:36 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Ori Kam, Thomas Monjalon, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde

On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
>
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> function allows to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones or
> fail in case the requested configuration is not supported.
>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Flow engine port configuration attributes.
> + */
> +__extension__

Is this __extension__ required ?


> +struct rte_flow_port_attr {
> +       /**
> +        * Version of the struct layout, should be 0.
> +        */
> +       uint32_t version;

Why a version number? Across DPDK we are using dynamic function
versioning; I think that would be sufficient for ABI versioning.

> +       /**
> +        * Number of counter actions pre-configured.
> +        * If set to 0, PMD will allocate counters dynamically.
> +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> +        */
> +       uint32_t nb_counters;
> +       /**
> +        * Number of aging actions pre-configured.
> +        * If set to 0, PMD will allocate aging dynamically.
> +        * @see RTE_FLOW_ACTION_TYPE_AGE
> +        */
> +       uint32_t nb_aging;
> +       /**
> +        * Number of traffic metering actions pre-configured.
> +        * If set to 0, PMD will allocate meters dynamically.
> +        * @see RTE_FLOW_ACTION_TYPE_METER
> +        */
> +       uint32_t nb_meters;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Configure flow rules module.
> + * To pre-allocate resources as per the flow port attributes
> + * this configuration function must be called before any flow rule is created.
> + * Must be called only after Ethernet device is configured, but may be called
> + * before or after the device is started as long as there are no flow rules.
> + * No other rte_flow function should be called while this function is invoked.
> + * This function can be called again to change the configuration.
> + * Some PMDs may not support re-configuration at all,
> + * or may only allow increasing the number of resources allocated.

The following comment from Ivan looks good to me:

* Pre-configure the port's flow API engine.
*
* This API can only be invoked before the application
* starts using the rest of the flow library functions.
*
* The API can be invoked multiple times to change the
* settings. The port, however, may reject the changes.

> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] port_attr
> + *   Port configuration attributes.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_configure(uint16_t port_id,

Should we couple setting resource limit hints to the configure function?
If we add more items to the configuration in the future, it may become
painful to manage all that state. Instead, how about
rte_flow_resource_reserve_hint_set()?


> +                  const struct rte_flow_port_attr *port_attr,
> +                  struct rte_flow_error *error);

I think we should have a _get function to retrieve those limit numbers;
otherwise we cannot write portable applications, since the return value
is effectively boolean unless exact rte_errno values are defined for
each failure reason.



> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> index f691b04af4..5f722f1a39 100644
> --- a/lib/ethdev/rte_flow_driver.h
> +++ b/lib/ethdev/rte_flow_driver.h
> @@ -152,6 +152,11 @@ struct rte_flow_ops {
>                 (struct rte_eth_dev *dev,
>                  const struct rte_flow_item_flex_handle *handle,
>                  struct rte_flow_error *error);
> +       /** See rte_flow_configure() */
> +       int (*configure)
> +               (struct rte_eth_dev *dev,
> +                const struct rte_flow_port_attr *port_attr,
> +                struct rte_flow_error *err);
>  };
>
>  /**
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index c2fb0669a4..7645796739 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -256,6 +256,9 @@ EXPERIMENTAL {
>         rte_flow_flex_item_create;
>         rte_flow_flex_item_release;
>         rte_flow_pick_transfer_proxy;
> +
> +       # added in 22.03
> +       rte_flow_configure;
>  };
>
>  INTERNAL {
> --
> 2.18.2
>

^ permalink raw reply	[flat|nested] 220+ messages in thread
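Based on the rte_flow_configure() declaration and rte_flow_port_attr layout
quoted above, an init-stage call might look like the sketch below. The stub
implementation and all field values are illustrative only; in a real
application the declarations come from <rte_flow.h> and the PMD provides
rte_flow_configure().

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Minimal stand-ins for the structures declared in this patch, so the
 * sketch is self-contained. Do not take these as the real definitions.
 */
struct rte_flow_error { int type; const char *message; };
struct rte_flow_port_attr {
	uint32_t version;
	uint32_t nb_counters;
	uint32_t nb_aging;
	uint32_t nb_meters;
};

/* Stub mimicking a PMD that accepts any pre-allocation request. */
static int
rte_flow_configure(uint16_t port_id, const struct rte_flow_port_attr *attr,
		   struct rte_flow_error *error)
{
	(void)port_id;
	(void)error;
	return attr == NULL ? -1 : 0;
}

/* Init-stage usage: hint the resource counts before any rule exists. */
static int
setup_flow_engine(uint16_t port_id)
{
	struct rte_flow_error error;
	const struct rte_flow_port_attr attr = {
		.version = 0,
		.nb_counters = 1 << 16, /* expected number of COUNT actions */
		.nb_aging = 1 << 10,    /* expected number of AGE actions */
		.nb_meters = 1 << 8,    /* expected number of METER actions */
	};

	return rte_flow_configure(port_id, &attr, &error);
}
```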

* RE: [PATCH v2 00/10] ethdev: datapath-focused flow rules management
  2022-01-19  7:16   ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Suanming Mou
@ 2022-01-24 15:10     ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-01-24 15:10 UTC (permalink / raw)
  To: Suanming Mou, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alex,

> -----Original Message-----
> From: Suanming Mou <suanmingm@nvidia.com>
> Subject: RE: [PATCH v2 00/10] ethdev: datapath-focused flow rules management
> 
> 
> 
> > -----Original Message-----
> > From: Alexander Kozyrev <akozyrev@nvidia.com>
> > Sent: Tuesday, January 18, 2022 11:30 PM
> > To: dev@dpdk.org
> > Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> > <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
> > andrew.rybchenko@oktetlabs.ru; ferruh.yigit@intel.com;
> > mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com; jerinj@marvell.com;
> > ajit.khaparde@broadcom.com
> > Subject: [PATCH v2 00/10] ethdev: datapath-focused flow rules management
> >
> > Three major changes to a generic RTE Flow API were implemented in order to
> > speed up flow rule insertion/destruction and adapt the API to the needs of a
> > datapath-focused flow rules management applications:
> >
> > 1. Pre-configuration hints.
> > Application may give us some hints on what type of resources are needed.
> > Introduce the configuration routine to prepare all the needed resources inside a
> > PMD/HW before any flow rules are created at the init stage.
> >
> > 2. Flow grouping using templates.
> > Use the knowledge about which flow rules are to be used in an application and
> > prepare item and action templates for them in advance. Group flow rules with
> > common patterns and actions together for better resource management.
> >
> > 3. Queue-based flow management.
> > Perform flow rule insertion/destruction asynchronously to spare the datapath
> > from blocking on RTE Flow API and allow it to continue with packet processing.
> > Enqueue flow rules operations and poll for the results later.
> >
> > testpmd examples are part of the patch series. PMD changes will follow.
> >
> > RFC:
> > https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-
> > akozyrev@nvidia.com/
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Reviewed-by: Suanming Mou <suanmingm@nvidia.com>
> >
> > ---
> > v2: fixed patch series thread
> >
> > Alexander Kozyrev (10):
> >   ethdev: introduce flow pre-configuration hints
> >   ethdev: add flow item/action templates
> >   ethdev: bring in async queue-based flow rules operations
> >   app/testpmd: implement rte flow configure
> >   app/testpmd: implement rte flow item/action template
> >   app/testpmd: implement rte flow table
> >   app/testpmd: implement rte flow queue create flow
> >   app/testpmd: implement rte flow queue drain
> >   app/testpmd: implement rte flow queue dequeue
> >   app/testpmd: implement rte flow queue indirect action
> >
> >  app/test-pmd/cmdline_flow.c                   | 1484 ++++++++++++++++-
> >  app/test-pmd/config.c                         |  731 ++++++++
> >  app/test-pmd/testpmd.h                        |   61 +
> >  doc/guides/prog_guide/img/rte_flow_q_init.svg |   71 +
> >  .../prog_guide/img/rte_flow_q_usage.svg       |   60 +
> >  doc/guides/prog_guide/rte_flow.rst            |  319 ++++
> >  doc/guides/rel_notes/release_22_03.rst        |   19 +
> >  doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  350 +++-
> >  lib/ethdev/rte_flow.c                         |  332 ++++
> >  lib/ethdev/rte_flow.h                         |  680 ++++++++
> >  lib/ethdev/rte_flow_driver.h                  |  103 ++
> >  lib/ethdev/version.map                        |   16 +
> >  12 files changed, 4203 insertions(+), 23 deletions(-)  create mode 100644
> > doc/guides/prog_guide/img/rte_flow_q_init.svg
> >  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
> >
> > --
> > 2.18.2
Series-acked-by:  Ori Kam <orika@nvidia.com>

Thanks,
Ori



^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-24 14:36     ` Jerin Jacob
@ 2022-01-24 17:35       ` Thomas Monjalon
  2022-01-24 17:46         ` Jerin Jacob
  2022-01-24 17:40       ` Ajit Khaparde
  1 sibling, 1 reply; 220+ messages in thread
From: Thomas Monjalon @ 2022-01-24 17:35 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Alexander Kozyrev, dev, Ori Kam, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde, bruce.richardson, david.marchand, olivier.matz,
	stephen

24/01/2022 15:36, Jerin Jacob:
> On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > +struct rte_flow_port_attr {
> > +       /**
> > +        * Version of the struct layout, should be 0.
> > +        */
> > +       uint32_t version;
> 
> Why version number? Across DPDK, we are using dynamic function
> versioning, I think, that would be sufficient for ABI versioning

Function versioning is not ideal when the structure is accessed
in many places like many drivers and library functions.

The idea of this version field (which can be a bitfield)
is to update it when some new features are added,
so the users of the struct can check if a feature is there
before trying to use it.
It means a bit more code in the functions, but avoids duplicating
functions as in function versioning.

Another approach was suggested by Bruce, and applied to dmadev.
It is assuming we only add new fields at the end (no removal),
and focus on the size of the struct.
By passing sizeof as an extra parameter, the function knows
which fields are OK to use.
Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
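A minimal sketch of the version/bitfield check described here; the
feature-bit names and struct are illustrative, not the proposed API:

```c
#include <stdint.h>

/* Hypothetical feature bits advertised through a version bitfield. */
#define ATTR_FEATURE_COUNTERS (UINT32_C(1) << 0)
#define ATTR_FEATURE_METERS   (UINT32_C(1) << 1)

struct attr {
	uint32_t version; /* bitfield of features present in this layout */
	uint32_t nb_counters;
	uint32_t nb_meters;
};

/* The consumer checks a feature bit before touching the field it guards. */
static uint32_t
effective_meters(const struct attr *a)
{
	if (a->version & ATTR_FEATURE_METERS)
		return a->nb_meters;
	return 0; /* field not valid in this layout revision */
}
```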



^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-24 14:36     ` Jerin Jacob
  2022-01-24 17:35       ` Thomas Monjalon
@ 2022-01-24 17:40       ` Ajit Khaparde
  2022-01-25  1:28         ` Alexander Kozyrev
  1 sibling, 1 reply; 220+ messages in thread
From: Ajit Khaparde @ 2022-01-24 17:40 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Alexander Kozyrev, dpdk-dev, Ori Kam, Thomas Monjalon,
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob

On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> >
> > The flow rules creation/destruction at a large scale incurs a performance
> > penalty and may negatively impact the packet processing when used
> > as part of the datapath logic. This is mainly because software/hardware
> > resources are allocated and prepared during the flow rule creation.
> >
> > In order to optimize the insertion rate, PMD may use some hints provided
> > by the application at the initialization phase. The rte_flow_configure()
> > function allows to pre-allocate all the needed resources beforehand.
> > These resources can be used at a later stage without costly allocations.
> > Every PMD may use only the subset of hints and ignore unused ones or
> > fail in case the requested configuration is not supported.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > ---
>
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Flow engine port configuration attributes.
> > + */
> > +__extension__
>
> Is this __extension__ required ?
>
>
> > +struct rte_flow_port_attr {
> > +       /**
> > +        * Version of the struct layout, should be 0.
> > +        */
> > +       uint32_t version;
>
> Why version number? Across DPDK, we are using dynamic function
> versioning, I think, that would
>  be sufficient for ABI versioning
>
> > +       /**
> > +        * Number of counter actions pre-configured.
> > +        * If set to 0, PMD will allocate counters dynamically.
> > +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> > +        */
> > +       uint32_t nb_counters;
> > +       /**
> > +        * Number of aging actions pre-configured.
> > +        * If set to 0, PMD will allocate aging dynamically.
> > +        * @see RTE_FLOW_ACTION_TYPE_AGE
> > +        */
> > +       uint32_t nb_aging;
> > +       /**
> > +        * Number of traffic metering actions pre-configured.
> > +        * If set to 0, PMD will allocate meters dynamically.
> > +        * @see RTE_FLOW_ACTION_TYPE_METER
> > +        */
> > +       uint32_t nb_meters;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Configure flow rules module.
> > + * To pre-allocate resources as per the flow port attributes
> > + * this configuration function must be called before any flow rule is created.
> > + * Must be called only after Ethernet device is configured, but may be called
> > + * before or after the device is started as long as there are no flow rules.
> > + * No other rte_flow function should be called while this function is invoked.
> > + * This function can be called again to change the configuration.
> > + * Some PMDs may not support re-configuration at all,
> > + * or may only allow increasing the number of resources allocated.
>
> Following comment from Ivan looks good to me
>
> * Pre-configure the port's flow API engine.
> *
> * This API can only be invoked before the application
> * starts using the rest of the flow library functions.
> *
> * The API can be invoked multiple times to change the
> * settings. The port, however, may reject the changes.
>
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] port_attr
> > + *   Port configuration attributes.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_configure(uint16_t port_id,
>
> Should we couple, setting resource limit hint to configure function as
> if we add future items in
> configuration, we may pain to manage all state. Instead how about,
> rte_flow_resource_reserve_hint_set()?
+1

>
>
> > +                  const struct rte_flow_port_attr *port_attr,
> > +                  struct rte_flow_error *error);
>
> I think, we should have _get function to get those limit numbers otherwise,
> we can not write portable applications as the return value is  kind of
> boolean now if
> don't define exact values for rte_errno for reasons.
+1

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-24 17:35       ` Thomas Monjalon
@ 2022-01-24 17:46         ` Jerin Jacob
  2022-01-24 18:08           ` Bruce Richardson
  0 siblings, 1 reply; 220+ messages in thread
From: Jerin Jacob @ 2022-01-24 17:46 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Alexander Kozyrev, dpdk-dev, Ori Kam, Ivan Malov,
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde, Richardson, Bruce, David Marchand,
	Olivier Matz, Stephen Hemminger

On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 24/01/2022 15:36, Jerin Jacob:
> > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > +struct rte_flow_port_attr {
> > > +       /**
> > > +        * Version of the struct layout, should be 0.
> > > +        */
> > > +       uint32_t version;
> >
> > Why version number? Across DPDK, we are using dynamic function
> > versioning, I think, that would be sufficient for ABI versioning
>
> Function versioning is not ideal when the structure is accessed
> in many places like many drivers and library functions.
>
> The idea of this version field (which can be a bitfield)
> is to update it when some new features are added,
> so the users of the struct can check if a feature is there
> before trying to use it.
> It means a bit more code in the functions, but avoid duplicating functions
> as in function versioning.
>
> Another approach was suggested by Bruce, and applied to dmadev.
> It is assuming we only add new fields at the end (no removal),
> and focus on the size of the struct.
> By passing sizeof as an extra parameter, the function knows
> which fields are OK to use.
> Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476

+ @Richardson, Bruce
Either approach is fine; no strong opinion. We can have one approach
and use it across DPDK for consistency.

>
>

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-24 17:46         ` Jerin Jacob
@ 2022-01-24 18:08           ` Bruce Richardson
  2022-01-25  1:14             ` Alexander Kozyrev
  2022-01-25 15:58             ` Ori Kam
  0 siblings, 2 replies; 220+ messages in thread
From: Bruce Richardson @ 2022-01-24 18:08 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Thomas Monjalon, Alexander Kozyrev, dpdk-dev, Ori Kam,
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob, Ajit Khaparde, David Marchand,
	Olivier Matz, Stephen Hemminger

On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 24/01/2022 15:36, Jerin Jacob:
> > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > > +struct rte_flow_port_attr {
> > > > +       /**
> > > > +        * Version of the struct layout, should be 0.
> > > > +        */
> > > > +       uint32_t version;
> > >
> > > Why version number? Across DPDK, we are using dynamic function
> > > versioning, I think, that would be sufficient for ABI versioning
> >
> > Function versioning is not ideal when the structure is accessed
> > in many places like many drivers and library functions.
> >
> > The idea of this version field (which can be a bitfield)
> > is to update it when some new features are added,
> > so the users of the struct can check if a feature is there
> > before trying to use it.
> > It means a bit more code in the functions, but avoid duplicating functions
> > as in function versioning.
> >
> > Another approach was suggested by Bruce, and applied to dmadev.
> > It is assuming we only add new fields at the end (no removal),
> > and focus on the size of the struct.
> > By passing sizeof as an extra parameter, the function knows
> > which fields are OK to use.
> > Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> 
> + @Richardson, Bruce
> Either approach is fine, No strong opinion.  We can have one approach
> and use it across DPDK for consistency.
> 

In general I prefer the size-based approach, mainly because of its
simplicity. However, some other reasons why we may want to choose it:

* It's completely hidden from the end user, and there is no need for an
  extra struct field that needs to be filled in

* Related to that, for the version-field approach, if the field is present
  in a user-allocated struct, then you probably need to start preventing user
  error via:
   - having the external struct not have the field and use a separate
     internal struct to add in the version info after the fact in the
     versioned function. Alternatively,
   - provide a separate init function for each structure to fill in the
     version field appropriately

* In general, using the size-based approach like in the linked example is
  more resilient since it's compiler-inserted, so there is reduced chance
  of error.

* A sizeof field allows simple-enough handling in the drivers - especially
  since it does not allow removed fields. Each driver only needs to check
  that the size passed in is greater than that expected, thereby allowing
  us to have both updated and non-updated drivers co-existing simultaneously.
  [For a version field, the same scheme could also work if we keep the
  no-delete rule, but for a bitmask field, I believe things may get more
  complex in terms of checking]

In terms of the limitations of using sizeof - requiring new fields to
always go on the end, and preventing shrinking the struct - I think that the
  simplicity gains far outweigh the impact of these restrictions.

* Adding fields to struct is far more common than wanting to remove one

* So long as the added field is at the end, even if the struct size doesn't
  change the scheme can still work as the versioned function for the old
  struct can ensure that the extra field is appropriately zeroed (rather than
  random) on entry into any driver function

* If we do want to remove a field, the space simply needs to be marked as
  reserved in the struct, until the next ABI break release, when it can be
  compacted. Again, function versioning can take care of appropriately
  zeroing this field on return, if necessary.
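The size-based scheme above can be sketched as follows; the struct and
function names are hypothetical, not the actual dmadev code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* v1 of a user-visible attr struct: one field. */
struct attr_v1 {
	uint32_t nb_counters;
};

/* v2 appends a field at the end; existing fields keep their offsets. */
struct attr_v2 {
	uint32_t nb_counters;
	uint32_t nb_meters; /* new in v2 */
};

/*
 * The driver always works with the newest layout. It copies only
 * attr_size bytes from the caller and zero-fills the rest, so an
 * application compiled against v1 still works with a v2 driver.
 */
static int
configure(const void *attr, size_t attr_size, struct attr_v2 *out)
{
	if (attr == NULL || attr_size < sizeof(struct attr_v1))
		return -1; /* too small to be any known layout */
	memset(out, 0, sizeof(*out));
	memcpy(out, attr,
	       attr_size < sizeof(*out) ? attr_size : sizeof(*out));
	return 0;
}
```

The caller passes `sizeof(struct attr_v1)` implicitly via a versioned
wrapper, so the compiler inserts the size and the user cannot get it wrong.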

My 2c from considering this for the implementation in dmadev. :-)

/Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-24 18:08           ` Bruce Richardson
@ 2022-01-25  1:14             ` Alexander Kozyrev
  2022-01-25 15:58             ` Ori Kam
  1 sibling, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-25  1:14 UTC (permalink / raw)
  To: Bruce Richardson, Jerin Jacob
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	dpdk-dev, Ori Kam, Ivan Malov, Andrew Rybchenko, Ferruh Yigit,
	mohammad.abdul.awal, Qi Zhang, Jerin Jacob, Ajit Khaparde,
	David Marchand, Olivier Matz, Stephen Hemminger

On Monday, January 24, 2022 13:09 Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> <thomas@monjalon.net> wrote:
> > >
> > > 24/01/2022 15:36, Jerin Jacob:
> > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> <akozyrev@nvidia.com> wrote:
> > > > > +struct rte_flow_port_attr {
> > > > > +       /**
> > > > > +        * Version of the struct layout, should be 0.
> > > > > +        */
> > > > > +       uint32_t version;
> > > >
> > > > Why version number? Across DPDK, we are using dynamic function
> > > > versioning, I think, that would be sufficient for ABI versioning
> > >
> > > Function versioning is not ideal when the structure is accessed
> > > in many places like many drivers and library functions.
> > >
> > > The idea of this version field (which can be a bitfield)
> > > is to update it when some new features are added,
> > > so the users of the struct can check if a feature is there
> > > before trying to use it.
> > > It means a bit more code in the functions, but avoid duplicating functions
> > > as in function versioning.
> > >
> > > Another approach was suggested by Bruce, and applied to dmadev.
> > > It is assuming we only add new fields at the end (no removal),
> > > and focus on the size of the struct.
> > > By passing sizeof as an extra parameter, the function knows
> > > which fields are OK to use.
> > > Example:
> http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> >
> > + @Richardson, Bruce
> > Either approach is fine, No strong opinion.  We can have one approach
> > and use it across DPDK for consistency.
> >
> 
> In general I prefer the size-based approach, mainly because of its
> simplicity. However, some other reasons why we may want to choose it:
> 
> * It's completely hidden from the end user, and there is no need for an
>   extra struct field that needs to be filled in
> 
> * Related to that, for the version-field approach, if the field is present
>   in a user-allocated struct, then you probably need to start preventing user
>   error via:
>    - having the external struct not have the field and use a separate
>      internal struct to add in the version info after the fact in the
>      versioned function. Alternatively,
>    - provide a separate init function for each structure to fill in the
>      version field appropriately
> 
> * In general, using the size-based approach like in the linked example is
>   more resilient since it's compiler-inserted, so there is reduced chance
>   of error.
> 
> * A sizeof field allows simple-enough handling in the drivers - especially
>   since it does not allow removed fields. Each driver only needs to check
>   that the size passed in is greater than that expected, thereby allowing
>   us to have both updated and non-updated drivers co-existing
> simultaneously.
>   [For a version field, the same scheme could also work if we keep the
>   no-delete rule, but for a bitmask field, I believe things may get more
>   complex in terms of checking]
> 
> In terms of the limitations of using sizeof - requiring new fields to
> always go on the end, and preventing shrinking the struct - I think that the
> simplicity gains far outweigh the impact of these strictions.
> 
> * Adding fields to struct is far more common than wanting to remove one
> 
> * So long as the added field is at the end, even if the struct size doesn't
>   change the scheme can still work as the versioned function for the old
>   struct can ensure that the extra field is appropriately zeroed (rather than
>   random) on entry into any driver function
> 
> * If we do want to remove a field, the space simply needs to be marked as
>   reserved in the struct, until the next ABI break release, when it can be
>   compacted. Again, function versioning can take care of appropriately
>   zeroing this field on return, if necessary.
> 
> My 2c from considering this for the implementation in dmadev. :-)
> 
> /Bruce

Thank you for the suggestions. I have no objections to adopting a size-based approach.
I can keep the version field or switch to sizeof, as long as we can agree on some uniform way.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-24 17:40       ` Ajit Khaparde
@ 2022-01-25  1:28         ` Alexander Kozyrev
  2022-01-25 18:44           ` Jerin Jacob
  0 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-25  1:28 UTC (permalink / raw)
  To: Ajit Khaparde, Jerin Jacob
  Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob

On Monday, January 24, 2022 12:41 Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
> On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com>
> wrote:
> >
> > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> <akozyrev@nvidia.com> wrote:
> > >
> > > The flow rules creation/destruction at a large scale incurs a performance
> > > penalty and may negatively impact the packet processing when used
> > > as part of the datapath logic. This is mainly because software/hardware
> > > resources are allocated and prepared during the flow rule creation.
> > >
> > > In order to optimize the insertion rate, PMD may use some hints
> provided
> > > by the application at the initialization phase. The rte_flow_configure()
> > > function allows to pre-allocate all the needed resources beforehand.
> > > These resources can be used at a later stage without costly allocations.
> > > Every PMD may use only the subset of hints and ignore unused ones or
> > > fail in case the requested configuration is not supported.
> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > ---
> >
> > >
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Flow engine port configuration attributes.
> > > + */
> > > +__extension__
> >
> > Is this __extension__ required ?
No, it is no longer required, as I removed the bitfield from this structure. Thanks for catching it.

> >
> > > +struct rte_flow_port_attr {
> > > +       /**
> > > +        * Version of the struct layout, should be 0.
> > > +        */
> > > +       uint32_t version;
> >
> > Why version number? Across DPDK, we are using dynamic function
> > versioning, I think, that would
> >  be sufficient for ABI versioning
> >
> > > +       /**
> > > +        * Number of counter actions pre-configured.
> > > +        * If set to 0, PMD will allocate counters dynamically.
> > > +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> > > +        */
> > > +       uint32_t nb_counters;
> > > +       /**
> > > +        * Number of aging actions pre-configured.
> > > +        * If set to 0, PMD will allocate aging dynamically.
> > > +        * @see RTE_FLOW_ACTION_TYPE_AGE
> > > +        */
> > > +       uint32_t nb_aging;
> > > +       /**
> > > +        * Number of traffic metering actions pre-configured.
> > > +        * If set to 0, PMD will allocate meters dynamically.
> > > +        * @see RTE_FLOW_ACTION_TYPE_METER
> > > +        */
> > > +       uint32_t nb_meters;
> > > +};
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Configure flow rules module.
> > > + * To pre-allocate resources as per the flow port attributes
> > > + * this configuration function must be called before any flow rule is
> created.
> > > + * Must be called only after Ethernet device is configured, but may be
> called
> > > + * before or after the device is started as long as there are no flow rules.
> > > + * No other rte_flow function should be called while this function is
> invoked.
> > > + * This function can be called again to change the configuration.
> > > + * Some PMDs may not support re-configuration at all,
> > > + * or may only allow increasing the number of resources allocated.
> >
> > Following comment from Ivan looks good to me
> >
> > * Pre-configure the port's flow API engine.
> > *
> > * This API can only be invoked before the application
> > * starts using the rest of the flow library functions.
> > *
> > * The API can be invoked multiple times to change the
> > * settings. The port, however, may reject the changes.
Ok, I'll adopt this wording in the v3.

> > > + *
> > > + * @param port_id
> > > + *   Port identifier of Ethernet device.
> > > + * @param[in] port_attr
> > > + *   Port configuration attributes.
> > > + * @param[out] error
> > > + *   Perform verbose error reporting if not NULL.
> > > + *   PMDs initialize this structure in case of error only.
> > > + *
> > > + * @return
> > > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > > + */
> > > +__rte_experimental
> > > +int
> > > +rte_flow_configure(uint16_t port_id,
> >
> > Should we couple, setting resource limit hint to configure function as
> > if we add future items in
> > configuration, we may pain to manage all state. Instead how about,
> > rte_flow_resource_reserve_hint_set()?
> +1
Port attributes are the hints; a PMD can safely ignore anything that is not supported or deemed unreasonable.
Having several functions to call instead of one configuration function seems like a burden to me.

> 
> >
> >
> > > +                  const struct rte_flow_port_attr *port_attr,
> > > +                  struct rte_flow_error *error);
> >
> > I think, we should have _get function to get those limit numbers otherwise,
> > we can not write portable applications as the return value is  kind of
> > boolean now if
> > don't define exact values for rte_errno for reasons.
> +1
We had this discussion in the RFC. The limits will vary from NIC to NIC and from system to
system, depending on hardware capabilities and the amount of free memory, for example.
It is easier to reject a configuration with a clear error description, as we do for flow creation.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-24 18:08           ` Bruce Richardson
  2022-01-25  1:14             ` Alexander Kozyrev
@ 2022-01-25 15:58             ` Ori Kam
  2022-01-25 18:09               ` Bruce Richardson
  1 sibling, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-01-25 15:58 UTC (permalink / raw)
  To: Bruce Richardson, Jerin Jacob
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger

Hi Bruce,

> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Monday, January 24, 2022 8:09 PM
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> 
> On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >
> > > 24/01/2022 15:36, Jerin Jacob:
> > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > > > +struct rte_flow_port_attr {
> > > > > +       /**
> > > > > +        * Version of the struct layout, should be 0.
> > > > > +        */
> > > > > +       uint32_t version;
> > > >
> > > > Why version number? Across DPDK, we are using dynamic function
> > > > versioning, I think, that would be sufficient for ABI versioning
> > >
> > > Function versioning is not ideal when the structure is accessed
> > > in many places like many drivers and library functions.
> > >
> > > The idea of this version field (which can be a bitfield)
> > > is to update it when some new features are added,
> > > so the users of the struct can check if a feature is there
> > > before trying to use it.
> > > It means a bit more code in the functions, but avoid duplicating functions
> > > as in function versioning.
> > >
> > > Another approach was suggested by Bruce, and applied to dmadev.
> > > It is assuming we only add new fields at the end (no removal),
> > > and focus on the size of the struct.
> > > By passing sizeof as an extra parameter, the function knows
> > > which fields are OK to use.
> > > Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> >
> > + @Richardson, Bruce
> > Either approach is fine, No strong opinion.  We can have one approach
> > and use it across DPDK for consistency.
> >
> 
> In general I prefer the size-based approach, mainly because of its
> simplicity. However, some other reasons why we may want to choose it:
> 
> * It's completely hidden from the end user, and there is no need for an
>   extra struct field that needs to be filled in
> 
> * Related to that, for the version-field approach, if the field is present
>   in a user-allocated struct, then you probably need to start preventing user
>   error via:
>    - having the external struct not have the field and use a separate
>      internal struct to add in the version info after the fact in the
>      versioned function. Alternatively,
>    - provide a separate init function for each structure to fill in the
>      version field appropriately
> 
> * In general, using the size-based approach like in the linked example is
>   more resilient since it's compiler-inserted, so there is reduced chance
>   of error.
> 
> * A sizeof field allows simple-enough handling in the drivers - especially
>   since it does not allow removed fields. Each driver only needs to check
>   that the size passed in is greater than that expected, thereby allowing
>   us to have both updated and non-updated drivers co-existing simultaneously.
>   [For a version field, the same scheme could also work if we keep the
>   no-delete rule, but for a bitmask field, I believe things may get more
>   complex in terms of checking]
> 
> In terms of the limitations of using sizeof - requiring new fields to
> always go on the end, and preventing shrinking the struct - I think that the
> simplicity gains far outweigh the impact of these strictions.
> 
> * Adding fields to struct is far more common than wanting to remove one
> 
> * So long as the added field is at the end, even if the struct size doesn't
>   change the scheme can still work as the versioned function for the old
>   struct can ensure that the extra field is appropriately zeroed (rather than
>   random) on entry into any driver function
> 

Zero can be a valid value, so this may result in an issue.

> * If we do want to remove a field, the space simply needs to be marked as
>   reserved in the struct, until the next ABI break release, when it can be
>   compacted. Again, function versioning can take care of appropriately
>   zeroing this field on return, if necessary.
> 

This means that a PMD will have to change just for the removal of a field.
I would say removal is not allowed.

> My 2c from considering this for the implementation in dmadev. :-)

Some concerns I have about your suggestion:
1. The size of the struct is dependent on the system. For example,
assume this struct:
{
	uint16_t a;
	uint32_t b;
	uint8_t c;
	uint32_t d;
}
In case of a 32-bit machine the size will be 128 bytes, while on a 64-bit machine it will be 96

2. ABI breakage: as far as I know, changing the size of a struct is an ABI breakage, since if
the application got the size from a previous version and, for example, created an array
or allocated memory, then using the new structure will result in a memory overrun.

I know that flags/version is not easy, since it means creating a new
structure for each change. I would prefer to declare that changing the size between
DPDK releases is allowed, but as long as we say ABI breakage is forbidden, I don't think your
solution is valid.
And we must go with the version/flags and create a new structure for each change.
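
The version-field approach could look roughly like this in a driver. The names
and layout are hypothetical, for illustration only, not actual rte_flow code:

```c
#include <stdint.h>
#include <errno.h>

#define PORT_ATTR_VERSION_1 1 /* original layout */
#define PORT_ATTR_VERSION_2 2 /* nb_meters appended at the end */

struct port_attr {
	uint32_t version;     /* filled in by the application */
	uint32_t nb_counters;
	uint32_t nb_aging;
	uint32_t nb_meters;   /* only meaningful when version >= 2 */
};

static int
port_configure(const struct port_attr *attr)
{
	uint32_t nb_meters = 0; /* default for older layouts */

	if (attr->version < PORT_ATTR_VERSION_1 ||
	    attr->version > PORT_ATTR_VERSION_2)
		return -ENOTSUP; /* layout unknown to this driver */

	if (attr->version >= PORT_ATTR_VERSION_2)
		nb_meters = attr->nb_meters;
	(void)nb_meters; /* a real driver would pre-allocate here */
	return 0;
}
```

The cost is exactly what was discussed above: every layout change bumps the
version, and the driver must branch on each supported version.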

Best,
Ori
> 
> /Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-25 15:58             ` Ori Kam
@ 2022-01-25 18:09               ` Bruce Richardson
  2022-01-25 18:14                 ` Bruce Richardson
  0 siblings, 1 reply; 220+ messages in thread
From: Bruce Richardson @ 2022-01-25 18:09 UTC (permalink / raw)
  To: Ori Kam
  Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger

On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> Hi Bruce,
> 
> > -----Original Message-----
> > From: Bruce Richardson <bruce.richardson@intel.com>
> > Sent: Monday, January 24, 2022 8:09 PM
> > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> > 
> > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > >
> > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > > > > +struct rte_flow_port_attr {
> > > > > > +       /**
> > > > > > +        * Version of the struct layout, should be 0.
> > > > > > +        */
> > > > > > +       uint32_t version;
> > > > >
> > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > versioning, I think, that would be sufficient for ABI versioning
> > > >
> > > > Function versioning is not ideal when the structure is accessed
> > > > in many places like many drivers and library functions.
> > > >
> > > > The idea of this version field (which can be a bitfield)
> > > > is to update it when some new features are added,
> > > > so the users of the struct can check if a feature is there
> > > > before trying to use it.
> > > > It means a bit more code in the functions, but avoid duplicating functions
> > > > as in function versioning.
> > > >
> > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > It is assuming we only add new fields at the end (no removal),
> > > > and focus on the size of the struct.
> > > > By passing sizeof as an extra parameter, the function knows
> > > > which fields are OK to use.
> > > > Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > >
> > > + @Richardson, Bruce
> > > Either approach is fine, No strong opinion.  We can have one approach
> > > and use it across DPDK for consistency.
> > >
> > 
> > In general I prefer the size-based approach, mainly because of its
> > simplicity. However, some other reasons why we may want to choose it:
> > 
> > * It's completely hidden from the end user, and there is no need for an
> >   extra struct field that needs to be filled in
> > 
> > * Related to that, for the version-field approach, if the field is present
> >   in a user-allocated struct, then you probably need to start preventing user
> >   error via:
> >    - having the external struct not have the field and use a separate
> >      internal struct to add in the version info after the fact in the
> >      versioned function. Alternatively,
> >    - provide a separate init function for each structure to fill in the
> >      version field appropriately
> > 
> > * In general, using the size-based approach like in the linked example is
> >   more resilient since it's compiler-inserted, so there is reduced chance
> >   of error.
> > 
> > * A sizeof field allows simple-enough handling in the drivers - especially
> >   since it does not allow removed fields. Each driver only needs to check
> >   that the size passed in is greater than that expected, thereby allowing
> >   us to have both updated and non-updated drivers co-existing simultaneously.
> >   [For a version field, the same scheme could also work if we keep the
> >   no-delete rule, but for a bitmask field, I believe things may get more
> >   complex in terms of checking]
> > 
> > In terms of the limitations of using sizeof - requiring new fields to
> > always go on the end, and preventing shrinking the struct - I think that the
> > simplicity gains far outweigh the impact of these strictions.
> > 
> > * Adding fields to struct is far more common than wanting to remove one
> > 
> > * So long as the added field is at the end, even if the struct size doesn't
> >   change the scheme can still work as the versioned function for the old
> >   struct can ensure that the extra field is appropriately zeroed (rather than
> >   random) on entry into any driver function
> > 
> 
> Zero can be a valid value so this is may result in an issue.
> 

In this instance, I was using zero as a neutral, default-option value. If
having zero as the default causes problems, we can always make the
structure size change to force a new size value.

> > * If we do want to remove a field, the space simply needs to be marked as
> >   reserved in the struct, until the next ABI break release, when it can be
> >   compacted. Again, function versioning can take care of appropriately
> >   zeroing this field on return, if necessary.
> > 
> 
> This means that PMD will have to change just for removal of a field
> I would say removal is not allowed.
> 
> > My 2c from considering this for the implementation in dmadev. :-)
> 
> Some concerns I have about your suggestion:
> 1. The size of the struct is dependent on the system, for example
> Assume this struct 
> {
> Uint16_t a;
> Uint32_t b;
> Uint8_t c;
> Uint32_t d;
> }
> Incase of 32 bit machine the size will be 128 bytes, while in 64 machine it will be 96

Actually, I believe that in just about every system we support it will be
4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In any
case, the actual size value doesn't matter in practice, since all sizes
should be computed by the compiler using sizeof, rather than hard-coded.
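
The 16-byte figure follows from the usual alignment rules and can be checked
directly with sizeof. This is the layout Ori posted, written with standard C
type names:

```c
#include <stdint.h>

struct example {
	uint16_t a; /* offset 0, then 2 bytes of padding */
	uint32_t b; /* offset 4 */
	uint8_t  c; /* offset 8, then 3 bytes of padding */
	uint32_t d; /* offset 12 */
};

/* sizeof(struct example) is 16 on typical 32- and 64-bit ABIs alike:
 * the struct's alignment comes from uint32_t (4 bytes), not from the
 * machine's pointer size. */
```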

> 
> 2. ABI breakage, as far as I know changing size of a struct is ABI breakage, since if 
> the application got the size from previous version and for example created array
> or allocated memory then using the new structure will result in memory override.
> 
> I know that flags/version is not easy since it means creating new 
> Structure for each change. I prefer to declare that size can change between
> DPDK releases is allowd but as long as we say ABI breakage is forbidden then I don't think your
> solution is valid.
> And we must go with the version/flags and create new structure for each change.
> 

Whatever approach is taken for this, I believe we will always need to
create a new structure for the changes. This is because only functions can
be versioned, not structures. The only question therefore becomes how to
pass ABI version information, and therefore by extension structure version
information across a library to driver boundary. This has to be an extra
field somewhere, either in a structure or as a function parameter. I'd
prefer not in the structure as it exposes it to the user. In terms of the
field value, it can either be explicit version info as version number or
version flags, or implicit versioning via "size". Based off the "YAGNI"
principle, I really would prefer just using sizes, as it's far easier to
manage and work with for all concerned, and requires no additional
documentation for the programmer or driver developer to understand.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-25 18:09               ` Bruce Richardson
@ 2022-01-25 18:14                 ` Bruce Richardson
  2022-01-26  9:45                   ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Bruce Richardson @ 2022-01-25 18:14 UTC (permalink / raw)
  To: Ori Kam
  Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger

On Tue, Jan 25, 2022 at 06:09:42PM +0000, Bruce Richardson wrote:
> On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> > Hi Bruce,
> > 
> > > -----Original Message----- From: Bruce Richardson
> > > <bruce.richardson@intel.com> Sent: Monday, January 24, 2022 8:09 PM
> > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow
> > > pre-configuration hints
> > > 
> > > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> > > > <thomas@monjalon.net> wrote:
> > > > >
> > > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> > > > > > <akozyrev@nvidia.com> wrote:
> > > > > > > +struct rte_flow_port_attr { +       /** +        * Version
> > > > > > > of the struct layout, should be 0.  +        */ +
> > > > > > > uint32_t version;
> > > > > >
> > > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > > versioning, I think, that would be sufficient for ABI
> > > > > > versioning
> > > > >
> > > > > Function versioning is not ideal when the structure is accessed
> > > > > in many places like many drivers and library functions.
> > > > >
> > > > > The idea of this version field (which can be a bitfield) is to
> > > > > update it when some new features are added, so the users of the
> > > > > struct can check if a feature is there before trying to use it.
> > > > > It means a bit more code in the functions, but avoid duplicating
> > > > > functions as in function versioning.
> > > > >
> > > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > > It is assuming we only add new fields at the end (no removal),
> > > > > and focus on the size of the struct.  By passing sizeof as an
> > > > > extra parameter, the function knows which fields are OK to use.
> > > > > Example:
> > > > > http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > > >
> > > > + @Richardson, Bruce Either approach is fine, No strong opinion.
> > > > We can have one approach and use it across DPDK for consistency.
> > > >
> > > 
> > > In general I prefer the size-based approach, mainly because of its
> > > simplicity. However, some other reasons why we may want to choose it:
> > > 
> > > * It's completely hidden from the end user, and there is no need for
> > > an extra struct field that needs to be filled in
> > > 
> > > * Related to that, for the version-field approach, if the field is
> > > present in a user-allocated struct, then you probably need to start
> > > preventing user error via: - having the external struct not have the
> > > field and use a separate internal struct to add in the version info
> > > after the fact in the versioned function. Alternatively, - provide a
> > > separate init function for each structure to fill in the version
> > > field appropriately
> > > 
> > > * In general, using the size-based approach like in the linked
> > > example is more resilient since it's compiler-inserted, so there is
> > > reduced chance of error.
> > > 
> > > * A sizeof field allows simple-enough handling in the drivers -
> > > especially since it does not allow removed fields. Each driver only
> > > needs to check that the size passed in is greater than that expected,
> > > thereby allowing us to have both updated and non-updated drivers
> > > co-existing simultaneously.  [For a version field, the same scheme
> > > could also work if we keep the no-delete rule, but for a bitmask
> > > field, I believe things may get more complex in terms of checking]
> > > 
> > > In terms of the limitations of using sizeof - requiring new fields to
> > > always go on the end, and preventing shrinking the struct - I think
> > > that the simplicity gains far outweigh the impact of these
> > > strictions.
> > > 
> > > * Adding fields to struct is far more common than wanting to remove
> > > one
> > > 
> > > * So long as the added field is at the end, even if the struct size
> > > doesn't change the scheme can still work as the versioned function
> > > for the old struct can ensure that the extra field is appropriately
> > > zeroed (rather than random) on entry into any driver function
> > > 
> > 
> > Zero can be a valid value so this is may result in an issue.
> > 
> 
> In this instance, I was using zero as a neutral, default-option value. If
> having zero as the default causes problems, we can always make the
> structure size change to force a new size value.
> 
> > > * If we do want to remove a field, the space simply needs to be
> > > marked as reserved in the struct, until the next ABI break release,
> > > when it can be compacted. Again, function versioning can take care of
> > > appropriately zeroing this field on return, if necessary.
> > > 
> > 
> > This means that PMD will have to change just for removal of a field I
> > would say removal is not allowed.
> > 
> > > My 2c from considering this for the implementation in dmadev. :-)
> > 
> > Some concerns I have about your suggestion: 1. The size of the struct
> > is dependent on the system, for example Assume this struct { Uint16_t
> > a; Uint32_t b; Uint8_t c; Uint32_t d; } Incase of 32 bit machine the
> > size will be 128 bytes, while in 64 machine it will be 96
> 
> Actually, I believe that in just about every system we support it will be
> 4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In
> any case, the actual size value doesn't matter in practice, since all
> sizes should be computed by the compiler using sizeof, rather than
> hard-coded.
> 
> > 
> > 2. ABI breakage, as far as I know changing size of a struct is ABI
> > breakage, since if the application got the size from previous version
> > and for example created array or allocated memory then using the new
> > structure will result in memory override.
> > 
> > I know that flags/version is not easy since it means creating new
> > Structure for each change. I prefer to declare that size can change
> > between DPDK releases is allowd but as long as we say ABI breakage is
> > forbidden then I don't think your solution is valid.  And we must go
> > with the version/flags and create new structure for each change.
> > 
> 
> whatever approach is taken for this, I believe we will always need to
> create a new structure for the changes. This is because only functions
> can be versioned, not structures. The only question therefore becomes how
> to pass ABI version information, and therefore by extension structure
> version information across a library to driver boundary. This has to be
> an extra field somewhere, either in a structure or as a function
> parameter. I'd prefer not in the structure as it exposes it to the user.
> In terms of the field value, it can either be explicit version info as
> version number or version flags, or implicit versioning via "size". Based
> off the "YAGNI" principle, I really would prefer just using sizes, as
> it's far easier to manage and work with for all concerned, and requires
> no additional documentation for the programmer or driver developer to
> understand.
> 
As a third alternative that I would find acceptable, we could also just
take the approach of passing the ABI version explicitly across the function
call i.e. 22 for DPDK_21.11. I'd find this ok too on the basis that it's
largely self explanatory, and can be inserted automatically by the compiler
- again reducing chances of errors. [However, I also believe that using
sizes is still simpler again, which is why it's still my first choice! :-)]
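
That third option (a compiler-inserted explicit ABI version) could be sketched
like so; all names here are hypothetical placeholders, not real DPDK API:

```c
#include <stdint.h>
#include <errno.h>

#define ABI_VERSION 22 /* e.g. the value for DPDK 21.11 */

struct port_attr {
	uint32_t nb_counters;
};

static int port_configure_impl(uint16_t port_id,
			       const struct port_attr *attr,
			       unsigned int abi_ver);

/* The wrapper bakes in the ABI version from the headers the application
 * was compiled against, so the user never types it and cannot get it
 * wrong. */
#define port_configure(port_id, attr) \
	port_configure_impl((port_id), (attr), ABI_VERSION)

static int
port_configure_impl(uint16_t port_id, const struct port_attr *attr,
		    unsigned int abi_ver)
{
	(void)port_id;
	(void)attr;
	/* abi_ver tells the driver which struct layout the caller used */
	if (abi_ver > ABI_VERSION)
		return -ENOTSUP; /* app built against a newer DPDK */
	return 0;
}
```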

/Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-25  1:28         ` Alexander Kozyrev
@ 2022-01-25 18:44           ` Jerin Jacob
  2022-01-26 22:02             ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Jerin Jacob @ 2022-01-25 18:44 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: Ajit Khaparde, dpdk-dev, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob

On Tue, Jan 25, 2022 at 6:58 AM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> On Monday, January 24, 2022 12:41 Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
> > On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com>
> > wrote:
> > >

> Ok, I'll adopt this wording in the v3.
>
> > > > + *
> > > > + * @param port_id
> > > > + *   Port identifier of Ethernet device.
> > > > + * @param[in] port_attr
> > > > + *   Port configuration attributes.
> > > > + * @param[out] error
> > > > + *   Perform verbose error reporting if not NULL.
> > > > + *   PMDs initialize this structure in case of error only.
> > > > + *
> > > > + * @return
> > > > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > > > + */
> > > > +__rte_experimental
> > > > +int
> > > > +rte_flow_configure(uint16_t port_id,
> > >
> > > Should we couple setting the resource limit hint to the configure
> > > function? As we add future items to the configuration, it may be
> > > painful to manage all the state. Instead, how about
> > > rte_flow_resource_reserve_hint_set()?
> > +1
> Port attributes are only hints; the PMD can safely ignore anything that is
> not supported or deemed unreasonable.
> Having several functions to call instead of one configuration function seems like a burden to me.

If we add a lot of features which have different state, it will be
difficult to manage.
Since this is the slow path and an OPTIONAL API, IMO it should be fine
to have a separate API for each specific purpose,
to keep the interface clean.


>
> >
> > >
> > >
> > > > +                  const struct rte_flow_port_attr *port_attr,
> > > > +                  struct rte_flow_error *error);
> > >
> > > I think we should have a _get function to get those limit numbers;
> > > otherwise, we cannot write portable applications, as the return value
> > > is effectively boolean now if we don't define exact rte_errno values
> > > for the failure reasons.
> > +1
> We had this discussion in RFC. The limits will vary from NIC to NIC and from system to
> system, depending on hardware capabilities and amount of free memory for example.
> It is easier to reject a configuration with a clear error description as we do for flow creation.

In that case, we can return a "defined" return value or a "defined"
errno to capture this case, so that
the application can differentiate between an API failure and not having
enough resources, make forward progress,
and move on.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-25 18:14                 ` Bruce Richardson
@ 2022-01-26  9:45                   ` Ori Kam
  2022-01-26 10:52                     ` Bruce Richardson
  0 siblings, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-01-26  9:45 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger



> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Tuesday, January 25, 2022 8:14 PM
> To: Ori Kam <orika@nvidia.com>
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> 
> On Tue, Jan 25, 2022 at 06:09:42PM +0000, Bruce Richardson wrote:
> > On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> > > Hi Bruce,
> > >
> > > > -----Original Message----- From: Bruce Richardson
> > > > <bruce.richardson@intel.com> Sent: Monday, January 24, 2022 8:09 PM
> > > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow
> > > > pre-configuration hints
> > > >
> > > > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> > > > > <thomas@monjalon.net> wrote:
> > > > > >
> > > > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> > > > > > > <akozyrev@nvidia.com> wrote:
> > > > > > > > +struct rte_flow_port_attr {
> > > > > > > > +       /**
> > > > > > > > +        * Version of the struct layout, should be 0.
> > > > > > > > +        */
> > > > > > > > +       uint32_t version;
> > > > > > >
> > > > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > > > versioning, I think, that would be sufficient for ABI
> > > > > > > versioning
> > > > > >
> > > > > > Function versioning is not ideal when the structure is accessed
> > > > > > in many places like many drivers and library functions.
> > > > > >
> > > > > > The idea of this version field (which can be a bitfield) is to
> > > > > > update it when some new features are added, so the users of the
> > > > > > struct can check if a feature is there before trying to use it.
> > > > > > It means a bit more code in the functions, but avoid duplicating
> > > > > > functions as in function versioning.
> > > > > >
> > > > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > > > It is assuming we only add new fields at the end (no removal),
> > > > > > and focus on the size of the struct.  By passing sizeof as an
> > > > > > extra parameter, the function knows which fields are OK to use.
> > > > > > Example:
> > > > > > http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > > > >
> > > > > + @Richardson, Bruce Either approach is fine, No strong opinion.
> > > > > We can have one approach and use it across DPDK for consistency.
> > > > >
> > > >
> > > > In general I prefer the size-based approach, mainly because of its
> > > > simplicity. However, some other reasons why we may want to choose it:
> > > >
> > > > * It's completely hidden from the end user, and there is no need for
> > > > an extra struct field that needs to be filled in
> > > >
> > > > * Related to that, for the version-field approach, if the field is
> > > > present in a user-allocated struct, then you probably need to start
> > > > preventing user error via: - having the external struct not have the
> > > > field and use a separate internal struct to add in the version info
> > > > after the fact in the versioned function. Alternatively, - provide a
> > > > separate init function for each structure to fill in the version
> > > > field appropriately
> > > >
> > > > * In general, using the size-based approach like in the linked
> > > > example is more resilient since it's compiler-inserted, so there is
> > > > reduced chance of error.
> > > >
> > > > * A sizeof field allows simple-enough handling in the drivers -
> > > > especially since it does not allow removed fields. Each driver only
> > > > needs to check that the size passed in is greater than that expected,
> > > > thereby allowing us to have both updated and non-updated drivers
> > > > co-existing simultaneously.  [For a version field, the same scheme
> > > > could also work if we keep the no-delete rule, but for a bitmask
> > > > field, I believe things may get more complex in terms of checking]
> > > >
> > > > In terms of the limitations of using sizeof - requiring new fields to
> > > > always go on the end, and preventing shrinking the struct - I think
> > > > that the simplicity gains far outweigh the impact of these
> > > > restrictions.
> > > >
> > > > * Adding fields to struct is far more common than wanting to remove
> > > > one
> > > >
> > > > * So long as the added field is at the end, even if the struct size
> > > > doesn't change the scheme can still work as the versioned function
> > > > for the old struct can ensure that the extra field is appropriately
> > > > zeroed (rather than random) on entry into any driver function
> > > >
> > >
> > > Zero can be a valid value so this is may result in an issue.
> > >
> >
> > In this instance, I was using zero as a neutral, default-option value. If
> > having zero as the default causes problems, we can always make the
> > structure size change to force a new size value.
> >
> > > > * If we do want to remove a field, the space simply needs to be
> > > > marked as reserved in the struct, until the next ABI break release,
> > > > when it can be compacted. Again, function versioning can take care of
> > > > appropriately zeroing this field on return, if necessary.
> > > >
> > >
> > > This means that PMD will have to change just for removal of a field I
> > > would say removal is not allowed.
> > >
> > > > My 2c from considering this for the implementation in dmadev. :-)
> > >
> > > Some concerns I have about your suggestion: 1. The size of the struct
> > > is dependent on the system, for example Assume this struct { Uint16_t
> > > a; Uint32_t b; Uint8_t c; Uint32_t d; } Incase of 32 bit machine the
> > > size will be 128 bytes, while in 64 machine it will be 96
> >
> > Actually, I believe that in just about every system we support it will be
> > 4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In
> > any case, the actual size value doesn't matter in practice, since all
> > sizes should be computed by the compiler using sizeof, rather than
> > hard-coded.
> >
You are correct, my mistake with the numbers.
I still think there might be some issue, but I can't think of anything concrete.
So I am dropping it.

> > >
> > > 2. ABI breakage, as far as I know changing size of a struct is ABI
> > > breakage, since if the application got the size from previous version
> > > and for example created array or allocated memory then using the new
> > > structure will result in memory override.
> > >
> > > I know that flags/version is not easy since it means creating new
> > > Structure for each change. I prefer to declare that size can change
> > > between DPDK releases is allowed but as long as we say ABI breakage is
> > > forbidden then I don't think your solution is valid.  And we must go
> > > with the version/flags and create new structure for each change.
> > >
> >
> > whatever approach is taken for this, I believe we will always need to
> > create a new structure for the changes. This is because only functions
> > can be versioned, not structures. The only question therefore becomes how
> > to pass ABI version information, and therefore by extension structure
> > version information across a library to driver boundary. This has to be
> > an extra field somewhere, either in a structure or as a function
> > parameter. I'd prefer not in the structure as it exposes it to the user.
> > In terms of the field value, it can either be explicit version info as
> > version number or version flags, or implicit versioning via "size". Based
> > off the "YAGNI" principle, I really would prefer just using sizes, as
> > it's far easier to manage and work with for all concerned, and requires
> > no additional documentation for the programmer or driver developer to
> > understand.
> >
> As a third alternative that I would find acceptable, we could also just
> take the approach of passing the ABI version explicitly across the function
> call i.e. 22 for DPDK_21.11. I'd find this ok too on the basis that it's
> largely self explanatory, and can be inserted automatically by the compiler
> - again reducing chances of errors. [However, I also believe that using
> sizes is still simpler again, which is why it's still my first choice! :-)]
> 

Just to make sure I fully understand your suggestion:
we will create a new struct for each change,
and the function will stay the same.
For example, suppose I had the following:

struct base {
	uint32_t x;
};

function(struct base *input)
{
	inner_func(input, sizeof(struct base));
}

Now I'm adding a new member, so it will look like this:

struct new {
	uint32_t x;
	uint32_t y;
};

When I want to call the function I need to cast:
function((struct base *)new);

Right?

This means that in both cases sizeof will return the same value,
so what am I missing?
 
> /Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-26  9:45                   ` Ori Kam
@ 2022-01-26 10:52                     ` Bruce Richardson
  2022-01-26 11:21                       ` Thomas Monjalon
  0 siblings, 1 reply; 220+ messages in thread
From: Bruce Richardson @ 2022-01-26 10:52 UTC (permalink / raw)
  To: Ori Kam
  Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger

On Wed, Jan 26, 2022 at 09:45:18AM +0000, Ori Kam wrote:
> 
> 
> > -----Original Message-----
> > From: Bruce Richardson <bruce.richardson@intel.com>
> > Sent: Tuesday, January 25, 2022 8:14 PM
> > To: Ori Kam <orika@nvidia.com>
> > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> > 
> > On Tue, Jan 25, 2022 at 06:09:42PM +0000, Bruce Richardson wrote:
> > > On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> > > > Hi Bruce,
> > > >
> > > > > -----Original Message----- From: Bruce Richardson
> > > > > <bruce.richardson@intel.com> Sent: Monday, January 24, 2022 8:09 PM
> > > > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow
> > > > > pre-configuration hints
> > > > >
> > > > > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > > > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> > > > > > <thomas@monjalon.net> wrote:
> > > > > > >
> > > > > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> > > > > > > > <akozyrev@nvidia.com> wrote:
> > > > > > > > > +struct rte_flow_port_attr {
> > > > > > > > > +       /**
> > > > > > > > > +        * Version of the struct layout, should be 0.
> > > > > > > > > +        */
> > > > > > > > > +       uint32_t version;
> > > > > > > >
> > > > > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > > > > versioning, I think, that would be sufficient for ABI
> > > > > > > > versioning
> > > > > > >
> > > > > > > Function versioning is not ideal when the structure is accessed
> > > > > > > in many places like many drivers and library functions.
> > > > > > >
> > > > > > > The idea of this version field (which can be a bitfield) is to
> > > > > > > update it when some new features are added, so the users of the
> > > > > > > struct can check if a feature is there before trying to use it.
> > > > > > > It means a bit more code in the functions, but avoid duplicating
> > > > > > > functions as in function versioning.
> > > > > > >
> > > > > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > > > > It is assuming we only add new fields at the end (no removal),
> > > > > > > and focus on the size of the struct.  By passing sizeof as an
> > > > > > > extra parameter, the function knows which fields are OK to use.
> > > > > > > Example:
> > > > > > > http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > > > > >
> > > > > > + @Richardson, Bruce Either approach is fine, No strong opinion.
> > > > > > We can have one approach and use it across DPDK for consistency.
> > > > > >
> > > > >
> > > > > In general I prefer the size-based approach, mainly because of its
> > > > > simplicity. However, some other reasons why we may want to choose it:
> > > > >
> > > > > * It's completely hidden from the end user, and there is no need for
> > > > > an extra struct field that needs to be filled in
> > > > >
> > > > > * Related to that, for the version-field approach, if the field is
> > > > > present in a user-allocated struct, then you probably need to start
> > > > > preventing user error via: - having the external struct not have the
> > > > > field and use a separate internal struct to add in the version info
> > > > > after the fact in the versioned function. Alternatively, - provide a
> > > > > separate init function for each structure to fill in the version
> > > > > field appropriately
> > > > >
> > > > > * In general, using the size-based approach like in the linked
> > > > > example is more resilient since it's compiler-inserted, so there is
> > > > > reduced chance of error.
> > > > >
> > > > > * A sizeof field allows simple-enough handling in the drivers -
> > > > > especially since it does not allow removed fields. Each driver only
> > > > > needs to check that the size passed in is greater than that expected,
> > > > > thereby allowing us to have both updated and non-updated drivers
> > > > > co-existing simultaneously.  [For a version field, the same scheme
> > > > > could also work if we keep the no-delete rule, but for a bitmask
> > > > > field, I believe things may get more complex in terms of checking]
> > > > >
> > > > > In terms of the limitations of using sizeof - requiring new fields to
> > > > > always go on the end, and preventing shrinking the struct - I think
> > > > > that the simplicity gains far outweigh the impact of these
> > > > > restrictions.
> > > > >
> > > > > * Adding fields to struct is far more common than wanting to remove
> > > > > one
> > > > >
> > > > > * So long as the added field is at the end, even if the struct size
> > > > > doesn't change the scheme can still work as the versioned function
> > > > > for the old struct can ensure that the extra field is appropriately
> > > > > zeroed (rather than random) on entry into any driver function
> > > > >
> > > >
> > > > Zero can be a valid value so this is may result in an issue.
> > > >
> > >
> > > In this instance, I was using zero as a neutral, default-option value. If
> > > having zero as the default causes problems, we can always make the
> > > structure size change to force a new size value.
> > >
> > > > > * If we do want to remove a field, the space simply needs to be
> > > > > marked as reserved in the struct, until the next ABI break release,
> > > > > when it can be compacted. Again, function versioning can take care of
> > > > > appropriately zeroing this field on return, if necessary.
> > > > >
> > > >
> > > > This means that PMD will have to change just for removal of a field I
> > > > would say removal is not allowed.
> > > >
> > > > > My 2c from considering this for the implementation in dmadev. :-)
> > > >
> > > > Some concerns I have about your suggestion: 1. The size of the struct
> > > > is dependent on the system, for example Assume this struct { Uint16_t
> > > > a; Uint32_t b; Uint8_t c; Uint32_t d; } Incase of 32 bit machine the
> > > > size will be 128 bytes, while in 64 machine it will be 96
> > >
> > > Actually, I believe that in just about every system we support it will be
> > > 4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In
> > > any case, the actual size value doesn't matter in practice, since all
> > > sizes should be computed by the compiler using sizeof, rather than
> > > hard-coded.
> > >
> You are correct, my mistake with the numbers.
> I still think there might be some issue, but I can't think of anything concrete.
> So I am dropping it.
> 
> > > >
> > > > 2. ABI breakage, as far as I know changing size of a struct is ABI
> > > > breakage, since if the application got the size from previous version
> > > > and for example created array or allocated memory then using the new
> > > > structure will result in memory override.
> > > >
> > > > I know that flags/version is not easy since it means creating new
> > > > Structure for each change. I prefer to declare that size can change
> > > > between DPDK releases is allowed but as long as we say ABI breakage is
> > > > forbidden then I don't think your solution is valid.  And we must go
> > > > with the version/flags and create new structure for each change.
> > > >
> > >
> > > whatever approach is taken for this, I believe we will always need to
> > > create a new structure for the changes. This is because only functions
> > > can be versioned, not structures. The only question therefore becomes how
> > > to pass ABI version information, and therefore by extension structure
> > > version information across a library to driver boundary. This has to be
> > > an extra field somewhere, either in a structure or as a function
> > > parameter. I'd prefer not in the structure as it exposes it to the user.
> > > In terms of the field value, it can either be explicit version info as
> > > version number or version flags, or implicit versioning via "size". Based
> > > off the "YAGNI" principle, I really would prefer just using sizes, as
> > > it's far easier to manage and work with for all concerned, and requires
> > > no additional documentation for the programmer or driver developer to
> > > understand.
> > >
> > As a third alternative that I would find acceptable, we could also just
> > take the approach of passing the ABI version explicitly across the function
> > call i.e. 22 for DPDK_21.11. I'd find this ok too on the basis that it's
> > largely self explanatory, and can be inserted automatically by the compiler
> > - again reducing chances of errors. [However, I also believe that using
> > sizes is still simpler again, which is why it's still my first choice! :-)]
> > 
> 
> Just to make sure I fully understand your suggestion:
> we will create a new struct for each change,
> and the function will stay the same.
> For example, suppose I had the following:
> 
> struct base {
> 	uint32_t x;
> };
> 
> function(struct base *input)
> {
> 	inner_func(input, sizeof(struct base));
> }
> 
> Now I'm adding a new member, so it will look like this:
> 
> struct new {
> 	uint32_t x;
> 	uint32_t y;
> };
> 
> When I want to call the function I need to cast:
> function((struct base *)new);
> 
> Right?
> 
> This means that in both cases sizeof will return the same value,
> so what am I missing?
>

The scenario is as follows. Suppose we have the initial state as below:

struct x_dev_cfg {
   int x;
};

int
x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
{
   struct x_dev *dev = x_devs[id];
   // some setup/config may go here
   return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
}

Now, suppose we need to add a new field to the config structure, a
very common occurrence. This will indeed break the ABI, so we need to use
ABI versioning to ensure that apps passing in the old structure only call
a function which expects the old structure. Therefore, we need a copy of
the old structure, and a function to work on it. This gives the following result:

struct x_dev_cfg {
	int x;
	bool flag; // new field;
};

struct x_dev_cfg_v22 { // needed for ABI-versioned function
	int x;
};

/* this function will only be called by *newly-linked* code, which uses
 * the new structure */
int
x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
{
   struct x_dev *dev = x_devs[id];
   // some setup/config may go here
   return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
}

/* this function is called by apps linked against old version */
int
x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
{
   struct x_dev *dev = x_devs[id];
   // some setup/config may go here
   return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
}

With the above library code, we have different functions using the
different structures, so ABI compatibility is preserved - apps passing in a
4-byte struct call a function using the 4-byte struct, while newer apps can
use the 8-byte version.

The final part of the puzzle is then how drivers react to this change.
Originally, all drivers only use "x" in the config structure because that
is all that there is. That will still continue to work fine in the above
case, as both 4-byte and 8-byte structs have the same x value at the same
offset. i.e. no driver updates for x_dev is needed.

On the other hand, if there are drivers that do want/need the new field,
they can also get to use it, but they do need to check for its presence
before they do so, i.e they would work as below:

	if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
		// use the new "flag" field
	}

Hope this is clear now.

/Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-26 10:52                     ` Bruce Richardson
@ 2022-01-26 11:21                       ` Thomas Monjalon
  2022-01-26 12:19                         ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Thomas Monjalon @ 2022-01-26 11:21 UTC (permalink / raw)
  To: Ori Kam, Bruce Richardson
  Cc: Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
	Stephen Hemminger

26/01/2022 11:52, Bruce Richardson:
> The scenario is as follows. Suppose we have the initial state as below:
> 
> struct x_dev_cfg {
>    int x;
> };
> 
> int
> x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> {
>    struct x_dev *dev = x_devs[id];
>    // some setup/config may go here
>    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> }
> 
> Now, supposing we need to add in a new field into the config structure, a
> very common occurrence. This will indeed break the ABI, so we need to use
> ABI versioning, to ensure that apps passing in the old structure, only call
> a function which expects the old structure. Therefore, we need a copy of
> the old structure, and a function to work on it. This gives this result:
> 
> struct x_dev_cfg {
> 	int x;
> 	bool flag; // new field;
> };
> 
> struct x_dev_cfg_v22 { // needed for ABI-versioned function
> 	int x;
> };
> 
> /* this function will only be called by *newly-linked* code, which uses
>  * the new structure */
> int
> x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> {
>    struct x_dev *dev = x_devs[id];
>    // some setup/config may go here
>    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> }
> 
> /* this function is called by apps linked against old version */
> int
> x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> {
>    struct x_dev *dev = x_devs[id];
>    // some setup/config may go here
>    return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> }
> 
> With the above library code, we have different functions using the
> different structures, so ABI compatibility is preserved - apps passing in a
> 4-byte struct call a function using the 4-byte struct, while newer apps can
> use the 8-byte version.
> 
> The final part of the puzzle is then how drivers react to this change.
> Originally, all drivers only use "x" in the config structure because that
> is all that there is. That will still continue to work fine in the above
> case, as both 4-byte and 8-byte structs have the same x value at the same
> offset. i.e. no driver updates for x_dev is needed.
> 
> On the other hand, if there are drivers that do want/need the new field,
> they can also get to use it, but they do need to check for its presence
> before they do so, i.e they would work as below:
> 
> 	if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> 		// use the new "flag" field
> 	}
> 
> Hope this is clear now.

Yes, this is the kind of explanation we need in our guideline doc.
Alternatives can be documented as well.
If we can list pros/cons in the doc, it will be easier to choose
the best approach and to explain the choice during code review.




^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-26 11:21                       ` Thomas Monjalon
@ 2022-01-26 12:19                         ` Ori Kam
  2022-01-26 13:41                           ` Bruce Richardson
  0 siblings, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-01-26 12:19 UTC (permalink / raw)
  To: NBU-Contact-Thomas Monjalon (EXTERNAL), Bruce Richardson
  Cc: Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
	Stephen Hemminger



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, January 26, 2022 1:22 PM
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> 
> 26/01/2022 11:52, Bruce Richardson:
> > The scenario is as follows. Suppose we have the initial state as below:
> >
> > struct x_dev_cfg {
> >    int x;
> > };
> >
> > int
> > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > {
> >    struct x_dev *dev = x_devs[id];
> >    // some setup/config may go here
> >    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> > }
> >
> > Now, supposing we need to add in a new field into the config structure, a
> > very common occurrence. This will indeed break the ABI, so we need to use
> > ABI versioning, to ensure that apps passing in the old structure, only call
> > a function which expects the old structure. Therefore, we need a copy of
> > the old structure, and a function to work on it. This gives this result:
> >
> > struct x_dev_cfg {
> > 	int x;
> > 	bool flag; // new field;
> > };
> >
> > struct x_dev_cfg_v22 { // needed for ABI-versioned function
> > 	int x;
> > };
> >
> > /* this function will only be called by *newly-linked* code, which uses
> >  * the new structure */
> > int
> > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > {
> >    struct x_dev *dev = x_devs[id];
> >    // some setup/config may go here
> >    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> > }
> >
> > /* this function is called by apps linked against old version */
> > int
> > x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> > {
> >    struct x_dev *dev = x_devs[id];
> >    // some setup/config may go here
> >    return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> > }
> >
> > With the above library code, we have different functions using the
> > different structures, so ABI compatibility is preserved - apps passing in a
> > 4-byte struct call a function using the 4-byte struct, while newer apps can
> > use the 8-byte version.
> >
> > The final part of the puzzle is then how drivers react to this change.
> > Originally, all drivers only use "x" in the config structure because that
> > is all that there is. That will still continue to work fine in the above
> > case, as both 4-byte and 8-byte structs have the same x value at the same
> > offset. i.e. no driver updates for x_dev is needed.
> >
> > On the other hand, if there are drivers that do want/need the new field,
> > they can also get to use it, but they do need to check for its presence
> > before they do so, i.e they would work as below:
> >
> > 	if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> > 		// use the new "flag" field
> > 	}
> >
> > Hope this is clear now.
> 
> Yes, this is the kind of explanation we need in our guideline doc.
> Alternatives can be documented as well.
> If we can list pros/cons in the doc, it will be easier to choose
> the best approach and to explain the choice during code review.
> 
> 
Thank you very much for the clear explanation.

The drawback is that we also need to duplicate the functions.
Using the flags/version approach, we only need to create new structures,
and from the application's point of view it knows what extra fields it gets.
(I agree that relying on application knowledge has downsides but also advantages.)

In the case of flags/version, your example will look like this (this is for the
record, and maybe other developers are interested):

struct x_dev_cfg {	/* original struct */
	int ver;
	int x;
};

struct x_dev_cfg_v2 {	/* new struct */
	int ver;
	int x;
	bool flag;	/* new field */
};

The function is always the same function:

int
x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
{
	struct x_dev *dev = x_devs[id];
	// some setup/config may go here
	return dev->configure(cfg);
}

When calling this function with the old struct:
x_dev_cfg(id, (struct x_dev_cfg *)cfg);

When calling this function with the new struct:
x_dev_cfg(id, (struct x_dev_cfg *)cfg_v2);

In the PMD:

if (cfg->ver >= 2)
	// version 2 logic
else if (cfg->ver >= 0)
	// base version logic


When using flags it gives even more control since pmd can tell exactly what
features are required.

All options have pros/cons.
I vote for the version one.

We can have a poll 😊
Or like Thomas said, list pros and cons and each subsystem can
have its own selection.
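For the record, the flags/version dispatch sketched above can be written out as a small self-contained C example (x_dev_configure and the struct names are placeholders for illustration, not actual DPDK API):

```c
#include <assert.h>
#include <stdbool.h>

/* Base layout: ver <= 1 means only these members are present. */
struct x_dev_cfg {
	int ver; /* layout version, filled in by the application */
	int x;
};

/* Extended layout: same leading members, new field appended at the end. */
struct x_dev_cfg_v2 {
	int ver; /* must be set to 2 by the caller */
	int x;
	bool flag; /* new field */
};

/* Single entry point; the PMD branches on the embedded version field. */
static int
x_dev_configure(const struct x_dev_cfg *cfg)
{
	int flag = 0;

	if (cfg->ver >= 2) {
		/* version 2 logic: the extra field is known to exist */
		const struct x_dev_cfg_v2 *v2 =
			(const struct x_dev_cfg_v2 *)cfg;
		flag = v2->flag;
	}
	/* base version logic uses only cfg->x */
	return cfg->x + flag;
}
```

A caller built against the old headers passes ver = 1; a caller built against the new headers sets ver = 2, and the very same function picks up the extra field.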

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-26 12:19                         ` Ori Kam
@ 2022-01-26 13:41                           ` Bruce Richardson
  2022-01-26 15:12                             ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Bruce Richardson @ 2022-01-26 13:41 UTC (permalink / raw)
  To: Ori Kam
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
	Stephen Hemminger

On Wed, Jan 26, 2022 at 12:19:43PM +0000, Ori Kam wrote:
> 
> 
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Wednesday, January 26, 2022 1:22 PM
> > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> > 
> > 26/01/2022 11:52, Bruce Richardson:
> > > The scenario is as follows. Suppose we have the initial state as below:
> > >
> > > struct x_dev_cfg {
> > >    int x;
> > > };
> > >
> > > int
> > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > {
> > >    struct x_dev *dev = x_devs[id];
> > >    // some setup/config may go here
> > >    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> > > }
> > >
> > > Now, supposing we need to add in a new field into the config structure, a
> > > very common occurance. This will indeed break the ABI, so we need to use
> > > ABI versioning, to ensure that apps passing in the old structure, only call
> > > a function which expects the old structure. Therefore, we need a copy of
> > > the old structure, and a function to work on it. This gives this result:
> > >
> > > struct x_dev_cfg {
> > > 	int x;
> > > 	bool flag; // new field;
> > > };
> > >
> > > struct x_dev_cfg_v22 { // needed for ABI-versioned function
> > > 	int x;
> > > };
> > >
> > > /* this function will only be called by *newly-linked* code, which uses
> > >  * the new structure */
> > > int
> > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > {
> > >    struct x_dev *dev = x_devs[id];
> > >    // some setup/config may go here
> > >    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> > > }
> > >
> > > /* this function is called by apps linked against old version */
> > > int
> > > x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> > > {
> > >    struct x_dev *dev = x_devs[id];
> > >    // some setup/config may go here
> > >    return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> > > }
> > >
> > > With the above library code, we have different functions using the
> > > different structures, so ABI compatibility is preserved - apps passing in a
> > > 4-byte struct call a function using the 4-byte struct, while newer apps can
> > > use the 8-byte version.
> > >
> > > The final part of the puzzle is then how drivers react to this change.
> > > Originally, all drivers only use "x" in the config structure because that
> > > is all that there is. That will still continue to work fine in the above
> > > case, as both 4-byte and 8-byte structs have the same x value at the same
> > > offset. i.e. no driver updates for x_dev is needed.
> > >
> > > On the other hand, if there are drivers that do want/need the new field,
> > > they can also get to use it, but they do need to check for its presence
> > > before they do so, i.e they would work as below:
> > >
> > > 	if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> > > 		// use flags field
> > > 	}
> > >
> > > Hope this is clear now.
> > 
> > Yes, this is the kind of explanation we need in our guideline doc.
> > Alternatives can be documented as well.
> > If we can list pros/cons in the doc, it will be easier to choose
> > the best approach and to explain the choice during code review.
> > 
> > 
> Thank you very much for the clear explanation.
> 
> The drawback is that we also need to duplicate the functions.
> Using the flags/version we only need to create new structures,
> and from the application's point of view it knows what extra fields it gets.
> (I agree that application knowledge has downsides but also advantages)
> 
> In the case of flags/version your example will look like this (this is for the record and maybe other
> developers are interested):
> 
> struct x_dev_cfg {  //original struct
> 	int ver;
> 	int x;
> };
>  
> struct x_dev_cfg_v2 { // new struct
> 	int ver;
>  	int x;
> 	bool flag; // new field;
>  };
> 
> 
> The function is always the same function:
>  x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
>  {
>     struct x_dev *dev = x_devs[id];
>     // some setup/config may go here
>     return dev->configure(cfg); 
>  }
> 
> When calling this function with the old struct:
> x_dev_cfg(id, (struct x_dev_cfg *)cfg)
> 
> When calling this function with the new struct:
> x_dev_cfg(id, (struct x_dev_cfg *)cfg_v2)
> 
> In the PMD:
> if (cfg->ver >= 2)
> 	// version 2 logic
> else if (cfg->ver >= 0)
> 	// base version logic
> 
> 
> When using flags it gives even more control since pmd can tell exactly what
> features are required.
> 
> All options have pros/cons.
> I vote for the version one.
> 
> We can have a poll 😊
> Or like Thomas said, list pros and cons and each subsystem can
> have its own selection.

The biggest issue I have with this version approach is: how is the user
meant to know what version number to put into the structure? When the user
upgrades from one version of DPDK to the next, are they meant to manually update
the version numbers in all their structures? If they don't, they may
be mystified if they use the newer fields and find that they "don't work"
because they forgot to update the version field to the newer
version at the same time. The reason I prefer the size field is that it is
impossible for the end user to mess things up, and the entirety of the
mechanism is internal and hidden from the user.
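For comparison, the size-based scheme I described earlier boils down to something like the following (a simplified sketch with made-up names; the real mechanism also relies on ABI-versioned symbols so the size is filled in by the library, never by the user):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Old (v22) layout and the extended one; the size is the discriminator. */
struct x_dev_cfg_v22 { int x; };
struct x_dev_cfg { int x; bool flag; };

/* Driver-side handler: checks the size it was given, no version field. */
static int
configure(const void *cfg, size_t size)
{
	const struct x_dev_cfg *c = cfg;
	int result = c->x; /* "x" is at offset 0 in both layouts */

	if (size >= sizeof(struct x_dev_cfg))
		result += c->flag; /* new field is known to be present */
	return result;
}

/* ABI-versioned wrappers: each passes the size of the struct it was
 * compiled against, so the application cannot get it wrong. */
static int x_dev_cfg(const struct x_dev_cfg *cfg)
{ return configure(cfg, sizeof(*cfg)); }
static int x_dev_cfg_v22(const struct x_dev_cfg_v22 *cfg)
{ return configure(cfg, sizeof(*cfg)); }
```

Old binaries resolve to x_dev_cfg_v22 and new ones to x_dev_cfg, so the driver always learns the true struct size without any user-visible field.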

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-26 13:41                           ` Bruce Richardson
@ 2022-01-26 15:12                             ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-01-26 15:12 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
	Stephen Hemminger



> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Wednesday, January 26, 2022 3:41 PM
> To: Ori Kam <orika@nvidia.com>
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> 
> On Wed, Jan 26, 2022 at 12:19:43PM +0000, Ori Kam wrote:
> >
> >
> > > -----Original Message-----
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > Sent: Wednesday, January 26, 2022 1:22 PM
> > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> > >
> > > 26/01/2022 11:52, Bruce Richardson:
> > > > The scenario is as follows. Suppose we have the initial state as below:
> > > >
> > > > struct x_dev_cfg {
> > > >    int x;
> > > > };
> > > >
> > > > int
> > > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > > {
> > > >    struct x_dev *dev = x_devs[id];
> > > >    // some setup/config may go here
> > > >    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> > > > }
> > > >
> > > > Now, supposing we need to add in a new field into the config structure, a
> > > > very common occurance. This will indeed break the ABI, so we need to use
> > > > ABI versioning, to ensure that apps passing in the old structure, only call
> > > > a function which expects the old structure. Therefore, we need a copy of
> > > > the old structure, and a function to work on it. This gives this result:
> > > >
> > > > struct x_dev_cfg {
> > > > 	int x;
> > > > 	bool flag; // new field;
> > > > };
> > > >
> > > > struct x_dev_cfg_v22 { // needed for ABI-versioned function
> > > > 	int x;
> > > > };
> > > >
> > > > /* this function will only be called by *newly-linked* code, which uses
> > > >  * the new structure */
> > > > int
> > > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > > {
> > > >    struct x_dev *dev = x_devs[id];
> > > >    // some setup/config may go here
> > > >    return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> > > > }
> > > >
> > > > /* this function is called by apps linked against old version */
> > > > int
> > > > x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> > > > {
> > > >    struct x_dev *dev = x_devs[id];
> > > >    // some setup/config may go here
> > > >    return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> > > > }
> > > >
> > > > With the above library code, we have different functions using the
> > > > different structures, so ABI compatibility is preserved - apps passing in a
> > > > 4-byte struct call a function using the 4-byte struct, while newer apps can
> > > > use the 8-byte version.
> > > >
> > > > The final part of the puzzle is then how drivers react to this change.
> > > > Originally, all drivers only use "x" in the config structure because that
> > > > is all that there is. That will still continue to work fine in the above
> > > > case, as both 4-byte and 8-byte structs have the same x value at the same
> > > > offset. i.e. no driver updates for x_dev is needed.
> > > >
> > > > On the other hand, if there are drivers that do want/need the new field,
> > > > they can also get to use it, but they do need to check for its presence
> > > > before they do so, i.e they would work as below:
> > > >
> > > > 	if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> > > > 		// use flags field
> > > > 	}
> > > >
> > > > Hope this is clear now.
> > >
> > > Yes, this is the kind of explanation we need in our guideline doc.
> > > Alternatives can be documented as well.
> > > If we can list pros/cons in the doc, it will be easier to choose
> > > the best approach and to explain the choice during code review.
> > >
> > >
> > Thank you very much for the clear explanation.
> >
> > The drawback is that we also need to duplicate the functions.
> > Using the flags/version we only need to create new structures,
> > and from the application's point of view it knows what extra fields it gets.
> > (I agree that application knowledge has downsides but also advantages)
> >
> > In the case of flags/version your example will look like this (this is for the record and maybe other
> > developers are interested):
> >
> > struct x_dev_cfg {  //original struct
> > 	int ver;
> > 	int x;
> > };
> >
> > struct x_dev_cfg_v2 { // new struct
> > 	int ver;
> >  	int x;
> > 	bool flag; // new field;
> >  };
> >
> >
> > The function is always the same function:
> >  x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> >  {
> >     struct x_dev *dev = x_devs[id];
> >     // some setup/config may go here
> >     return dev->configure(cfg);
> >  }
> >
> > When calling this function with the old struct:
> > x_dev_cfg(id, (struct x_dev_cfg *)cfg)
> >
> > When calling this function with the new struct:
> > x_dev_cfg(id, (struct x_dev_cfg *)cfg_v2)
> >
> > In the PMD:
> > if (cfg->ver >= 2)
> > 	// version 2 logic
> > else if (cfg->ver >= 0)
> > 	// base version logic
> >
> >
> > When using flags it gives even more control since pmd can tell exactly what
> > features are required.
> >
> > All options have pros/cons.
> > I vote for the version one.
> >
> > We can have a poll 😊
> > Or like Thomas said, list pros and cons and each subsystem can
> > have its own selection.
> 
> The biggest issue I have with this version approach is: how is the user
> meant to know what version number to put into the structure? When the user
> upgrades from one version of DPDK to the next, are they meant to manually update
> the version numbers in all their structures? If they don't, they may
> be mystified if they use the newer fields and find that they "don't work"
> because they forgot to update the version field to the newer
> version at the same time. The reason I prefer the size field is that it is
> impossible for the end user to mess things up, and the entirety of the
> mechanism is internal and hidden from the user.
> 

The solution is simple: when you define a new struct, you document in the struct
what the version number should be.
You can also define that 0 means the latest one, so applications that are writing code which
is size-agnostic will just set 0 all the time.
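A sketch of that convention (the constant name and value are made-up placeholders):

```c
#include <assert.h>

/* Latest layout version known to this particular library build. */
#define X_DEV_CFG_VER_LATEST 2

/* Convention: ver == 0 means "latest", so size-agnostic applications
 * can always set 0 and get the newest layout semantics automatically. */
static int
effective_version(int ver)
{
	return ver == 0 ? X_DEV_CFG_VER_LATEST : ver;
}
```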

 
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-25 18:44           ` Jerin Jacob
@ 2022-01-26 22:02             ` Alexander Kozyrev
  2022-01-27  9:34               ` Jerin Jacob
  0 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-26 22:02 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Ajit Khaparde, dpdk-dev, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob

On Tuesday, January 25, 2022 13:44 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> On Tue, Jan 25, 2022 at 6:58 AM Alexander Kozyrev <akozyrev@nvidia.com>
> wrote:
> >
> > On Monday, January 24, 2022 12:41 Ajit Khaparde
> <ajit.khaparde@broadcom.com> wrote:
> > > On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com>
> > > wrote:
> > > >
> 
> > Ok, I'll adopt this wording in the v3.
> >
> > > > > + *
> > > > > + * @param port_id
> > > > > + *   Port identifier of Ethernet device.
> > > > > + * @param[in] port_attr
> > > > > + *   Port configuration attributes.
> > > > > + * @param[out] error
> > > > > + *   Perform verbose error reporting if not NULL.
> > > > > + *   PMDs initialize this structure in case of error only.
> > > > > + *
> > > > > + * @return
> > > > > + *   0 on success, a negative errno value otherwise and rte_errno is
> set.
> > > > > + */
> > > > > +__rte_experimental
> > > > > +int
> > > > > +rte_flow_configure(uint16_t port_id,
> > > >
> > > > Should we couple, setting resource limit hint to configure function as
> > > > if we add future items in
> > > > configuration, we may pain to manage all state. Instead how about,
> > > > rte_flow_resource_reserve_hint_set()?
> > > +1
> > Port attributes are the hints, PMD can safely ignore anything that is not
> supported/deemed unreasonable.
> > Having several functions to call instead of one configuration function seems
> like a burden to me.
> 
> If we add a lot of features which have different state, it will be
> difficult to manage.
> Since it is the slow path and an OPTIONAL API, IMO it should be fine to
> have a separate API for a specific purpose
> to have a clean interface.

This approach contradicts the DPDK way of configuring devices.
If you look at the rte_eth_dev_configure or rte_eth_rx_queue_setup API
you will see that the configuration is propagated via config structures.
I would like to conform to this approach with my new API as well.

Another question is how to deal with interdependencies with separate hints?
There could be some resources that require other resources to be present.
Or one resource shares the hardware registers with another one and needs to
be accounted for. That is not easy to do with separate function calls.

> >
> > >
> > > >
> > > >
> > > > > +                  const struct rte_flow_port_attr *port_attr,
> > > > > +                  struct rte_flow_error *error);
> > > >
> > > > I think, we should have _get function to get those limit numbers
> otherwise,
> > > > we can not write portable applications as the return value is  kind of
> > > > boolean now if
> > > > don't define exact values for rte_errno for reasons.
> > > +1
> > We had this discussion in RFC. The limits will vary from NIC to NIC and from
> system to
> > system, depending on hardware capabilities and amount of free memory
> for example.
> > It is easier to reject a configuration with a clear error description as we do
> for flow creation.
> 
> In that case, we can return a "defined" return value or "defined"
> errno to capture this case, so that
> the application can make forward progress and differentiate between an API
> failure vs not having enough resources,
> and move on.

I think you are right and it will be useful to provide some hardware capabilities.
I'll add something like rte_flow_info_get() to obtain available flow rule resources.
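The query-then-configure pattern being agreed on here could look roughly like this on the application side (all names, fields, and limits below are made-up stand-ins for illustration, not the final API):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the proposed pre-configuration attributes. */
struct port_attr {
	uint32_t nb_counters;
	uint32_t nb_aging_flows;
	uint32_t nb_meters;
};

/* Pretend the PMD can pre-allocate at most these amounts. */
static const struct port_attr pmd_caps = {
	.nb_counters = 1u << 20,
	.nb_aging_flows = 1u << 16,
	.nb_meters = 1u << 10,
};

static int flow_info_get_sketch(struct port_attr *attr)
{
	*attr = pmd_caps; /* report the maximum pre-configurable resources */
	return 0;
}

static int flow_configure_sketch(const struct port_attr *attr)
{
	/* reject over-subscription instead of silently clamping */
	if (attr->nb_counters > pmd_caps.nb_counters)
		return -1;
	return 0;
}

/* Portable application pattern: query the limits first, then request
 * a configuration that stays within them. */
static int app_init(uint32_t wanted_counters)
{
	struct port_attr caps, req = { 0 };

	if (flow_info_get_sketch(&caps) != 0)
		return -1;
	req.nb_counters = wanted_counters < caps.nb_counters ?
			  wanted_counters : caps.nb_counters;
	return flow_configure_sketch(&req);
}
```

This lets a portable application distinguish "API unsupported" from "not enough resources" without guessing at per-NIC limits.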

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-26 22:02             ` Alexander Kozyrev
@ 2022-01-27  9:34               ` Jerin Jacob
  0 siblings, 0 replies; 220+ messages in thread
From: Jerin Jacob @ 2022-01-27  9:34 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: Ajit Khaparde, dpdk-dev, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob

On Thu, Jan 27, 2022 at 3:32 AM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> On Tuesday, January 25, 2022 13:44 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > On Tue, Jan 25, 2022 at 6:58 AM Alexander Kozyrev <akozyrev@nvidia.com>
> > wrote:
> > >
> > > On Monday, January 24, 2022 12:41 Ajit Khaparde
> > <ajit.khaparde@broadcom.com> wrote:
> > > > On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com>
> > > > wrote:
> > > > >
> >
> > > Ok, I'll adopt this wording in the v3.
> > >
> > > > > > + *
> > > > > > + * @param port_id
> > > > > > + *   Port identifier of Ethernet device.
> > > > > > + * @param[in] port_attr
> > > > > > + *   Port configuration attributes.
> > > > > > + * @param[out] error
> > > > > > + *   Perform verbose error reporting if not NULL.
> > > > > > + *   PMDs initialize this structure in case of error only.
> > > > > > + *
> > > > > > + * @return
> > > > > > + *   0 on success, a negative errno value otherwise and rte_errno is
> > set.
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +int
> > > > > > +rte_flow_configure(uint16_t port_id,
> > > > >
> > > > > Should we couple, setting resource limit hint to configure function as
> > > > > if we add future items in
> > > > > configuration, we may pain to manage all state. Instead how about,
> > > > > rte_flow_resource_reserve_hint_set()?
> > > > +1
> > > Port attributes are the hints, PMD can safely ignore anything that is not
> > supported/deemed unreasonable.
> > > Having several functions to call instead of one configuration function seems
> > like a burden to me.
> >
> > If we add a lot of features which have different state, it will be
> > difficult to manage.
> > Since it is the slow path and an OPTIONAL API, IMO it should be fine to
> > have a separate API for a specific purpose
> > to have a clean interface.
>
> This approach contradicts the DPDK way of configuring devices.
> If you look at the rte_eth_dev_configure or rte_eth_rx_queue_setup API
> you will see that the configuration is propagated via config structures.
> I would like to conform to this approach with my new API as well.

There is a subtle difference: those are mandatory APIs, i.e. the application must
call those APIs to use the subsequent APIs.

I am OK with introducing rte_flow_configure() for such use cases.
Probably, we can add these parameters in rte_flow_configure() for the
new features.
And make it mandatory API for the next ABI to avoid application breakage.

Also, please update the git commit description to mention adding the
configure state
for the rte_flow API.

BTW: Your queue patch [3/3] probably needs to add the nb_queue
parameter to configure,
so the driver knows the number of queues needed upfront, like the ethdev API scheme.


>
> Another question is how to deal with interdependencies with separate hints?
> There could be some resources that require other resources to be present.
> Or one resource shares the hardware registers with another one and needs to
> be accounted for. That is not easy to do with separate function calls.

I got the use case now.

>
> > >
> > > >
> > > > >
> > > > >
> > > > > > +                  const struct rte_flow_port_attr *port_attr,
> > > > > > +                  struct rte_flow_error *error);
> > > > >
> > > > > I think, we should have _get function to get those limit numbers
> > otherwise,
> > > > > we can not write portable applications as the return value is  kind of
> > > > > boolean now if
> > > > > don't define exact values for rte_errno for reasons.
> > > > +1
> > > We had this discussion in RFC. The limits will vary from NIC to NIC and from
> > system to
> > > system, depending on hardware capabilities and amount of free memory
> > for example.
> > > It is easier to reject a configuration with a clear error description as we do
> > for flow creation.
> >
> > In that case, we can return a "defined" return value or "defined"
> > errno to capture this case, so that
> > the application can make forward progress and differentiate between an API
> > failure vs not having enough resources,
> > and move on.
>
> I think you are right and it will be useful to provide some hardware capabilities.
> I'll add something like rte_flow_info_get() to obtain available flow rule resources.

Ack.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 00/10] ethdev: datapath-focused flow rules management
  2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (10 preceding siblings ...)
  2022-01-19  7:16   ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Suanming Mou
@ 2022-02-06  3:25   ` Alexander Kozyrev
  2022-02-06  3:25     ` [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
                       ` (9 more replies)
  11 siblings, 10 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Three major changes to a generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rule management applications:

1. Pre-configuration hints.
The application may give us some hints about what types of resources are needed.
Introduce the configuration routine to prepare all the needed resources
inside a PMD/HW before any flow rules are created at the init stage.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.
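The enqueue/poll model in item 3 can be illustrated with a minimal mock queue (placeholder names and a trivial in-memory ring; the real API enqueues flow rule operations to hardware-backed queues):

```c
#include <assert.h>
#include <stdint.h>

#define QSIZE 8 /* must be a power of two for the wraparound math below */

struct flow_op { uint32_t rule_id; int status; };

struct flow_queue {
	struct flow_op ring[QSIZE];
	uint32_t head, tail; /* monotonically increasing indices */
};

/* Enqueue a flow-create operation without blocking; on a full queue the
 * application simply retries later instead of waiting under a lock. */
static int
flow_q_create_enqueue(struct flow_queue *q, uint32_t rule_id)
{
	if (q->head - q->tail == QSIZE)
		return -1; /* queue full */
	q->ring[q->head++ % QSIZE] = (struct flow_op){ rule_id, 0 };
	return 0;
}

/* Later, the datapath polls for completions; here every enqueued
 * operation is trivially "done", in reality hardware completes them. */
static int
flow_q_pull(struct flow_queue *q, struct flow_op *res, int n)
{
	int got = 0;

	while (got < n && q->tail != q->head)
		res[got++] = q->ring[q->tail++ % QSIZE];
	return got;
}
```

The key property is that neither call blocks: insertion cost moves off the packet-processing path, and results are harvested in batches.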

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>

---
v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread

Alexander Kozyrev (10):
  ethdev: introduce flow pre-configuration hints
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  app/testpmd: implement rte flow configuration
  app/testpmd: implement rte flow template management
  app/testpmd: implement rte flow table management
  app/testpmd: implement rte flow queue flow operations
  app/testpmd: implement rte flow push operations
  app/testpmd: implement rte flow pull operations
  app/testpmd: implement rte flow queue indirect actions

 app/test-pmd/cmdline_flow.c                   | 1495 ++++++++++++++++-
 app/test-pmd/config.c                         |  770 +++++++++
 app/test-pmd/testpmd.h                        |   66 +
 doc/guides/prog_guide/img/rte_flow_q_init.svg |   71 +
 .../prog_guide/img/rte_flow_q_usage.svg       |   60 +
 doc/guides/prog_guide/rte_flow.rst            |  318 ++++
 doc/guides/rel_notes/release_22_03.rst        |   20 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  375 ++++-
 lib/ethdev/rte_flow.c                         |  352 ++++
 lib/ethdev/rte_flow.h                         |  698 ++++++++
 lib/ethdev/rte_flow_driver.h                  |  102 ++
 lib/ethdev/version.map                        |   17 +
 12 files changed, 4324 insertions(+), 20 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:15       ` Ori Kam
  2022-02-07 14:52       ` Jerin Jacob
  2022-02-06  3:25     ` [PATCH v3 02/10] ethdev: add flow item/action templates Alexander Kozyrev
                       ` (8 subsequent siblings)
  9 siblings, 2 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, the PMD may use some hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows pre-allocating all the needed resources beforehand.
These resources can be used at a later stage without costly allocations.
Every PMD may use only a subset of the hints, ignoring unused ones, or
fail in case the requested configuration is not supported.

The rte_flow_info_get() function is available to retrieve information about
the supported pre-configurable resources. Both of these functions must be
called before any other usage of the flow API engine.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 37 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |  4 ++
 lib/ethdev/rte_flow.c                  | 40 +++++++++++++
 lib/ethdev/rte_flow.h                  | 82 ++++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           | 10 ++++
 lib/ethdev/version.map                 |  4 ++
 6 files changed, 177 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..5b4c5dd609 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some hints at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These hints may be used by the PMD to pre-allocate resources and configure the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API management configuration and
+pre-allocates needed resources beforehand to avoid costly allocations later.
+Hints about the expected number of counters or meters in an application,
+for example, allow PMD to prepare and optimize NIC memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                     const struct rte_flow_port_attr *port_attr,
+                     struct rte_flow_error *error);
+
+Information about resources that can benefit from pre-allocation can be
+retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
+of pre-configurable resources for a given port on a system.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_attr *port_attr,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index bf2e3f78a9..8593db3f6a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,10 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+  engine, allowing to pre-allocate some resources for better performance.
+  Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..e7e6478bed 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_attr *port_attr,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..f3c7159484 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,88 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Resource pre-allocation settings.
+ * The zero value means on demand resource allocations only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of counter actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging flows actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Retrieve configuration attributes supported by the port.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_attr
+ *   Port configuration attributes to be filled by the PMD.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_attr *port_attr,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pre-configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings. The port, however, may reject the changes.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..503700aec4 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 1f7359c846..59785c3634 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,10 @@ EXPERIMENTAL {
 	rte_flow_flex_item_create;
 	rte_flow_flex_item_release;
 	rte_flow_pick_transfer_proxy;
+
+	# added in 22.03
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 02/10] ethdev: add flow item/action templates
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
  2022-02-06  3:25     ` [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:16       ` Ori Kam
  2022-02-06  3:25     ` [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                       ` (7 subsequent siblings)
  9 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 124 +++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 141 +++++++++++++
 lib/ethdev/rte_flow.h                  | 274 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 590 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 5b4c5dd609..b7799c5abe 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,130 @@ of pre-configurable resources for a given port on a system.
                      struct rte_flow_port_attr *port_attr,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select the fields to match on; spec/last are ignored.
+The pattern template may be used by multiple tables and must not be
+destroyed until all tables referencing it are destroyed.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+				const struct rte_flow_pattern_template_attr *template_attr,
+				const struct rte_flow_item pattern[],
+				struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	struct rte_flow_item pattern[2] = {{0}};
+	struct rte_flow_item_eth eth_m = {0};
+	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+	memcpy(eth_m.dst.addr_bytes, "\xff\xff\xff\xff\xff\xff", 6);
+	pattern[0].mask = &eth_m;
+	pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &itr, pattern, &error);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be
+destroyed until all tables referencing it are destroyed.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+				const struct rte_flow_actions_template_attr *template_attr,
+				const struct rte_flow_action actions[],
+				const struct rte_flow_action masks[],
+				struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_action actions[] = {
+		/* Mark ID is constant (4) for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action masks[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+
+	struct rte_flow_actions_template *at =
+		rte_flow_actions_template_create(port, &atr, actions, masks, &error);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Flow table
+^^^^^^^^^^
+
+A table combines a number of pattern and actions templates along with shared flow
+rule attributes (group ID, priority and traffic direction). This way a PMD/HW
+can prepare all the resources needed for efficient flow rules creation in
+the datapath. To avoid any hiccups due to memory reallocation, the maximum
+number of flow rules is defined at table creation time. Any flow rule
+creation beyond the maximum table size is rejected. In this case, the
+application may create another table to accommodate more rules.
+
+.. code-block:: c
+
+	struct rte_flow_table *
+	rte_flow_table_create(uint16_t port_id,
+				const struct rte_flow_table_attr *table_attr,
+				struct rte_flow_pattern_template *pattern_templates[],
+				uint8_t nb_pattern_templates,
+				struct rte_flow_actions_template *actions_templates[],
+				uint8_t nb_actions_templates,
+				struct rte_flow_error *error);
+
+A table can be created only after the Flow Rules management is configured
+and pattern and actions templates are created.
+
+.. code-block:: c
+
+	rte_flow_configure(port, &port_attr, &error);
+
+	struct rte_flow_pattern_template *pattern_templates[1];
+	pattern_templates[0] =
+		rte_flow_pattern_template_create(port, &itr, pattern, &error);
+	struct rte_flow_actions_template *actions_templates[1];
+	actions_templates[0] =
+		rte_flow_actions_template_create(port, &atr, actions, masks, &error);
+
+	struct rte_flow_table *table =
+		rte_flow_table_create(port, &table_attr,
+			pattern_templates, 1, actions_templates, 1, &error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 8593db3f6a..d23d1591df 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -59,6 +59,14 @@ New Features
   engine, allowing to pre-allocate some resources for better performance.
   Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
 
+* ethdev: Added ``rte_flow_table_create`` API to group flow rules with
+  the same flow attributes and common matching patterns and actions
+  defined by ``rte_flow_pattern_template_create`` and
+  ``rte_flow_actions_template_create`` respectively.
+  Corresponding functions to destroy these entities are:
+  ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy``
+  and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index e7e6478bed..ab942117d0 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1431,3 +1431,144 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+			const struct rte_flow_pattern_template_attr *template_attr,
+			const struct rte_flow_item pattern[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+						     pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+			struct rte_flow_pattern_template *pattern_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev, pattern_template, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+						       actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev, actions_template, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_table *
+rte_flow_table_create(uint16_t port_id,
+		      const struct rte_flow_table_attr *table_attr,
+		      struct rte_flow_pattern_template *pattern_templates[],
+		      uint8_t nb_pattern_templates,
+		      struct rte_flow_actions_template *actions_templates[],
+		      uint8_t nb_actions_templates,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->table_create)) {
+		table = ops->table_create(dev, table_attr,
+					  pattern_templates, nb_pattern_templates,
+					  actions_templates, nb_actions_templates,
+					  error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_table_destroy(uint16_t port_id,
+		       struct rte_flow_table *table,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->table_destroy)) {
+		return flow_err(port_id,
+				ops->table_destroy(dev, table, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index f3c7159484..a65f5d4e6a 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4935,6 +4935,280 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Relaxed matching policy.
+	 * - PMD may match only on items with mask member set and skip
+	 * matching on protocol layers specified without any masks.
+	 * - If not set, PMD will match on protocol layers
+	 * specified without any masks as well.
+	 * - Packet data must be stacked in the same order as the
+	 * protocol layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, to match on a 5-tuple TCP flow, the template is
+ * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of items in the template must match
+ * those used at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec and last members of an item are ignored; only the mask is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+			const struct rte_flow_pattern_template_attr *template_attr,
+			const struct rte_flow_item pattern[],
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+			struct rte_flow_pattern_template *pattern_template,
+			struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+struct rte_flow_actions_template_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/* No attributes so far. */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of actions in the template must match
+ * those used at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The conf member of an action is used only if the corresponding @p masks member is non-zero.
+ * @param[in] masks
+ *   List of actions marking which members of each action are constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be the part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type must be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of table.
+ * This handle can be used to manage the created table.
+ */
+struct rte_flow_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_table_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_table *
+rte_flow_table_create(uint16_t port_id,
+		      const struct rte_flow_table_attr *table_attr,
+		      struct rte_flow_pattern_template *pattern_templates[],
+		      uint8_t nb_pattern_templates,
+		      struct rte_flow_actions_template *actions_templates[],
+		      uint8_t nb_actions_templates,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_table_destroy(uint16_t port_id,
+		       struct rte_flow_table *table,
+		       struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 503700aec4..04b0960825 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_table_create() */
+	struct rte_flow_table *(*table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_table_destroy() */
+	int (*table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_table *table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 59785c3634..01c004d491 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -260,6 +260,12 @@ EXPERIMENTAL {
 	# added in 22.03
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_table_create;
+	rte_flow_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
  2022-02-06  3:25     ` [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
  2022-02-06  3:25     ` [PATCH v3 02/10] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:18       ` Ori Kam
  2022-02-08 10:56       ` Jerin Jacob
  2022-02-06  3:25     ` [PATCH v3 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
                       ` (6 subsequent siblings)
  9 siblings, 2 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the application's responsibility to synchronize access
if the same queue is used from multiple threads.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_q_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
 .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
 doc/guides/prog_guide/rte_flow.rst            | 159 +++++++-
 doc/guides/rel_notes/release_22_03.rst        |   8 +
 lib/ethdev/rte_flow.c                         | 173 ++++++++-
 lib/ethdev/rte_flow.h                         | 342 ++++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  55 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 873 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..2080bf4c04
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
@@ -0,0 +1,71 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485" height="535"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:xlink="http://www.w3.org/1999/xlink"
+   overflow="hidden">
+   <defs>
+      <clipPath id="clip0">
+         <rect x="0" y="0" width="485" height="535"/>
+      </clipPath>
+   </defs>
+   <g clip-path="url(#clip0)">
+      <rect x="0" y="0" width="485" height="535" fill="#FFFFFF"/>
+      <rect x="0.500053" y="79.5001" width="482" height="59"
+         stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8"
+         fill="#A6A6A6"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(121.6 116)">
+         rte_eth_dev_configure
+         <tspan font-size="24" x="224.007" y="0">()</tspan>
+      </text>
+      <rect x="0.500053" y="158.5" width="482" height="59" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(140.273 195)">
+         rte_flow_configure()
+      </text>
+      <rect x="0.500053" y="236.5" width="482" height="60" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(77.4259 274)">
+         rte_flow_pattern_template_create()
+      </text>
+      <rect x="0.500053" y="316.5" width="482" height="59" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(69.3792 353)">
+         rte_flow_actions_template_create
+         <tspan font-size="24" x="328.447" y="0">(</tspan>)
+      </text>
+      <rect x="0.500053" y="0.500053" width="482" height="60" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#A6A6A6"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(177.233 37)">
+         rte_eal_init
+         <tspan font-size="24" x="112.74" y="0">()</tspan>
+      </text>
+      <path d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z" transform="matrix(-1 0 0 1 241 60)"/>
+      <path d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z" transform="matrix(-1 0 0 1 241 138)"/>
+      <path d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z" transform="matrix(-1 0 0 1 241 217)"/>
+      <rect x="0.500053" y="395.5" width="482" height="59" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(124.989 432)">
+         rte_flow_table_create
+         <tspan font-size="24" x="217.227" y="0">(</tspan>
+         <tspan font-size="24" x="224.56" y="0">)</tspan>
+      </text>
+      <path d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z" transform="matrix(-1 0 0 1 241 296)"/>
+      <path d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"/>
+      <rect x="0.500053" y="473.5" width="482" height="60" stroke="#000000"
+         stroke-width="1.33333" stroke-miterlimit="8" fill="#A6A6A6"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif"
+         font-weight="400" font-size="24" transform="translate(145.303 511)">
+         rte_eth_dev_start()</text>
+      <path d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"/>
+   </g>
+</svg>
\ No newline at end of file
diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..113da764ba
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
@@ -0,0 +1,60 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880" height="610"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:xlink="http://www.w3.org/1999/xlink"
+   overflow="hidden">
+   <defs>
+      <clipPath id="clip0">
+         <rect x="0" y="0" width="880" height="610"/>
+      </clipPath>
+   </defs>
+   <g clip-path="url(#clip0)">
+      <rect x="0" y="0" width="880" height="610" fill="#FFFFFF"/>
+      <rect x="333.5" y="0.500053" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#A6A6A6"/>
+      <text font-family="Consolas,Consolas_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(357.196 29)">rte_eth_rx_burst()</text>
+      <rect x="333.5" y="63.5001" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(394.666 91)">analyze <tspan font-size="19" x="60.9267" y="0">packet </tspan></text>
+      <rect x="572.5" y="279.5" width="234" height="46" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(591.429 308)">rte_flow_q_create_flow()</text>
+      <path d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9" fill-rule="evenodd"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(430.069 378)">more <tspan font-size="19" x="-12.94" y="23">packets?</tspan></text>
+      <path d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"/>
+      <path d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"/>
+      <path d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"/>
+      <path d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"/>
+      <path d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z" transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"/>
+      <path d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9" fill-rule="evenodd"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(417.576 155)">add new <tspan font-size="19" x="13.2867" y="23">rule?</tspan></text>
+      <path d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"/>
+      <rect x="602.5" y="127.5" width="46" height="30" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(611.34 148)">yes</text>
+      <rect x="254.5" y="126.5" width="46" height="31" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(267.182 147)">no</text>
+      <path d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z" transform="matrix(1 0 0 -1 567.5 383.495)"/>
+      <path d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#D9D9D9" fill-rule="evenodd"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(159.155 208)">destroy the <tspan font-size="19" x="24.0333" y="23">rule?</tspan></text>
+      <path d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z" transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"/>
+      <rect x="81.5001" y="280.5" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(96.2282 308)">rte_flow_q_destroy_flow()</text>
+      <path d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z" transform="matrix(-1 0 0 1 319.915 213.5)"/>
+      <path d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"/>
+      <path d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z" transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"/>
+      <rect x="334.5" y="540.5" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(365.083 569)">rte_flow_q_pull<tspan font-size="19" x="160.227" y="0">()</tspan></text>
+      <rect x="334.5" y="462.5" width="234" height="45" stroke="#000000" stroke-width="1.33333" stroke-miterlimit="8" fill="#FFFFFF"/>
+      <text font-family="Calibri,Calibri_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(379.19 491)">rte_flow_q<tspan font-size="19" x="83.56" y="0">_push</tspan>()</text>
+      <path d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"/>
+      <rect x="0.500053" y="287.5" width="46" height="30" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(12.8617 308)">no</text>
+      <rect x="357.5" y="223.5" width="47" height="31" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(367.001 244)">yes</text>
+      <rect x="469.5" y="421.5" width="46" height="30" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(481.872 442)">no</text>
+      <rect x="832.5" y="223.5" width="46" height="31" stroke="#000000" stroke-width="0.666667" stroke-miterlimit="8" fill="#D9D9D9"/>
+      <text font-family="Trebuchet MS,Trebuchet MS_MSFontService,sans-serif" font-weight="400" font-size="19" transform="translate(841.777 244)">yes</text>
+   </g>
+</svg>
\ No newline at end of file
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b7799c5abe..734294e65d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,12 +3607,16 @@ Hints about the expected number of counters or meters in an application,
 for example, allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API, see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                      const struct rte_flow_port_attr *port_attr,
+                     uint16_t nb_queue,
+                     const struct rte_flow_queue_attr *queue_attr[],
                      struct rte_flow_error *error);
 
 Information about resources that can benefit from pre-allocation can be
@@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
 
 .. code-block:: c
 
-	rte_flow_configure(port, *port_attr, *error);
+	rte_flow_configure(port, *port_attr, nb_queue, *queue_attr, *error);
 
 	struct rte_flow_pattern_template *pattern_templates[0] =
 		rte_flow_pattern_template_create(port, &itr, &pattern, &error);
@@ -3750,6 +3754,159 @@ and pattern and actions templates are created.
 				*actions_templates, nb_actions_templates,
 				*error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+- Operations can thus be invoked by the application's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+- The number of queues is configured at the initialization stage.
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+- Operations may be reordered within a queue.
+- Operations can be postponed and pushed to the NIC in batches.
+- Results must be pulled in a timely manner to avoid queue overflows.
+- User data is returned as part of the result to identify an operation.
+- A flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation fails and the rule is not inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:
+
+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_q_flow_create(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow_table *table,
+				const struct rte_flow_item pattern[],
+				uint8_t pattern_template_index,
+				const struct rte_flow_action actions[],
+				uint8_t actions_template_index,
+				struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_flow_destroy(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow *flow,
+				struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_push(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_error *error);
+
+The queue operation attributes include a postpone flag.
+When it is set, multiple operations can be batched together and not sent to HW
+right away, to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling the results of asynchronous operations.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operations statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_pull(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_q_op_res res[],
+			uint16_t n_res,
+			struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during flow creation/destruction in order
+to distinguish between multiple operations; it is returned as part
+of the result to indicate which operation has completed.
+
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_q_action_handle_create(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			const struct rte_flow_indir_action_conf *indir_action_conf,
+			const struct rte_flow_action *action,
+			struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_update(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			const void *update,
+			struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index d23d1591df..80a85124e6 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -67,6 +67,14 @@ New Features
   ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy``
   and ``rte_flow_actions_template_destroy``.
 
+* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API
+  to enqueue flow creation/destruction operations asynchronously, as well as
+  ``rte_flow_q_pull`` to poll and retrieve results of these operations and
+  ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
+  Introduced asynchronous API for indirect actions management as well:
+  ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` and
+  ``rte_flow_q_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index ab942117d0..127dbb13cb 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1415,6 +1415,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1424,7 +1426,7 @@ rte_flow_configure(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
 		return flow_err(port_id,
-				ops->configure(dev, port_attr, error),
+				ops->configure(dev, port_attr, nb_queue, queue_attr, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1572,3 +1574,172 @@ rte_flow_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_table *table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->q_flow_create)) {
+		flow = ops->q_flow_create(dev, queue_id, q_ops_attr, table,
+					  pattern, pattern_template_index,
+					  actions, actions_template_index,
+					  error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_flow_destroy)) {
+		return flow_err(port_id,
+				ops->q_flow_destroy(dev, queue_id,
+						    q_ops_attr, flow, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->q_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
+					     indir_action_conf, action, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
+					   action_handle, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
+					  action_handle, update, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_push)) {
+		return flow_err(port_id,
+				ops->q_push(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_pull)) {
+		ret = ops->q_pull(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index a65f5d4e6a..25a6ad5b64 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4883,6 +4883,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meters;
 };
 
+/**
+ * Flow engine queue configuration.
+ */
+__extension__
+struct rte_flow_queue_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4922,6 +4937,11 @@ rte_flow_info_get(uint16_t port_id,
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements is given by @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4933,6 +4953,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5209,6 +5231,326 @@ rte_flow_table_destroy(uint16_t port_id,
 		       struct rte_flow_table *table,
 		       struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+struct rte_flow_q_ops_attr {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+	 /**
+	  * When set, the requested action will not be sent to the HW immediately.
+	  * The application must call rte_flow_q_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] table
+ *   Table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle doesn't mean that the rule was offloaded.
+ *   Only completion result indicates that the rule was offloaded.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_table *table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * The application should assume that the rule handle is no longer valid
+ * after calling this function.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action* handle is still used by some
+ *     rules; rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *action_handle*. It can be of the same type as the immediate action used
+ *   when the handle was created, or a wrapper structure that includes the
+ *   action configuration to be updated and bit fields indicating which
+ *   fields of the action to update.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by the *action* handle was not found.
+ *   - (-EBUSY) if the action pointed to by the *action* handle is still used
+ *     by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all postponed rules from the given queue to the HW.
+ * Postponed rules are rules that were enqueued with the postpone flag set.
+ * This call can be used to notify the HW about a batch of rules prepared
+ * by the SW, and thus reduce the number of SW/HW communications.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *    0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Version of the struct layout, should be 0.
+	 */
+	uint32_t version;
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull completed rte flow operations from a queue.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 04b0960825..0edd933bf3 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -161,6 +161,8 @@ struct rte_flow_ops {
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +201,59 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_table *table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_create() */
+	struct rte_flow *(*q_flow_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_table *table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_destroy() */
+	int (*q_flow_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_create() */
+	struct rte_flow_action_handle *(*q_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_destroy() */
+	int (*q_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_action_handle_update() */
+	int (*q_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_push() */
+	int (*q_push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_pull() */
+	int (*q_pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 01c004d491..f431ef0a5d 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -266,6 +266,13 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_table_create;
 	rte_flow_table_destroy;
+	rte_flow_q_flow_create;
+	rte_flow_q_flow_destroy;
+	rte_flow_q_action_handle_create;
+	rte_flow_q_action_handle_destroy;
+	rte_flow_q_action_handle_update;
+	rte_flow_q_push;
+	rte_flow_q_pull;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 04/10] app/testpmd: implement rte flow configuration
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
                       ` (2 preceding siblings ...)
  2022-02-06  3:25     ` [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:19       ` Ori Kam
  2022-02-06  3:25     ` [PATCH v3 05/10] app/testpmd: implement rte flow template management Alexander Kozyrev
                       ` (5 subsequent siblings)
  9 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256

Add support for the rte_flow_info_get API to query available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  53 ++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  59 ++++++++-
 4 files changed, 242 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index bbaf18d76e..bbf9f137a0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -846,6 +855,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -927,6 +941,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1962,6 +1986,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2187,7 +2214,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2202,6 +2231,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_COUNTERS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging flows",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_flows)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7465,6 +7553,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8693,6 +8808,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 1722d6c8f8..eb3fa8a8cc 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1595,6 +1595,59 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_attr port_attr = { 0 };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	if (rte_flow_info_get(port_id, &port_attr, &error))
+		return port_flow_complain(&error);
+	printf("Pre-configurable resources on port %u:\n"
+	       "Number of counters: %d\n"
+	       "Number of aging flows: %d\n"
+	       "Number of meters: %d\n",
+	       port_id, port_attr.nb_counters,
+	       port_attr.nb_aging_flows, port_attr.nb_meters);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 94792d88cc..d452fcfce3 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3285,8 +3285,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3309,6 +3309,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3368,6 +3380,49 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves info on pre-configurable resources in the underlying
+device to give a hint of possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Pre-configurable resources on port #[...]:
+   Number of counters: #[...]
+   Number of aging flows: #[...]
+   Number of meters: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the needed resources in the underlying
+device to be used later at flow creation. Flow queues are allocated as well
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 05/10] app/testpmd: implement rte flow template management
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
                       ` (3 preceding siblings ...)
  2022-02-06  3:25     ` [PATCH v3 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:20       ` Ori Kam
  2022-02-06  3:25     ` [PATCH v3 06/10] app/testpmd: implement rte flow table management Alexander Kozyrev
                       ` (4 subsequent siblings)
  9 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 376 +++++++++++++++++++-
 app/test-pmd/config.c                       | 204 +++++++++++
 app/test-pmd/testpmd.h                      |  23 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  97 +++++
 4 files changed, 698 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index bbf9f137a0..3f0e73743a 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,22 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -860,6 +880,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -868,10 +892,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -951,6 +978,43 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1989,6 +2053,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2058,6 +2128,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2208,6 +2282,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2216,6 +2304,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2290,6 +2380,112 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2612,7 +2808,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5716,7 +5912,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7580,6 +7778,114 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8549,6 +8855,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -8817,6 +9171,24 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port, in->args.vc.pat_templ_id,
+				in->args.vc.attr.reserved, in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port, in->args.vc.act_templ_id,
+				in->args.vc.actions, in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index eb3fa8a8cc..adc77169af 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1595,6 +1595,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2063,6 +2106,167 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id, bool relaxed,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_pattern_template_attr attr = {
+					.relaxed_matching = relaxed };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						&attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_actions_template_attr attr = { 0 };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						&attr, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..c70b1fa4e8 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,16 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      bool relaxed,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index d452fcfce3..56e821ec5c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3321,6 +3321,25 @@ following sections.
       [aging_counters_number {number}]
       [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3423,6 +3441,85 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``;
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+from their template ID (as returned by ``flow pattern_template create``).
+This command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+from their template ID (as returned by ``flow actions_template create``).
+This command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2
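A note on the ID bookkeeping introduced above: template descriptors are kept on a per-port list sorted by descending ID, and `template_alloc()` hands out "first available" IDs by looking only at the list head. The following standalone sketch mirrors that scheme; the `tmpl`/`tmpl_alloc` names are illustrative and not part of the patch:

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-in for testpmd's struct port_template. */
struct tmpl {
	struct tmpl *next; /* list kept sorted by descending id */
	uint32_t id;
};

/*
 * Sketch of the template_alloc() logic: passing UINT32_MAX requests
 * the first available ID, which is head->id + 1 (or 0 for an empty
 * list); an explicit ID is inserted in sorted position and rejected
 * if already taken.
 */
static int
tmpl_alloc(uint32_t id, struct tmpl **out, struct tmpl **list)
{
	struct tmpl **ppt = list;
	struct tmpl *pt;

	*out = NULL;
	if (id == UINT32_MAX) {
		if (*list && (*list)->id == UINT32_MAX - 1)
			return -1; /* highest ID already assigned */
		id = *list ? (*list)->id + 1 : 0;
	}
	pt = calloc(1, sizeof(*pt));
	if (!pt)
		return -1;
	while (*ppt && (*ppt)->id > id)
		ppt = &(*ppt)->next;
	if (*ppt && (*ppt)->id == id) {
		free(pt); /* duplicate ID: caller must delete it first */
		return -1;
	}
	pt->next = *ppt;
	pt->id = id;
	*ppt = pt;
	*out = pt;
	return 0;
}
```

One visible consequence: automatic allocation only consults the highest existing ID, so after creating template #5 explicitly, the next automatic ID is 6 even if 2 through 4 are free.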



* [PATCH v3 06/10] app/testpmd: implement rte flow table management
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_table API.
Provide a command line interface for flow table
creation and destruction. Usage example:
  testpmd> flow table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 170 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 555 insertions(+)
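Before the diff itself, a sketch of how the new table ties the two template lists together: `port_flow_table_create()` resolves every pattern/actions template ID given on the command line to the PMD-opaque handle stored in the port's descriptor lists, and fails the whole command if any single ID is unknown. A self-contained illustration of that resolution step (the `tmpl`/`resolve_templates` names are illustrative, not from the patch):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for testpmd's template descriptor list. */
struct tmpl {
	struct tmpl *next;
	uint32_t id;
	void *handle; /* PMD-opaque template object */
};

/*
 * Mirror of the lookup loops in port_flow_table_create(): each
 * requested template ID must be found on the port's list before the
 * table is created; one missing ID aborts the whole operation.
 */
static bool
resolve_templates(const struct tmpl *list, const uint32_t *ids,
		  uint32_t n, void **handles)
{
	for (uint32_t i = 0; i < n; ++i) {
		const struct tmpl *t = list;
		bool found = false;

		while (t) {
			if (t->id == ids[i]) {
				handles[i] = t->handle;
				found = true;
				break;
			}
			t = t->next;
		}
		if (!found)
			return false; /* unknown template ID */
	}
	return true;
}
```

The resolved handle arrays are what ultimately get passed to `rte_flow_table_create()`, so validation happens entirely in testpmd before the PMD is involved.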

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 3f0e73743a..75bd128e68 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -112,6 +114,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -884,6 +900,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1015,6 +1043,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2059,6 +2113,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2132,6 +2191,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2296,6 +2357,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2306,6 +2374,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2486,6 +2555,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7886,6 +8053,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8903,6 +9183,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9189,6 +9493,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index adc77169af..126bead03e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1638,6 +1638,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2267,6 +2310,133 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_table_create(portid_t port_id, uint32_t id,
+		       const struct rte_flow_table_attr *table_attr,
+		       uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		       uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] = temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_table_destroy(portid_t port_id,
+			uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_table_destroy(port_id,
+						   pt->table,
+						   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c70b1fa4e8..4d85dfdaf6 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -915,6 +926,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 56e821ec5c..cfa9aecdba 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3339,6 +3339,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a table::
+
+   flow table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a table::
+
+   flow table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3520,6 +3533,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating flow table
+~~~~~~~~~~~~~~~~~~~
+
+``flow table create`` creates the specified flow table.
+It is bound to ``rte_flow_table_create()``::
+
+   flow table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying flow table
+~~~~~~~~~~~~~~~~~~~~~
+
+``flow table destroy`` destroys one or more flow tables
+from their table ID (as returned by ``flow table create``);
+this command calls ``rte_flow_table_destroy()`` as many
+times as necessary::
+
+   flow table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v3 07/10] app/testpmd: implement rte flow queue flow operations
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
                       ` (5 preceding siblings ...)
  2022-02-06  3:25     ` [PATCH v3 06/10] app/testpmd: implement rte flow table management Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:21       ` Ori Kam
  2022-02-06  3:25     ` [PATCH v3 08/10] app/testpmd: implement rte flow push operations Alexander Kozyrev
                       ` (2 subsequent siblings)
  9 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no table 6
           pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 266 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  55 ++++
 4 files changed, 493 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 75bd128e68..d4c7f9542f 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -114,6 +116,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -890,6 +908,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool postpone; /**< Postpone async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -920,6 +940,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1069,6 +1090,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2118,6 +2151,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2193,6 +2232,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2364,6 +2405,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2386,7 +2434,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2653,6 +2702,83 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TABLE), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TABLE] = {
+		.name = "table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8166,6 +8292,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9207,6 +9438,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9504,6 +9757,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 126bead03e..1013c4b252 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2437,6 +2437,172 @@ port_flow_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue create flow rule operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr,
+		pt->table, pattern, pattern_idx, actions, actions_idx, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue number of destroy flow rules operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr,
+						    pf->flow, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMDs
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_q_pull(port_id, queue_id,
+							 &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 4d85dfdaf6..f574fd77ba 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -932,6 +932,13 @@ int port_flow_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index cfa9aecdba..de46bd00d5 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3359,6 +3359,19 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id} [postpone {boolean}]
+       table {table_id} pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3679,6 +3692,29 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_q_flow_create()``::
+
+   flow queue {port_id} create {queue_id} [postpone {boolean}]
+       table {table_id} pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4393,6 +4429,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` enqueues destruction operations for one or more rules
+identified by their rule IDs (as returned by ``flow queue create``);
+this command calls ``rte_flow_q_flow_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v3 08/10] app/testpmd: implement rte flow push operations
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
                       ` (6 preceding siblings ...)
  2022-02-06  3:25     ` [PATCH v3 07/10] app/testpmd: implement rte flow queue flow operations Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:22       ` Ori Kam
  2022-02-06  3:25     ` [PATCH v3 09/10] app/testpmd: implement rte flow pull operations Alexander Kozyrev
  2022-02-06  3:25     ` [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions Alexander Kozyrev
  9 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d4c7f9542f..773bf57a14 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -132,6 +133,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2157,6 +2161,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2435,7 +2442,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2779,6 +2787,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8397,6 +8420,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9768,6 +9819,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 1013c4b252..2e6343972b 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2603,6 +2603,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all the queue operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_q_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f574fd77ba..28c6680987 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index de46bd00d5..dd49e4d1bc 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3372,6 +3372,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3586,6 +3590,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_q_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 09/10] app/testpmd: implement rte flow pull operations
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
                       ` (7 preceding siblings ...)
  2022-02-06  3:25     ` [PATCH v3 08/10] app/testpmd: implement rte flow push operations Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:23       ` Ori Kam
  2022-02-06  3:25     ` [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions Alexander Kozyrev
  9 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_pull API.
Provide the command line interface for pulling operation results.
Usage example: flow pull 0 queue 0
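
The drain pattern behind this command can be sketched as below. The DPDK
types are modeled with minimal mock stand-ins (`op_res`, `mock_pull`) so the
loop is self-contained; the real `rte_flow_q_pull()` additionally takes a
port id, a queue id and an `rte_flow_error` pointer:

```c
#include <stdio.h>

/*
 * Illustrative sketch, not DPDK API: op_res and mock_pull stand in for
 * rte_flow_q_op_res and rte_flow_q_pull() respectively.
 */
enum op_status { OP_SUCCESS, OP_ERROR };
struct op_res { enum op_status status; };

/* Mock pull: report up to n_res completed operations. */
static int
mock_pull(struct op_res *res, int n_res)
{
	static const enum op_status done[] = { OP_SUCCESS, OP_SUCCESS, OP_ERROR };
	int n = (int)(sizeof(done) / sizeof(done[0]));
	int i;

	if (n > n_res)
		n = n_res;
	for (i = 0; i < n; i++)
		res[i].status = done[i];
	return n; /* number of results, negative on failure */
}

/* Pull once and count successes, mirroring what testpmd prints. */
int
drain_queue(void)
{
	struct op_res res[8];
	int ret = mock_pull(res, 8);
	int success = 0;
	int i;

	if (ret < 0)
		return ret;
	for (i = 0; i < ret; i++)
		if (res[i].status == OP_SUCCESS)
			success++;
	printf("pulled %d operations (%d failed, %d succeeded)\n",
	       ret, ret - success, success);
	return success;
}
```

Applications would run such a loop periodically from the datapath thread to
reclaim completed flow operations.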

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 773bf57a14..35eb2a0997 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -136,6 +137,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2164,6 +2168,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2443,7 +2450,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2802,6 +2810,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8448,6 +8471,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9822,6 +9873,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2e6343972b..6cc2c8527e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2446,14 +2446,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2516,16 +2514,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2540,7 +2528,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2576,21 +2563,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_q_pull(port_id, queue_id,
-							 &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2631,6 +2603,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_q_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 28c6680987..8526db6766 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index dd49e4d1bc..419e5805e8 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3376,6 +3376,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3607,6 +3611,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device for flow queue operation results
+and returns all the processed (successful or not) operations.
+It is bound to ``rte_flow_q_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3736,6 +3757,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4469,6 +4492,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions
  2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
                       ` (8 preceding siblings ...)
  2022-02-06  3:25     ` [PATCH v3 09/10] app/testpmd: implement rte flow pull operations Alexander Kozyrev
@ 2022-02-06  3:25     ` Alexander Kozyrev
  2022-02-07 13:23       ` Ori Kam
  9 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-06  3:25 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde

Add testpmd support for the rte_flow_q_action_handle API.
Provide the command line interface for enqueueing indirect action operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9
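
Like the other queue commands, the operations above follow an
enqueue/push/pull lifecycle with optional postponing. The sketch below models
that contract with a plain counter; `mock_queue` and the helper names are
illustrative stand-ins, not the DPDK `rte_flow_q_*` API:

```c
/*
 * Illustrative model of the postpone semantics: postponed operations
 * accumulate until an explicit push, and only pushed operations yield
 * results on pull.
 */
struct mock_queue {
	int enqueued; /* operations accepted on the queue */
	int pushed;   /* operations handed to the "NIC" */
};

/* Enqueue one operation; non-postponed ops may be flushed right away. */
static void
enqueue_op(struct mock_queue *q, int postpone)
{
	q->enqueued++;
	if (!postpone)
		q->pushed = q->enqueued;
}

/* Like "flow push": hand every outstanding operation to the NIC. */
static void
push_ops(struct mock_queue *q)
{
	q->pushed = q->enqueued;
}

/* Like "flow pull": return results only for operations already pushed. */
static int
pull_results(struct mock_queue *q)
{
	int done = q->pushed;

	q->enqueued -= done;
	q->pushed = 0;
	return done;
}
```

With `postpone yes`, nothing reaches the device until a push, which is why
the documentation insists that a pull is needed to learn the final status.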

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 35eb2a0997..1eea36d8d0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -121,6 +121,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -134,6 +135,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1101,6 +1122,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1110,6 +1132,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2165,6 +2217,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2741,6 +2799,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TABLE] = {
 		.name = "table",
@@ -2794,6 +2859,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6193,6 +6342,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -9876,6 +10129,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 6cc2c8527e..fbcd42355e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2575,6 +2575,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
+						      conf, action, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_q_action_handle_destroy(port_id, queue_id,
+						&attr, pia->handle, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_q_action_handle_update(port_id, queue_id, &attr,
+					    action_handle, action, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 8526db6766..3da5201014 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 419e5805e8..0d04435eb7 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4753,6 +4753,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue.
+It is bound to ``rte_flow_q_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+       [postpone {boolean}] action_id {indirect_action_id}
+       [ingress] [egress] [transfer] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4782,6 +4807,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+indirect action to a queue.
+It is bound to ``rte_flow_q_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4805,6 +4849,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds a destruction operation for
+one or more indirect actions, given their indirect action IDs (as returned
+by ``flow queue {port_id} indirect_action {queue_id} create``), to a queue.
+It is bound to ``rte_flow_q_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-06  3:25     ` [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-02-07 13:15       ` Ori Kam
  2022-02-07 14:52       ` Jerin Jacob
  1 sibling, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:15 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints
> 
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
> 
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> function allows the application to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones or
> fail in case the requested configuration is not supported.
> 
> The rte_flow_info_get() is available to retrieve the information about
> supported pre-configurable resources. Both these functions must be called
> before any other usage of the flow API engine.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 02/10] ethdev: add flow item/action templates
  2022-02-06  3:25     ` [PATCH v3 02/10] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-07 13:16       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:16 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 02/10] ethdev: add flow item/action templates
> 
> Treating every single flow rule as a completely independent and separate
> entity negatively impacts the flow rules insertion rate. Oftentimes in an
> application, many flow rules share a common structure (the same item mask
> and/or action list) so they can be grouped and classified together.
> This knowledge may be used as a source of optimization by a PMD/HW.
> 
> The pattern template defines common matching fields (the item mask) without
> values. The actions template holds a list of action types that will be used
> together in the same rule. The specific values for items and actions will
> be given only during the rule creation.
> 
> A table combines pattern and actions templates along with shared flow rule
> attributes (group ID, priority and traffic direction). This way a PMD/HW
> can prepare all the resources needed for efficient flow rules creation in
> the datapath. To avoid any hiccups due to memory reallocation, the maximum
> number of flow rules is defined at the table creation time.
> 
> The flow rule creation is done by selecting a table, a pattern template
> and an actions template (which are bound to the table), and setting unique
> values for the items and actions.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-06  3:25     ` [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-07 13:18       ` Ori Kam
  2022-02-08 10:56       ` Jerin Jacob
  1 sibling, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:18 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> To: dev@dpdk.org
> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru; andrew.rybchenko@oktetlabs.ru;
> ferruh.yigit@intel.com; mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com;
> jerinj@marvell.com; ajit.khaparde@broadcom.com
> Subject: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
> 
> A new, faster, queue-based flow rules management mechanism is needed for
> applications offloading rules inside the datapath. This asynchronous
> and lockless mechanism frees the CPU for further packet processing and
> reduces the performance impact of the flow rules creation/destruction
> on the datapath. Note that queues are not thread-safe and the queue
> should be accessed from the same thread for all queue operations.
> It is the responsibility of the app to sync the queue functions in case
> of multi-threaded access to the same queue.
> 
> The rte_flow_q_flow_create() function enqueues a flow creation to the
> requested queue. It benefits from already configured resources and sets
> unique values on top of item and action templates. A flow rule is enqueued
> on the specified flow queue and offloaded asynchronously to the hardware.
> The function returns immediately to spare CPU for further packet
> processing. The application must invoke the rte_flow_q_pull() function
> to complete the flow rule operation offloading, to clear the queue, and to
> receive the operation status. The rte_flow_q_flow_destroy() function
> enqueues a flow destruction to the requested queue.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 04/10] app/testpmd: implement rte flow configuration
  2022-02-06  3:25     ` [PATCH v3 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
@ 2022-02-07 13:19       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:19 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 04/10] app/testpmd: implement rte flow configuration
> 
> Add testpmd support for the rte_flow_configure API.
> Provide the command line interface for the Flow management.
> Usage example: flow configure 0 queues_number 8 queues_size 256
> 
> Implement rte_flow_info_get API to get available resources:
> Usage example: flow info 0
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 05/10] app/testpmd: implement rte flow template management
  2022-02-06  3:25     ` [PATCH v3 05/10] app/testpmd: implement rte flow template management Alexander Kozyrev
@ 2022-02-07 13:20       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:20 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> To: dev@dpdk.org
> Subject: [PATCH v3 05/10] app/testpmd: implement rte flow template management
> 
> Add testpmd support for the rte_flow_pattern_template and
> rte_flow_actions_template APIs. Provide the command line interface
> for the template creation/destruction. Usage example:
>   testpmd> flow pattern_template 0 create pattern_template_id 2
>            template eth dst is 00:16:3e:31:15:c3 / end
>   testpmd> flow actions_template 0 create actions_template_id 4
>            template drop / end mask drop / end
>   testpmd> flow actions_template 0 destroy actions_template 4
>   testpmd> flow pattern_template 0 destroy pattern_template 2
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 07/10] app/testpmd: implement rte flow queue flow operations
  2022-02-06  3:25     ` [PATCH v3 07/10] app/testpmd: implement rte flow queue flow operations Alexander Kozyrev
@ 2022-02-07 13:21       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:21 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 07/10] app/testpmd: implement rte flow queue flow operations
> 
> Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
> Provide the command line interface for enqueueing flow
> creation/destruction operations. Usage example:
>   testpmd> flow queue 0 create 0 postpone no table 6
>            pattern_template 0 actions_template 0
>            pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
>   testpmd> flow queue 0 destroy 0 postpone yes rule 0
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 06/10] app/testpmd: implement rte flow table management
  2022-02-06  3:25     ` [PATCH v3 06/10] app/testpmd: implement rte flow table management Alexander Kozyrev
@ 2022-02-07 13:22       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:22 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 06/10] app/testpmd: implement rte flow table management
> 
> Add testpmd support for the rte_flow_table API.
> Provide the command line interface for the flow
> table creation/destruction. Usage example:
>   testpmd> flow table 0 create table_id 6
>     group 9 priority 4 ingress mode 1
>     rules_number 64 pattern_template 2 actions_template 4
>   testpmd> flow table 0 destroy table 6
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> --- 

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 08/10] app/testpmd: implement rte flow push operations
  2022-02-06  3:25     ` [PATCH v3 08/10] app/testpmd: implement rte flow push operations Alexander Kozyrev
@ 2022-02-07 13:22       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:22 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 08/10] app/testpmd: implement rte flow push operations
> 
> Add testpmd support for the rte_flow_q_push API.
> Provide the command line interface for pushing operations.
> Usage example: flow queue 0 push 0
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 09/10] app/testpmd: implement rte flow pull operations
  2022-02-06  3:25     ` [PATCH v3 09/10] app/testpmd: implement rte flow pull operations Alexander Kozyrev
@ 2022-02-07 13:23       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:23 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 09/10] app/testpmd: implement rte flow pull operations
> 
> Add testpmd support for the rte_flow_q_pull API.
> Provide the command line interface for pulling operations results.
> Usage example: flow pull 0 queue 0
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions
  2022-02-06  3:25     ` [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions Alexander Kozyrev
@ 2022-02-07 13:23       ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-07 13:23 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde

Hi Alex,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, February 6, 2022 5:25 AM
> Subject: [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions
> 
> Add testpmd support for the rte_flow_q_action_handle API.
> Provide the command line interface for enqueueing indirect action operations.
> Usage example:
>   flow queue 0 indirect_action 0 create action_id 9
>     ingress postpone yes action rss / end
>   flow queue 0 indirect_action 0 update action_id 9
>     action queue index 0 / end
>   flow queue 0 indirect_action 0 destroy action_id 9
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-06  3:25     ` [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
  2022-02-07 13:15       ` Ori Kam
@ 2022-02-07 14:52       ` Jerin Jacob
  2022-02-07 17:59         ` Alexander Kozyrev
  1 sibling, 1 reply; 220+ messages in thread
From: Jerin Jacob @ 2022-02-07 14:52 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Ori Kam, Thomas Monjalon, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde

On Sun, Feb 6, 2022 at 8:56 AM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
>
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> > function allows the application to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones or
> fail in case the requested configuration is not supported.
>
> The rte_flow_info_get() is available to retrieve the information about
> supported pre-configurable resources. Both these functions must be called
> before any other usage of the flow API engine.
>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---
>  doc/guides/prog_guide/rte_flow.rst     | 37 ++++++++++++
>  doc/guides/rel_notes/release_22_03.rst |  4 ++
>  lib/ethdev/rte_flow.c                  | 40 +++++++++++++
>  lib/ethdev/rte_flow.h                  | 82 ++++++++++++++++++++++++++
>  lib/ethdev/rte_flow_driver.h           | 10 ++++
>  lib/ethdev/version.map                 |  4 ++
>  6 files changed, 177 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index b4aa9c47c2..5b4c5dd609 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3589,6 +3589,43 @@ Return values:
>
>  - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
>
> +Flow engine configuration
> +-------------------------
> +
> +Configure flow API management.
> +
> +An application may provide some hints at the initialization phase about
> +rules engine configuration and/or expected flow rules characteristics.

IMO, we can explicitly remove _hint_ from the documentation.
When we add new parameters to configure, they may not be hints.

> +These hints may be used by PMD to pre-allocate resources and configure NIC.

hints->parameters

> +
> +Configuration
> +~~~~~~~~~~~~~
> +
> +This function performs the flow API management configuration and
> +pre-allocates needed resources beforehand to avoid costly allocations later.
> +Hints about the expected number of counters or meters in an application,
> +for example, allow PMD to prepare and optimize NIC memory layout in advance.
> +``rte_flow_configure()`` must be called before any flow rule is created,
> +but after an Ethernet device is configured.
> +
> +.. code-block:: c
> +
> +   int
> +   rte_flow_configure(uint16_t port_id,
> +                     const struct rte_flow_port_attr *port_attr,
> +                     struct rte_flow_error *error);
> +
> +Information about resources that can benefit from pre-allocation can be
> +retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
> +of pre-configurable resources for a given port on a system.
> +
> +.. code-block:: c
> +
> +   int
> +   rte_flow_info_get(uint16_t port_id,
> +                     struct rte_flow_port_attr *port_attr,
> +                     struct rte_flow_error *error);
> +
>  .. _flow_isolated_mode:
>
>  Flow isolated mode
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index bf2e3f78a9..8593db3f6a 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -55,6 +55,10 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
>
> +* ethdev: Added ``rte_flow_configure`` API to configure Flow Management
> +  engine, allowing to pre-allocate some resources for better performance.
> +  Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
> +
>  * **Updated AF_XDP PMD**
>
>    * Added support for libxdp >=v1.2.2.
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index a93f68abbc..e7e6478bed 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
>         ret = ops->flex_item_release(dev, handle, error);
>         return flow_err(port_id, ret, error);
>  }
> +
> +int
> +rte_flow_info_get(uint16_t port_id,
> +                 struct rte_flow_port_attr *port_attr,
> +                 struct rte_flow_error *error)
> +{
> +       struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +       const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +       if (unlikely(!ops))
> +               return -rte_errno;
> +       if (likely(!!ops->info_get)) {
> +               return flow_err(port_id,
> +                               ops->info_get(dev, port_attr, error),
> +                               error);
> +       }
> +       return rte_flow_error_set(error, ENOTSUP,
> +                                 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +                                 NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_configure(uint16_t port_id,
> +                  const struct rte_flow_port_attr *port_attr,
> +                  struct rte_flow_error *error)
> +{
> +       struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +       const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +       if (unlikely(!ops))
> +               return -rte_errno;
> +       if (likely(!!ops->configure)) {
> +               return flow_err(port_id,
> +                               ops->configure(dev, port_attr, error),
> +                               error);
> +       }
> +       return rte_flow_error_set(error, ENOTSUP,
> +                                 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +                                 NULL, rte_strerror(ENOTSUP));
> +}
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 1031fb246b..f3c7159484 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4853,6 +4853,88 @@ rte_flow_flex_item_release(uint16_t port_id,
>                            const struct rte_flow_item_flex_handle *handle,
>                            struct rte_flow_error *error);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Resource pre-allocation settings.
> + * The zero value means on demand resource allocations only.
> + *
> + */
> +struct rte_flow_port_attr {
> +       /**
> +        * Version of the struct layout, should be 0.
> +        */
> +       uint32_t version;

IMO, it is not concluded to use the version in the public API.


> +       /**
> +        * Number of counter actions pre-configured.
> +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> +        */
> +       uint32_t nb_counters;
> +       /**
> +        * Number of aging flows actions pre-configured.
> +        * @see RTE_FLOW_ACTION_TYPE_AGE
> +        */
> +       uint32_t nb_aging_flows;
> +       /**
> +        * Number of traffic metering actions pre-configured.
> +        * @see RTE_FLOW_ACTION_TYPE_METER
> +        */
> +       uint32_t nb_meters;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Retrieve configuration attributes supported by the port.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] port_attr

By default all parameters are "in", no need for additional "in" for just input.

> + *   Port configuration attributes.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_info_get(uint16_t port_id,
> +                 struct rte_flow_port_attr *port_attr,
> +                 struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Pre-configure the port's flow API engine.

IMO, s/ Pre-configure/Configure/

> + *
> + * This API can only be invoked before the application
> + * starts using the rest of the flow library functions.
> + *
> + * The API can be invoked multiple times to change the
> + * settings. The port, however, may reject the changes.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] port_attr
> + *   Port configuration attributes.

IMO, we need to have a comment that the values should be <= the values
returned by rte_flow_info_get().

Also, 0 represents the default value.

> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.



> + */
> +__rte_experimental
> +int
> +rte_flow_configure(uint16_t port_id,
> +                  const struct rte_flow_port_attr *port_attr,
> +                  struct rte_flow_error *error);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> index f691b04af4..503700aec4 100644
> --- a/lib/ethdev/rte_flow_driver.h
> +++ b/lib/ethdev/rte_flow_driver.h
> @@ -152,6 +152,16 @@ struct rte_flow_ops {
>                 (struct rte_eth_dev *dev,
>                  const struct rte_flow_item_flex_handle *handle,
>                  struct rte_flow_error *error);
> +       /** See rte_flow_info_get() */
> +       int (*info_get)
> +               (struct rte_eth_dev *dev,
> +                struct rte_flow_port_attr *port_attr,
> +                struct rte_flow_error *err);
> +       /** See rte_flow_configure() */
> +       int (*configure)
> +               (struct rte_eth_dev *dev,
> +                const struct rte_flow_port_attr *port_attr,
> +                struct rte_flow_error *err);
>  };
>
>  /**
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 1f7359c846..59785c3634 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -256,6 +256,10 @@ EXPERIMENTAL {
>         rte_flow_flex_item_create;
>         rte_flow_flex_item_release;
>         rte_flow_pick_transfer_proxy;
> +
> +       # added in 22.03
> +       rte_flow_info_get;
> +       rte_flow_configure;
>  };
>
>  INTERNAL {
> --
> 2.18.2
>

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-07 14:52       ` Jerin Jacob
@ 2022-02-07 17:59         ` Alexander Kozyrev
  2022-02-07 18:24           ` Jerin Jacob
  0 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-07 17:59 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob, Ajit Khaparde

On Monday, February 7, 2022 9:52 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> On Sun, Feb 6, 2022 at 8:56 AM Alexander Kozyrev <akozyrev@nvidia.com>
> wrote:
> >
> > The flow rules creation/destruction at a large scale incurs a performance
> > penalty and may negatively impact the packet processing when used
> > as part of the datapath logic. This is mainly because software/hardware
> > resources are allocated and prepared during the flow rule creation.
> >
> > In order to optimize the insertion rate, PMD may use some hints provided
> > by the application at the initialization phase. The rte_flow_configure()
> > function allows the application to pre-allocate all the needed resources beforehand.
> > These resources can be used at a later stage without costly allocations.
> > Every PMD may use only the subset of hints and ignore unused ones or
> > fail in case the requested configuration is not supported.
> >
> > The rte_flow_info_get() is available to retrieve the information about
> > supported pre-configurable resources. Both these functions must be called
> > before any other usage of the flow API engine.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > ---
> >  doc/guides/prog_guide/rte_flow.rst     | 37 ++++++++++++
> >  doc/guides/rel_notes/release_22_03.rst |  4 ++
> >  lib/ethdev/rte_flow.c                  | 40 +++++++++++++
> >  lib/ethdev/rte_flow.h                  | 82 ++++++++++++++++++++++++++
> >  lib/ethdev/rte_flow_driver.h           | 10 ++++
> >  lib/ethdev/version.map                 |  4 ++
> >  6 files changed, 177 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index b4aa9c47c2..5b4c5dd609 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -3589,6 +3589,43 @@ Return values:
> >
> >  - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
> >
> > +Flow engine configuration
> > +-------------------------
> > +
> > +Configure flow API management.
> > +
> > +An application may provide some hints at the initialization phase about
> > +rules engine configuration and/or expected flow rules characteristics.
> 
> IMO, we can explicitly remove _hint_ from the documentation.
> When we add new parameters to configure, they may not be hints.
> 
> > +These hints may be used by PMD to pre-allocate resources and configure
> NIC.
> 
> hints->parameters

Sounds good, let's call them parameters and the PMD will decide whether to use them or not anyway.

> > +
> > +Configuration
> > +~~~~~~~~~~~~~
> > +
> > +This function performs the flow API management configuration and
> > +pre-allocates needed resources beforehand to avoid costly allocations
> later.
> > +Hints about the expected number of counters or meters in an application,
> > +for example, allow PMD to prepare and optimize NIC memory layout in
> advance.
> > +``rte_flow_configure()`` must be called before any flow rule is created,
> > +but after an Ethernet device is configured.
> > +
> > +.. code-block:: c
> > +
> > +   int
> > +   rte_flow_configure(uint16_t port_id,
> > +                     const struct rte_flow_port_attr *port_attr,
> > +                     struct rte_flow_error *error);
> > +
> > +Information about resources that can benefit from pre-allocation can be
> > +retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
> > +of pre-configurable resources for a given port on a system.
> > +
> > +.. code-block:: c
> > +
> > +   int
> > +   rte_flow_info_get(uint16_t port_id,
> > +                     struct rte_flow_port_attr *port_attr,
> > +                     struct rte_flow_error *error);
> > +
> >  .. _flow_isolated_mode:
> >
> >  Flow isolated mode
> > diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> > index bf2e3f78a9..8593db3f6a 100644
> > --- a/doc/guides/rel_notes/release_22_03.rst
> > +++ b/doc/guides/rel_notes/release_22_03.rst
> > @@ -55,6 +55,10 @@ New Features
> >       Also, make sure to start the actual text at the margin.
> >       =======================================================
> >
> > +* ethdev: Added ``rte_flow_configure`` API to configure Flow
> Management
> > +  engine, allowing to pre-allocate some resources for better performance.
> > +  Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
> > +
> >  * **Updated AF_XDP PMD**
> >
> >    * Added support for libxdp >=v1.2.2.
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index a93f68abbc..e7e6478bed 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
> >         ret = ops->flex_item_release(dev, handle, error);
> >         return flow_err(port_id, ret, error);
> >  }
> > +
> > +int
> > +rte_flow_info_get(uint16_t port_id,
> > +                 struct rte_flow_port_attr *port_attr,
> > +                 struct rte_flow_error *error)
> > +{
> > +       struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +       const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +       if (unlikely(!ops))
> > +               return -rte_errno;
> > +       if (likely(!!ops->info_get)) {
> > +               return flow_err(port_id,
> > +                               ops->info_get(dev, port_attr, error),
> > +                               error);
> > +       }
> > +       return rte_flow_error_set(error, ENOTSUP,
> > +                                 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +                                 NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +int
> > +rte_flow_configure(uint16_t port_id,
> > +                  const struct rte_flow_port_attr *port_attr,
> > +                  struct rte_flow_error *error)
> > +{
> > +       struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +       const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +       if (unlikely(!ops))
> > +               return -rte_errno;
> > +       if (likely(!!ops->configure)) {
> > +               return flow_err(port_id,
> > +                               ops->configure(dev, port_attr, error),
> > +                               error);
> > +       }
> > +       return rte_flow_error_set(error, ENOTSUP,
> > +                                 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +                                 NULL, rte_strerror(ENOTSUP));
> > +}
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index 1031fb246b..f3c7159484 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4853,6 +4853,88 @@ rte_flow_flex_item_release(uint16_t port_id,
> >                            const struct rte_flow_item_flex_handle *handle,
> >                            struct rte_flow_error *error);
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Resource pre-allocation settings.
> > + * The zero value means on demand resource allocations only.
> > + *
> > + */
> > +struct rte_flow_port_attr {
> > +       /**
> > +        * Version of the struct layout, should be 0.
> > +        */
> > +       uint32_t version;
> 
> IMO, it is not concluded to use the version in the public API.

Yes, there is an ongoing discussion on how to properly address versioning right now.
But I would like to proceed with this API without having to wait for the final decision on that.
My API is experimental and it is possible to switch to any versioning model afterwards.

> 
> > +       /**
> > +        * Number of counter actions pre-configured.
> > +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> > +        */
> > +       uint32_t nb_counters;
> > +       /**
> > +        * Number of aging flows actions pre-configured.
> > +        * @see RTE_FLOW_ACTION_TYPE_AGE
> > +        */
> > +       uint32_t nb_aging_flows;
> > +       /**
> > +        * Number of traffic metering actions pre-configured.
> > +        * @see RTE_FLOW_ACTION_TYPE_METER
> > +        */
> > +       uint32_t nb_meters;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Retrieve configuration attributes supported by the port.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] port_attr
> 
> By default all parameters are "in", no need for additional "in" for just input.

"in" or "out" is a part of description of every pointer parameter throughout rte_flow.h

> > + *   Port configuration attributes.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_info_get(uint16_t port_id,
> > +                 struct rte_flow_port_attr *port_attr,
> > +                 struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Pre-configure the port's flow API engine.
> 
> IMO, s/ Pre-configure/Configure/

No problem.
 
> > + *
> > + * This API can only be invoked before the application
> > + * starts using the rest of the flow library functions.
> > + *
> > + * The API can be invoked multiple times to change the
> > + * settings. The port, however, may reject the changes.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] port_attr
> > + *   Port configuration attributes.
> 
> IMO, we need to have comments, the values should be <= the values
> got it from rte_flow_info_get().
> 
> Also 0 representations the default value.

Will add these comments, thank you.

> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> 
> 
> 
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_configure(uint16_t port_id,
> > +                  const struct rte_flow_port_attr *port_attr,
> > +                  struct rte_flow_error *error);
> > +
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> > index f691b04af4..503700aec4 100644
> > --- a/lib/ethdev/rte_flow_driver.h
> > +++ b/lib/ethdev/rte_flow_driver.h
> > @@ -152,6 +152,16 @@ struct rte_flow_ops {
> >                 (struct rte_eth_dev *dev,
> >                  const struct rte_flow_item_flex_handle *handle,
> >                  struct rte_flow_error *error);
> > +       /** See rte_flow_info_get() */
> > +       int (*info_get)
> > +               (struct rte_eth_dev *dev,
> > +                struct rte_flow_port_attr *port_attr,
> > +                struct rte_flow_error *err);
> > +       /** See rte_flow_configure() */
> > +       int (*configure)
> > +               (struct rte_eth_dev *dev,
> > +                const struct rte_flow_port_attr *port_attr,
> > +                struct rte_flow_error *err);
> >  };
> >
> >  /**
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index 1f7359c846..59785c3634 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -256,6 +256,10 @@ EXPERIMENTAL {
> >         rte_flow_flex_item_create;
> >         rte_flow_flex_item_release;
> >         rte_flow_pick_transfer_proxy;
> > +
> > +       # added in 22.03
> > +       rte_flow_info_get;
> > +       rte_flow_configure;
> >  };
> >
> >  INTERNAL {
> > --
> > 2.18.2
> >

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-07 17:59         ` Alexander Kozyrev
@ 2022-02-07 18:24           ` Jerin Jacob
  0 siblings, 0 replies; 220+ messages in thread
From: Jerin Jacob @ 2022-02-07 18:24 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob, Ajit Khaparde, Richardson, Bruce

On Mon, Feb 7, 2022 at 11:30 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> On Monday, February 7, 2022 9:52 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > On Sun, Feb 6, 2022 at 8:56 AM Alexander Kozyrev <akozyrev@nvidia.com>
> > wrote:
> > >
> > > The flow rules creation/destruction at a large scale incurs a performance
> > > penalty and may negatively impact the packet processing when used
> > > as part of the datapath logic. This is mainly because software/hardware
> > > resources are allocated and prepared during the flow rule creation.
> > >
> > > In order to optimize the insertion rate, PMD may use some hints provided
> > > by the application at the initialization phase. The rte_flow_configure()
> > > function allows to pre-allocate all the needed resources beforehand.
> > > These resources can be used at a later stage without costly allocations.
> > > Every PMD may use only the subset of hints and ignore unused ones or
> > > fail in case the requested configuration is not supported.
> > >
> > > The rte_flow_info_get() is available to retrieve the information about
> > > supported pre-configurable resources. Both these functions must be called
> > > before any other usage of the flow API engine.
> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > ---
> > >  doc/guides/prog_guide/rte_flow.rst     | 37 ++++++++++++
> > >  doc/guides/rel_notes/release_22_03.rst |  4 ++
> > >  lib/ethdev/rte_flow.c                  | 40 +++++++++++++
> > >  lib/ethdev/rte_flow.h                  | 82 ++++++++++++++++++++++++++
> > >  lib/ethdev/rte_flow_driver.h           | 10 ++++
> > >  lib/ethdev/version.map                 |  4 ++
> > >  6 files changed, 177 insertions(+)
> > >
> > > diff --git a/doc/guides/prog_guide/rte_flow.rst
> > b/doc/guides/prog_guide/rte_flow.rst
> > > index b4aa9c47c2..5b4c5dd609 100644
> > > --- a/doc/guides/prog_guide/rte_flow.rst
> > > +++ b/doc/guides/prog_guide/rte_flow.rst
> > > @@ -3589,6 +3589,43 @@ Return values:
> > >
> > >  - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
> > >
> > > +Flow engine configuration
> > > +-------------------------
> > > +
> > > +Configure flow API management.
> > > +
> > > +An application may provide some hints at the initialization phase about
> > > +rules engine configuration and/or expected flow rules characteristics.
> >
> > IMO, We can explicitly remove _hint_ in the documentation.
> > When we add new parameters to configure, it may not be a hint.
> >
> > > +These hints may be used by PMD to pre-allocate resources and configure
> > NIC.
> >
> > hints->parameters
>
> Sounds good, let's call them parameters and PMD will decide to use them or not anyway.
> > > + */
> > > +struct rte_flow_port_attr {
> > > +       /**
> > > +        * Version of the struct layout, should be 0.
> > > +        */
> > > +       uint32_t version;
> >
> > IMO, it is not concluded to use the version in the public API.
>
> Yes, there is an ongoing discussion on how to properly address versioning right now.
> But I would like to proceed with this API without having to wait for the final decision on that.
> My API is experimental and it is possible to switch to any versioning model afterwards.

+ @Richardson, Bruce
On the same note, since it is experimental, we can add the version
when needed.

I think the primary pushback is that the application should not be aware of
the version; instead, the implementation should probe the version and adjust
accordingly.

IMO, in order to make forward progress, I suggest having the next version
of the patch without the version field, and we can introduce this topic to
the TB for a final decision one way or another.



>
> >
> > > +       /**
> > > +        * Number of counter actions pre-configured.
> > > +        * @see RTE_FLOW_ACTION_TYPE_COUNT
> > > +        */
> > > +       uint32_t nb_counters;
> > > +       /**
> > > +        * Number of aging flows actions pre-configured.
> > > +        * @see RTE_FLOW_ACTION_TYPE_AGE
> > > +        */
> > > +       uint32_t nb_aging_flows;
> > > +       /**
> > > +        * Number of traffic metering actions pre-configured.
> > > +        * @see RTE_FLOW_ACTION_TYPE_METER
> > > +        */
> > > +       uint32_t nb_meters;
> > > +};
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Retrieve configuration attributes supported by the port.
> > > + *
> > > + * @param port_id
> > > + *   Port identifier of Ethernet device.
> > > + * @param[in] port_attr
> >
> > By default all parameters are "in", no need for additional "in" for just input.
>
> "in" or "out" is a part of description of every pointer parameter throughout rte_flow.h

Ack.


* Re: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-06  3:25     ` [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
  2022-02-07 13:18       ` Ori Kam
@ 2022-02-08 10:56       ` Jerin Jacob
  2022-02-08 14:11         ` Alexander Kozyrev
  1 sibling, 1 reply; 220+ messages in thread
From: Jerin Jacob @ 2022-02-08 10:56 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Ori Kam, Thomas Monjalon, Ivan Malov, Andrew Rybchenko,
	Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
	Ajit Khaparde

On Sun, Feb 6, 2022 at 8:57 AM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> A new, faster, queue-based flow rules management mechanism is needed for
> applications offloading rules inside the datapath. This asynchronous
> and lockless mechanism frees the CPU for further packet processing and
> reduces the performance impact of the flow rules creation/destruction
> on the datapath. Note that queues are not thread-safe and the queue
> should be accessed from the same thread for all queue operations.
> It is the responsibility of the app to sync the queue functions in case
> of multi-threaded access to the same queue.
>
> The rte_flow_q_flow_create() function enqueues a flow creation to the
> requested queue. It benefits from already configured resources and sets
> unique values on top of item and action templates. A flow rule is enqueued
> on the specified flow queue and offloaded asynchronously to the hardware.
> The function returns immediately to spare CPU for further packet
> processing. The application must invoke the rte_flow_q_pull() function
> to complete the flow rule operation offloading, to clear the queue, and to
> receive the operation status. The rte_flow_q_flow_destroy() function
> enqueues a flow destruction to the requested queue.

It would be good to see the implementation, specifically to understand:
1)
I understand we are creating queues so that multiple producers can
enqueue multiple jobs in parallel.
On the consumer side, is it the HW or some other cores that consume the jobs?
Can consumers operate in parallel?

2) Is the queue part of the HW, or just a SW primitive to submit the work through, like a channel?


>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---
>  doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
>  .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
>  doc/guides/prog_guide/rte_flow.rst            | 159 +++++++-
>  doc/guides/rel_notes/release_22_03.rst        |   8 +
>  lib/ethdev/rte_flow.c                         | 173 ++++++++-
>  lib/ethdev/rte_flow.h                         | 342 ++++++++++++++++++
>  lib/ethdev/rte_flow_driver.h                  |  55 +++
>  lib/ethdev/version.map                        |   7 +
>  8 files changed, 873 insertions(+), 2 deletions(-)
>  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
>  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
>
> diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
> new file mode 100644
> index 0000000000..2080bf4c04



Some comments on the diagrams:
# rte_flow_q_create_flow and rte_flow_q_destroy_flow are used instead of
rte_flow_q_flow_create/destroy
# rte_flow_q_pull's brackets (i.e. "()") are not aligned


> +</svg>
> \ No newline at end of file
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index b7799c5abe..734294e65d 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3607,12 +3607,16 @@ Hints about the expected number of counters or meters in an application,
>  for example, allow PMD to prepare and optimize NIC memory layout in advance.
>  ``rte_flow_configure()`` must be called before any flow rule is created,
>  but after an Ethernet device is configured.
> +It also creates flow queues for asynchronous flow rules operations via
> +queue-based API, see `Asynchronous operations`_ section.
>
>  .. code-block:: c
>
>     int
>     rte_flow_configure(uint16_t port_id,
>                       const struct rte_flow_port_attr *port_attr,
> +                     uint16_t nb_queue,

# rte_flow_info_get() doesn't report the number of queues; why not add the
number of queues to rte_flow_port_attr?
# And how about additional APIs for queue_setup(), like in ethdev?


> +                     const struct rte_flow_queue_attr *queue_attr[],
>                       struct rte_flow_error *error);
>
>  Information about resources that can benefit from pre-allocation can be
> @@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
>
>  .. code-block:: c
>
> -       rte_flow_configure(port, *port_attr, *error);
> +       rte_flow_configure(port, *port_attr, nb_queue, *queue_attr, *error);
>
>         struct rte_flow_pattern_template *pattern_templates[0] =
>                 rte_flow_pattern_template_create(port, &itr, &pattern, &error);
> @@ -3750,6 +3754,159 @@ and pattern and actions templates are created.
>                                 *actions_templates, nb_actions_templates,
>                                 *error);
>
> +Asynchronous operations
> +-----------------------
> +
> +Flow rules management can be done via special lockless flow management queues.
> +- Queue operations are asynchronous and not thread-safe.
> +- Operations can thus be invoked by the app's datapath,
> +packet processing can continue while queue operations are processed by NIC.
> +- The queue number is configured at initialization stage.
> +- Available operation types: rule creation, rule destruction,
> +indirect rule creation, indirect rule destruction, indirect rule update.
> +- Operations may be reordered within a queue.
> +- Operations can be postponed and pushed to NIC in batches.
> +- Results pulling must be done on time to avoid queue overflows.
> +- User data is returned as part of the result to identify an operation.
> +- Flow handle is valid once the creation operation is enqueued and must be
> +destroyed even if the operation is not successful and the rule is not inserted.

You need a CR between the lines, as otherwise the rendered text does not
come out with a new line between the items.


> +
> +The asynchronous flow rule insertion logic can be broken into two phases.
> +
> +1. Initialization stage as shown here:
> +
> +.. _figure_rte_flow_q_init:
> +
> +.. figure:: img/rte_flow_q_init.*
> +
> +2. Main loop as presented on a datapath application example:
> +
> +.. _figure_rte_flow_q_usage:
> +
> +.. figure:: img/rte_flow_q_usage.*

It is better to also add the sequence of operations as text, to help understand the flow.


> +
> +Enqueue creation operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Enqueueing a flow rule creation operation is similar to simple creation.

If it is an enqueue operation, why not call it rte_flow_q_flow_enqueue()?

> +
> +.. code-block:: c
> +
> +       struct rte_flow *
> +       rte_flow_q_flow_create(uint16_t port_id,
> +                               uint32_t queue_id,
> +                               const struct rte_flow_q_ops_attr *q_ops_attr,
> +                               struct rte_flow_table *table,
> +                               const struct rte_flow_item pattern[],
> +                               uint8_t pattern_template_index,
> +                               const struct rte_flow_action actions[],

If I understand correctly, the table is the pre-configured object that has
N patterns and N actions.
Why give items[] and actions[] again?

> +                               uint8_t actions_template_index,
> +                               struct rte_flow_error *error);
> +
> +A valid handle in case of success is returned. It must be destroyed later
> +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
> +
> +Enqueue destruction operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Queue destruction operation.


> +
> +Enqueueing a flow rule destruction operation is similar to simple destruction.
> +
> +.. code-block:: c
> +
> +       int
> +       rte_flow_q_flow_destroy(uint16_t port_id,
> +                               uint32_t queue_id,
> +                               const struct rte_flow_q_ops_attr *q_ops_attr,
> +                               struct rte_flow *flow,
> +                               struct rte_flow_error *error);
> +
> +Push enqueued operations
> +~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Pushing all internally stored rules from a queue to the NIC.
> +
> +.. code-block:: c
> +
> +       int
> +       rte_flow_q_push(uint16_t port_id,
> +                       uint32_t queue_id,
> +                       struct rte_flow_error *error);
> +
> +There is the postpone attribute in the queue operation attributes.
> +When it is set, multiple operations can be bulked together and not sent to HW
> +right away to save SW/HW interactions and prioritize throughput over latency.
> +The application must invoke this function to actually push all outstanding
> +operations to HW in this case.
> +
> +Pull enqueued operations
> +~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Pulling asynchronous operations results.
> +
> +The application must invoke this function in order to complete asynchronous
> +flow rule operations and to receive flow rule operations statuses.
> +
> +.. code-block:: c
> +
> +       int
> +       rte_flow_q_pull(uint16_t port_id,
> +                       uint32_t queue_id,
> +                       struct rte_flow_q_op_res res[],
> +                       uint16_t n_res,
> +                       struct rte_flow_error *error);
> +
> +Multiple outstanding operation results can be pulled simultaneously.
> +User data may be provided during a flow creation/destruction in order
> +to distinguish between multiple operations. User data is returned as part
> +of the result to provide a method to detect which operation is completed.
> +
> +Enqueue indirect action creation operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Asynchronous version of indirect action creation API.
> +
> +.. code-block:: c
> +
> +       struct rte_flow_action_handle *
> +       rte_flow_q_action_handle_create(uint16_t port_id,

What is the use case for this?
How does the application need to use this? We are already creating a flow
table. Is that not sufficient?


> +                       uint32_t queue_id,
> +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> +                       const struct rte_flow_indir_action_conf *indir_action_conf,
> +                       const struct rte_flow_action *action,
> +                       struct rte_flow_error *error);
> +
> +A valid handle in case of success is returned. It must be destroyed later by
> +calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected.
> +
> +Enqueue indirect action destruction operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Asynchronous version of indirect action destruction API.
> +
> +.. code-block:: c
> +
> +       int
> +       rte_flow_q_action_handle_destroy(uint16_t port_id,
> +                       uint32_t queue_id,
> +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> +                       struct rte_flow_action_handle *action_handle,
> +                       struct rte_flow_error *error);
> +
> +Enqueue indirect action update operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Asynchronous version of indirect action update API.
> +
> +.. code-block:: c
> +
> +       int
> +       rte_flow_q_action_handle_update(uint16_t port_id,
> +                       uint32_t queue_id,
> +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> +                       struct rte_flow_action_handle *action_handle,
> +                       const void *update,
> +                       struct rte_flow_error *error);
> +
>  .. _flow_isolated_mode:
>
>  Flow isolated mode
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index d23d1591df..80a85124e6 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -67,6 +67,14 @@ New Features
>    ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy``
>    and ``rte_flow_actions_template_destroy``.
>
> +* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API
> > +  to enqueue flow creation/destruction operations asynchronously as well as
> +  ``rte_flow_q_pull`` to poll and retrieve results of these operations and
> +  ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
> +  Introduced asynchronous API for indirect actions management as well:
> +  ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` and
> +  ``rte_flow_q_action_handle_update``.
> +


* RE: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-08 10:56       ` Jerin Jacob
@ 2022-02-08 14:11         ` Alexander Kozyrev
  2022-02-08 15:23           ` Ivan Malov
                             ` (2 more replies)
  0 siblings, 3 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-08 14:11 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob, Ajit Khaparde

> On Tuesday, February 8, 2022 5:57 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> On Sun, Feb 6, 2022 at 8:57 AM Alexander Kozyrev <akozyrev@nvidia.com>
> wrote:

Hi Jerin, thank you for reviewing my patch. I appreciate your input.
I'm planning to send v4 with the comments addressed today, to be on time for RC1.
I hope that my answers are satisfactory for the rest of the questions you raised.

> >
> > A new, faster, queue-based flow rules management mechanism is needed
> for
> > applications offloading rules inside the datapath. This asynchronous
> > and lockless mechanism frees the CPU for further packet processing and
> > reduces the performance impact of the flow rules creation/destruction
> > on the datapath. Note that queues are not thread-safe and the queue
> > should be accessed from the same thread for all queue operations.
> > It is the responsibility of the app to sync the queue functions in case
> > of multi-threaded access to the same queue.
> >
> > The rte_flow_q_flow_create() function enqueues a flow creation to the
> > requested queue. It benefits from already configured resources and sets
> > unique values on top of item and action templates. A flow rule is enqueued
> > on the specified flow queue and offloaded asynchronously to the
> hardware.
> > The function returns immediately to spare CPU for further packet
> > processing. The application must invoke the rte_flow_q_pull() function
> > to complete the flow rule operation offloading, to clear the queue, and to
> > receive the operation status. The rte_flow_q_flow_destroy() function
> > enqueues a flow destruction to the requested queue.
> 
> It is good to see the implementation, specifically to understand,

We will send the PMD implementation in the next few days.

> 1)
> I understand, We are creating queues to make multiple producers to
> enqueue multiple jobs in parallel.
> On the consumer side, Is it HW or some other cores to consume the job?

From the API point of view there is no restriction on the type of consumer.
It could be a hardware or a software implementation, but in most cases
(and in our driver) it will be the NIC that handles the requests.

> Can we operate in consumer in parallel?

Yes, we can have multiple separate hardware queues to handle operations
in parallel, independently and without any locking mechanism needed.

> 2) Is Queue part of HW or just SW primitive to submit the work as a channel.

The queue is a software primitive.

> 
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > ---
> >  doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
> >  .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
> >  doc/guides/prog_guide/rte_flow.rst            | 159 +++++++-
> >  doc/guides/rel_notes/release_22_03.rst        |   8 +
> >  lib/ethdev/rte_flow.c                         | 173 ++++++++-
> >  lib/ethdev/rte_flow.h                         | 342 ++++++++++++++++++
> >  lib/ethdev/rte_flow_driver.h                  |  55 +++
> >  lib/ethdev/version.map                        |   7 +
> >  8 files changed, 873 insertions(+), 2 deletions(-)
> >  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
> >  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
> >
> > diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg
> b/doc/guides/prog_guide/img/rte_flow_q_init.svg
> > new file mode 100644
> > index 0000000000..2080bf4c04
> 
> 
> 
> Some comments on the diagrams:
> # rte_flow_q_create_flow and rte_flow_q_destroy_flow used instead of
> rte_flow_q_flow_create/destroy
> # rte_flow_q_pull's brackets(i.e ()) not aligned

Will fix this, thanks for noticing.
 
> 
> > +</svg>
> > \ No newline at end of file
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index b7799c5abe..734294e65d 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -3607,12 +3607,16 @@ Hints about the expected number of counters
> or meters in an application,
> >  for example, allow PMD to prepare and optimize NIC memory layout in
> advance.
> >  ``rte_flow_configure()`` must be called before any flow rule is created,
> >  but after an Ethernet device is configured.
> > +It also creates flow queues for asynchronous flow rules operations via
> > +queue-based API, see `Asynchronous operations`_ section.
> >
> >  .. code-block:: c
> >
> >     int
> >     rte_flow_configure(uint16_t port_id,
> >                       const struct rte_flow_port_attr *port_attr,
> > +                     uint16_t nb_queue,
> 
> # rte_flow_info_get() don't have number of queues, why not adding
> number queues in rte_flow_port_attr.

Good suggestion, I'll add it to the capabilities structure.

> # And additional APIs for queue_setup() like ethdev.

ethdev has the start function, which tells the PMD when all configuration is done.
In our case there is no such function and the device is ready to create flows as soon
as rte_flow_configure() returns. In addition, since the number of queues may affect
the resource allocation, it is best to process all the requested resources at the same time.

> 
> > +                     const struct rte_flow_queue_attr *queue_attr[],
> >                       struct rte_flow_error *error);
> >
> >  Information about resources that can benefit from pre-allocation can be
> > @@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
> >
> >  .. code-block:: c
> >
> > -       rte_flow_configure(port, *port_attr, *error);
> > +       rte_flow_configure(port, *port_attr, nb_queue, *queue_attr,
> *error);
> >
> >         struct rte_flow_pattern_template *pattern_templates[0] =
> >                 rte_flow_pattern_template_create(port, &itr, &pattern, &error);
> > @@ -3750,6 +3754,159 @@ and pattern and actions templates are created.
> >                                 *actions_templates, nb_actions_templates,
> >                                 *error);
> >
> > +Asynchronous operations
> > +-----------------------
> > +
> > +Flow rules management can be done via special lockless flow
> management queues.
> > +- Queue operations are asynchronous and not thread-safe.
> > +- Operations can thus be invoked by the app's datapath,
> > +packet processing can continue while queue operations are processed by
> NIC.
> > +- The queue number is configured at initialization stage.
> > +- Available operation types: rule creation, rule destruction,
> > +indirect rule creation, indirect rule destruction, indirect rule update.
> > +- Operations may be reordered within a queue.
> > +- Operations can be postponed and pushed to NIC in batches.
> > +- Results pulling must be done on time to avoid queue overflows.
> > +- User data is returned as part of the result to identify an operation.
> > +- Flow handle is valid once the creation operation is enqueued and must
> be
> > +destroyed even if the operation is not successful and the rule is not
> inserted.
> 
> You need CR between lines as rendered text does comes as new line in
> between the items.

OK.

> 
> > +
> > +The asynchronous flow rule insertion logic can be broken into two phases.
> > +
> > +1. Initialization stage as shown here:
> > +
> > +.. _figure_rte_flow_q_init:
> > +
> > +.. figure:: img/rte_flow_q_init.*
> > +
> > +2. Main loop as presented on a datapath application example:
> > +
> > +.. _figure_rte_flow_q_usage:
> > +
> > +.. figure:: img/rte_flow_q_usage.*
> 
> it is better to add sequence operations as text to understand the flow.

I prefer keeping the diagram here, it looks cleaner and more concise.
A block of text gives no new information and is harder to follow, imho.

> 
> > +
> > +Enqueue creation operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Enqueueing a flow rule creation operation is similar to simple creation.
> 
> If it is an enqueue operation, why not call it rte_flow_q_flow_enqueue()?
> 
> > +
> > +.. code-block:: c
> > +
> > +       struct rte_flow *
> > +       rte_flow_q_flow_create(uint16_t port_id,
> > +                               uint32_t queue_id,
> > +                               const struct rte_flow_q_ops_attr *q_ops_attr,
> > +                               struct rte_flow_table *table,
> > +                               const struct rte_flow_item pattern[],
> > +                               uint8_t pattern_template_index,
> > +                               const struct rte_flow_action actions[],
> 
> If I understand correctly, table is the pre-configured object that has
> N number of patterns and N number of actions.
> Why giving items[] and actions[] again?

The table only contains templates for patterns and actions.
We still need to provide concrete values for those templates when we create a flow.
Thus we specify the patterns and actions here.
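
That split between templates and per-rule values can be sketched with hypothetical structures (illustrative only, not the actual rte_flow types): the template fixes which fields a rule matches on, and each rule creation supplies the concrete values.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: the template says *what* is matched,
 * the rule creation says *which values* to match. */
struct pattern_template { int match_src_ip; int match_dst_port; };

struct flow_rule { uint32_t src_ip; uint16_t dst_port; };

/* "Creating" a rule = instantiating a template with concrete values.
 * Fields the template does not match on are left zeroed. */
static struct flow_rule
rule_create(const struct pattern_template *tmpl,
	    uint32_t src_ip, uint16_t dst_port)
{
	struct flow_rule r = {0};
	if (tmpl->match_src_ip)
		r.src_ip = src_ip;
	if (tmpl->match_dst_port)
		r.dst_port = dst_port;
	return r;
}
```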

> > +                               uint8_t actions_template_index,
> > +                               struct rte_flow_error *error);
> > +
> > +A valid handle in case of success is returned. It must be destroyed later
> > +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected
> > +by HW.
> > +
> > +Enqueue destruction operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> Queue destruction operation.

We are not destroying a queue; we are enqueuing a flow destruction operation.

> 
> > +
> > +Enqueueing a flow rule destruction operation is similar to simple
> > +destruction.
> > +
> > +.. code-block:: c
> > +
> > +       int
> > +       rte_flow_q_flow_destroy(uint16_t port_id,
> > +                               uint32_t queue_id,
> > +                               const struct rte_flow_q_ops_attr *q_ops_attr,
> > +                               struct rte_flow *flow,
> > +                               struct rte_flow_error *error);
> > +
> > +Push enqueued operations
> > +~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Pushing all internally stored rules from a queue to the NIC.
> > +
> > +.. code-block:: c
> > +
> > +       int
> > +       rte_flow_q_push(uint16_t port_id,
> > +                       uint32_t queue_id,
> > +                       struct rte_flow_error *error);
> > +
> > +There is a postpone attribute among the queue operation attributes.
> > +When it is set, multiple operations can be batched together and not sent
> > +to HW right away, saving SW/HW interactions and prioritizing throughput
> > +over latency.
> > +In this case the application must invoke this function to actually push
> > +all outstanding operations to HW.
> > +
> > +Pull enqueued operations
> > +~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Pulling asynchronous operations results.
> > +
> > +The application must invoke this function in order to complete
> > +asynchronous flow rule operations and to receive flow rule operation
> > +statuses.
> > +
> > +.. code-block:: c
> > +
> > +       int
> > +       rte_flow_q_pull(uint16_t port_id,
> > +                       uint32_t queue_id,
> > +                       struct rte_flow_q_op_res res[],
> > +                       uint16_t n_res,
> > +                       struct rte_flow_error *error);
> > +
> > +Multiple outstanding operation results can be pulled simultaneously.
> > +User data may be provided during a flow creation/destruction in order
> > +to distinguish between multiple operations. User data is returned as part
> > +of the result to provide a method to detect which operation is completed.
> > +
> > +Enqueue indirect action creation operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Asynchronous version of indirect action creation API.
> > +
> > +.. code-block:: c
> > +
> > +       struct rte_flow_action_handle *
> > +       rte_flow_q_action_handle_create(uint16_t port_id,
> 
> What is the use case for this?

Indirect action creation may take time, as it may depend on hardware resource
allocation, so we add an asynchronous way of creating it as well.

> How application needs to use this. We already creating flow_table. Is
> that not sufficient?

The indirect action object is used in flow rules via its handle.
This is an extension to the already existing API in order to speed up
the creation of these objects.
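
The handle indirection can be sketched with a toy model (hypothetical names, not the rte_flow API): several rules share one action handle, so a single asynchronous update is enough to change the behaviour of all of them.

```c
#include <assert.h>

/* Toy model of an indirect action: rules hold a pointer to a shared
 * handle rather than their own copy of the action configuration. */
struct action_handle { int counter_enabled; };

struct rule { const struct action_handle *action; };

/* One update on the handle is observed by every rule referencing it. */
static void handle_update(struct action_handle *h, int counter_enabled)
{
	h->counter_enabled = counter_enabled;
}
```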

> 
> > +                       uint32_t queue_id,
> > +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> > +                       const struct rte_flow_indir_action_conf *indir_action_conf,
> > +                       const struct rte_flow_action *action,
> > +                       struct rte_flow_error *error);
> > +
> > +A valid handle in case of success is returned. It must be destroyed later by
> > +calling ``rte_flow_q_action_handle_destroy()`` even if the rule is
> > +rejected.
> > +
> > +Enqueue indirect action destruction operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Asynchronous version of indirect action destruction API.
> > +
> > +.. code-block:: c
> > +
> > +       int
> > +       rte_flow_q_action_handle_destroy(uint16_t port_id,
> > +                       uint32_t queue_id,
> > +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> > +                       struct rte_flow_action_handle *action_handle,
> > +                       struct rte_flow_error *error);
> > +
> > +Enqueue indirect action update operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Asynchronous version of indirect action update API.
> > +
> > +.. code-block:: c
> > +
> > +       int
> > +       rte_flow_q_action_handle_update(uint16_t port_id,
> > +                       uint32_t queue_id,
> > +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> > +                       struct rte_flow_action_handle *action_handle,
> > +                       const void *update,
> > +                       struct rte_flow_error *error);
> > +
> >  .. _flow_isolated_mode:
> >
> >  Flow isolated mode
> > diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> > index d23d1591df..80a85124e6 100644
> > --- a/doc/guides/rel_notes/release_22_03.rst
> > +++ b/doc/guides/rel_notes/release_22_03.rst
> > @@ -67,6 +67,14 @@ New Features
> >    ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy``
> >    and ``rte_flow_actions_template_destroy``.
> >
> > +* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API
> > +  to enqueue flow creation/destruction operations asynchronously as well as
> > +  ``rte_flow_q_pull`` to poll and retrieve results of these operations and
> > +  ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
> > +  Introduced asynchronous API for indirect actions management as well:
> > +  ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` and
> > +  ``rte_flow_q_action_handle_update``.
> > +

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-08 14:11         ` Alexander Kozyrev
@ 2022-02-08 15:23           ` Ivan Malov
  2022-02-09  5:40             ` Alexander Kozyrev
  2022-02-08 17:36           ` Jerin Jacob
  2022-02-09  5:50           ` Jerin Jacob
  2 siblings, 1 reply; 220+ messages in thread
From: Ivan Malov @ 2022-02-08 15:23 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: Jerin Jacob, dpdk-dev, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde

Hi,

PSB

On Tue, 8 Feb 2022, Alexander Kozyrev wrote:

>> On Tuesday, February 8, 2022 5:57 Jerin Jacob <jerinjacobk@gmail.com> wrote:
>> On Sun, Feb 6, 2022 at 8:57 AM Alexander Kozyrev <akozyrev@nvidia.com>
>> wrote:
>
> Hi Jerin, thank you for reviewing my patch. I appreciate your input.
> I'm planning to send v4 with the addressed comments today to be on time for RC1.
> I hope that my answers are satisfactory for the rest of the questions you raised.
>
>>>
>>> A new, faster, queue-based flow rules management mechanism is needed
>> for
>>> applications offloading rules inside the datapath. This asynchronous
>>> and lockless mechanism frees the CPU for further packet processing and
>>> reduces the performance impact of the flow rules creation/destruction
>>> on the datapath. Note that queues are not thread-safe and the queue
>>> should be accessed from the same thread for all queue operations.
>>> It is the responsibility of the app to sync the queue functions in case
>>> of multi-threaded access to the same queue.
>>>
>>> The rte_flow_q_flow_create() function enqueues a flow creation to the
>>> requested queue. It benefits from already configured resources and sets
>>> unique values on top of item and action templates. A flow rule is enqueued
>>> on the specified flow queue and offloaded asynchronously to the
>> hardware.
>>> The function returns immediately to spare CPU for further packet
>>> processing. The application must invoke the rte_flow_q_pull() function
>>> to complete the flow rule operation offloading, to clear the queue, and to
>>> receive the operation status. The rte_flow_q_flow_destroy() function
>>> enqueues a flow destruction to the requested queue.
>>
>> It is good to see the implementation, specifically to understand,
>
> We will send PMD implementation in the next few days.
>
>> 1)
>> I understand, We are creating queues to make multiple producers to
>> enqueue multiple jobs in parallel.
>> On the consumer side, Is it HW or some other cores to consume the job?
>
> From the API point of view there is no restriction on the type of consumer.
> It could be hardware or software implementation, but in most cases
> (and in our driver) it will be the NIC to handle the requests.
>
>> Can we operate in consumer in parallel?
>
> Yes, we can have multiple separate hardware queues to handle operations
> in parallel independently and without any locking mechanism needed.
>
>> 2) Is Queue part of HW or just SW primitive to submit the work as a channel.
>
> The queue is a software primitive.
>
>>
>>>
>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>> ---
>>>  doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
>>>  .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
>>>  doc/guides/prog_guide/rte_flow.rst            | 159 +++++++-
>>>  doc/guides/rel_notes/release_22_03.rst        |   8 +
>>>  lib/ethdev/rte_flow.c                         | 173 ++++++++-
>>>  lib/ethdev/rte_flow.h                         | 342 ++++++++++++++++++
>>>  lib/ethdev/rte_flow_driver.h                  |  55 +++
>>>  lib/ethdev/version.map                        |   7 +
>>>  8 files changed, 873 insertions(+), 2 deletions(-)
>>>  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
>>>  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
>>>
>>> diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg
>> b/doc/guides/prog_guide/img/rte_flow_q_init.svg
>>> new file mode 100644
>>> index 0000000000..2080bf4c04
>>
>>
>>
>> Some comments on the diagrams:
>> # rte_flow_q_create_flow and rte_flow_q_destroy_flow used instead of
>> rte_flow_q_flow_create/destroy
>> # rte_flow_q_pull's brackets(i.e ()) not aligned
>
> Will fix this, thanks for noticing.
>
>>
>>> +</svg>
>>> \ No newline at end of file
>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>> b/doc/guides/prog_guide/rte_flow.rst
>>> index b7799c5abe..734294e65d 100644
>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>> @@ -3607,12 +3607,16 @@ Hints about the expected number of counters
>> or meters in an application,
>>>  for example, allow PMD to prepare and optimize NIC memory layout in
>> advance.
>>>  ``rte_flow_configure()`` must be called before any flow rule is created,
>>>  but after an Ethernet device is configured.
>>> +It also creates flow queues for asynchronous flow rules operations via
>>> +queue-based API, see `Asynchronous operations`_ section.
>>>
>>>  .. code-block:: c
>>>
>>>     int
>>>     rte_flow_configure(uint16_t port_id,
>>>                       const struct rte_flow_port_attr *port_attr,
>>> +                     uint16_t nb_queue,
>>
>> # rte_flow_info_get() don't have number of queues, why not adding
>> number queues in rte_flow_port_attr.
>
> Good suggestion, I'll add it to the capabilities structure.
>
>> # And additional APIs for queue_setup() like ethdev.
>
> ethdev has the start function which tells the PMD when all configurations are done.
> In our case there is no such function and the device is ready to create flows as soon
> as we exit rte_flow_configure(). In addition, since the number of queues may affect
> the resource allocation, it is best to process all the requested resources at the same time.
>
>>
>>> +                     const struct rte_flow_queue_attr *queue_attr[],
>>>                       struct rte_flow_error *error);
>>>
>>>  Information about resources that can benefit from pre-allocation can be
>>> @@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
>>>
>>>  .. code-block:: c
>>>
>>> -       rte_flow_configure(port, *port_attr, *error);
>>> +       rte_flow_configure(port, *port_attr, nb_queue, *queue_attr,
>> *error);
>>>
>>>         struct rte_flow_pattern_template *pattern_templates[0] =
>>>                 rte_flow_pattern_template_create(port, &itr, &pattern, &error);
>>> @@ -3750,6 +3754,159 @@ and pattern and actions templates are created.
>>>                                 *actions_templates, nb_actions_templates,
>>>                                 *error);
>>>
>>> +Asynchronous operations
>>> +-----------------------
>>> +
>>> +Flow rules management can be done via special lockless flow
>>> +management queues.
>>> +- Queue operations are asynchronous and not thread-safe.
>>> +- Operations can thus be invoked by the app's datapath;
>>> +packet processing can continue while queue operations are processed
>>> +by the NIC.
>>> +- The queue number is configured at the initialization stage.
>>> +- Available operation types: rule creation, rule destruction,
>>> +indirect rule creation, indirect rule destruction, indirect rule update.
>>> +- Operations may be reordered within a queue.
>>> +- Operations can be postponed and pushed to the NIC in batches.
>>> +- Results pulling must be done on time to avoid queue overflows.
>>> +- User data is returned as part of the result to identify an operation.
>>> +- Flow handle is valid once the creation operation is enqueued and must be
>>> +destroyed even if the operation is not successful and the rule is not
>>> +inserted.
>>
>> You need a blank line between the items, as the rendered text does not
>> otherwise put each item on its own line.
>
> OK.
>
>>
>>> +
>>> +The asynchronous flow rule insertion logic can be broken into two phases.
>>> +
>>> +1. Initialization stage as shown here:
>>> +
>>> +.. _figure_rte_flow_q_init:
>>> +
>>> +.. figure:: img/rte_flow_q_init.*
>>> +
>>> +2. Main loop as presented on a datapath application example:
>>> +
>>> +.. _figure_rte_flow_q_usage:
>>> +
>>> +.. figure:: img/rte_flow_q_usage.*
>>
>> it is better to add sequence operations as text to understand the flow.
>
> I prefer keeping the diagram here; it looks cleaner and more concise.
> A block of text gives no new information and is harder to follow, imho.
>
>>
>>> +
>>> +Enqueue creation operation
>>> +~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Enqueueing a flow rule creation operation is similar to simple creation.
>>
>> If it is an enqueue operation, why not call it rte_flow_q_flow_enqueue()?
>>
>>> +
>>> +.. code-block:: c
>>> +
>>> +       struct rte_flow *
>>> +       rte_flow_q_flow_create(uint16_t port_id,
>>> +                               uint32_t queue_id,
>>> +                               const struct rte_flow_q_ops_attr *q_ops_attr,
>>> +                               struct rte_flow_table *table,
>>> +                               const struct rte_flow_item pattern[],
>>> +                               uint8_t pattern_template_index,
>>> +                               const struct rte_flow_action actions[],
>>
>> If I understand correctly, table is the pre-configured object that has
>> N number of patterns and N number of actions.
>> Why giving items[] and actions[] again?
>
> The table only contains templates for patterns and actions.

Then why not reflect it in the argument name? Perhaps, "template_table"?
Or even in the struct name: "struct rte_flow_template_table".
Chances are that readers will misread "rte_flow_table"
as "flow entry table" in the OpenFlow sense.

> We still need to provide concrete values for those templates when we create a flow.
> Thus we specify the patterns and actions here.

All of that is clear in terms of this review cycle, but please
consider improving the argument names to help future readers.

>
>>> +                               uint8_t actions_template_index,
>>> +                               struct rte_flow_error *error);
>>> +
>>> +A valid handle in case of success is returned. It must be destroyed later
>>> +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by
>> HW.
>>> +
>>> +Enqueue destruction operation
>>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> Queue destruction operation.
>
> We are not destroying a queue; we are enqueuing a flow destruction operation.
>
>>
>>> +
>>> +Enqueueing a flow rule destruction operation is similar to simple
>> destruction.
>>> +
>>> +.. code-block:: c
>>> +
>>> +       int
>>> +       rte_flow_q_flow_destroy(uint16_t port_id,
>>> +                               uint32_t queue_id,
>>> +                               const struct rte_flow_q_ops_attr *q_ops_attr,
>>> +                               struct rte_flow *flow,
>>> +                               struct rte_flow_error *error);
>>> +
>>> +Push enqueued operations
>>> +~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Pushing all internally stored rules from a queue to the NIC.
>>> +
>>> +.. code-block:: c
>>> +
>>> +       int
>>> +       rte_flow_q_push(uint16_t port_id,
>>> +                       uint32_t queue_id,
>>> +                       struct rte_flow_error *error);
>>> +
>>> +There is a postpone attribute among the queue operation attributes.
>>> +When it is set, multiple operations can be batched together and not sent
>>> +to HW right away, saving SW/HW interactions and prioritizing throughput
>>> +over latency.
>>> +In this case the application must invoke this function to actually push
>>> +all outstanding operations to HW.
>>> +
>>> +Pull enqueued operations
>>> +~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Pulling asynchronous operations results.
>>> +
>>> +The application must invoke this function in order to complete
>>> +asynchronous flow rule operations and to receive flow rule operation
>>> +statuses.
>>> +
>>> +.. code-block:: c
>>> +
>>> +       int
>>> +       rte_flow_q_pull(uint16_t port_id,
>>> +                       uint32_t queue_id,
>>> +                       struct rte_flow_q_op_res res[],
>>> +                       uint16_t n_res,
>>> +                       struct rte_flow_error *error);
>>> +
>>> +Multiple outstanding operation results can be pulled simultaneously.
>>> +User data may be provided during a flow creation/destruction in order
>>> +to distinguish between multiple operations. User data is returned as part
>>> +of the result to provide a method to detect which operation is completed.
>>> +
>>> +Enqueue indirect action creation operation
>>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Asynchronous version of indirect action creation API.
>>> +
>>> +.. code-block:: c
>>> +
>>> +       struct rte_flow_action_handle *
>>> +       rte_flow_q_action_handle_create(uint16_t port_id,
>>
>> What is the use case for this?
>
> Indirect action creation may take time, as it may depend on hardware resource
> allocation, so we add an asynchronous way of creating it as well.
>
>> How application needs to use this. We already creating flow_table. Is
>> that not sufficient?
>
> The indirect action object is used in flow rules via its handle.
> This is an extension to the already existing API in order to speed up
> the creation of these objects.
>
>>
>>> +                       uint32_t queue_id,
>>> +                       const struct rte_flow_q_ops_attr *q_ops_attr,
>>> +                       const struct rte_flow_indir_action_conf *indir_action_conf,
>>> +                       const struct rte_flow_action *action,
>>> +                       struct rte_flow_error *error);
>>> +
>>> +A valid handle in case of success is returned. It must be destroyed later by
>>> +calling ``rte_flow_q_action_handle_destroy()`` even if the rule is
>> rejected.
>>> +
>>> +Enqueue indirect action destruction operation
>>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Asynchronous version of indirect action destruction API.
>>> +
>>> +.. code-block:: c
>>> +
>>> +       int
>>> +       rte_flow_q_action_handle_destroy(uint16_t port_id,
>>> +                       uint32_t queue_id,
>>> +                       const struct rte_flow_q_ops_attr *q_ops_attr,
>>> +                       struct rte_flow_action_handle *action_handle,
>>> +                       struct rte_flow_error *error);
>>> +
>>> +Enqueue indirect action update operation
>>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Asynchronous version of indirect action update API.
>>> +
>>> +.. code-block:: c
>>> +
>>> +       int
>>> +       rte_flow_q_action_handle_update(uint16_t port_id,
>>> +                       uint32_t queue_id,
>>> +                       const struct rte_flow_q_ops_attr *q_ops_attr,
>>> +                       struct rte_flow_action_handle *action_handle,
>>> +                       const void *update,
>>> +                       struct rte_flow_error *error);
>>> +
>>>  .. _flow_isolated_mode:
>>>
>>>  Flow isolated mode
>>> diff --git a/doc/guides/rel_notes/release_22_03.rst
>> b/doc/guides/rel_notes/release_22_03.rst
>>> index d23d1591df..80a85124e6 100644
>>> --- a/doc/guides/rel_notes/release_22_03.rst
>>> +++ b/doc/guides/rel_notes/release_22_03.rst
>>> @@ -67,6 +67,14 @@ New Features
>>>    ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy``
>>>    and ``rte_flow_actions_template_destroy``.
>>>
>>> +* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API
>>> +  to enqueue flow creation/destruction operations asynchronously as well as
>>> +  ``rte_flow_q_pull`` to poll and retrieve results of these operations and
>>> +  ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
>>> +  Introduced asynchronous API for indirect actions management as well:
>>> +  ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy`` and
>>> +  ``rte_flow_q_action_handle_update``.
>

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-08 14:11         ` Alexander Kozyrev
  2022-02-08 15:23           ` Ivan Malov
@ 2022-02-08 17:36           ` Jerin Jacob
  2022-02-09  5:50           ` Jerin Jacob
  2 siblings, 0 replies; 220+ messages in thread
From: Jerin Jacob @ 2022-02-08 17:36 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob, Ajit Khaparde

On Tue, Feb 8, 2022 at 7:42 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> > On Tuesday, February 8, 2022 5:57 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > On Sun, Feb 6, 2022 at 8:57 AM Alexander Kozyrev <akozyrev@nvidia.com>
> > wrote:
>
> Hi Jerin, thank you for reviewing my patch. I appreciate your input.

Hi Alex,

> I'm planning to send v4 with the addressed comments today to be on time for RC1.
> I hope that my answers are satisfactory for the rest of the questions you raised.

The comments look good to me. Please remove the version field in the next patch.


>
> > >
> > > A new, faster, queue-based flow rules management mechanism is needed
> > for
> > > applications offloading rules inside the datapath. This asynchronous
> > > and lockless mechanism frees the CPU for further packet processing and
> > > reduces the performance impact of the flow rules creation/destruction
> > > on the datapath. Note that queues are not thread-safe and the queue
> > > should be accessed from the same thread for all queue operations.
> > > It is the responsibility of the app to sync the queue functions in case
> > > of multi-threaded access to the same queue.
> > >
> > > The rte_flow_q_flow_create() function enqueues a flow creation to the
> > > requested queue. It benefits from already configured resources and sets
> > > unique values on top of item and action templates. A flow rule is enqueued
> > > on the specified flow queue and offloaded asynchronously to the
> > hardware.
> > > The function returns immediately to spare CPU for further packet
> > > processing. The application must invoke the rte_flow_q_pull() function
> > > to complete the flow rule operation offloading, to clear the queue, and to
> > > receive the operation status. The rte_flow_q_flow_destroy() function
> > > enqueues a flow destruction to the requested queue.
> >
> > It is good to see the implementation, specifically to understand,
>
> We will send PMD implementation in the next few days.
>
> > 1)
> > I understand, We are creating queues to make multiple producers to
> > enqueue multiple jobs in parallel.
> > On the consumer side, Is it HW or some other cores to consume the job?
>
> From the API point of view there is no restriction on the type of consumer.
> It could be hardware or software implementation, but in most cases
> (and in our driver) it will be the NIC to handle the requests.
>
> > Can we operate in consumer in parallel?
>
> Yes, we can have multiple separate hardware queues to handle operations
> in parallel independently and without any locking mechanism needed.
>
> > 2) Is Queue part of HW or just SW primitive to submit the work as a channel.
>
> The queue is a software primitive.
>
> >
> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > ---
> > >  doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
> > >  .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
> > >  doc/guides/prog_guide/rte_flow.rst            | 159 +++++++-
> > >  doc/guides/rel_notes/release_22_03.rst        |   8 +
> > >  lib/ethdev/rte_flow.c                         | 173 ++++++++-
> > >  lib/ethdev/rte_flow.h                         | 342 ++++++++++++++++++
> > >  lib/ethdev/rte_flow_driver.h                  |  55 +++
> > >  lib/ethdev/version.map                        |   7 +
> > >  8 files changed, 873 insertions(+), 2 deletions(-)
> > >  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
> > >  create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
> > >
> > > diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg
> > b/doc/guides/prog_guide/img/rte_flow_q_init.svg
> > > new file mode 100644
> > > index 0000000000..2080bf4c04
> >
> >
> >
> > Some comments on the diagrams:
> > # rte_flow_q_create_flow and rte_flow_q_destroy_flow used instead of
> > rte_flow_q_flow_create/destroy
> > # rte_flow_q_pull's brackets(i.e ()) not aligned
>
> Will fix this, thanks for noticing.
>
> >
> > > +</svg>
> > > \ No newline at end of file
> > > diff --git a/doc/guides/prog_guide/rte_flow.rst
> > b/doc/guides/prog_guide/rte_flow.rst
> > > index b7799c5abe..734294e65d 100644
> > > --- a/doc/guides/prog_guide/rte_flow.rst
> > > +++ b/doc/guides/prog_guide/rte_flow.rst
> > > @@ -3607,12 +3607,16 @@ Hints about the expected number of counters
> > or meters in an application,
> > >  for example, allow PMD to prepare and optimize NIC memory layout in
> > advance.
> > >  ``rte_flow_configure()`` must be called before any flow rule is created,
> > >  but after an Ethernet device is configured.
> > > +It also creates flow queues for asynchronous flow rules operations via
> > > +queue-based API, see `Asynchronous operations`_ section.
> > >
> > >  .. code-block:: c
> > >
> > >     int
> > >     rte_flow_configure(uint16_t port_id,
> > >                       const struct rte_flow_port_attr *port_attr,
> > > +                     uint16_t nb_queue,
> >
> > # rte_flow_info_get() don't have number of queues, why not adding
> > number queues in rte_flow_port_attr.
>
> Good suggestion, I'll add it to the capabilities structure.
>
> > # And additional APIs for queue_setup() like ethdev.
>
> ethdev has the start function which tells the PMD when all configurations are done.
> In our case there is no such function and the device is ready to create flows as soon
> as we exit rte_flow_configure(). In addition, since the number of queues may affect
> the resource allocation, it is best to process all the requested resources at the same time.
>
> >
> > > +                     const struct rte_flow_queue_attr *queue_attr[],
> > >                       struct rte_flow_error *error);
> > >
> > >  Information about resources that can benefit from pre-allocation can be
> > > @@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
> > >
> > >  .. code-block:: c
> > >
> > > -       rte_flow_configure(port, *port_attr, *error);
> > > +       rte_flow_configure(port, *port_attr, nb_queue, *queue_attr,
> > *error);
> > >
> > >         struct rte_flow_pattern_template *pattern_templates[0] =
> > >                 rte_flow_pattern_template_create(port, &itr, &pattern, &error);
> > > @@ -3750,6 +3754,159 @@ and pattern and actions templates are created.
> > >                                 *actions_templates, nb_actions_templates,
> > >                                 *error);
> > >
> > > +Asynchronous operations
> > > +-----------------------
> > > +
> > > +Flow rules management can be done via special lockless flow
> > > +management queues.
> > > +- Queue operations are asynchronous and not thread-safe.
> > > +- Operations can thus be invoked by the app's datapath;
> > > +packet processing can continue while queue operations are processed
> > > +by the NIC.
> > > +- The queue number is configured at the initialization stage.
> > > +- Available operation types: rule creation, rule destruction,
> > > +indirect rule creation, indirect rule destruction, indirect rule update.
> > > +- Operations may be reordered within a queue.
> > > +- Operations can be postponed and pushed to the NIC in batches.
> > > +- Results pulling must be done on time to avoid queue overflows.
> > > +- User data is returned as part of the result to identify an operation.
> > > +- Flow handle is valid once the creation operation is enqueued and must be
> > > +destroyed even if the operation is not successful and the rule is not
> > > +inserted.
> >
> > You need a blank line between the items, as the rendered text does not
> > otherwise put each item on its own line.
>
> OK.
>
> >
> > > +
> > > +The asynchronous flow rule insertion logic can be broken into two phases.
> > > +
> > > +1. Initialization stage as shown here:
> > > +
> > > +.. _figure_rte_flow_q_init:
> > > +
> > > +.. figure:: img/rte_flow_q_init.*
> > > +
> > > +2. Main loop as presented on a datapath application example:
> > > +
> > > +.. _figure_rte_flow_q_usage:
> > > +
> > > +.. figure:: img/rte_flow_q_usage.*
> >
> > it is better to add sequence operations as text to understand the flow.
>
> I prefer keeping the diagram here; it looks cleaner and more concise.
> A block of text gives no new information and is harder to follow, imho.
>
> >
> > > +
> > > +Enqueue creation operation
> > > +~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +Enqueueing a flow rule creation operation is similar to simple creation.
> >
> > If it is an enqueue operation, why not call it rte_flow_q_flow_enqueue()?
> >
> > > +
> > > +.. code-block:: c
> > > +
> > > +       struct rte_flow *
> > > +       rte_flow_q_flow_create(uint16_t port_id,
> > > +                               uint32_t queue_id,
> > > +                               const struct rte_flow_q_ops_attr *q_ops_attr,
> > > +                               struct rte_flow_table *table,
> > > +                               const struct rte_flow_item pattern[],
> > > +                               uint8_t pattern_template_index,
> > > +                               const struct rte_flow_action actions[],
> >
> > If I understand correctly, table is the pre-configured object that has
> > N number of patterns and N number of actions.
> > Why giving items[] and actions[] again?
>
> Table only contains templates for pattern and actions.
> We still need to provide the values for those templates when we create a flow.
> Thus we specify patterns and actions here.
>
> > > +                               uint8_t actions_template_index,
> > > +                               struct rte_flow_error *error);
> > > +
> > > +A valid handle in case of success is returned. It must be destroyed later
> > > +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by
> > HW.
> > > +
> > > +Enqueue destruction operation
> > > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > Queue destruction operation.
>
> We are not destroying the queue; we are enqueuing the flow destruction operation.
>
> >
> > > +
> > > +Enqueueing a flow rule destruction operation is similar to simple
> > destruction.
> > > +
> > > +.. code-block:: c
> > > +
> > > +       int
> > > +       rte_flow_q_flow_destroy(uint16_t port_id,
> > > +                               uint32_t queue_id,
> > > +                               const struct rte_flow_q_ops_attr *q_ops_attr,
> > > +                               struct rte_flow *flow,
> > > +                               struct rte_flow_error *error);
> > > +
> > > +Push enqueued operations
> > > +~~~~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +Pushing all internally stored rules from a queue to the NIC.
> > > +
> > > +.. code-block:: c
> > > +
> > > +       int
> > > +       rte_flow_q_push(uint16_t port_id,
> > > +                       uint32_t queue_id,
> > > +                       struct rte_flow_error *error);
> > > +
> > > +There is the postpone attribute in the queue operation attributes.
> > > +When it is set, multiple operations can be bulked together and not sent to
> > HW
> > > +right away to save SW/HW interactions and prioritize throughput over
> > latency.
> > > +The application must invoke this function to actually push all outstanding
> > > +operations to HW in this case.
> > > +
> > > +Pull enqueued operations
> > > +~~~~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +Pulling asynchronous operations results.
> > > +
> > > +The application must invoke this function in order to complete
> > asynchronous
> > > +flow rule operations and to receive flow rule operations statuses.
> > > +
> > > +.. code-block:: c
> > > +
> > > +       int
> > > +       rte_flow_q_pull(uint16_t port_id,
> > > +                       uint32_t queue_id,
> > > +                       struct rte_flow_q_op_res res[],
> > > +                       uint16_t n_res,
> > > +                       struct rte_flow_error *error);
> > > +
> > > +Multiple outstanding operation results can be pulled simultaneously.
> > > +User data may be provided during a flow creation/destruction in order
> > > +to distinguish between multiple operations. User data is returned as part
> > > +of the result to provide a method to detect which operation is completed.
> > > +
> > > +Enqueue indirect action creation operation
> > > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +Asynchronous version of indirect action creation API.
> > > +
> > > +.. code-block:: c
> > > +
> > > +       struct rte_flow_action_handle *
> > > +       rte_flow_q_action_handle_create(uint16_t port_id,
> >
> > What is the use case for this?
>
> Indirect action creation may take time as it may depend on hardware resource
> allocation. So we add an asynchronous way of creating it as well.
>
> > How application needs to use this. We already creating flow_table. Is
> > that not sufficient?
>
> The indirect action object is used in flow rules via its handle.
> This is an extension to the already existing API in order to speed up
> the creation of these objects.
>
> >
> > > +                       uint32_t queue_id,
> > > +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> > > +                       const struct rte_flow_indir_action_conf *indir_action_conf,
> > > +                       const struct rte_flow_action *action,
> > > +                       struct rte_flow_error *error);
> > > +
> > > +A valid handle in case of success is returned. It must be destroyed later by
> > > +calling ``rte_flow_q_action_handle_destroy()`` even if the rule is
> > rejected.
> > > +
> > > +Enqueue indirect action destruction operation
> > > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +Asynchronous version of indirect action destruction API.
> > > +
> > > +.. code-block:: c
> > > +
> > > +       int
> > > +       rte_flow_q_action_handle_destroy(uint16_t port_id,
> > > +                       uint32_t queue_id,
> > > +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> > > +                       struct rte_flow_action_handle *action_handle,
> > > +                       struct rte_flow_error *error);
> > > +
> > > +Enqueue indirect action update operation
> > > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > +
> > > +Asynchronous version of indirect action update API.
> > > +
> > > +.. code-block:: c
> > > +
> > > +       int
> > > +       rte_flow_q_action_handle_update(uint16_t port_id,
> > > +                       uint32_t queue_id,
> > > +                       const struct rte_flow_q_ops_attr *q_ops_attr,
> > > +                       struct rte_flow_action_handle *action_handle,
> > > +                       const void *update,
> > > +                       struct rte_flow_error *error);
> > > +
> > >  .. _flow_isolated_mode:
> > >
> > >  Flow isolated mode
> > > diff --git a/doc/guides/rel_notes/release_22_03.rst
> > b/doc/guides/rel_notes/release_22_03.rst
> > > index d23d1591df..80a85124e6 100644
> > > --- a/doc/guides/rel_notes/release_22_03.rst
> > > +++ b/doc/guides/rel_notes/release_22_03.rst
> > > @@ -67,6 +67,14 @@ New Features
> > >    ``rte_flow_table_destroy``, ``rte_flow_pattern_template_destroy``
> > >    and ``rte_flow_actions_template_destroy``.
> > >
> > > +* ethdev: Added ``rte_flow_q_flow_create`` and
> > ``rte_flow_q_flow_destroy`` API
> > > +  to enqueue flow creation/destruction operations asynchronously as well
> > as
> > > +  ``rte_flow_q_pull`` to poll and retrieve results of these operations and
> > > +  ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
> > > +  Introduced asynchronous API for indirect actions management as well:
> > > +  ``rte_flow_q_action_handle_create``,
> > ``rte_flow_q_action_handle_destroy`` and
> > > +  ``rte_flow_q_action_handle_update``.
> > > +
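To make the postpone/push/pull semantics quoted above concrete, here is a minimal standalone sketch (plain C, no DPDK dependency). Operations enqueued with the postpone attribute are only staged in software and become pullable results after an explicit push; the queue model, names and sizes are illustrative only, not the PMD implementation.

```c
#include <assert.h>
#include <stdint.h>

#define BATCH_MAX 32u

struct q_op {
	uintptr_t user_data; /* returned with the result to identify the op */
};

struct q_model {
	struct q_op staged[BATCH_MAX]; /* postponed, not yet sent to HW */
	unsigned int n_staged;
	struct q_op done[BATCH_MAX];   /* "completed by HW", ready to pull */
	unsigned int n_done;
};

/* Enqueue with the postpone attribute set: the op is only staged. */
static int enqueue_postponed(struct q_model *q, uintptr_t user_data)
{
	if (q->n_staged == BATCH_MAX)
		return -1;
	q->staged[q->n_staged++].user_data = user_data;
	return 0;
}

/* Mirrors rte_flow_q_push(): flush all staged ops to the "HW" at once. */
static void q_push(struct q_model *q)
{
	unsigned int i;

	for (i = 0; i < q->n_staged && q->n_done < BATCH_MAX; i++)
		q->done[q->n_done++] = q->staged[i];
	q->n_staged = 0;
}

/* Mirrors rte_flow_q_pull(): only pushed ops yield results. */
static unsigned int q_pull(struct q_model *q, struct q_op res[], unsigned int n)
{
	unsigned int k = 0;

	while (q->n_done > 0 && k < n)
		res[k++] = q->done[--q->n_done];
	return k;
}
```

This is why results pulling must be done on time: postponed operations keep accumulating until the application pushes and then pulls them.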

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-08 15:23           ` Ivan Malov
@ 2022-02-09  5:40             ` Alexander Kozyrev
  0 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09  5:40 UTC (permalink / raw)
  To: Ivan Malov
  Cc: Jerin Jacob, dpdk-dev, Ori Kam,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde

On Tuesday, February 8, 2022 10:24 Ivan Malov <ivan.malov@oktetlabs.ru> wrote:
> On Tue, 8 Feb 2022, Alexander Kozyrev wrote:
> >>
> >>> +
> >>> +Enqueue creation operation
> >>> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> >>> +
> >>> +Enqueueing a flow rule creation operation is similar to simple creation.
> >>
> >> If it is an enqueue operation, why not call it rte_flow_q_flow_enqueue()?
> >>
> >>> +
> >>> +.. code-block:: c
> >>> +
> >>> +       struct rte_flow *
> >>> +       rte_flow_q_flow_create(uint16_t port_id,
> >>> +                               uint32_t queue_id,
> >>> +                               const struct rte_flow_q_ops_attr *q_ops_attr,
> >>> +                               struct rte_flow_table *table,
> >>> +                               const struct rte_flow_item pattern[],
> >>> +                               uint8_t pattern_template_index,
> >>> +                               const struct rte_flow_action actions[],
> >>
> >> If I understand correctly, table is the pre-configured object that has
> >> N number of patterns and N number of actions.
> >> Why giving items[] and actions[] again?
> >
> > Table only contains templates for pattern and actions.
> 
> Then why not reflect it in the argument name? Perhaps, "template_table"?
> Or even in the struct name: "struct rte_flow_template_table".
> Chances are that readers will misread "rte_flow_table"
> as "flow entry table" in the OpenFlow sense.
> > We still need to provide the values for those templates when we create a
> flow.
> > Thus we specify patterns and action here.
> 
> All of that is clear in terms of this review cycle, but please
> consider improving the argument names to help future readers.

Agree, it is a good idea to rename it to template_table, thanks.

> >
> >>> +                               uint8_t actions_template_index,
> >>> +                               struct rte_flow_error *error);
> >>> +
> >>> +A valid handle in case of success is returned. It must be destroyed later
> >>> +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by
> >> HW.
> >>> +
> >>> +Enqueue destruction operation
> >>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >>
> >> Queue destruction operation.
> >
> > We are not destroying the queue; we are enqueuing the flow destruction
> operation.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-08 14:11         ` Alexander Kozyrev
  2022-02-08 15:23           ` Ivan Malov
  2022-02-08 17:36           ` Jerin Jacob
@ 2022-02-09  5:50           ` Jerin Jacob
  2 siblings, 0 replies; 220+ messages in thread
From: Jerin Jacob @ 2022-02-09  5:50 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob, Ajit Khaparde

On Tue, Feb 8, 2022 at 7:42 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> > On Tuesday, February 8, 2022 5:57 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > On Sun, Feb 6, 2022 at 8:57 AM Alexander Kozyrev <akozyrev@nvidia.com>
> > wrote:
>
>
> > > +The asynchronous flow rule insertion logic can be broken into two phases.
> > > +
> > > +1. Initialization stage as shown here:
> > > +
> > > +.. _figure_rte_flow_q_init:
> > > +
> > > +.. figure:: img/rte_flow_q_init.*
> > > +
> > > +2. Main loop as presented on a datapath application example:
> > > +
> > > +.. _figure_rte_flow_q_usage:
> > > +
> > > +.. figure:: img/rte_flow_q_usage.*
> >
> > it is better to add the sequence of operations as text to understand the flow.
>
> I prefer keeping the diagram here, it looks more clean and concise.
> Block of text gives no new information and harder to follow, IMHO.

I forgot to reply yesterday on this specific item.

IMO, the diagram is good. The request is: in the diagram, you can add markers like
[1], [2], etc., and the corresponding text can be added either in the image or as text.
See example https://doc.dpdk.org/guides/prog_guide/event_crypto_adapter.html
Fig. 49.2.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v4 00/10] ethdev: datapath-focused flow rules management
       [not found] <20220206032526.816079-1-akozyrev@nvidia.com >
@ 2022-02-09 21:37 ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
                     ` (11 more replies)
  0 siblings, 12 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:37 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Three major changes to a generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
Application may give us some hints on what type of resources are needed.
Introduce the configuration routine to prepare all the needed resources
inside a PMD/HW before any flow rules are created at the init stage.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.
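As a rough illustration of this enqueue-then-poll pattern, here is a minimal, self-contained model of such an operation queue (plain C, no DPDK dependency). It stands in for the PMD-backed flow queue; the single-producer/single-consumer ring, names and sizes are illustrative, not the rte_flow API.

```c
#include <assert.h>
#include <stdint.h>

#define Q_SIZE 64u

struct op_result {
	uintptr_t user_data; /* returned back to identify the operation */
	int status;          /* 0 on success in this model */
};

struct op_queue {
	struct op_result pending[Q_SIZE];
	unsigned int head; /* written by the producer (datapath) */
	unsigned int tail; /* written by the consumer (result polling) */
};

/* Enqueue one flow operation without taking any lock.
 * Returns 0 on success, -1 if the queue is full. */
static int q_enqueue(struct op_queue *q, uintptr_t user_data)
{
	if (q->head - q->tail == Q_SIZE)
		return -1;
	q->pending[q->head % Q_SIZE].user_data = user_data;
	q->pending[q->head % Q_SIZE].status = 0;
	q->head++;
	return 0;
}

/* Poll for completed operations, up to n_res at a time; this mirrors the
 * "poll for the results later" step. Returns the number of results. */
static int q_pull(struct op_queue *q, struct op_result res[],
		  unsigned int n_res)
{
	unsigned int n = 0;

	while (q->tail != q->head && n < n_res)
		res[n++] = q->pending[q->tail++ % Q_SIZE];
	return (int)n;
}
```

The point of the design is that neither call above ever blocks the datapath; the expensive HW programming happens asynchronously between enqueue and pull.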

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>

---
v4: 
- removed structures versioning
- introduced new rte_flow_port_info structure for rte_flow_info_get API
- renamed rte_flow_table_create to rte_flow_template_table_create

v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread

Alexander Kozyrev (10):
  ethdev: introduce flow pre-configuration hints
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  app/testpmd: implement rte flow configuration
  app/testpmd: implement rte flow template management
  app/testpmd: implement rte flow table management
  app/testpmd: implement rte flow queue flow operations
  app/testpmd: implement rte flow push operations
  app/testpmd: implement rte flow pull operations
  app/testpmd: implement rte flow queue indirect actions

 app/test-pmd/cmdline_flow.c                   | 1496 ++++++++++++++++-
 app/test-pmd/config.c                         |  771 +++++++++
 app/test-pmd/testpmd.h                        |   66 +
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  205 +++
 .../prog_guide/img/rte_flow_q_usage.svg       |  351 ++++
 doc/guides/prog_guide/rte_flow.rst            |  326 ++++
 doc/guides/rel_notes/release_22_03.rst        |   22 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  378 ++++-
 lib/ethdev/rte_flow.c                         |  359 ++++
 lib/ethdev/rte_flow.h                         |  702 ++++++++
 lib/ethdev/rte_flow_driver.h                  |  102 ++
 lib/ethdev/version.map                        |   15 +
 12 files changed, 4773 insertions(+), 20 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v4 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 02/10] ethdev: add flow item/action templates Alexander Kozyrev
                     ` (10 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, PMD may use some hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows to pre-allocate all the needed resources beforehand.
These resources can be used at a later stage without costly allocations.
Every PMD may use only the subset of hints and ignore unused ones or
fail in case the requested configuration is not supported.

The rte_flow_info_get() is available to retrieve the information about
supported pre-configurable resources. Both these functions must be called
before any other usage of the flow API engine.
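The resource-limit contract described above can be sketched as a small standalone check (the structures below mirror only the fields added in this patch; the real validation is performed inside the PMD when rte_flow_configure() is called):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of rte_flow_port_info: limits reported by rte_flow_info_get(). */
struct flow_port_info {
	uint32_t nb_counters;
	uint32_t nb_aging_flows;
	uint32_t nb_meters;
};

/* Mirror of rte_flow_port_attr: what the app asks rte_flow_configure() for. */
struct flow_port_attr {
	uint32_t nb_counters;
	uint32_t nb_aging_flows;
	uint32_t nb_meters;
};

/* Returns 0 when the requested configuration fits the reported limits,
 * -1 when the PMD would be expected to reject it. */
static int check_port_attr(const struct flow_port_info *info,
			   const struct flow_port_attr *attr)
{
	if (attr->nb_counters > info->nb_counters ||
	    attr->nb_aging_flows > info->nb_aging_flows ||
	    attr->nb_meters > info->nb_meters)
		return -1;
	return 0;
}
```

In other words, an application is expected to call rte_flow_info_get() first, size its requests within the reported numbers, and only then call rte_flow_configure().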

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  37 +++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/rte_flow.c                  |  40 +++++++++
 lib/ethdev/rte_flow.h                  | 108 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 6 files changed, 203 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..72fb1132ac 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These parameters may be used by PMD to preallocate resources and configure NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API management configuration and
+pre-allocates needed resources beforehand to avoid costly allocations later.
+Expected number of counters or meters in an application, for example,
+allows the PMD to prepare and optimize the NIC memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                     const struct rte_flow_port_attr *port_attr,
+                     struct rte_flow_error *error);
+
+Information about resources that can benefit from pre-allocation can be
+retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
+of pre-configurable resources for a given port on a system.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index f03183ee86..2a47a37f0a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -69,6 +69,12 @@ New Features
   New APIs, ``rte_eth_dev_priority_flow_ctrl_queue_info_get()`` and
   ``rte_eth_dev_priority_flow_ctrl_queue_configure()``, was added.
 
+* **Added functions to configure Flow API engine**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..66614ae29b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..92be2a9a89 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about available pre-configurable resources.
+ * The zero value means a resource cannot be pre-allocated.
+ *
+ */
+struct rte_flow_port_info {
+	/**
+	 * Number of pre-configurable counter actions.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of pre-configurable aging flows actions.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of pre-configurable traffic metering actions.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Retrieve configuration attributes supported by the port.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the contextual information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Resource pre-allocation and pre-configuration settings.
+ * The zero value means on demand resource allocations only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counter actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging flows actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings. The port, however, may reject the changes.
+ *
+ * Parameters in configuration attributes must not exceed
+ * numbers of resources returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index cd0c4c428d..f1235aa913 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -260,6 +260,8 @@ EXPERIMENTAL {
 	# added in 22.03
 	rte_eth_dev_priority_flow_ctrl_queue_configure;
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v4 02/10] ethdev: add flow item/action templates
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 124 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 147 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 260 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 582 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 72fb1132ac..5391648833 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,130 @@ of pre-configurable resources for a given port on a system.
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on; spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+				const struct rte_flow_pattern_template_attr *template_attr,
+				const struct rte_flow_item pattern[],
+				struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	struct rte_flow_item pattern[2] = {{0}};
+	struct rte_flow_item_eth eth_m = {0};
+	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+	memset(eth_m.dst.addr_bytes, 0xff, sizeof(eth_m.dst.addr_bytes));
+	pattern[0].mask = &eth_m;
+	pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &itr, pattern, &error);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+				const struct rte_flow_actions_template_attr *template_attr,
+				const struct rte_flow_action actions[],
+				const struct rte_flow_action masks[],
+				struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_action actions[] = {
+		/* Mark ID is constant (4) for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action masks[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+
+	struct rte_flow_actions_template *at =
+		rte_flow_actions_template_create(port, &atr, actions, masks, &error);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid latency spikes caused by memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+The application may create another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+	struct rte_flow_template_table *
+	rte_flow_template_table_create(uint16_t port_id,
+				const struct rte_flow_template_table_attr *table_attr,
+				struct rte_flow_pattern_template *pattern_templates[],
+				uint8_t nb_pattern_templates,
+				struct rte_flow_actions_template *actions_templates[],
+				uint8_t nb_actions_templates,
+				struct rte_flow_error *error);
+
+A table can be created only after flow rules management has been configured
+and the pattern and actions templates have been created.
+
+.. code-block:: c
+
+	rte_flow_configure(port, &port_attr, &error);
+
+	struct rte_flow_pattern_template *pattern_templates[1];
+	pattern_templates[0] =
+		rte_flow_pattern_template_create(port, &itr, pattern, &error);
+	struct rte_flow_actions_template *actions_templates[1];
+	actions_templates[0] =
+		rte_flow_actions_template_create(port, &atr, actions, masks, &error);
+
+	struct rte_flow_template_table *table =
+		rte_flow_template_table_create(port, &table_attr,
+				pattern_templates, 1, actions_templates, 1, &error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 2a47a37f0a..6656b35295 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -75,6 +75,14 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 66614ae29b..b53f8c9b89 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1431,3 +1431,150 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+						     pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev,
+							      actions_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+			const struct rte_flow_template_table_attr *table_attr,
+			struct rte_flow_pattern_template *pattern_templates[],
+			uint8_t nb_pattern_templates,
+			struct rte_flow_actions_template *actions_templates[],
+			uint8_t nb_actions_templates,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_template_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->template_table_create)) {
+		table = ops->template_table_create(dev, table_attr,
+					pattern_templates, nb_pattern_templates,
+					actions_templates, nb_actions_templates,
+					error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+				struct rte_flow_template_table *template_table,
+				struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->template_table_destroy)) {
+		return flow_err(port_id,
+				ops->template_table_destroy(dev,
+							    template_table,
+							    error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 92be2a9a89..e87db5a540 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4961,6 +4961,266 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Relaxed matching policy.
+	 * - If set, the PMD may match only on items with the mask member set,
+	 * skipping matching on protocol layers specified without any masks.
+	 * - If not set, the PMD will match on protocol layers
+	 * specified without any masks as well.
+	 * - Packet data must be stacked in the same order as the
+	 * protocol layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, to match on a TCP 5-tuple, the template would be
+ * eth (no spec) + IPv4 (source + dest) + TCP (s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of items in the template must match those used
+ * at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec member of an item is not used unless the mask member is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+struct rte_flow_actions_template_attr;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of actions in the template must match those used
+ * at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action members are constant.
+ *   A mask has the same format as the corresponding action.
+ *   If an action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type should be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of a template table.
+ * This handle can be used to manage the created template table.
+ */
+struct rte_flow_template_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_template_table_attr {
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create template table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Template table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy template table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+		struct rte_flow_template_table *template_table,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 7c29930d0f..2d96db1dc7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_create() */
+	struct rte_flow_template_table *(*template_table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_destroy() */
+	int (*template_table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index f1235aa913..5fd2108895 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -262,6 +262,12 @@ EXPERIMENTAL {
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v4 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 02/10] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
                     ` (8 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe: all operations
on a given queue must be performed from the same thread. It is the
application's responsibility to synchronize access when the same queue
is used from multiple threads.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_q_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg | 205 ++++++++++
 .../prog_guide/img/rte_flow_q_usage.svg       | 351 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 167 ++++++++-
 doc/guides/rel_notes/release_22_03.rst        |   8 +
 lib/ethdev/rte_flow.c                         | 174 ++++++++-
 lib/ethdev/rte_flow.h                         | 334 +++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  55 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 1299 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..96160bde42
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
@@ -0,0 +1,205 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485"
+   height="535"
+   overflow="hidden"
+   version="1.1"
+   id="svg61"
+   sodipodi:docname="rte_flow_q_init.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview63"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.517757"
+     inkscape:cx="242.79249"
+     inkscape:cy="267.17057"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g59" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="485"
+         height="535"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g59">
+    <rect
+       x="0"
+       y="0"
+       width="485"
+       height="535"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="0.500053"
+       y="79.5001"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(121.6 116)"
+       id="text13">
+         rte_eth_dev_configure
+         <tspan
+   font-size="24"
+   x="224.007"
+   y="0"
+   id="tspan11">()</tspan></text>
+    <rect
+       x="0.500053"
+       y="158.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect15" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(140.273 195)"
+       id="text17">
+         rte_flow_configure()
+      </text>
+    <rect
+       x="0.500053"
+       y="236.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text21"
+       x="63.425903"
+       y="274">rte_flow_pattern_template_create()</text>
+    <rect
+       x="0.500053"
+       y="316.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect23" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text27"
+       x="69.379204"
+       y="353">rte_flow_actions_template_create()</text>
+    <rect
+       x="0.500053"
+       y="0.500053"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect29" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       transform="translate(177.233,37)"
+       id="text33">rte_eal_init()</text>
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 60)"
+       id="path35" />
+    <path
+       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
+       transform="matrix(-1 0 0 1 241 138)"
+       id="path37" />
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 217)"
+       id="path39" />
+    <rect
+       x="0.500053"
+       y="395.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect41" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text47"
+       x="76.988998"
+       y="432">rte_flow_template_table_create()</text>
+    <path
+       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
+       transform="matrix(-1 0 0 1 241 296)"
+       id="path49" />
+    <path
+       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
+       id="path51" />
+    <rect
+       x="0.500053"
+       y="473.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect53" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text55"
+       x="149.30299"
+       y="511">rte_eth_dev_start()</text>
+    <path
+       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
+       id="path57" />
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..a1f6c0a0a8
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
@@ -0,0 +1,351 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880"
+   height="610"
+   overflow="hidden"
+   version="1.1"
+   id="svg103"
+   sodipodi:docname="rte_flow_q_usage.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview105"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.3311475"
+     inkscape:cx="439.84606"
+     inkscape:cy="305.37562"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g101" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="880"
+         height="610"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g101">
+    <rect
+       x="0"
+       y="0"
+       width="880"
+       height="610"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="333.5"
+       y="0.500053"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Consolas, Consolas_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       transform="translate(357.196,29)"
+       id="text11">rte_eth_rx_burst()</text>
+    <rect
+       x="333.5"
+       y="63.5001"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect13" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(394.666 91)"
+       id="text17">analyze <tspan
+   font-size="19"
+   x="60.9267"
+   y="0"
+   id="tspan15">packet </tspan></text>
+    <rect
+       x="572.5"
+       y="279.5"
+       width="234"
+       height="46"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(591.429 308)"
+       id="text21">rte_flow_q_flow_create()</text>
+    <path
+       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path23" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(430.069 378)"
+       id="text27">more <tspan
+   font-size="19"
+   x="-12.94"
+   y="23"
+   id="tspan25">packets?</tspan></text>
+    <path
+       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
+       id="path29" />
+    <path
+       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
+       id="path31" />
+    <path
+       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
+       id="path33" />
+    <path
+       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
+       id="path35" />
+    <path
+       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
+       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
+       id="path37" />
+    <path
+       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path39" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(417.576 155)"
+       id="text43">add new <tspan
+   font-size="19"
+   x="13.2867"
+   y="23"
+   id="tspan41">rule?</tspan></text>
+    <path
+       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
+       id="path45" />
+    <rect
+       x="602.5"
+       y="127.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect47" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(611.34 148)"
+       id="text49">yes</text>
+    <rect
+       x="254.5"
+       y="126.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect51" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(267.182 147)"
+       id="text53">no</text>
+    <path
+       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
+       transform="matrix(1 0 0 -1 567.5 383.495)"
+       id="path55" />
+    <path
+       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path57" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(159.155 208)"
+       id="text61">destroy the <tspan
+   font-size="19"
+   x="24.0333"
+   y="23"
+   id="tspan59">rule?</tspan></text>
+    <path
+       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
+       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
+       id="path63" />
+    <rect
+       x="81.5001"
+       y="280.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect65" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(96.2282 308)"
+       id="text67">rte_flow_q_flow_destroy()</text>
+    <path
+       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
+       transform="matrix(-1 0 0 1 319.915 213.5)"
+       id="path69" />
+    <path
+       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
+       id="path71" />
+    <path
+       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
+       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
+       id="path73" />
+    <rect
+       x="334.5"
+       y="540.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect75" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text77"
+       x="385.08301"
+       y="569">rte_flow_q_pull()</text>
+    <rect
+       x="334.5"
+       y="462.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect79" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(379.19 491)"
+       id="text81">rte_flow_q_push()</text>
+    <path
+       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
+       id="path83" />
+    <rect
+       x="0.500053"
+       y="287.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect85" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(12.8617 308)"
+       id="text87">no</text>
+    <rect
+       x="357.5"
+       y="223.5"
+       width="47"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect89" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(367.001 244)"
+       id="text91">yes</text>
+    <rect
+       x="469.5"
+       y="421.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect93" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(481.872 442)"
+       id="text95">no</text>
+    <rect
+       x="832.5"
+       y="223.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect97" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(841.777 244)"
+       id="text99">yes</text>
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 5391648833..964c104ed3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,12 +3607,16 @@ Expected number of counters or meters in an application, for example,
 allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API, see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                      const struct rte_flow_port_attr *port_attr,
+                     uint16_t nb_queue,
+                     const struct rte_flow_queue_attr *queue_attr[],
                      struct rte_flow_error *error);
 
 Information about resources that can benefit from pre-allocation can be
@@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
 
 .. code-block:: c
 
-	rte_flow_configure(port, *port_attr, *error);
+	rte_flow_configure(port, *port_attr, nb_queue, *queue_attr, *error);
 
 	struct rte_flow_pattern_template *pattern_templates[0] =
 		rte_flow_pattern_template_create(port, &itr, &pattern, &error);
@@ -3750,6 +3754,167 @@ and pattern and actions templates are created.
 				*actions_templates, nb_actions_templates,
 				*error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues:
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the application's datapath;
+  packet processing can continue while queue operations are processed by NIC.
+
+- The number of queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to NIC in batches.
+
+- Results pulling must be done on time to avoid queue overflows.
+
+- User data is returned as part of the result to identify an operation.
+
+- Flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:
+
+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
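+
+As a rough illustration of the main loop above, the sequence of calls could
+look like the following fragment. This is a hedged sketch, not a runnable
+program: it assumes a port already configured with a flow queue, a template
+table ``table`` built at the initialization stage, and application-defined
+``pattern``/``actions`` arrays, burst size and callbacks.

```c
/* Sketch of the datapath main loop (assumed names, error checks omitted). */
struct rte_flow_q_ops_attr attr = { .user_data = ctx, .postpone = 1 };
struct rte_flow_q_op_res res[MAX_BURST];
struct rte_flow *flow;
int n, i;

for (;;) {
	/* ... receive and process a burst of packets ... */
	if (need_new_rule) /* decided by the application logic */
		flow = rte_flow_q_flow_create(port_id, queue_id, &attr,
					      table, pattern, 0,
					      actions, 0, &error);
	/* Doorbell all postponed operations in one shot. */
	rte_flow_q_push(port_id, queue_id, &error);
	/* Drain completions regularly to avoid queue overflow. */
	n = rte_flow_q_pull(port_id, queue_id, res, MAX_BURST, &error);
	for (i = 0; i < n; i++)
		if (res[i].status == RTE_FLOW_Q_OP_ERROR)
			handle_failed_op(res[i].user_data); /* app-defined */
}
```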
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_q_flow_create(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow_template_table *template_table,
+				const struct rte_flow_item pattern[],
+				uint8_t pattern_template_index,
+				const struct rte_flow_action actions[],
+				uint8_t actions_template_index,
+				struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_flow_destroy(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow *flow,
+				struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_push(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_error *error);
+
+The queue operation attributes include a postpone flag.
+When it is set, multiple operations can be batched together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
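+
+As an illustration, a batch of rule creations can be enqueued with the
+postpone bit set and flushed with a single push. This is a sketch with
+assumed names (``table``, ``patterns``, ``actions``, ``n_rules``), not a
+complete program.

```c
struct rte_flow_q_ops_attr attr = {
	.user_data = NULL,
	.postpone = 1, /* keep operations queued in SW until pushed */
};
uint32_t i;

for (i = 0; i < n_rules; i++)
	rte_flow_q_flow_create(port_id, queue_id, &attr, table,
			       patterns[i], 0, actions[i], 0, &error);

/* One SW/HW interaction for the whole batch instead of one per rule. */
rte_flow_q_push(port_id, queue_id, &error);
```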
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling asynchronous operations results.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operations statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_pull(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_q_op_res res[],
+			uint16_t n_res,
+			struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
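+
+For example, completions could be drained and dispatched by their user data
+as sketched below; ``app_ctx`` and the callbacks are hypothetical
+application-side names.

```c
struct rte_flow_q_op_res res[32];
int i, n;

n = rte_flow_q_pull(port_id, queue_id, res, 32, &error);
for (i = 0; i < n; i++) {
	/* user_data is whatever the app put into rte_flow_q_ops_attr. */
	struct app_ctx *ctx = res[i].user_data;

	if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
		app_on_rule_offloaded(ctx);	/* hypothetical callback */
	else
		app_on_rule_failed(ctx);	/* e.g. destroy the handle */
}
```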
+
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_q_action_handle_create(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			const struct rte_flow_indir_action_conf *indir_action_conf,
+			const struct rte_flow_action *action,
+			struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_update(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			const void *update,
+			struct rte_flow_error *error);
+
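+To illustrate the indirect action flow, a counter action could be created
+asynchronously as sketched below; the completion must still be pulled from
+the same queue. Names such as ``my_ctx`` are assumptions, and error
+handling is omitted.

```c
struct rte_flow_action count = { .type = RTE_FLOW_ACTION_TYPE_COUNT };
struct rte_flow_indir_action_conf conf = { .ingress = 1 };
struct rte_flow_q_ops_attr attr = { .user_data = &my_ctx, .postpone = 0 };
struct rte_flow_action_handle *handle;

handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
					 &conf, &count, &error);
/* Pull the queue later to learn whether the creation succeeded. */
```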
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6656b35295..b4e18836ea 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -83,6 +83,14 @@ New Features
     ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
 
+  * ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy``
+    API to enqueue flow creation/destruction operations asynchronously as well
+    as ``rte_flow_q_pull`` to poll and retrieve results of these operations
+    and ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
+    Introduced asynchronous API for indirect actions management as well:
+    ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy``
+    and ``rte_flow_q_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index b53f8c9b89..bf1d3d2062 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1415,6 +1415,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1424,7 +1426,7 @@ rte_flow_configure(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
 		return flow_err(port_id,
-				ops->configure(dev, port_attr, error),
+				ops->configure(dev, port_attr, nb_queue, queue_attr, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1578,3 +1580,173 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_template_table *template_table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->q_flow_create)) {
+		flow = ops->q_flow_create(dev, queue_id,
+					  q_ops_attr, template_table,
+					  pattern, pattern_template_index,
+					  actions, actions_template_index,
+					  error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_flow_destroy)) {
+		return flow_err(port_id,
+				ops->q_flow_destroy(dev, queue_id,
+						    q_ops_attr, flow, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->q_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
+					     indir_action_conf, action, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
+					   action_handle, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
+					  action_handle, update, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_push)) {
+		return flow_err(port_id,
+				ops->q_push(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_pull)) {
+		ret = ops->q_pull(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e87db5a540..b0d4f33bfd 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4862,6 +4862,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
+	 * Number of queues for asynchronous operations.
+	 */
+	uint32_t nb_queues;
 	/**
 	 * Number of pre-configurable counter actions.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4879,6 +4883,17 @@ struct rte_flow_port_info {
 	uint32_t nb_meters;
 };
 
+/**
+ * Flow engine queue configuration.
+ */
+__extension__
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4948,6 +4963,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements is set in @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4959,6 +4979,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5221,6 +5243,318 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+struct rte_flow_q_ops_attr {
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+	 /**
+	  * When set, the requested action will not be sent to the HW immediately.
+	  * The application must call rte_flow_q_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The returned handle does not by itself mean that the rule was offloaded.
+ *   Only the completion result indicates that the rule was offloaded.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_template_table *template_table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the errors below:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action* handle still used by some rules;
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed by handle.
+ *   *update* could be with the same type of the immediate action corresponding
+ *   to the *handle* argument when creating, or a wrapper structure includes
+ *   action configuration to be updated and bit fields to indicate the member
+ *   of fields inside the action to update.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action* handle still used by some rules;
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were inserted with the postpone flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW to
+ * reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *    0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull the results of enqueued flow rule operations.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..33dc57a15e 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -161,6 +161,8 @@ struct rte_flow_ops {
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +201,59 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_create() */
+	struct rte_flow *(*q_flow_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_destroy() */
+	int (*q_flow_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_create() */
+	struct rte_flow_action_handle *(*q_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_destroy() */
+	int (*q_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_action_handle_update() */
+	int (*q_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_push() */
+	int (*q_push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_pull() */
+	int (*q_pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 5fd2108895..46a4151053 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -268,6 +268,13 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_q_flow_create;
+	rte_flow_q_flow_destroy;
+	rte_flow_q_action_handle_create;
+	rte_flow_q_action_handle_destroy;
+	rte_flow_q_action_handle_update;
+	rte_flow_q_push;
+	rte_flow_q_pull;
 };
 
 INTERNAL {
-- 
2.18.2


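The enqueue/push/pull contract defined above can be illustrated with a standalone mock. All names and types below are simplified stand-ins for the rte_flow_q_* API in this patch, not real DPDK code: operations are enqueued without locking, pushed to the "device" in a batch, and their completions pulled later.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mock of the queue-based flow rule model: enqueue -> push -> pull.
 * Illustrative stand-in only; not the DPDK implementation. */

#define MOCK_QUEUE_SIZE 8

enum mock_op_status { MOCK_OP_SUCCESS, MOCK_OP_ERROR };

struct mock_op_res {
	enum mock_op_status status;
	void *user_data; /* echoed back so the app can match completions */
};

struct mock_flow_queue {
	void *pending[MOCK_QUEUE_SIZE];   /* enqueued, not yet pushed */
	unsigned int n_pending;
	void *in_flight[MOCK_QUEUE_SIZE]; /* pushed, awaiting completion */
	unsigned int n_in_flight;
};

/* Enqueue one create/destroy operation; no lock, no blocking. */
static int mock_enqueue(struct mock_flow_queue *q, void *user_data)
{
	if (q->n_pending + q->n_in_flight == MOCK_QUEUE_SIZE)
		return -1; /* queue full: the application must pull first */
	q->pending[q->n_pending++] = user_data;
	return 0;
}

/* Push all pending operations to the "device" in one batch. */
static void mock_push(struct mock_flow_queue *q)
{
	unsigned int i;

	for (i = 0; i < q->n_pending; i++)
		q->in_flight[q->n_in_flight++] = q->pending[i];
	q->n_pending = 0;
}

/* Pull up to n_res completions, mirroring the rte_flow_q_pull()
 * contract: returns the number of results written into res[]. */
static int mock_pull(struct mock_flow_queue *q,
		     struct mock_op_res res[], uint16_t n_res)
{
	unsigned int i;
	unsigned int n = q->n_in_flight < n_res ? q->n_in_flight : n_res;

	for (i = 0; i < n; i++) {
		res[i].status = MOCK_OP_SUCCESS;
		res[i].user_data = q->in_flight[i];
	}
	/* Shift out the consumed completions. */
	memmove(q->in_flight, q->in_flight + n,
		(q->n_in_flight - n) * sizeof(void *));
	q->n_in_flight -= n;
	return (int)n;
}
```

A separate push step lets several enqueued operations share one doorbell write to the hardware, which is the batching benefit this series aims for.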

* [PATCH v4 04/10] app/testpmd: implement rte flow configuration
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (2 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-10  9:32     ` Thomas Monjalon
  2022-02-09 21:38   ` [PATCH v4 05/10] app/testpmd: implement rte flow template management Alexander Kozyrev
                     ` (7 subsequent siblings)
  11 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256

Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  54 +++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  60 +++++++++-
 4 files changed, 244 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7b56b1b0ff..cc3003e6eb 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -847,6 +856,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -928,6 +942,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1964,6 +1988,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2189,7 +2216,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2204,6 +2233,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_COUNTERS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging flows",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_flows)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7480,6 +7568,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8708,6 +8823,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e812f57151..df83f8dbdd 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1609,6 +1609,60 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_info port_info;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	if (rte_flow_info_get(port_id, &port_info, &error))
+		return port_flow_complain(&error);
+	printf("Pre-configurable resources on port %u:\n"
+	       "Number of queues: %d\n"
+	       "Number of counters: %d\n"
+	       "Number of aging flows: %d\n"
+	       "Number of meters: %d\n",
+	       port_id, port_info.nb_queues, port_info.nb_counters,
+	       port_info.nb_aging_flows, port_info.nb_meters);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index b2e98df6e1..cfdda5005c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3308,8 +3308,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3391,6 +3403,50 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves info on pre-configurable resources in the underlying
+device to give a hint of possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Pre-configurable resources on port #[...]:
+   Number of queues: #[...]
+   Number of counters: #[...]
+   Number of aging flows: #[...]
+   Number of meters: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the needed resources in the underlying
+device to be used later at flow creation time. Flow queues are allocated as well
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2

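The info/configure contract exercised by these testpmd commands can be sketched standalone: query the pre-configurable limits first, then request an allocation within them. The types and limits below are simplified mocks, not the real rte_flow structures.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mock of the query-then-configure sequence behind "flow info" and
 * "flow configure"; stand-ins only, not the DPDK API. */

struct mock_port_info { uint16_t nb_queues; uint32_t nb_counters; };
struct mock_queue_attr { uint32_t size; };

/* Pretend the PMD advertises 8 flow queues and 1024 counters. */
static void mock_info_get(struct mock_port_info *info)
{
	info->nb_queues = 8;
	info->nb_counters = 1024;
}

/* Accept the configuration only if it fits the advertised limits,
 * mirroring how a PMD would validate configure-time input. */
static int mock_configure(const struct mock_port_info *info,
			  uint16_t nb_queue,
			  const struct mock_queue_attr *attr_list[])
{
	uint16_t q;

	if (nb_queue == 0 || nb_queue > info->nb_queues)
		return -1;
	for (q = 0; q < nb_queue; q++)
		if (attr_list[q] == NULL || attr_list[q]->size == 0)
			return -1;
	return 0;
}
```

Note how testpmd's port_flow_configure() above builds the per-queue array: every attr_list[] slot points at the same queue_attr, giving all queues identical attributes.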


* [PATCH v4 05/10] app/testpmd: implement rte flow template management
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (3 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 06/10] app/testpmd: implement rte flow table management Alexander Kozyrev
                     ` (6 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

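Template IDs may be given explicitly or auto-assigned. The allocation scheme this patch uses in template_alloc() (config.c) keeps the list sorted by descending ID, so the head always holds the highest ID in use and "auto" (UINT32_MAX) simply takes head->id + 1. A standalone sketch with simplified types, not the actual testpmd structures:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct mock_template {
	uint32_t id;
	struct mock_template *next;
};

/* Returns the allocated ID, or UINT32_MAX on failure. */
static uint32_t
mock_template_alloc(struct mock_template **list, uint32_t id)
{
	struct mock_template *lst = *list;
	struct mock_template *pt;
	struct mock_template **ppt;

	if (id == UINT32_MAX) {
		/* Auto-assign: next ID after the current highest. */
		if (lst && lst->id == UINT32_MAX - 1)
			return UINT32_MAX; /* highest ID already taken */
		id = lst ? lst->id + 1 : 0;
	}
	pt = calloc(1, sizeof(*pt));
	if (pt == NULL)
		return UINT32_MAX;
	pt->id = id;
	/* Insert while keeping the list sorted by descending ID. */
	for (ppt = list; *ppt && (*ppt)->id > id; ppt = &(*ppt)->next)
		;
	pt->next = *ppt;
	*ppt = pt;
	return id;
}
```

The descending order makes auto-assignment O(1) at the cost of never reusing freed IDs below the maximum, hence the "delete it first" message when the highest ID is reached.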
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 376 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++++
 app/test-pmd/testpmd.h                      |  23 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  97 +++++
 4 files changed, 697 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index cc3003e6eb..34bc73eea3 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,22 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -861,6 +881,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -869,10 +893,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -952,6 +979,43 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1991,6 +2055,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2060,6 +2130,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2210,6 +2284,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2218,6 +2306,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2292,6 +2382,112 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2614,7 +2810,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5731,7 +5927,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7595,6 +7793,114 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8564,6 +8870,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -8832,6 +9186,24 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port, in->args.vc.pat_templ_id,
+				in->args.vc.attr.reserved, in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port, in->args.vc.act_templ_id,
+				in->args.vc.actions, in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index df83f8dbdd..2ef7c3e07a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1609,6 +1609,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2078,6 +2121,166 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id, bool relaxed,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_pattern_template_attr attr = {
+					.relaxed_matching = relaxed };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						&attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						NULL, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..c70b1fa4e8 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,16 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      bool relaxed,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index cfdda5005c..acb763bdf0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3344,6 +3344,24 @@ following sections.
        [aging_counters_number {number}]
        [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3447,6 +3465,85 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+from their template ID (as returned by ``flow pattern_template create``),
+this command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+from their template ID (as returned by ``flow actions_template create``),
+this command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v4 06/10] app/testpmd: implement rte flow table management
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (4 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 05/10] app/testpmd: implement rte flow template management Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations Alexander Kozyrev
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow template table API.
Provide the command line interface for template table
creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 171 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 556 insertions(+)

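Combined with the template commands from the previous patch in the series, an end-to-end session might look as follows (a sketch based on the syntax documented below; the port, IDs, pattern and actions are illustrative):

```
testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth / ipv4 / end
testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
testpmd> flow template_table 0 create table_id 6 group 9 priority 4 ingress
           rules_number 64 pattern_template 2 actions_template 4
testpmd> flow template_table 0 destroy table 6
testpmd> flow actions_template 0 destroy actions_template 4
testpmd> flow pattern_template 0 destroy pattern_template 2
```

Note that the table must reference templates that already exist on the port, and templates should be destroyed only after the tables using them.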
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 34bc73eea3..3e89525445 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -112,6 +114,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -885,6 +901,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_template_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1016,6 +1044,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2061,6 +2115,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2134,6 +2193,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2298,6 +2359,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2308,6 +2376,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2488,6 +2557,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "template_table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage template tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create template table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy template table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7901,6 +8068,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8918,6 +9198,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9204,6 +9508,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_template_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_template_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2ef7c3e07a..316c16901a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1652,6 +1652,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2281,6 +2324,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] =
+					temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_template_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_template_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Template table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_template_table_destroy(portid_t port_id,
+				 uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_template_table_destroy(port_id,
+							    pt->table,
+							    &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Template table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c70b1fa4e8..4c6e775bad 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -915,6 +926,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_template_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index acb763bdf0..16b874250c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3362,6 +3362,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3544,6 +3557,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating template table
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying flow table
+~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+from their table ID (as returned by ``flow template_table create``),
+this command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (5 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 06/10] app/testpmd: implement rte flow table management Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:53     ` Ori Kam
  2022-02-09 21:38   ` [PATCH v4 08/10] app/testpmd: implement rte flow push operations Alexander Kozyrev
                     ` (4 subsequent siblings)
  11 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_flow_create/rte_flow_q_flow_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 3e89525445..f794a83a07 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -114,6 +116,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -891,6 +909,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool postpone; /**< Postpone async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -921,6 +941,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1070,6 +1091,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2120,6 +2153,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2195,6 +2234,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2366,6 +2407,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2388,7 +2436,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2655,6 +2704,84 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TEMPLATE_TABLE] = {
+		.name = "template_table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8181,6 +8308,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TEMPLATE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9222,6 +9454,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9519,6 +9773,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 316c16901a..e8ae16a044 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2452,6 +2452,172 @@ port_flow_template_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue create flow rule operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr,
+		pt->table, pattern, pattern_idx, actions, actions_idx, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue number of destroy flow rules operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr,
+						    pf->flow, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMDs
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_q_pull(port_id, queue_id,
+							 &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 4c6e775bad..d0e1e3eeec 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -932,6 +932,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 16b874250c..b802288c66 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3382,6 +3382,20 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3703,6 +3717,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_q_flow_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4418,6 +4456,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` adds destruction operations to a queue for one or more
+rules from their rule ID (as returned by ``flow queue create``);
+this command calls ``rte_flow_q_flow_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v4 08/10] app/testpmd: implement rte flow push operations
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (6 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 09/10] app/testpmd: implement rte flow pull operations Alexander Kozyrev
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_push API.
Provide the command line interface for pushing operations.
Usage example: flow push 0 queue 0
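
The postpone flag pairs with this command: postponed operations are only
buffered until an explicit push submits the whole batch at once. A
self-contained sketch of that batching idea (illustrative names only,
not the DPDK API):

```c
#include <assert.h>
#include <stdbool.h>

struct op_queue {
	int buffered[8]; /* postponed operation ids, not yet pushed */
	int nb_buffered;
	int nb_in_hw;    /* operations submitted to the device */
};

/* postpone == false submits immediately; true only buffers. */
static void
toy_enqueue(struct op_queue *q, int id, bool postpone)
{
	if (postpone)
		q->buffered[q->nb_buffered++] = id;
	else
		q->nb_in_hw++;
}

/* "flow push": flush all buffered operations in one batch. */
static int
toy_push(struct op_queue *q)
{
	int flushed = q->nb_buffered;

	q->nb_in_hw += flushed;
	q->nb_buffered = 0;
	return flushed;
}
```

Batching many postponed operations behind one push amortizes the cost of
notifying the device, which is the point of the postpone attribute.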

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f794a83a07..11240d6f04 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -132,6 +133,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2159,6 +2163,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2437,7 +2444,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2782,6 +2790,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8413,6 +8436,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9784,6 +9835,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e8ae16a044..24660c01dd 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2618,6 +2618,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all the queue operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_q_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d0e1e3eeec..03f135ff46 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index b802288c66..01e5e3c19f 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3396,6 +3396,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3611,6 +3615,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_q_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v4 09/10] app/testpmd: implement rte flow pull operations
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (7 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 08/10] app/testpmd: implement rte flow push operations Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-09 21:38   ` [PATCH v4 10/10] app/testpmd: implement rte flow queue indirect actions Alexander Kozyrev
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0
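
Since a single pull returns at most a caller-chosen number of results,
applications drain a queue by looping until a pull comes back empty. A
toy drain loop showing that shape (illustrative names only, not the
DPDK API):

```c
#include <assert.h>

struct result_queue {
	int done[16]; /* completed operation ids */
	int nb_done;
};

/* Return up to n completed results; 0 when nothing is pending. */
static int
toy_pull(struct result_queue *q, int *res, int n)
{
	int i = 0;

	while (i < n && q->nb_done > 0)
		res[i++] = q->done[--q->nb_done];
	return i;
}

/* Drain everything in bursts, counting the pull calls needed. */
static int
toy_drain(struct result_queue *q, int burst, int *calls)
{
	int res[16];
	int total = 0;
	int got;

	*calls = 0;
	do {
		got = toy_pull(q, res, burst);
		(*calls)++;
		total += got;
	} while (got > 0);
	return total;
}
```

Five pending completions pulled in bursts of two take four calls: two
full bursts, one partial, and a final empty pull that ends the loop.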

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 11240d6f04..26ef2ccfd4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -136,6 +137,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2166,6 +2170,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2445,7 +2452,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2805,6 +2813,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8464,6 +8487,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9838,6 +9889,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 24660c01dd..4937851c41 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2461,14 +2461,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2531,16 +2529,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2555,7 +2543,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2591,21 +2578,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_q_pull(port_id, queue_id,
-							 &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2646,6 +2618,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_q_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 03f135ff46..6fe829edab 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 01e5e3c19f..d5d9125d50 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3400,6 +3400,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3632,6 +3636,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device about flow queue operations
+results and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_q_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled #[...] operations (#[...] failed, #[...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3762,6 +3783,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4496,6 +4519,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v4 10/10] app/testpmd: implement rte flow queue indirect actions
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (8 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 09/10] app/testpmd: implement rte flow pull operations Alexander Kozyrev
@ 2022-02-09 21:38   ` Alexander Kozyrev
  2022-02-10 16:00   ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Ferruh Yigit
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-09 21:38 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_action_handle API.
Provide the command line interface for enqueueing indirect action operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 26ef2ccfd4..b9edb1d482 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -121,6 +121,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -134,6 +135,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1102,6 +1123,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1111,6 +1133,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2167,6 +2219,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2744,6 +2802,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2797,6 +2862,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6209,6 +6358,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -9892,6 +10145,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 4937851c41..e69dd2feff 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2590,6 +2590,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
+						      conf, action, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_q_action_handle_destroy(port_id, queue_id,
+						&attr, pia->handle, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_q_action_handle_update(port_id, queue_id, &attr,
+					    action_handle, action, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 6fe829edab..167f1741dc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index d5d9125d50..65ecef754e 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4780,6 +4780,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to
+``rte_flow_q_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+      [postpone {boolean}] action_id {indirect_action_id}
+      [ingress] [egress] [transfer]
+      action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4809,6 +4834,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds update operation for an indirect
+action to a queue. It is bound to ``rte_flow_q_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4832,6 +4876,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds a destruction operation for
+one or more indirect actions (specified by the IDs returned by
+``flow queue {port_id} indirect_action {queue_id} create``) to a queue.
+It is bound to ``rte_flow_q_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations
  2022-02-09 21:38   ` [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations Alexander Kozyrev
@ 2022-02-09 21:53     ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-09 21:53 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

Hi  Alexander,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Wednesday, February 9, 2022 11:38 PM
> Subject: [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations
> 
> Add testpmd support for the rte_flow_q_create/rte_flow_q_destroy API.
> Provide the command line interface for enqueueing flow
> creation/destruction operations. Usage example:
>   testpmd> flow queue 0 create 0 postpone no
>            template_table 6 pattern_template 0 actions_template 0
>            pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
>   testpmd> flow queue 0 destroy 0 postpone yes rule 0
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v4 04/10] app/testpmd: implement rte flow configuration
  2022-02-09 21:38   ` [PATCH v4 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
@ 2022-02-10  9:32     ` Thomas Monjalon
  0 siblings, 0 replies; 220+ messages in thread
From: Thomas Monjalon @ 2022-02-10  9:32 UTC (permalink / raw)
  To: Alexander Kozyrev
  Cc: dev, orika, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

09/02/2022 22:38, Alexander Kozyrev:
> Add testpmd support for the rte_flow_configure API.

A note about the titles for testpmd patches in this series:
You don't "implement" here, because the feature was implemented in rte_flow.c.
Instead, it is better to say "add" in the testpmd app context.

Also you should not mention "rte flow" with a space.
It's better to keep rte_flow with underscore (even if discouraged),
or in a more verbal English manner, just "flow" if it is enough to understand.
Here I think it could be:
	app/testpmd: add flow engine configuration




^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v4 00/10] ethdev: datapath-focused flow rules management
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (9 preceding siblings ...)
  2022-02-09 21:38   ` [PATCH v4 10/10] app/testpmd: implement rte flow queue indirect actions Alexander Kozyrev
@ 2022-02-10 16:00   ` Ferruh Yigit
  2022-02-10 16:12     ` Asaf Penso
                       ` (3 more replies)
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
  11 siblings, 4 replies; 220+ messages in thread
From: Ferruh Yigit @ 2022-02-10 16:00 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/9/2022 9:37 PM, Alexander Kozyrev wrote:
> Three major changes to a generic RTE Flow API were implemented in order
> to speed up flow rule insertion/destruction and adapt the API to the
> needs of a datapath-focused flow rules management applications:
> 
> 1. Pre-configuration hints.
> Application may give us some hints on what type of resources are needed.
> Introduce the configuration routine to prepare all the needed resources
> inside a PMD/HW before any flow rules are created at the init stage.
> 
> 2. Flow grouping using templates.
> Use the knowledge about which flow rules are to be used in an application
> and prepare item and action templates for them in advance. Group flow rules
> with common patterns and actions together for better resource management.
> 
> 3. Queue-based flow management.
> Perform flow rule insertion/destruction asynchronously to spare the datapath
> from blocking on RTE Flow API and allow it to continue with packet processing.
> Enqueue flow rules operations and poll for the results later.
> 
> testpmd examples are part of the patch series. PMD changes will follow.
> 
> RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> 
> ---
> v4:
> - removed structures versioning
> - introduced new rte_flow_port_info structure for rte_flow_info_get API
> - renamed rte_flow_table_create to rte_flow_template_table_create
> 
> v3: addressed review comments and updated documentation
> - added API to get info about pre-configurable resources
> - renamed rte_flow_item_template to rte_flow_pattern_template
> - renamed drain operation attribute to postpone
> - renamed rte_flow_q_drain to rte_flow_q_push
> - renamed rte_flow_q_dequeue to rte_flow_q_pull
> 
> v2: fixed patch series thread
> 
> Alexander Kozyrev (10):
>    ethdev: introduce flow pre-configuration hints
>    ethdev: add flow item/action templates
>    ethdev: bring in async queue-based flow rules operations
>    app/testpmd: implement rte flow configuration
>    app/testpmd: implement rte flow template management
>    app/testpmd: implement rte flow table management
>    app/testpmd: implement rte flow queue flow operations
>    app/testpmd: implement rte flow push operations
>    app/testpmd: implement rte flow pull operations
>    app/testpmd: implement rte flow queue indirect actions
> 

Hi Jerin, Ajit, Ivan,

As far as I can see you did some reviews in the previous versions,
but did not ack the patch.
Is there any objection to the last version of the patch? If not, I will
proceed with it.


Hi Alex,

As per our process, we require at least one PMD implementation (it can be
a draft) to justify the API design.

If there is no objection from the above reviewers and a PMD implementation
exists before the end of the week, I think we can get the set into -rc1.

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v4 00/10] ethdev: datapath-focused flow rules management
  2022-02-10 16:00   ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Ferruh Yigit
@ 2022-02-10 16:12     ` Asaf Penso
  2022-02-10 16:33       ` Suanming Mou
  2022-02-10 18:04     ` Ajit Khaparde
                       ` (2 subsequent siblings)
  3 siblings, 1 reply; 220+ messages in thread
From: Asaf Penso @ 2022-02-10 16:12 UTC (permalink / raw)
  To: Ferruh Yigit, Alexander Kozyrev, dev, Suanming Mou
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Thanks, Ferruh.
The pmd part is being updated according to the previous API comments.
@Suanming Mou is working on it and will send it once ready, before the weekend.

Regards,
Asaf Penso

>-----Original Message-----
>From: Ferruh Yigit <ferruh.yigit@intel.com>
>Sent: Thursday, February 10, 2022 6:00 PM
>To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org
>Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
><thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
>andrew.rybchenko@oktetlabs.ru; mohammad.abdul.awal@intel.com;
>qi.z.zhang@intel.com; jerinj@marvell.com; ajit.khaparde@broadcom.com;
>bruce.richardson@intel.com
>Subject: Re: [PATCH v4 00/10] ethdev: datapath-focused flow rules
>management
>
>On 2/9/2022 9:37 PM, Alexander Kozyrev wrote:
>> Three major changes to a generic RTE Flow API were implemented in
>> order to speed up flow rule insertion/destruction and adapt the API to
>> the needs of a datapath-focused flow rules management applications:
>>
>> 1. Pre-configuration hints.
>> Application may give us some hints on what type of resources are needed.
>> Introduce the configuration routine to prepare all the needed
>> resources inside a PMD/HW before any flow rules are created at the init
>stage.
>>
>> 2. Flow grouping using templates.
>> Use the knowledge about which flow rules are to be used in an
>> application and prepare item and action templates for them in advance.
>> Group flow rules with common patterns and actions together for better
>resource management.
>>
>> 3. Queue-based flow management.
>> Perform flow rule insertion/destruction asynchronously to spare the
>> datapath from blocking on RTE Flow API and allow it to continue with packet
>processing.
>> Enqueue flow rules operations and poll for the results later.
>>
>> testpmd examples are part of the patch series. PMD changes will follow.
>>
>> RFC:
>> https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-
>1
>> -akozyrev@nvidia.com/
>>
>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>> Acked-by: Ori Kam <orika@nvidia.com>
>>
>> ---
>> v4:
>> - removed structures versioning
>> - introduced new rte_flow_port_info structure for rte_flow_info_get
>> API
>> - renamed rte_flow_table_create to rte_flow_template_table_create
>>
>> v3: addressed review comments and updated documentation
>> - added API to get info about pre-configurable resources
>> - renamed rte_flow_item_template to rte_flow_pattern_template
>> - renamed drain operation attribute to postpone
>> - renamed rte_flow_q_drain to rte_flow_q_push
>> - renamed rte_flow_q_dequeue to rte_flow_q_pull
>>
>> v2: fixed patch series thread
>>
>> Alexander Kozyrev (10):
>>    ethdev: introduce flow pre-configuration hints
>>    ethdev: add flow item/action templates
>>    ethdev: bring in async queue-based flow rules operations
>>    app/testpmd: implement rte flow configuration
>>    app/testpmd: implement rte flow template management
>>    app/testpmd: implement rte flow table management
>>    app/testpmd: implement rte flow queue flow operations
>>    app/testpmd: implement rte flow push operations
>>    app/testpmd: implement rte flow pull operations
>>    app/testpmd: implement rte flow queue indirect actions
>>
>
>Hi Jerin, Ajit, Ivan,
>
>As far as I can see you did some reviews in the previous versions, but not ack
>the patch.
>Is there any objection to last version of the patch, if not I will proceed with it.
>
>
>Hi Alex,
>
>As process we require at least one PMD implementation (it can be draft) to
>justify the API design.
>
>If there is no objection from above reviewers and PMD implementation exists
>before end of the week, I think we can get the set for -rc1.
>
>Thanks,
>ferruh

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v4 00/10] ethdev: datapath-focused flow rules management
  2022-02-10 16:12     ` Asaf Penso
@ 2022-02-10 16:33       ` Suanming Mou
  0 siblings, 0 replies; 220+ messages in thread
From: Suanming Mou @ 2022-02-10 16:33 UTC (permalink / raw)
  To: Asaf Penso, Ferruh Yigit, Alexander Kozyrev, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi,

I hope the PMD part is not too late.  You can find the series here:
https://patches.dpdk.org/project/dpdk/cover/20220210162926.20436-1-suanmingm@nvidia.com/

Thanks,
Suanming Mou

> -----Original Message-----
> From: Asaf Penso <asafp@nvidia.com>
> Sent: Friday, February 11, 2022 12:12 AM
> To: Ferruh Yigit <ferruh.yigit@intel.com>; Alexander Kozyrev
> <akozyrev@nvidia.com>; dev@dpdk.org; Suanming Mou
> <suanmingm@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
> andrew.rybchenko@oktetlabs.ru; mohammad.abdul.awal@intel.com;
> qi.z.zhang@intel.com; jerinj@marvell.com; ajit.khaparde@broadcom.com;
> bruce.richardson@intel.com
> Subject: RE: [PATCH v4 00/10] ethdev: datapath-focused flow rules
> management
> 
> Thanks, Ferruh.
> The pmd part is being updated according to the previous API comments.
> @Suanming Mou is working on it and will send it once ready, before the
> weekend.
> 
> Regards,
> Asaf Penso
> 
> >-----Original Message-----
> >From: Ferruh Yigit <ferruh.yigit@intel.com>
> >Sent: Thursday, February 10, 2022 6:00 PM
> >To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org
> >Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
> ><thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
> >andrew.rybchenko@oktetlabs.ru; mohammad.abdul.awal@intel.com;
> >qi.z.zhang@intel.com; jerinj@marvell.com; ajit.khaparde@broadcom.com;
> >bruce.richardson@intel.com
> >Subject: Re: [PATCH v4 00/10] ethdev: datapath-focused flow rules
> >management
> >
> >On 2/9/2022 9:37 PM, Alexander Kozyrev wrote:
> >> Three major changes to a generic RTE Flow API were implemented in
> >> order to speed up flow rule insertion/destruction and adapt the API
> >> to the needs of a datapath-focused flow rules management applications:
> >>
> >> 1. Pre-configuration hints.
> >> Application may give us some hints on what type of resources are needed.
> >> Introduce the configuration routine to prepare all the needed
> >> resources inside a PMD/HW before any flow rules are created at the
> >> init
> >stage.
> >>
> >> 2. Flow grouping using templates.
> >> Use the knowledge about which flow rules are to be used in an
> >> application and prepare item and action templates for them in advance.
> >> Group flow rules with common patterns and actions together for better
> >resource management.
> >>
> >> 3. Queue-based flow management.
> >> Perform flow rule insertion/destruction asynchronously to spare the
> >> datapath from blocking on RTE Flow API and allow it to continue with
> >> packet
> >processing.
> >> Enqueue flow rules operations and poll for the results later.
> >>
> >> testpmd examples are part of the patch series. PMD changes will follow.
> >>
> >> RFC:
> >> https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-
> >1
> >> -akozyrev@nvidia.com/
> >>
> >> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >> Acked-by: Ori Kam <orika@nvidia.com>
> >>
> >> ---
> >> v4:
> >> - removed structures versioning
> >> - introduced new rte_flow_port_info structure for rte_flow_info_get
> >> API
> >> - renamed rte_flow_table_create to rte_flow_template_table_create
> >>
> >> v3: addressed review comments and updated documentation
> >> - added API to get info about pre-configurable resources
> >> - renamed rte_flow_item_template to rte_flow_pattern_template
> >> - renamed drain operation attribute to postpone
> >> - renamed rte_flow_q_drain to rte_flow_q_push
> >> - renamed rte_flow_q_dequeue to rte_flow_q_pull
> >>
> >> v2: fixed patch series thread
> >>
> >> Alexander Kozyrev (10):
> >>    ethdev: introduce flow pre-configuration hints
> >>    ethdev: add flow item/action templates
> >>    ethdev: bring in async queue-based flow rules operations
> >>    app/testpmd: implement rte flow configuration
> >>    app/testpmd: implement rte flow template management
> >>    app/testpmd: implement rte flow table management
> >>    app/testpmd: implement rte flow queue flow operations
> >>    app/testpmd: implement rte flow push operations
> >>    app/testpmd: implement rte flow pull operations
> >>    app/testpmd: implement rte flow queue indirect actions
> >>
> >
> >Hi Jerin, Ajit, Ivan,
> >
> >As far as I can see you did some reviews in the previous versions, but
> >not ack the patch.
> >Is there any objection to last version of the patch, if not I will proceed with it.
> >
> >
> >Hi Alex,
> >
> >As process we require at least one PMD implementation (it can be draft)
> >to justify the API design.
> >
> >If there is no objection from above reviewers and PMD implementation
> >exists before end of the week, I think we can get the set for -rc1.
> >
> >Thanks,
> >ferruh

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v4 00/10] ethdev: datapath-focused flow rules management
  2022-02-10 16:00   ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Ferruh Yigit
  2022-02-10 16:12     ` Asaf Penso
@ 2022-02-10 18:04     ` Ajit Khaparde
  2022-02-11 10:22     ` Ivan Malov
  2022-02-11 10:48     ` Jerin Jacob
  3 siblings, 0 replies; 220+ messages in thread
From: Ajit Khaparde @ 2022-02-10 18:04 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Alexander Kozyrev, dpdk-dev, Ori Kam, Thomas Monjalon,
	Ivan Malov, Andrew Rybchenko, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob Kollanukkaran, Bruce Richardson

On Thu, Feb 10, 2022 at 8:00 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 2/9/2022 9:37 PM, Alexander Kozyrev wrote:
> > Three major changes to a generic RTE Flow API were implemented in order
> > to speed up flow rule insertion/destruction and adapt the API to the
> > needs of a datapath-focused flow rules management applications:
> >
> > 1. Pre-configuration hints.
> > Application may give us some hints on what type of resources are needed.
> > Introduce the configuration routine to prepare all the needed resources
> > inside a PMD/HW before any flow rules are created at the init stage.
> >
> > 2. Flow grouping using templates.
> > Use the knowledge about which flow rules are to be used in an application
> > and prepare item and action templates for them in advance. Group flow rules
> > with common patterns and actions together for better resource management.
> >
> > 3. Queue-based flow management.
> > Perform flow rule insertion/destruction asynchronously to spare the datapath
> > from blocking on RTE Flow API and allow it to continue with packet processing.
> > Enqueue flow rules operations and poll for the results later.
> >
> > testpmd examples are part of the patch series. PMD changes will follow.
> >
> > RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Acked-by: Ori Kam <orika@nvidia.com>
> >
> > ---
> > v4:
> > - removed structures versioning
> > - introduced new rte_flow_port_info structure for rte_flow_info_get API
> > - renamed rte_flow_table_create to rte_flow_template_table_create
> >
> > v3: addressed review comments and updated documentation
> > - added API to get info about pre-configurable resources
> > - renamed rte_flow_item_template to rte_flow_pattern_template
> > - renamed drain operation attribute to postpone
> > - renamed rte_flow_q_drain to rte_flow_q_push
> > - renamed rte_flow_q_dequeue to rte_flow_q_pull
> >
> > v2: fixed patch series thread
> >
> > Alexander Kozyrev (10):
> >    ethdev: introduce flow pre-configuration hints
> >    ethdev: add flow item/action templates
> >    ethdev: bring in async queue-based flow rules operations
> >    app/testpmd: implement rte flow configuration
> >    app/testpmd: implement rte flow template management
> >    app/testpmd: implement rte flow table management
> >    app/testpmd: implement rte flow queue flow operations
> >    app/testpmd: implement rte flow push operations
> >    app/testpmd: implement rte flow pull operations
> >    app/testpmd: implement rte flow queue indirect actions
> >
>
> Hi Jerin, Ajit, Ivan,
>
> As far as I can see you did some reviews in the previous versions,
> but not ack the patch.
> Is there any objection to last version of the patch, if not I will
> proceed with it.
The latest set is looking good. There are some places where we could
clean up or rephrase the text, but that need not block the series.
So for the series
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

Thanks for checking.

>
>
> Hi Alex,
>
> As process we require at least one PMD implementation (it can be draft)
> to justify the API design.
>
> If there is no objection from above reviewers and PMD implementation
> exists before end of the week, I think we can get the set for -rc1.
>
> Thanks,
> ferruh


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 00/10] ethdev: datapath-focused flow rules management
  2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                     ` (10 preceding siblings ...)
  2022-02-10 16:00   ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Ferruh Yigit
@ 2022-02-11  2:26   ` Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
                       ` (10 more replies)
  11 siblings, 11 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Three major changes to a generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
The application may give us some hints on what types of resources are needed.
Introduce the configuration routine to prepare all the needed resources
inside a PMD/HW before any flow rules are created at the init stage.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.
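
The queue-based model above can be sketched in plain C. This is a simplified
single-producer mock, not the rte_flow API itself: the structure and function
names (flow_queue, queue_enqueue, queue_push, queue_pull) are illustrative
stand-ins for the rte_flow_q_* operations introduced by this series, and the
simulated hardware completes every pushed operation instantly.

```c
#include <stdint.h>

/* Hypothetical single-producer model of one flow queue: operations are
 * enqueued without locking, made visible to HW in a batch by push(),
 * and their results are harvested later by pull(). */
#define QUEUE_SIZE 8

struct flow_queue {
	uint32_t ops[QUEUE_SIZE]; /* pending flow rule operation ids */
	uint32_t head;            /* next slot to enqueue into */
	uint32_t doorbell;        /* ops made visible to HW by push() */
	uint32_t completed;       /* ops the (simulated) HW has finished */
};

/* Enqueue a create/destroy operation; returns 0 on success, -1 if full. */
int queue_enqueue(struct flow_queue *q, uint32_t op_id)
{
	if (q->head - q->completed >= QUEUE_SIZE)
		return -1;
	q->ops[q->head % QUEUE_SIZE] = op_id;
	q->head++;
	return 0;
}

/* Push: ring the doorbell once for all postponed operations. */
void queue_push(struct flow_queue *q)
{
	q->doorbell = q->head;
}

/* Pull: harvest results of operations the HW has processed so far.
 * This model "completes" everything pushed; returns that count. */
uint32_t queue_pull(struct flow_queue *q)
{
	uint32_t done = q->doorbell - q->completed;
	q->completed = q->doorbell;
	return done;
}
```

The point of the split is batching: the datapath enqueues many operations
cheaply, issues one doorbell, and polls for completions when convenient.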

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

---
v5: changed titles for testpmd commits

v4: 
- removed structures versioning
- introduced new rte_flow_port_info structure for rte_flow_info_get API
- renamed rte_flow_table_create to rte_flow_template_table_create

v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread


Alexander Kozyrev (10):
  ethdev: introduce flow pre-configuration hints
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  app/testpmd: add flow engine configuration
  app/testpmd: add flow template management
  app/testpmd: add flow table management
  app/testpmd: add async flow create/destroy operations
  app/testpmd: add flow queue push operation
  app/testpmd: add flow queue pull operation
  app/testpmd: add async indirect actions creation/destruction

 app/test-pmd/cmdline_flow.c                   | 1496 ++++++++++++++++-
 app/test-pmd/config.c                         |  772 +++++++++
 app/test-pmd/testpmd.h                        |   66 +
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  205 +++
 .../prog_guide/img/rte_flow_q_usage.svg       |  351 ++++
 doc/guides/prog_guide/rte_flow.rst            |  326 ++++
 doc/guides/rel_notes/release_22_03.rst        |   22 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  378 ++++-
 lib/ethdev/rte_flow.c                         |  360 ++++
 lib/ethdev/rte_flow.h                         |  702 ++++++++
 lib/ethdev/rte_flow_driver.h                  |  102 ++
 lib/ethdev/version.map                        |   15 +
 12 files changed, 4775 insertions(+), 20 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11 10:16       ` Andrew Rybchenko
  2022-02-11  2:26     ` [PATCH v5 02/10] ethdev: add flow item/action templates Alexander Kozyrev
                       ` (9 subsequent siblings)
  10 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, a PMD may use hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows pre-allocating all the needed resources beforehand.
These resources can then be used at a later stage without costly allocations.
A PMD may use only a subset of the hints and ignore unused ones, or
fail in case the requested configuration is not supported.

The rte_flow_info_get() function retrieves information about the
supported pre-configurable resources. Both functions must be called
before any other usage of the flow API engine.
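
As a sketch of the intended call order, the snippet below models querying the
port limits first and clamping the requested pre-allocations before calling
configure. The structures are simplified stand-ins for rte_flow_port_info and
rte_flow_port_attr (the real definitions live in lib/ethdev/rte_flow.h), and
clamp_port_attr() is a hypothetical helper, not part of the patch.

```c
#include <stdint.h>

/* Simplified stand-ins for the info/attr structures from this patch. */
struct port_info {
	uint32_t nb_counters;
	uint32_t nb_aging_flows;
	uint32_t nb_meters;
};

struct port_attr {
	uint32_t nb_counters;
	uint32_t nb_aging_flows;
	uint32_t nb_meters;
};

static inline uint32_t clamp_u32(uint32_t want, uint32_t max)
{
	return want > max ? max : want;
}

/* Clamp requested pre-allocations to the maxima reported by the port,
 * mirroring the documented contract that configuration attributes must
 * not exceed the numbers returned by rte_flow_info_get(). */
void clamp_port_attr(struct port_attr *attr, const struct port_info *info)
{
	attr->nb_counters = clamp_u32(attr->nb_counters, info->nb_counters);
	attr->nb_aging_flows = clamp_u32(attr->nb_aging_flows,
					 info->nb_aging_flows);
	attr->nb_meters = clamp_u32(attr->nb_meters, info->nb_meters);
}
```

In an application this clamping would run between the info_get and configure
calls, so configure never requests more than the port advertises.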

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  37 +++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/rte_flow.c                  |  40 +++++++++
 lib/ethdev/rte_flow.h                  | 108 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 6 files changed, 203 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..72fb1132ac 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These parameters may be used by PMD to preallocate resources and configure NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API management configuration and
+pre-allocates needed resources beforehand to avoid costly allocations later.
+The expected number of counters or meters in an application, for example,
+allows the PMD to prepare and optimize the NIC memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                     const struct rte_flow_port_attr *port_attr,
+                     struct rte_flow_error *error);
+
+Information about resources that can benefit from pre-allocation can be
+retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
+of pre-configurable resources for a given port on a system.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index f03183ee86..2a47a37f0a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -69,6 +69,12 @@ New Features
   New APIs, ``rte_eth_dev_priority_flow_ctrl_queue_info_get()`` and
   ``rte_eth_dev_priority_flow_ctrl_queue_configure()``, was added.
 
+* **Added functions to configure flow API engine**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..66614ae29b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..92be2a9a89 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about available pre-configurable resources.
+ * The zero value means a resource cannot be pre-allocated.
+ *
+ */
+struct rte_flow_port_info {
+	/**
+	 * Number of pre-configurable counter actions.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of pre-configurable aging flows actions.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of pre-configurable traffic metering actions.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Retrieve configuration attributes supported by the port.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the contextual information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Resource pre-allocation and pre-configuration settings.
+ * The zero value means on-demand resource allocation only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counter actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging flows actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_flows;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings. The port, however, may reject the changes.
+ *
+ * Parameters in configuration attributes must not exceed
+ * numbers of resources returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index cd0c4c428d..f1235aa913 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -260,6 +260,8 @@ EXPERIMENTAL {
 	# added in 22.03
 	rte_eth_dev_priority_flow_ctrl_queue_configure;
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 02/10] ethdev: add flow item/action templates
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11 11:27       ` Andrew Rybchenko
  2022-02-11  2:26     ` [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                       ` (8 subsequent siblings)
  10 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.
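
The mask-driven split between template constants and per-rule values can be sketched in plain C. This is a toy model for illustration only: `struct action_conf` and `resolve_conf()` are hypothetical helpers, not part of the rte_flow API. A non-zero field in the mask makes the corresponding template field a constant shared by every rule; a zero field means the value is supplied at rule creation.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of the template masking concept (hypothetical helpers,
 * not part of the rte_flow API).
 */
struct action_conf {
	uint32_t mark_id;     /* e.g. a MARK action id */
	uint16_t queue_index; /* e.g. a QUEUE action index */
};

/* Resolve the effective configuration for one rule:
 * take the template value where the mask is set,
 * the per-rule value otherwise. */
static struct action_conf
resolve_conf(const struct action_conf *tmpl,
	     const struct action_conf *mask,
	     const struct action_conf *rule)
{
	struct action_conf out;

	out.mark_id = mask->mark_id ? tmpl->mark_id : rule->mark_id;
	out.queue_index = mask->queue_index ?
			  tmpl->queue_index : rule->queue_index;
	return out;
}
```

With a template carrying Mark ID 4 and a mask marking only that field constant, every rule resolves to the same mark while each supplies its own queue index.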

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 124 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 147 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 260 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 582 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 72fb1132ac..5391648833 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,130 @@ of pre-configurable resources for a given port on a system.
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select the fields to match on; spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+				const struct rte_flow_pattern_template_attr *template_attr,
+				const struct rte_flow_item pattern[],
+				struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	struct rte_flow_item pattern[2] = {{0}};
+	struct rte_flow_item_eth eth_m = {0};
+	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+	memset(eth_m.dst.addr_bytes, 0xff, RTE_ETHER_ADDR_LEN);
+	pattern[0].mask = &eth_m;
+	pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &itr, pattern, &error);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+				const struct rte_flow_actions_template_attr *template_attr,
+				const struct rte_flow_action actions[],
+				const struct rte_flow_action masks[],
+				struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_action actions[] = {
+		/* Mark ID is constant (4) for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action masks[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+
+	struct rte_flow_actions_template *at =
+		rte_flow_actions_template_create(port, &atr, actions, masks, &error);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid any hiccups due to memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+The application may create another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+	struct rte_flow_template_table *
+	rte_flow_template_table_create(uint16_t port_id,
+				const struct rte_flow_template_table_attr *table_attr,
+				struct rte_flow_pattern_template *pattern_templates[],
+				uint8_t nb_pattern_templates,
+				struct rte_flow_actions_template *actions_templates[],
+				uint8_t nb_actions_templates,
+				struct rte_flow_error *error);
+
+A table can be created only after the flow rules management is configured
+and the pattern and actions templates are created.
+
+.. code-block:: c
+
+	rte_flow_configure(port, &port_attr, &error);
+
+	struct rte_flow_pattern_template *pattern_templates[1];
+	struct rte_flow_actions_template *actions_templates[1];
+
+	pattern_templates[0] =
+		rte_flow_pattern_template_create(port, &itr, pattern, &error);
+	actions_templates[0] =
+		rte_flow_actions_template_create(port, &atr, actions, masks, &error);
+
+	struct rte_flow_template_table *table =
+		rte_flow_template_table_create(port, &table_attr,
+				pattern_templates, nb_pattern_templates,
+				actions_templates, nb_actions_templates,
+				&error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 2a47a37f0a..6656b35295 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -75,6 +75,14 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 66614ae29b..b53f8c9b89 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1431,3 +1431,150 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+						     pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev,
+							      actions_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+			const struct rte_flow_template_table_attr *table_attr,
+			struct rte_flow_pattern_template *pattern_templates[],
+			uint8_t nb_pattern_templates,
+			struct rte_flow_actions_template *actions_templates[],
+			uint8_t nb_actions_templates,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_template_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->template_table_create)) {
+		table = ops->template_table_create(dev, table_attr,
+					pattern_templates, nb_pattern_templates,
+					actions_templates, nb_actions_templates,
+					error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+				struct rte_flow_template_table *template_table,
+				struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->template_table_destroy)) {
+		return flow_err(port_id,
+				ops->template_table_destroy(dev,
+							    template_table,
+							    error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 92be2a9a89..e87db5a540 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4961,6 +4961,266 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Relaxed matching policy.
+	 * - PMD may match only on items with mask member set and skip
+	 * matching on protocol layers specified without any masks.
+	 * - If not set, PMD will match on protocol layers
+	 * specified without any masks as well.
+	 * - Packet data must be stacked in the same order as the
+	 * protocol layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, matching on 5 tuple TCP flow, the template will be
+ * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ * The number and order of items in the template must match those
+ * used at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   Only the mask member of each item is used; spec and last are ignored.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+struct rte_flow_actions_template_attr;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ * The number and order of actions in the template must match those
+ * used at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action's member is constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be the part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ * the actual action type should be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of a template table.
+ * This handle can be used to manage the created template table.
+ */
+struct rte_flow_template_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_template_table_attr {
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create template table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Template table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy template table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+		struct rte_flow_template_table *template_table,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 7c29930d0f..2d96db1dc7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_create() */
+	struct rte_flow_template_table *(*template_table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_destroy() */
+	int (*template_table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index f1235aa913..5fd2108895 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -262,6 +262,12 @@ EXPERIMENTAL {
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 02/10] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11 12:42       ` Andrew Rybchenko
  2022-02-11  2:26     ` [PATCH v5 04/10] app/testpmd: add flow engine configuration Alexander Kozyrev
                       ` (7 subsequent siblings)
  10 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe: all operations
on a given queue should be issued from the same thread, and it is the
responsibility of the application to synchronize access in case of
multi-threaded use of the same queue.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_q_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.
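
The enqueue-then-pull contract can be modelled with a toy single-threaded ring. This is an illustration of the intended usage pattern only, not the actual rte_flow_q_* implementation: requests are queued without locks and their completions are collected later in a batch.

```c
#include <assert.h>

/*
 * Toy single-threaded model of the enqueue/pull contract
 * (hypothetical helpers, not the rte_flow_q_* implementation).
 */
#define Q_SIZE 8u

struct op_queue {
	int ops[Q_SIZE];    /* pending operation ids */
	unsigned int head;  /* next slot to enqueue into */
	unsigned int tail;  /* next slot to pull from */
};

/* Enqueue one operation; returns immediately, no locking. */
static int enqueue(struct op_queue *q, int op_id)
{
	if (q->head - q->tail == Q_SIZE)
		return -1; /* queue full: the caller must pull first */
	q->ops[q->head % Q_SIZE] = op_id;
	q->head++;
	return 0;
}

/* Pull up to max completed operations; returns the number pulled. */
static int pull(struct op_queue *q, int *results, int max)
{
	int n = 0;

	while (q->tail != q->head && n < max) {
		results[n] = q->ops[q->tail % Q_SIZE];
		q->tail++;
		n++;
	}
	return n;
}
```

The datapath thread keeps calling enqueue() while the queue has room and periodically drains completions with pull(), mirroring how rte_flow_q_flow_create() returns immediately and rte_flow_q_pull() later reports the operation status.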

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg | 205 ++++++++++
 .../prog_guide/img/rte_flow_q_usage.svg       | 351 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 167 ++++++++-
 doc/guides/rel_notes/release_22_03.rst        |   8 +
 lib/ethdev/rte_flow.c                         | 175 ++++++++-
 lib/ethdev/rte_flow.h                         | 334 +++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  55 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 1300 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..96160bde42
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
@@ -0,0 +1,205 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485"
+   height="535"
+   overflow="hidden"
+   version="1.1"
+   id="svg61"
+   sodipodi:docname="rte_flow_q_init.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview63"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.517757"
+     inkscape:cx="242.79249"
+     inkscape:cy="267.17057"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g59" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="485"
+         height="535"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g59">
+    <rect
+       x="0"
+       y="0"
+       width="485"
+       height="535"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="0.500053"
+       y="79.5001"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(121.6 116)"
+       id="text13">
+         rte_eth_dev_configure
+         <tspan
+   font-size="24"
+   x="224.007"
+   y="0"
+   id="tspan11">()</tspan></text>
+    <rect
+       x="0.500053"
+       y="158.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect15" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(140.273 195)"
+       id="text17">
+         rte_flow_configure()
+      </text>
+    <rect
+       x="0.500053"
+       y="236.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text21"
+       x="63.425903"
+       y="274">rte_flow_pattern_template_create()</text>
+    <rect
+       x="0.500053"
+       y="316.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect23" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text27"
+       x="69.379204"
+       y="353">rte_flow_actions_template_create()</text>
+    <rect
+       x="0.500053"
+       y="0.500053"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect29" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       transform="translate(177.233,37)"
+       id="text33">rte_eal_init()</text>
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 60)"
+       id="path35" />
+    <path
+       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
+       transform="matrix(-1 0 0 1 241 138)"
+       id="path37" />
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 217)"
+       id="path39" />
+    <rect
+       x="0.500053"
+       y="395.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect41" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text47"
+       x="76.988998"
+       y="432">rte_flow_template_table_create()</text>
+    <path
+       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
+       transform="matrix(-1 0 0 1 241 296)"
+       id="path49" />
+    <path
+       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
+       id="path51" />
+    <rect
+       x="0.500053"
+       y="473.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect53" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text55"
+       x="149.30299"
+       y="511">rte_eth_dev_start()</text>
+    <path
+       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
+       id="path57" />
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..a1f6c0a0a8
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
@@ -0,0 +1,351 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880"
+   height="610"
+   overflow="hidden"
+   version="1.1"
+   id="svg103"
+   sodipodi:docname="rte_flow_q_usage.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview105"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.3311475"
+     inkscape:cx="439.84606"
+     inkscape:cy="305.37562"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g101" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="880"
+         height="610"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g101">
+    <rect
+       x="0"
+       y="0"
+       width="880"
+       height="610"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="333.5"
+       y="0.500053"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Consolas, Consolas_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       transform="translate(357.196,29)"
+       id="text11">rte_eth_rx_burst()</text>
+    <rect
+       x="333.5"
+       y="63.5001"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect13" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(394.666 91)"
+       id="text17">analyze <tspan
+   font-size="19"
+   x="60.9267"
+   y="0"
+   id="tspan15">packet </tspan></text>
+    <rect
+       x="572.5"
+       y="279.5"
+       width="234"
+       height="46"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(591.429 308)"
+       id="text21">rte_flow_q_flow_create()</text>
+    <path
+       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path23" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(430.069 378)"
+       id="text27">more <tspan
+   font-size="19"
+   x="-12.94"
+   y="23"
+   id="tspan25">packets?</tspan></text>
+    <path
+       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
+       id="path29" />
+    <path
+       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
+       id="path31" />
+    <path
+       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
+       id="path33" />
+    <path
+       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
+       id="path35" />
+    <path
+       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
+       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
+       id="path37" />
+    <path
+       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path39" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(417.576 155)"
+       id="text43">add new <tspan
+   font-size="19"
+   x="13.2867"
+   y="23"
+   id="tspan41">rule?</tspan></text>
+    <path
+       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
+       id="path45" />
+    <rect
+       x="602.5"
+       y="127.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect47" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(611.34 148)"
+       id="text49">yes</text>
+    <rect
+       x="254.5"
+       y="126.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect51" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(267.182 147)"
+       id="text53">no</text>
+    <path
+       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
+       transform="matrix(1 0 0 -1 567.5 383.495)"
+       id="path55" />
+    <path
+       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path57" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(159.155 208)"
+       id="text61">destroy the <tspan
+   font-size="19"
+   x="24.0333"
+   y="23"
+   id="tspan59">rule?</tspan></text>
+    <path
+       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
+       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
+       id="path63" />
+    <rect
+       x="81.5001"
+       y="280.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect65" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(96.2282 308)"
+       id="text67">rte_flow_q_flow_destroy()</text>
+    <path
+       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
+       transform="matrix(-1 0 0 1 319.915 213.5)"
+       id="path69" />
+    <path
+       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
+       id="path71" />
+    <path
+       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
+       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
+       id="path73" />
+    <rect
+       x="334.5"
+       y="540.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect75" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text77"
+       x="385.08301"
+       y="569">rte_flow_q_pull()</text>
+    <rect
+       x="334.5"
+       y="462.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect79" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(379.19 491)"
+       id="text81">rte_flow_q_push()</text>
+    <path
+       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
+       id="path83" />
+    <rect
+       x="0.500053"
+       y="287.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect85" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(12.8617 308)"
+       id="text87">no</text>
+    <rect
+       x="357.5"
+       y="223.5"
+       width="47"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect89" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(367.001 244)"
+       id="text91">yes</text>
+    <rect
+       x="469.5"
+       y="421.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect93" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(481.872 442)"
+       id="text95">no</text>
+    <rect
+       x="832.5"
+       y="223.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect97" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(841.777 244)"
+       id="text99">yes</text>
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 5391648833..5d47f3bd21 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,12 +3607,16 @@ Expected number of counters or meters in an application, for example,
 allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via the
+queue-based API; see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                      const struct rte_flow_port_attr *port_attr,
+                     uint16_t nb_queue,
+                     const struct rte_flow_queue_attr *queue_attr[],
                      struct rte_flow_error *error);
 
 Information about resources that can benefit from pre-allocation can be
@@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
 
 .. code-block:: c
 
-	rte_flow_configure(port, *port_attr, *error);
+	rte_flow_configure(port, *port_attr, nb_queue, *queue_attr, *error);
 
 	struct rte_flow_pattern_template *pattern_templates[0] =
 		rte_flow_pattern_template_create(port, &itr, &pattern, &error);
@@ -3750,6 +3754,167 @@ and pattern and actions templates are created.
 				*actions_templates, nb_actions_templates,
 				*error);
 
+Asynchronous operations
+-----------------------
+
+Flow rule management can be done via special lockless flow management queues:
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked from the application's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to NIC in batches.
+
+- Results must be pulled in a timely manner to avoid queue overflow.
+
+- User data is returned as part of the result to identify an operation.
+
+- A flow handle is valid once the creation operation is enqueued; it must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:
+
+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_q_flow_create(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow_template_table *template_table,
+				const struct rte_flow_item pattern[],
+				uint8_t pattern_template_index,
+				const struct rte_flow_action actions[],
+				uint8_t actions_template_index,
+				struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_flow_destroy(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow *flow,
+				struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored operations from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_push(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_error *error);
+
+The queue operation attributes include a ``postpone`` flag.
+When it is set, multiple operations can be batched together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
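
The batching effect of the ``postpone`` flag can be illustrated with a small
self-contained simulation. This is not DPDK code: ``toy_queue``, ``toy_enqueue``
and ``toy_push`` are deliberately simplified stand-ins for a PMD flow queue,
used only to show that postponed operations stay buffered until an explicit push.

```c
/* Toy model of the postpone/push semantics: enqueued operations are
 * buffered in software and reach the simulated "hardware" only on push. */
#define QUEUE_SIZE 8

struct toy_queue {
	int pending; /* operations buffered, not yet sent to HW */
	int in_hw;   /* operations the simulated HW has received */
};

/* Enqueue one operation; with postpone set it is only buffered. */
static int toy_enqueue(struct toy_queue *q, int postpone)
{
	if (q->pending >= QUEUE_SIZE)
		return -1; /* queue overflow: results were not pulled in time */
	q->pending++;
	if (!postpone) {
		/* Without postpone, everything buffered is sent right away. */
		q->in_hw += q->pending;
		q->pending = 0;
	}
	return 0;
}

/* Push flushes every buffered operation to HW in one batch. */
static void toy_push(struct toy_queue *q)
{
	q->in_hw += q->pending;
	q->pending = 0;
}
```

Enqueuing three postponed operations leaves ``in_hw`` at zero; a single
``toy_push()`` then delivers the whole batch at once, mirroring the
throughput-over-latency trade-off described above.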
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling asynchronous operation results.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operations statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_pull(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_q_op_res res[],
+			uint16_t n_res,
+			struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
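
As an illustrative sketch only (error handling, template setup and the
surrounding packet loop are omitted; ``port_id``, ``queue_id``, ``table``,
``pattern`` and ``actions`` are assumed to be prepared as in the previous
sections, and ``some_app_ctx`` and ``handle_completion()`` are hypothetical
application helpers), a datapath burst might enqueue postponed creations,
push them, and later pull the results:

.. code-block:: c

	struct rte_flow_error error;
	struct rte_flow_q_op_res res[32];
	struct rte_flow_q_ops_attr attr = {
		.user_data = some_app_ctx, /* echoed back with the result */
		.postpone = 1,             /* batch operations, send on push */
	};

	struct rte_flow *flow = rte_flow_q_flow_create(port_id, queue_id, &attr,
						       table, pattern, 0,
						       actions, 0, &error);
	/* The handle is valid immediately, but the rule is offloaded only
	 * once the corresponding completion is pulled. */
	rte_flow_q_push(port_id, queue_id, &error);

	int n = rte_flow_q_pull(port_id, queue_id, res, 32, &error);
	for (int i = 0; i < n; i++)
		handle_completion(res[i].user_data);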
+
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_q_action_handle_create(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			const struct rte_flow_indir_action_conf *indir_action_conf,
+			const struct rte_flow_action *action,
+			struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the action is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_update(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			const void *update,
+			struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6656b35295..87cea8a966 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -83,6 +83,14 @@ New Features
     ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
 
+  * ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy``
+    API to enqueue flow creation/destruction operations asynchronously as well
+    as ``rte_flow_q_pull`` to poll and retrieve results of these operations
+    and ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
+    Introduced asynchronous API for indirect actions management as well:
+    ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy``
+    and ``rte_flow_q_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index b53f8c9b89..aca5bac2da 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1415,6 +1415,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1424,7 +1426,8 @@ rte_flow_configure(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
 		return flow_err(port_id,
-				ops->configure(dev, port_attr, error),
+				ops->configure(dev, port_attr,
+					       nb_queue, queue_attr, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1578,3 +1581,173 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_template_table *template_table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->q_flow_create)) {
+		flow = ops->q_flow_create(dev, queue_id,
+					  q_ops_attr, template_table,
+					  pattern, pattern_template_index,
+					  actions, actions_template_index,
+					  error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_flow_destroy)) {
+		return flow_err(port_id,
+				ops->q_flow_destroy(dev, queue_id,
+						    q_ops_attr, flow, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->q_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
+					     indir_action_conf, action, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
+					   action_handle, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
+					  action_handle, update, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_push)) {
+		return flow_err(port_id,
+				ops->q_push(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_pull)) {
+		ret = ops->q_pull(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index e87db5a540..b0d4f33bfd 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4862,6 +4862,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
+	 * Number of queues for asynchronous operations.
+	 */
+	uint32_t nb_queues;
 	/**
 	 * Number of pre-configurable counter actions.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4879,6 +4883,17 @@ struct rte_flow_port_info {
 	uint32_t nb_meters;
 };
 
+/**
+ * Flow engine queue configuration.
+ */
+__extension__
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4948,6 +4963,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements is given by @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4959,6 +4979,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5221,6 +5243,318 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+struct rte_flow_q_ops_attr {
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+	 /**
+	  * When set, the requested action will not be sent to the HW immediately.
	  * The application must call rte_flow_q_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   A returned rule handle does not mean that the rule was offloaded;
+ *   only the completion result indicates that it was.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_template_table *template_table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the object.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action_handle* was not found.
+ *   - (-EBUSY) if action pointed by *action_handle* is still used by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the object.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *action_handle*. *update* may have the same type as the immediate action
+ *   used when the handle was created, or be a wrapper structure that includes
+ *   the action configuration to be updated and bit fields indicating which
+ *   fields of the action to update.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action_handle* was not found.
+ *   - (-EBUSY) if action pointed by *action_handle* is still used by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored (postponed) rules to the HW.
+ * Postponed rules are rules that were inserted with the postpone flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW to
+ * reduce the number of communications between the HW and the SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *    0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull completed rte flow operations from a queue.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..33dc57a15e 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -161,6 +161,8 @@ struct rte_flow_ops {
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +201,59 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_create() */
+	struct rte_flow *(*q_flow_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_destroy() */
+	int (*q_flow_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_create() */
+	struct rte_flow_action_handle *(*q_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_destroy() */
+	int (*q_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_action_handle_update() */
+	int (*q_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_push() */
+	int (*q_push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_pull() */
+	int (*q_pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 5fd2108895..46a4151053 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -268,6 +268,13 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_q_flow_create;
+	rte_flow_q_flow_destroy;
+	rte_flow_q_action_handle_create;
+	rte_flow_q_action_handle_destroy;
+	rte_flow_q_action_handle_update;
+	rte_flow_q_push;
+	rte_flow_q_pull;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 04/10] app/testpmd: add flow engine configuration
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (2 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 05/10] app/testpmd: add flow template management Alexander Kozyrev
                       ` (6 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256

Add support for the rte_flow_info_get API to get available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  55 +++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  60 +++++++++-
 4 files changed, 245 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7b56b1b0ff..cc3003e6eb 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -847,6 +856,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -928,6 +942,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1964,6 +1988,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2189,7 +2216,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2204,6 +2233,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_COUNTERS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging flows",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_flows)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7480,6 +7568,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8708,6 +8823,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e812f57151..0f9374163e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1609,6 +1609,61 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_info port_info;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	memset(&port_info, 0, sizeof(port_info));
+	if (rte_flow_info_get(port_id, &port_info, &error))
+		return port_flow_complain(&error);
+	printf("Pre-configurable resources on port %u:\n"
+	       "Number of queues: %d\n"
+	       "Number of counters: %d\n"
+	       "Number of aging flows: %d\n"
+	       "Number of meters: %d\n",
+	       port_id, port_info.nb_queues, port_info.nb_counters,
+	       port_info.nb_aging_flows, port_info.nb_meters);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index b2e98df6e1..cfdda5005c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3308,8 +3308,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3391,6 +3403,50 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves info on pre-configurable resources in the underlying
+device to give a hint of possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Pre-configurable resources on port #[...]:
+   Number of queues: #[...]
+   Number of counters: #[...]
+   Number of aging flows: #[...]
+   Number of meters: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the needed resources in the underlying
+device to be used later at flow creation. Flow queues are allocated as well
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 05/10] app/testpmd: add flow template management
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (3 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 04/10] app/testpmd: add flow engine configuration Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 06/10] app/testpmd: add flow table management Alexander Kozyrev
                       ` (5 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 376 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++++
 app/test-pmd/testpmd.h                      |  23 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  97 +++++
 4 files changed, 697 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index cc3003e6eb..34bc73eea3 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,22 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -861,6 +881,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -869,10 +893,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -952,6 +979,43 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1991,6 +2055,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2060,6 +2130,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2210,6 +2284,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2218,6 +2306,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2292,6 +2382,112 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2614,7 +2810,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5731,7 +5927,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7595,6 +7793,114 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8564,6 +8870,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -8832,6 +9186,24 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port, in->args.vc.pat_templ_id,
+				in->args.vc.attr.reserved, in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port, in->args.vc.act_templ_id,
+				in->args.vc.actions, in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 0f9374163e..7af44eadf9 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1609,6 +1609,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2079,6 +2122,166 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id, bool relaxed,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_pattern_template_attr attr = {
+					.relaxed_matching = relaxed };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						&attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						NULL, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..c70b1fa4e8 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,16 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      bool relaxed,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index cfdda5005c..acb763bdf0 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3344,6 +3344,25 @@ following sections.
        [aging_counters_number {number}]
        [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3447,6 +3465,85 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``;
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+from their template ID (as returned by ``flow pattern_template create``),
+this command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+from their template ID (as returned by ``flow actions_template create``),
+this command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2
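
[Editorial usage sketch, not part of the patch] The template ID bookkeeping
added above (template_alloc() in config.c, reused as table_alloc() in the
next patch) keeps each per-port list sorted by descending ID and treats
UINT32_MAX as "pick the next free ID". A self-contained, illustrative sketch
of that allocation logic (names such as tmpl_alloc are hypothetical, not the
actual testpmd code):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-in for testpmd's struct port_template. */
struct tmpl {
	struct tmpl *next; /* list kept sorted by descending id */
	uint32_t id;
};

/*
 * ID allocation as in template_alloc()/table_alloc(): UINT32_MAX asks
 * for the next free ID (the list head holds the highest one); an
 * explicit ID is inserted in order, duplicates are rejected.
 * Returns 0 on success, a negative errno value on failure.
 */
static int
tmpl_alloc(uint32_t id, struct tmpl **out, struct tmpl **list)
{
	struct tmpl **ppt = list;
	struct tmpl *pt;

	*out = NULL;
	if (id == UINT32_MAX) {
		if (*list) {
			if ((*list)->id == UINT32_MAX - 1)
				return -ENOMEM; /* highest ID already taken */
			id = (*list)->id + 1;
		} else {
			id = 0;
		}
	}
	pt = calloc(1, sizeof(*pt));
	if (!pt)
		return -ENOMEM;
	/* Walk past all entries with a larger ID (descending order). */
	while (*ppt && (*ppt)->id > id)
		ppt = &(*ppt)->next;
	if (*ppt && (*ppt)->id == id) {
		free(pt); /* duplicate ID, caller must destroy it first */
		return -EINVAL;
	}
	pt->next = *ppt;
	pt->id = id;
	*ppt = pt;
	*out = pt;
	return 0;
}
```

Note the head-plus-one scheme: auto-assigned IDs keep growing and freed
lower IDs are not reused until the highest-numbered template is destroyed,
which is why the patch prints "delete it first" when the top of the range
is reached.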


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 06/10] app/testpmd: add flow table management
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (4 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 05/10] app/testpmd: add flow template management Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 07/10] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
                       ` (4 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_table API.
Provide the command line interface for the flow
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 171 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 556 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 34bc73eea3..3e89525445 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -112,6 +114,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -885,6 +901,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_template_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1016,6 +1044,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2061,6 +2115,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2134,6 +2193,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2298,6 +2359,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2308,6 +2376,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2488,6 +2557,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "template_table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage template tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create template table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy template table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7901,6 +8068,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8918,6 +9198,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9204,6 +9508,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_template_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_template_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 7af44eadf9..5f6da3944e 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1652,6 +1652,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2282,6 +2325,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] =
+					temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_template_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_template_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Template table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_template_table_destroy(portid_t port_id,
+				 uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_template_table_destroy(port_id,
+							    pt->table,
+							    &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Template table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c70b1fa4e8..4c6e775bad 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -915,6 +926,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_template_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index acb763bdf0..16b874250c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3362,6 +3362,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3544,6 +3557,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating template table
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying template table
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+from their table ID (as returned by ``flow template_table create``);
+this command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 07/10] app/testpmd: add async flow create/destroy operations
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (5 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 06/10] app/testpmd: add flow table management Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 08/10] app/testpmd: add flow queue push operation Alexander Kozyrev
                       ` (3 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_flow_create/rte_flow_q_flow_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 3e89525445..f794a83a07 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -114,6 +116,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -891,6 +909,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool postpone; /**< Postpone async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -921,6 +941,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1070,6 +1091,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2120,6 +2153,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2195,6 +2234,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2366,6 +2407,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2388,7 +2436,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2655,6 +2704,84 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TEMPLATE_TABLE] = {
		.name = "template_table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8181,6 +8308,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TEMPLATE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9222,6 +9454,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9519,6 +9773,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 5f6da3944e..50781b2b84 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2453,6 +2453,172 @@ port_flow_template_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue a flow rule creation operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr,
+		pt->table, pattern, pattern_idx, actions, actions_idx, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue a number of flow rule destruction operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMD
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr,
+						    pf->flow, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMD
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_q_pull(port_id, queue_id,
+							 &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 4c6e775bad..d0e1e3eeec 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -932,6 +932,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 16b874250c..b802288c66 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3382,6 +3382,20 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3703,6 +3717,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_q_flow_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4418,6 +4456,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` adds destruction operations for one or more rules,
+selected by rule ID (as returned by ``flow queue create``), to a queue;
+this command calls ``rte_flow_q_flow_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 08/10] app/testpmd: add flow queue push operation
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (6 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 07/10] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 09/10] app/testpmd: add flow queue pull operation Alexander Kozyrev
                       ` (2 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f794a83a07..11240d6f04 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -132,6 +133,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2159,6 +2163,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2437,7 +2444,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2782,6 +2790,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8413,6 +8436,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9784,6 +9835,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 50781b2b84..52ecc4d773 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2619,6 +2619,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all the queue operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_q_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d0e1e3eeec..03f135ff46 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index b802288c66..01e5e3c19f 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3396,6 +3396,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3611,6 +3615,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_q_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2
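
For reference, the queue/push pairing this patch exercises can be sketched from an application's point of view. This is a sketch only, assuming the ``rte_flow_q_push()`` signature proposed earlier in this series; it is not part of the patch:

```c
/* Sketch: flush all postponed flow operations on one queue to the HW,
 * mirroring what testpmd does in port_queue_flow_push(). Assumes the
 * rte_flow_q_* API proposed in this patch series. */
#include <errno.h>
#include <string.h>

#include <rte_flow.h>

static int
flush_postponed_ops(uint16_t port_id, uint32_t queue_id)
{
	struct rte_flow_error error;

	/* Poison the error struct so a PMD that forgets to fill it
	 * on failure is easier to spot. */
	memset(&error, 0x55, sizeof(error));
	if (rte_flow_q_push(port_id, queue_id, &error) < 0)
		return -EINVAL;
	return 0;
}
```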


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v5 09/10] app/testpmd: add flow queue pull operation
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (7 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 08/10] app/testpmd: add flow queue push operation Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-11  2:26     ` [PATCH v5 10/10] app/testpmd: add async indirect actions creation/destruction Alexander Kozyrev
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0
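
The command above maps to a polling loop like the following sketch, built against the ``rte_flow_q_pull()`` signature and ``rte_flow_q_op_res`` layout used in this patch (the helper name and ``max`` parameter are illustrative, not part of the API):

```c
/* Sketch: drain up to 'max' completed operations from a flow queue and
 * count the successful ones, as testpmd's port_queue_flow_pull() does.
 * Assumes the rte_flow_q_* API proposed in this patch series. */
#include <errno.h>
#include <stdlib.h>
#include <string.h>

#include <rte_flow.h>

static int
drain_flow_queue(uint16_t port_id, uint32_t queue_id, unsigned int max)
{
	struct rte_flow_q_op_res *res;
	struct rte_flow_error error;
	int ret, i, success = 0;

	res = calloc(max, sizeof(*res));
	if (res == NULL)
		return -ENOMEM;
	/* Poisoning to make sure the PMD updates it in case of error. */
	memset(&error, 0x66, sizeof(error));
	ret = rte_flow_q_pull(port_id, queue_id, res, max, &error);
	if (ret >= 0)
		for (i = 0; i < ret; i++)
			if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
				success++;
	free(res);
	return ret < 0 ? ret : success;
}
```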

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 11240d6f04..26ef2ccfd4 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -136,6 +137,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2166,6 +2170,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2445,7 +2452,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2805,6 +2813,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8464,6 +8487,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9838,6 +9889,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 52ecc4d773..79e7eaa8ce 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2462,14 +2462,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2532,16 +2530,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2556,7 +2544,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2592,21 +2579,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_q_pull(port_id, queue_id,
-							 &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2647,6 +2619,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_q_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 03f135ff46..6fe829edab 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 01e5e3c19f..d5d9125d50 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3400,6 +3400,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3632,6 +3636,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` queries the underlying device for flow queue operation
+results and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_q_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
   Queue #[...] pulled [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3762,6 +3783,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4496,6 +4519,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


* [PATCH v5 10/10] app/testpmd: add async indirect actions creation/destruction
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (8 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 09/10] app/testpmd: add flow queue pull operation Alexander Kozyrev
@ 2022-02-11  2:26     ` Alexander Kozyrev
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11  2:26 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_action_handle API.
Provide the command line interface for enqueueing indirect action operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9
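
In application code, the create step of the usage example could look roughly like the sketch below. It is hedged against the ``rte_flow_q_action_handle_create()`` and ``rte_flow_q_push()`` signatures in this patch series; the helper name and the choice of an ingress-only configuration are illustrative assumptions:

```c
/* Sketch: enqueue creation of an indirect action with postpone set,
 * then flush it to the HW. Assumes the rte_flow_q_* API proposed in
 * this patch series; error handling is trimmed for brevity. */
#include <string.h>

#include <rte_flow.h>

static struct rte_flow_action_handle *
enqueue_indirect_action(uint16_t port_id, uint32_t queue_id,
			const struct rte_flow_action *action)
{
	const struct rte_flow_q_ops_attr attr = { .postpone = 1 };
	const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
	struct rte_flow_error error;
	struct rte_flow_action_handle *handle;

	/* Poisoning to make sure the PMD updates it in case of error. */
	memset(&error, 0x88, sizeof(error));
	handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
						 &conf, action, &error);
	if (handle == NULL)
		return NULL;
	/* Postponed: nothing reaches the HW until this push. */
	rte_flow_q_push(port_id, queue_id, &error);
	return handle;
}
```

The result of the enqueued creation is still asynchronous: ``flow queue pull`` (``rte_flow_q_pull()``) reports whether it actually succeeded.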

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 26ef2ccfd4..b9edb1d482 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -121,6 +121,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -134,6 +135,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1102,6 +1123,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1111,6 +1133,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2167,6 +2219,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2744,6 +2802,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2797,6 +2862,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6209,6 +6358,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -9892,6 +10145,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 79e7eaa8ce..39bf775b69 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2591,6 +2591,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
+						      conf, action, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_q_action_handle_destroy(port_id, queue_id,
+						&attr, pia->handle, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_q_action_handle_update(port_id, queue_id, &attr,
+					    action_handle, action, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 6fe829edab..167f1741dc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index d5d9125d50..65ecef754e 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4780,6 +4780,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` enqueues creation of an indirect
+action. It is bound to ``rte_flow_q_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+       [postpone {boolean}] action_id {indirect_action_id}
+       [ingress] [egress] [transfer] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4809,6 +4834,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` enqueues an update operation for an
+indirect action. It is bound to ``rte_flow_q_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4832,6 +4876,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds a destruction operation for one
+or more indirect actions to a queue, identified by their indirect action IDs
+(as returned by ``flow queue {port_id} indirect_action {queue_id} create``).
+It is bound to ``rte_flow_q_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-11  2:26     ` [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-02-11 10:16       ` Andrew Rybchenko
  2022-02-11 18:47         ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-11 10:16 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/11/22 05:26, Alexander Kozyrev wrote:
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
> 
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> function allows to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones or
> fail in case the requested configuration is not supported.
> 
> The rte_flow_info_get() is available to retrieve the information about
> supported pre-configurable resources. Both these functions must be called
> before any other usage of the flow API engine.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> ---
>   doc/guides/prog_guide/rte_flow.rst     |  37 +++++++++
>   doc/guides/rel_notes/release_22_03.rst |   6 ++
>   lib/ethdev/rte_flow.c                  |  40 +++++++++
>   lib/ethdev/rte_flow.h                  | 108 +++++++++++++++++++++++++
>   lib/ethdev/rte_flow_driver.h           |  10 +++
>   lib/ethdev/version.map                 |   2 +
>   6 files changed, 203 insertions(+)
> 
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index b4aa9c47c2..72fb1132ac 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3589,6 +3589,43 @@ Return values:
>   
>   - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
>   
> +Flow engine configuration
> +-------------------------
> +
> +Configure flow API management.
> +
> +An application may provide some parameters at the initialization phase about
> +rules engine configuration and/or expected flow rules characteristics.
> +These parameters may be used by PMD to preallocate resources and configure NIC.
> +
> +Configuration
> +~~~~~~~~~~~~~
> +
> +This function performs the flow API management configuration and
> +pre-allocates needed resources beforehand to avoid costly allocations later.
> +Expected number of counters or meters in an application, for example,
> +allows the PMD to prepare and optimize NIC memory layout in advance.
> +``rte_flow_configure()`` must be called before any flow rule is created,
> +but after an Ethernet device is configured.
> +
> +.. code-block:: c
> +
> +   int
> +   rte_flow_configure(uint16_t port_id,
> +                     const struct rte_flow_port_attr *port_attr,
> +                     struct rte_flow_error *error);
> +
> +Information about resources that can benefit from pre-allocation can be
> +retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
> +of pre-configurable resources for a given port on a system.
> +
> +.. code-block:: c
> +
> +   int
> +   rte_flow_info_get(uint16_t port_id,
> +                     struct rte_flow_port_info *port_info,
> +                     struct rte_flow_error *error);
> +
>   .. _flow_isolated_mode:
>   
>   Flow isolated mode
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index f03183ee86..2a47a37f0a 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -69,6 +69,12 @@ New Features
>     New APIs, ``rte_eth_dev_priority_flow_ctrl_queue_info_get()`` and
>     ``rte_eth_dev_priority_flow_ctrl_queue_configure()``, was added.
>   
> +* **Added functions to configure Flow API engine**
> +
> +  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
> +    engine, allowing to pre-allocate some resources for better performance.
> +    Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
> +
>   * **Updated AF_XDP PMD**
>   
>     * Added support for libxdp >=v1.2.2.
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index a93f68abbc..66614ae29b 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
>   	ret = ops->flex_item_release(dev, handle, error);
>   	return flow_err(port_id, ret, error);
>   }
> +
> +int
> +rte_flow_info_get(uint16_t port_id,
> +		  struct rte_flow_port_info *port_info,
> +		  struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->info_get)) {

expected ethdev state must be validated. Just configured?

> +		return flow_err(port_id,
> +				ops->info_get(dev, port_info, error),

port_info must be checked vs NULL
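Something along these lines, perhaps — a self-contained sketch of the
validation order being suggested (stand-in types and a hypothetical helper
name, not the real ethdev structures):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stand-in types sketching the suggested checks for rte_flow_info_get():
 * reject a NULL output pointer, require the device to be configured. */
struct dev_state { int configured; };
struct flow_port_info { unsigned int nb_counters; };

static int
flow_info_get_sketch(const struct dev_state *dev, struct flow_port_info *info)
{
	if (info == NULL)	/* reject a NULL output argument up front */
		return -EINVAL;
	if (!dev->configured)	/* device must be at least configured */
		return -EINVAL;
	info->nb_counters = 128;	/* normally filled by the PMD callback */
	return 0;
}
```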

> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_configure(uint16_t port_id,
> +		   const struct rte_flow_port_attr *port_attr,
> +		   struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->configure)) {

The API must validate ethdev state. configured and not started?
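For illustration, a "configured and not started" gate could look roughly like
this (stand-in structures and a hypothetical function name, not the real
ethdev internals):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Sketch of the state check suggested for rte_flow_configure():
 * the port must be configured but not yet started. */
struct dev_state { int configured; int started; };
struct flow_port_attr { unsigned int nb_counters; };

static int
flow_configure_sketch(const struct dev_state *dev,
		      const struct flow_port_attr *attr)
{
	if (attr == NULL)	/* reject a NULL attributes pointer */
		return -EINVAL;
	if (!dev->configured || dev->started)
		return -EBUSY;	/* must be configured and not started */
	return 0;		/* the PMD callback would run here */
}
```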

> +		return flow_err(port_id,
> +				ops->configure(dev, port_attr, error),

port_attr must be checked vs NULL

> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 1031fb246b..92be2a9a89 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t port_id,
>   			   const struct rte_flow_item_flex_handle *handle,
>   			   struct rte_flow_error *error);
>   
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Information about available pre-configurable resources.
> + * The zero value means a resource cannot be pre-allocated.
> + *
> + */
> +struct rte_flow_port_info {
> +	/**
> +	 * Number of pre-configurable counter actions.
> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> +	 */
> +	uint32_t nb_counters;

Name says that it is a number of counters, but description
says that it is about actions.
Also I don't understand what "pre-configurable" means.
Isn't it a maximum number of available counters?
If no, how can I find a maximum?

> +	/**
> +	 * Number of pre-configurable aging flows actions.
> +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> +	 */
> +	uint32_t nb_aging_flows;

Same

> +	/**
> +	 * Number of pre-configurable traffic metering actions.
> +	 * @see RTE_FLOW_ACTION_TYPE_METER
> +	 */
> +	uint32_t nb_meters;

Same

> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Retrieve configuration attributes supported by the port.

Description should be a bit more flow API aware.
Right now it sounds too generic.

> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[out] port_info
> + *   A pointer to a structure of type *rte_flow_port_info*
> + *   to be filled with the contextual information of the port.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_info_get(uint16_t port_id,
> +		  struct rte_flow_port_info *port_info,
> +		  struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Resource pre-allocation and pre-configuration settings.

What is the difference between pre-allocation and pre-configuration?
Why are both mentioned above, but just pre-configured actions are
mentioned below?

> + * The zero value means on demand resource allocations only.
> + *
> + */
> +struct rte_flow_port_attr {
> +	/**
> +	 * Number of counter actions pre-configured.
> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> +	 */
> +	uint32_t nb_counters;
> +	/**
> +	 * Number of aging flows actions pre-configured.
> +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> +	 */
> +	uint32_t nb_aging_flows;
> +	/**
> +	 * Number of traffic metering actions pre-configured.
> +	 * @see RTE_FLOW_ACTION_TYPE_METER
> +	 */
> +	uint32_t nb_meters;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Configure the port's flow API engine.
> + *
> + * This API can only be invoked before the application
> + * starts using the rest of the flow library functions.
> + *
> + * The API can be invoked multiple times to change the
> + * settings. The port, however, may reject the changes.
> + *
> + * Parameters in configuration attributes must not exceed
> + * numbers of resources returned by the rte_flow_info_get API.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] port_attr
> + *   Port configuration attributes.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_configure(uint16_t port_id,
> +		   const struct rte_flow_port_attr *port_attr,
> +		   struct rte_flow_error *error);
> +
>   #ifdef __cplusplus
>   }
>   #endif
> diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> index f691b04af4..7c29930d0f 100644
> --- a/lib/ethdev/rte_flow_driver.h
> +++ b/lib/ethdev/rte_flow_driver.h
> @@ -152,6 +152,16 @@ struct rte_flow_ops {
>   		(struct rte_eth_dev *dev,
>   		 const struct rte_flow_item_flex_handle *handle,
>   		 struct rte_flow_error *error);
> +	/** See rte_flow_info_get() */
> +	int (*info_get)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_port_info *port_info,
> +		 struct rte_flow_error *err);
> +	/** See rte_flow_configure() */
> +	int (*configure)
> +		(struct rte_eth_dev *dev,
> +		 const struct rte_flow_port_attr *port_attr,
> +		 struct rte_flow_error *err);
>   };
>   
>   /**
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index cd0c4c428d..f1235aa913 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -260,6 +260,8 @@ EXPERIMENTAL {
>   	# added in 22.03
>   	rte_eth_dev_priority_flow_ctrl_queue_configure;
>   	rte_eth_dev_priority_flow_ctrl_queue_info_get;
> +	rte_flow_info_get;
> +	rte_flow_configure;
>   };
>   
>   INTERNAL {


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v4 00/10] ethdev: datapath-focused flow rules management
  2022-02-10 16:00   ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Ferruh Yigit
  2022-02-10 16:12     ` Asaf Penso
  2022-02-10 18:04     ` Ajit Khaparde
@ 2022-02-11 10:22     ` Ivan Malov
  2022-02-11 10:48     ` Jerin Jacob
  3 siblings, 0 replies; 220+ messages in thread
From: Ivan Malov @ 2022-02-11 10:22 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Alexander Kozyrev, dev, orika, thomas, andrew.rybchenko,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Hi Ferruh,

On Thu, 10 Feb 2022, Ferruh Yigit wrote:

> On 2/9/2022 9:37 PM, Alexander Kozyrev wrote:
>> Three major changes to a generic RTE Flow API were implemented in order
>> to speed up flow rule insertion/destruction and adapt the API to the
>> needs of a datapath-focused flow rules management applications:
>> 
>> 1. Pre-configuration hints.
>> Application may give us some hints on what type of resources are needed.
>> Introduce the configuration routine to prepare all the needed resources
>> inside a PMD/HW before any flow rules are created at the init stage.
>> 
>> 2. Flow grouping using templates.
>> Use the knowledge about which flow rules are to be used in an application
>> and prepare item and action templates for them in advance. Group flow rules
>> with common patterns and actions together for better resource management.
>> 
>> 3. Queue-based flow management.
>> Perform flow rule insertion/destruction asynchronously to spare the 
>> datapath
>> from blocking on RTE Flow API and allow it to continue with packet 
>> processing.
>> Enqueue flow rules operations and poll for the results later.
>> 
>> testpmd examples are part of the patch series. PMD changes will follow.
>> 
>> RFC: 
>> https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
>> 
>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>> Acked-by: Ori Kam <orika@nvidia.com>
>> 
>> ---
>> v4:
>> - removed structures versioning
>> - introduced new rte_flow_port_info structure for rte_flow_info_get API
>> - renamed rte_flow_table_create to rte_flow_template_table_create
>> 
>> v3: addressed review comments and updated documentation
>> - added API to get info about pre-configurable resources
>> - renamed rte_flow_item_template to rte_flow_pattern_template
>> - renamed drain operation attribute to postpone
>> - renamed rte_flow_q_drain to rte_flow_q_push
>> - renamed rte_flow_q_dequeue to rte_flow_q_pull
>> 
>> v2: fixed patch series thread
>> 
>> Alexander Kozyrev (10):
>>    ethdev: introduce flow pre-configuration hints
>>    ethdev: add flow item/action templates
>>    ethdev: bring in async queue-based flow rules operations
>>    app/testpmd: implement rte flow configuration
>>    app/testpmd: implement rte flow template management
>>    app/testpmd: implement rte flow table management
>>    app/testpmd: implement rte flow queue flow operations
>>    app/testpmd: implement rte flow push operations
>>    app/testpmd: implement rte flow pull operations
>>    app/testpmd: implement rte flow queue indirect actions
>> 
>
> Hi Jerin, Ajit, Ivan,
>
> As far as I can see you did some reviews in the previous versions,
> but not ack the patch.

Thanks for sending the reminder. Yes, I did review the series.
During the review, we did not find common ground with regard
to possibly having a universal "task enqueue" method. However,
I was assured that such design would affect performance.

> Is there any objection to last version of the patch, if not I will
> proceed with it.

Personally, I have no strong objections. The v5 series seems a lot
clearer in a number of ways, yet, it is going to be experimental,
so I believe that if we run into some issues with this design,
we will still have a chance to improve it to some extent.
In general, the author did a very good job applying that
many review notes. Thanks to Alexander for perseverance.

Please feel free to proceed with the series as you see fit.

>
>
> Hi Alex,
>
> As process we require at least one PMD implementation (it can be draft)
> to justify the API design.
>
> If there is no objection from above reviewers and PMD implementation
> exists before end of the week, I think we can get the set for -rc1.
>
> Thanks,
> ferruh
>

--
Ivan M

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v4 00/10] ethdev: datapath-focused flow rules management
  2022-02-10 16:00   ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Ferruh Yigit
                       ` (2 preceding siblings ...)
  2022-02-11 10:22     ` Ivan Malov
@ 2022-02-11 10:48     ` Jerin Jacob
  3 siblings, 0 replies; 220+ messages in thread
From: Jerin Jacob @ 2022-02-11 10:48 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Alexander Kozyrev, dpdk-dev, Ori Kam, Thomas Monjalon,
	Ivan Malov, Andrew Rybchenko, mohammad.abdul.awal, Qi Zhang,
	Jerin Jacob, Ajit Khaparde, Richardson, Bruce

On Thu, Feb 10, 2022 at 9:30 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 2/9/2022 9:37 PM, Alexander Kozyrev wrote:
> > Three major changes to a generic RTE Flow API were implemented in order
> > to speed up flow rule insertion/destruction and adapt the API to the
> > needs of a datapath-focused flow rules management applications:
> >
> > 1. Pre-configuration hints.
> > Application may give us some hints on what type of resources are needed.
> > Introduce the configuration routine to prepare all the needed resources
> > inside a PMD/HW before any flow rules are created at the init stage.
> >
> > 2. Flow grouping using templates.
> > Use the knowledge about which flow rules are to be used in an application
> > and prepare item and action templates for them in advance. Group flow rules
> > with common patterns and actions together for better resource management.
> >
> > 3. Queue-based flow management.
> > Perform flow rule insertion/destruction asynchronously to spare the datapath
> > from blocking on RTE Flow API and allow it to continue with packet processing.
> > Enqueue flow rules operations and poll for the results later.
> >
> > testpmd examples are part of the patch series. PMD changes will follow.
> >
> > RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Acked-by: Ori Kam <orika@nvidia.com>
> >
> > ---
> > v4:
> > - removed structures versioning
> > - introduced new rte_flow_port_info structure for rte_flow_info_get API
> > - renamed rte_flow_table_create to rte_flow_template_table_create
> >
> > v3: addressed review comments and updated documentation
> > - added API to get info about pre-configurable resources
> > - renamed rte_flow_item_template to rte_flow_pattern_template
> > - renamed drain operation attribute to postpone
> > - renamed rte_flow_q_drain to rte_flow_q_push
> > - renamed rte_flow_q_dequeue to rte_flow_q_pull
> >
> > v2: fixed patch series thread
> >
> > Alexander Kozyrev (10):
> >    ethdev: introduce flow pre-configuration hints
> >    ethdev: add flow item/action templates
> >    ethdev: bring in async queue-based flow rules operations
> >    app/testpmd: implement rte flow configuration
> >    app/testpmd: implement rte flow template management
> >    app/testpmd: implement rte flow table management
> >    app/testpmd: implement rte flow queue flow operations
> >    app/testpmd: implement rte flow push operations
> >    app/testpmd: implement rte flow pull operations
> >    app/testpmd: implement rte flow queue indirect actions
> >
>
> Hi Jerin, Ajit, Ivan,
>
> As far as I can see you did some reviews in the previous versions,
> but not ack the patch.
> Is there any objection to last version of the patch, if not I will
> proceed with it.


Personally, I have no strong objections. Based on the top-level review,
it looks good to me on the application API side.
Please feel free to proceed with the series as you see fit.


>
> Hi Alex,
>
> As process we require at least one PMD implementation (it can be draft)
> to justify the API design.
>
> If there is no objection from above reviewers and PMD implementation
> exists before end of the week, I think we can get the set for -rc1.
>
> Thanks,
> ferruh

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 02/10] ethdev: add flow item/action templates
  2022-02-11  2:26     ` [PATCH v5 02/10] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-11 11:27       ` Andrew Rybchenko
  2022-02-11 22:25         ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-11 11:27 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/11/22 05:26, Alexander Kozyrev wrote:
> Treating every single flow rule as a completely independent and separate
> entity negatively impacts the flow rules insertion rate. Oftentimes in an
> application, many flow rules share a common structure (the same item mask
> and/or action list) so they can be grouped and classified together.
> This knowledge may be used as a source of optimization by a PMD/HW.
> 
> The pattern template defines common matching fields (the item mask) without
> values. The actions template holds a list of action types that will be used
> together in the same rule. The specific values for items and actions will
> be given only during the rule creation.
> 
> A table combines pattern and actions templates along with shared flow rule
> attributes (group ID, priority and traffic direction). This way a PMD/HW
> can prepare all the resources needed for efficient flow rules creation in
> the datapath. To avoid any hiccups due to memory reallocation, the maximum
> number of flow rules is defined at the table creation time.
> 
> The flow rule creation is done by selecting a table, a pattern template
> and an actions template (which are bound to the table), and setting unique
> values for the items and actions.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> ---
>   doc/guides/prog_guide/rte_flow.rst     | 124 ++++++++++++
>   doc/guides/rel_notes/release_22_03.rst |   8 +
>   lib/ethdev/rte_flow.c                  | 147 ++++++++++++++
>   lib/ethdev/rte_flow.h                  | 260 +++++++++++++++++++++++++
>   lib/ethdev/rte_flow_driver.h           |  37 ++++
>   lib/ethdev/version.map                 |   6 +
>   6 files changed, 582 insertions(+)
> 
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 72fb1132ac..5391648833 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3626,6 +3626,130 @@ of pre-configurable resources for a given port on a system.
>                        struct rte_flow_port_info *port_info,
>                        struct rte_flow_error *error);
>   
> +Flow templates
> +~~~~~~~~~~~~~~
> +
> +Oftentimes in an application, many flow rules share a common structure
> +(the same pattern and/or action list) so they can be grouped and classified
> +together. This knowledge may be used as a source of optimization by a PMD/HW.
> +The flow rule creation is done by selecting a table, a pattern template
> +and an actions template (which are bound to the table), and setting unique
> +values for the items and actions. This API is not thread-safe.
> +
> +Pattern templates
> +^^^^^^^^^^^^^^^^^
> +
> +The pattern template defines a common pattern (the item mask) without values.
> +The mask value is used to select a field to match on, spec/last are ignored.
> +The pattern template may be used by multiple tables and must not be destroyed
> +until all these tables are destroyed first.
> +
> +.. code-block:: c
> +
> +	struct rte_flow_pattern_template *
> +	rte_flow_pattern_template_create(uint16_t port_id,
> +				const struct rte_flow_pattern_template_attr *template_attr,
> +				const struct rte_flow_item pattern[],
> +				struct rte_flow_error *error);
> +
> +For example, to create a pattern template to match on the destination MAC:
> +
> +.. code-block:: c
> +
> +	struct rte_flow_item pattern[2] = {{0}};
> +	struct rte_flow_item_eth eth_m = {0};
> +	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> +	eth_m.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff";
> +	pattern[0].mask = &eth_m;
> +	pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> +
> +	struct rte_flow_pattern_template *pattern_template =
> +		rte_flow_pattern_template_create(port, &itr, &pattern, &error);

itr?

> +
> +The concrete value to match on will be provided at the rule creation.
> +
> +Actions templates
> +^^^^^^^^^^^^^^^^^
> +
> +The actions template holds a list of action types to be used in flow rules.
> +The mask parameter allows specifying a shared constant value for every rule.
> +The actions template may be used by multiple tables and must not be destroyed
> +until all these tables are destroyed first.
> +
> +.. code-block:: c
> +
> +	struct rte_flow_actions_template *
> +	rte_flow_actions_template_create(uint16_t port_id,
> +				const struct rte_flow_actions_template_attr *template_attr,
> +				const struct rte_flow_action actions[],
> +				const struct rte_flow_action masks[],
> +				struct rte_flow_error *error);
> +
> +For example, to create an actions template with the same Mark ID
> +but different Queue Index for every rule:
> +
> +.. code-block:: c
> +
> +	struct rte_flow_action actions[] = {
> +		/* Mark ID is constant (4) for every rule, Queue Index is unique */
> +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> +			   .conf = &(struct rte_flow_action_mark){.id = 4}},
> +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> +	};
> +	struct rte_flow_action masks[] = {
> +		/* Assign to MARK mask any non-zero value to make it constant */
> +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> +			   .conf = &(struct rte_flow_action_mark){.id = 1}},
> +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> +	};
> +
> +	struct rte_flow_actions_template *at =
> +		rte_flow_actions_template_create(port, &atr, &actions, &masks, &error);

atr?

> +
> +The concrete value for Queue Index will be provided at the rule creation.
> +
> +Template table
> +^^^^^^^^^^^^^^
> +
> +A template table combines a number of pattern and actions templates along with
> +shared flow rule attributes (group ID, priority and traffic direction).
> +This way a PMD/HW can prepare all the resources needed for efficient flow rules
> +creation in the datapath. To avoid any hiccups due to memory reallocation,
> +the maximum number of flow rules is defined at table creation time.
> +Any flow rule creation beyond the maximum table size is rejected.
> +Application may create another table to accommodate more rules in this case.
> +
> +.. code-block:: c
> +
> +	struct rte_flow_template_table *
> +	rte_flow_template_table_create(uint16_t port_id,
> +				const struct rte_flow_template_table_attr *table_attr,
> +				struct rte_flow_pattern_template *pattern_templates[],

const?

> +				uint8_t nb_pattern_templates,
> +				struct rte_flow_actions_template *actions_templates[],

const?

> +				uint8_t nb_actions_templates,
> +				struct rte_flow_error *error);
> +
> +A table can be created only after the Flow Rules management is configured
> +and pattern and actions templates are created.
> +
> +.. code-block:: c
> +
> +	rte_flow_configure(port, *port_attr, *error);


Why do you have '*' before port_attr and error above?


> +
> +	struct rte_flow_pattern_template *pattern_templates[0] =

Definition of zero size array looks wrong.
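Presumably the doc meant to declare a pointer array and then fill element
zero; a self-contained sketch of the intended shape (stand-in type and a
hypothetical creator function, not the real rte_flow API):

```c
#include <assert.h>
#include <stddef.h>

/* "type *arr[0] = create(...)" is not valid C. The doc example likely
 * meant an array of template pointers, populated element by element
 * before being passed to table creation. */
struct pattern_template { int id; };

static struct pattern_template template_storage = { .id = 1 };

static struct pattern_template *
pattern_template_create_sketch(void)
{
	return &template_storage;	/* the PMD would allocate this */
}
```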

> +		rte_flow_pattern_template_create(port, &itr, &pattern, &error);

itr?

> +	struct rte_flow_actions_template *actions_templates[0] =

Zero size array?

> +		rte_flow_actions_template_create(port, &atr, &actions, &masks, &error);

atr?

> +
> +	struct rte_flow_template_table *table =
> +		rte_flow_template_table_create(port, *table_attr,
> +				*pattern_templates, nb_pattern_templates,
> +				*actions_templates, nb_actions_templates,
> +				*error);

Similar question here.

> +
>   .. _flow_isolated_mode:
>   
>   Flow isolated mode
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index 2a47a37f0a..6656b35295 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -75,6 +75,14 @@ New Features
>       engine, allowing to pre-allocate some resources for better performance.
>       Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
>   
> +  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
> +    with the same flow attributes and common matching patterns and actions
> +    defined by ``rte_flow_pattern_template_create`` and
> +    ``rte_flow_actions_template_create`` respectively.
> +    Corresponding functions to destroy these entities are:
> +    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
> +    and ``rte_flow_actions_template_destroy``.
> +
>   * **Updated AF_XDP PMD**
>   
>     * Added support for libxdp >=v1.2.2.
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 66614ae29b..b53f8c9b89 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -1431,3 +1431,150 @@ rte_flow_configure(uint16_t port_id,
>   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>   				  NULL, rte_strerror(ENOTSUP));
>   }
> +
> +struct rte_flow_pattern_template *
> +rte_flow_pattern_template_create(uint16_t port_id,
> +		const struct rte_flow_pattern_template_attr *template_attr,
> +		const struct rte_flow_item pattern[],
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow_pattern_template *template;
> +
> +	if (unlikely(!ops))
> +		return NULL;
> +	if (likely(!!ops->pattern_template_create)) {

Don't we need any state checks?

Check pattern vs NULL?

> +		template = ops->pattern_template_create(dev, template_attr,
> +						     pattern, error);
> +		if (template == NULL)
> +			flow_err(port_id, -rte_errno, error);
> +		return template;
> +	}
> +	rte_flow_error_set(error, ENOTSUP,
> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			   NULL, rte_strerror(ENOTSUP));
> +	return NULL;
> +}
> +
> +int
> +rte_flow_pattern_template_destroy(uint16_t port_id,
> +		struct rte_flow_pattern_template *pattern_template,
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->pattern_template_destroy)) {

IMHO we should return success here if pattern_template is NULL
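i.e. free(NULL)-style semantics — a minimal sketch of the proposed behavior
(stand-in type and hypothetical function name, not the real rte_flow code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Sketch: destroying a NULL template is a no-op success, like free(NULL),
 * instead of dereferencing it or reporting an error. */
struct pattern_template { int refcnt; };

static int
pattern_template_destroy_sketch(struct pattern_template *tmpl)
{
	if (tmpl == NULL)
		return 0;	/* nothing to destroy: trivially successful */
	free(tmpl);
	return 0;
}
```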

> +		return flow_err(port_id,
> +				ops->pattern_template_destroy(dev,
> +							      pattern_template,
> +							      error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +struct rte_flow_actions_template *
> +rte_flow_actions_template_create(uint16_t port_id,
> +			const struct rte_flow_actions_template_attr *template_attr,
> +			const struct rte_flow_action actions[],
> +			const struct rte_flow_action masks[],
> +			struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow_actions_template *template;
> +
> +	if (unlikely(!ops))
> +		return NULL;
> +	if (likely(!!ops->actions_template_create)) {

State checks?

Check actions and masks vs NULL?

> +		template = ops->actions_template_create(dev, template_attr,
> +							actions, masks, error);
> +		if (template == NULL)
> +			flow_err(port_id, -rte_errno, error);
> +		return template;
> +	}
> +	rte_flow_error_set(error, ENOTSUP,
> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			   NULL, rte_strerror(ENOTSUP));
> +	return NULL;
> +}
> +
> +int
> +rte_flow_actions_template_destroy(uint16_t port_id,
> +			struct rte_flow_actions_template *actions_template,
> +			struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->actions_template_destroy)) {

IMHO we should return success here if actions_template is NULL


> +		return flow_err(port_id,
> +				ops->actions_template_destroy(dev,
> +							      actions_template,
> +							      error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +struct rte_flow_template_table *
> +rte_flow_template_table_create(uint16_t port_id,
> +			const struct rte_flow_template_table_attr *table_attr,
> +			struct rte_flow_pattern_template *pattern_templates[],
> +			uint8_t nb_pattern_templates,
> +			struct rte_flow_actions_template *actions_templates[],
> +			uint8_t nb_actions_templates,
> +			struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow_template_table *table;
> +
> +	if (unlikely(!ops))
> +		return NULL;
> +	if (likely(!!ops->template_table_create)) {

Argument sanity checks are needed here, e.g. an array must not be NULL when its size is not 0.
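For example, a check of this shape (void pointers stand in for the real template handle types; this is only a sketch of the suggested validation):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Each template array must be non-NULL whenever its counter is
 * non-zero. */
static int
table_args_check(void *const pattern_templates[],
		 uint8_t nb_pattern_templates,
		 void *const actions_templates[],
		 uint8_t nb_actions_templates)
{
	if (nb_pattern_templates != 0 && pattern_templates == NULL)
		return -EINVAL;
	if (nb_actions_templates != 0 && actions_templates == NULL)
		return -EINVAL;
	return 0;
}
```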

> +		table = ops->template_table_create(dev, table_attr,
> +					pattern_templates, nb_pattern_templates,
> +					actions_templates, nb_actions_templates,
> +					error);
> +		if (table == NULL)
> +			flow_err(port_id, -rte_errno, error);
> +		return table;
> +	}
> +	rte_flow_error_set(error, ENOTSUP,
> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			   NULL, rte_strerror(ENOTSUP));
> +	return NULL;
> +}
> +
> +int
> +rte_flow_template_table_destroy(uint16_t port_id,
> +				struct rte_flow_template_table *template_table,
> +				struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->template_table_destroy)) {

Return success if template_table is NULL

> +		return flow_err(port_id,
> +				ops->template_table_destroy(dev,
> +							    template_table,
> +							    error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 92be2a9a89..e87db5a540 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4961,6 +4961,266 @@ rte_flow_configure(uint16_t port_id,
>   		   const struct rte_flow_port_attr *port_attr,
>   		   struct rte_flow_error *error);
>   
> +/**
> + * Opaque type returned after successful creation of pattern template.
> + * This handle can be used to manage the created pattern template.
> + */
> +struct rte_flow_pattern_template;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Flow pattern template attributes.
> + */
> +__extension__
> +struct rte_flow_pattern_template_attr {
> +	/**
> +	 * Relaxed matching policy.
> +	 * - PMD may match only on items with mask member set and skip
> +	 * matching on protocol layers specified without any masks.
> +	 * - If not set, PMD will match on protocol layers
> +	 * specified without any masks as well.
> +	 * - Packet data must be stacked in the same order as the
> +	 * protocol layers to match inside packets, starting from the lowest.
> +	 */
> +	uint32_t relaxed_matching:1;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create pattern template.

Create flow pattern template.

> + *
> + * The pattern template defines common matching fields without values.
> + * For example, matching on 5 tuple TCP flow, the template will be
> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> + * while values for each rule will be set during the flow rule creation.
> + * The number and order of items in the template must be the same
> + * at the rule creation.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] template_attr
> + *   Pattern template attributes.
> + * @param[in] pattern
> + *   Pattern specification (list terminated by the END pattern item).
> + *   The spec member of an item is not used unless the end member is used.

Interpretation of the pattern may depend on whether it is used in a transfer or a non-transfer rule. That is essential information and we should provide it when the pattern template is created.

The information is provided at the table stage, but that is too late.
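One possible shape for this, sketched as a hypothetical extension of the attributes structure (NOT part of the patch under review; the field names are illustrative only):

```c
#include <stdint.h>

/* Hypothetical pattern template attributes carrying the rule type at
 * template creation time, as suggested above. */
struct pattern_template_attr_sketch {
	uint32_t relaxed_matching:1; /* existing field from the patch */
	uint32_t ingress:1;          /* suggested: non-transfer rx rules */
	uint32_t egress:1;           /* suggested: non-transfer tx rules */
	uint32_t transfer:1;         /* suggested: transfer rules */
};
```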

> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +struct rte_flow_pattern_template *
> +rte_flow_pattern_template_create(uint16_t port_id,
> +		const struct rte_flow_pattern_template_attr *template_attr,
> +		const struct rte_flow_item pattern[],
> +		struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Destroy pattern template.

Destroy flow pattern template.

> + *
> + * This function may be called only when
> + * there are no more tables referencing this template.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] pattern_template
> + *   Handle of the template to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_pattern_template_destroy(uint16_t port_id,
> +		struct rte_flow_pattern_template *pattern_template,
> +		struct rte_flow_error *error);
> +
> +/**
> + * Opaque type returned after successful creation of actions template.
> + * This handle can be used to manage the created actions template.
> + */
> +struct rte_flow_actions_template;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Flow actions template attributes.
> + */
> +struct rte_flow_actions_template_attr;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create actions template.

Create flow rule actions template.

> + *
> + * The actions template holds a list of action types without values.
> + * For example, the template to change TCP ports is TCP(s_port + d_port),
> + * while values for each rule will be set during the flow rule creation.
> + * The number and order of actions in the template must be the same
> + * at the rule creation.

Again, it highly depends on transfer vs non-transfer. Moreover,
the application definitely knows it. So, the template should say
whether the actions are intended for a transfer or a non-transfer
flow rule.

> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] template_attr
> + *   Template attributes.
> + * @param[in] actions
> + *   Associated actions (list terminated by the END action).
> + *   The spec member is only used if @p masks spec is non-zero.
> + * @param[in] masks
> + *   List of actions that marks which of the action's member is constant.
> + *   A mask has the same format as the corresponding action.
> + *   If the action field in @p masks is not 0,
> + *   the corresponding value in an action from @p actions will be the part
> + *   of the template and used in all flow rules.
> + *   The order of actions in @p masks is the same as in @p actions.
> + *   In case of indirect actions present in @p actions,
> + *   the actual action type should be present in @p mask.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +struct rte_flow_actions_template *
> +rte_flow_actions_template_create(uint16_t port_id,
> +		const struct rte_flow_actions_template_attr *template_attr,
> +		const struct rte_flow_action actions[],
> +		const struct rte_flow_action masks[],
> +		struct rte_flow_error *error);
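To restate the actions/masks convention documented above as a sketch: a non-zero field in the mask pins the template's value for every rule, while a zero field defers the value to flow rule creation. This models the described semantics with a single stand-in field; it is not the real API.

```c
#include <stdint.h>

/* Resolve one configurable action field under the actions/masks
 * convention: non-zero mask -> constant template value for all rules;
 * zero mask -> value supplied at flow rule creation. */
static uint16_t
resolve_action_field(uint16_t template_value, uint16_t mask,
		     uint16_t per_rule_value)
{
	return mask != 0 ? template_value : per_rule_value;
}
```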
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Destroy actions template.

Destroy flow rule actions template.

> + *
> + * This function may be called only when
> + * there are no more tables referencing this template.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] actions_template
> + *   Handle to the template to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_actions_template_destroy(uint16_t port_id,
> +		struct rte_flow_actions_template *actions_template,
> +		struct rte_flow_error *error);
> +
> +/**
> + * Opaque type returned after successful creation of a template table.
> + * This handle can be used to manage the created template table.
> + */
> +struct rte_flow_template_table;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Table attributes.
> + */
> +struct rte_flow_template_table_attr {
> +	/**
> +	 * Flow attributes to be used in each rule generated from this table.
> +	 */
> +	struct rte_flow_attr flow_attr;
> +	/**
> +	 * Maximum number of flow rules that this table holds.
> +	 */
> +	uint32_t nb_flows;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create template table.
> + *
> + * A template table consists of multiple pattern templates and actions
> + * templates associated with a single set of rule attributes (group ID,
> + * priority and traffic direction).
> + *
> + * Each rule is free to use any combination of pattern and actions templates
> + * and specify particular values for items and actions it would like to change.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] table_attr
> + *   Template table attributes.
> + * @param[in] pattern_templates
> + *   Array of pattern templates to be used in this table.
> + * @param[in] nb_pattern_templates
> + *   The number of pattern templates in the pattern_templates array.
> + * @param[in] actions_templates
> + *   Array of actions templates to be used in this table.
> + * @param[in] nb_actions_templates
> + *   The number of actions templates in the actions_templates array.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +struct rte_flow_template_table *
> +rte_flow_template_table_create(uint16_t port_id,
> +		const struct rte_flow_template_table_attr *table_attr,
> +		struct rte_flow_pattern_template *pattern_templates[],
> +		uint8_t nb_pattern_templates,
> +		struct rte_flow_actions_template *actions_templates[],
> +		uint8_t nb_actions_templates,
> +		struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Destroy template table.
> + *
> + * This function may be called only when
> + * there are no more flow rules referencing this table.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] template_table
> + *   Handle to the table to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_template_table_destroy(uint16_t port_id,
> +		struct rte_flow_template_table *template_table,
> +		struct rte_flow_error *error);
> +
>   #ifdef __cplusplus
>   }
>   #endif
> diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> index 7c29930d0f..2d96db1dc7 100644
> --- a/lib/ethdev/rte_flow_driver.h
> +++ b/lib/ethdev/rte_flow_driver.h
> @@ -162,6 +162,43 @@ struct rte_flow_ops {
>   		(struct rte_eth_dev *dev,
>   		 const struct rte_flow_port_attr *port_attr,
>   		 struct rte_flow_error *err);
> +	/** See rte_flow_pattern_template_create() */
> +	struct rte_flow_pattern_template *(*pattern_template_create)
> +		(struct rte_eth_dev *dev,
> +		 const struct rte_flow_pattern_template_attr *template_attr,
> +		 const struct rte_flow_item pattern[],
> +		 struct rte_flow_error *err);
> +	/** See rte_flow_pattern_template_destroy() */
> +	int (*pattern_template_destroy)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_pattern_template *pattern_template,
> +		 struct rte_flow_error *err);
> +	/** See rte_flow_actions_template_create() */
> +	struct rte_flow_actions_template *(*actions_template_create)
> +		(struct rte_eth_dev *dev,
> +		 const struct rte_flow_actions_template_attr *template_attr,
> +		 const struct rte_flow_action actions[],
> +		 const struct rte_flow_action masks[],
> +		 struct rte_flow_error *err);
> +	/** See rte_flow_actions_template_destroy() */
> +	int (*actions_template_destroy)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_actions_template *actions_template,
> +		 struct rte_flow_error *err);
> +	/** See rte_flow_template_table_create() */
> +	struct rte_flow_template_table *(*template_table_create)
> +		(struct rte_eth_dev *dev,
> +		 const struct rte_flow_template_table_attr *table_attr,
> +		 struct rte_flow_pattern_template *pattern_templates[],
> +		 uint8_t nb_pattern_templates,
> +		 struct rte_flow_actions_template *actions_templates[],
> +		 uint8_t nb_actions_templates,
> +		 struct rte_flow_error *err);
> +	/** See rte_flow_template_table_destroy() */
> +	int (*template_table_destroy)
> +		(struct rte_eth_dev *dev,
> +		 struct rte_flow_template_table *template_table,
> +		 struct rte_flow_error *err);
>   };
>   
>   /**
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index f1235aa913..5fd2108895 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -262,6 +262,12 @@ EXPERIMENTAL {
>   	rte_eth_dev_priority_flow_ctrl_queue_info_get;
>   	rte_flow_info_get;
>   	rte_flow_configure;
> +	rte_flow_pattern_template_create;
> +	rte_flow_pattern_template_destroy;
> +	rte_flow_actions_template_create;
> +	rte_flow_actions_template_destroy;
> +	rte_flow_template_table_create;
> +	rte_flow_template_table_destroy;
>   };
>   
>   INTERNAL {


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-11  2:26     ` [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-11 12:42       ` Andrew Rybchenko
  2022-02-12  2:19         ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-11 12:42 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/11/22 05:26, Alexander Kozyrev wrote:
> A new, faster, queue-based flow rules management mechanism is needed for
> applications offloading rules inside the datapath. This asynchronous
> and lockless mechanism frees the CPU for further packet processing and
> reduces the performance impact of the flow rules creation/destruction
> on the datapath. Note that queues are not thread-safe and the queue
> should be accessed from the same thread for all queue operations.
> It is the responsibility of the app to sync the queue functions in case
> of multi-threaded access to the same queue.
> 
> The rte_flow_q_flow_create() function enqueues a flow creation to the
> requested queue. It benefits from already configured resources and sets
> unique values on top of item and action templates. A flow rule is enqueued
> on the specified flow queue and offloaded asynchronously to the hardware.
> The function returns immediately to spare CPU for further packet
> processing. The application must invoke the rte_flow_q_pull() function
> to complete the flow rule operation offloading, to clear the queue, and to
> receive the operation status. The rte_flow_q_flow_destroy() function
> enqueues a flow destruction to the requested queue.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> ---
>   doc/guides/prog_guide/img/rte_flow_q_init.svg | 205 ++++++++++
>   .../prog_guide/img/rte_flow_q_usage.svg       | 351 ++++++++++++++++++
>   doc/guides/prog_guide/rte_flow.rst            | 167 ++++++++-
>   doc/guides/rel_notes/release_22_03.rst        |   8 +
>   lib/ethdev/rte_flow.c                         | 175 ++++++++-
>   lib/ethdev/rte_flow.h                         | 334 +++++++++++++++++
>   lib/ethdev/rte_flow_driver.h                  |  55 +++
>   lib/ethdev/version.map                        |   7 +
>   8 files changed, 1300 insertions(+), 2 deletions(-)
>   create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
>   create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
> 
> diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
> new file mode 100644
> index 0000000000..96160bde42
> --- /dev/null
> +++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
> @@ -0,0 +1,205 @@
> +<?xml version="1.0" encoding="UTF-8" standalone="no"?>
> +<!-- SPDX-License-Identifier: BSD-3-Clause -->
> +
> +<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
> +
> +<svg
> +   width="485"
> +   height="535"
> +   overflow="hidden"
> +   version="1.1"
> +   id="svg61"
> +   sodipodi:docname="rte_flow_q_init.svg"
> +   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
> +   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
> +   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
> +   xmlns="http://www.w3.org/2000/svg"
> +   xmlns:svg="http://www.w3.org/2000/svg">
> +  <sodipodi:namedview
> +     id="namedview63"
> +     pagecolor="#ffffff"
> +     bordercolor="#666666"
> +     borderopacity="1.0"
> +     inkscape:pageshadow="2"
> +     inkscape:pageopacity="0.0"
> +     inkscape:pagecheckerboard="0"
> +     showgrid="false"
> +     inkscape:zoom="1.517757"
> +     inkscape:cx="242.79249"
> +     inkscape:cy="267.17057"
> +     inkscape:window-width="2400"
> +     inkscape:window-height="1271"
> +     inkscape:window-x="2391"
> +     inkscape:window-y="-9"
> +     inkscape:window-maximized="1"
> +     inkscape:current-layer="g59" />
> +  <defs
> +     id="defs5">
> +    <clipPath
> +       id="clip0">
> +      <rect
> +         x="0"
> +         y="0"
> +         width="485"
> +         height="535"
> +         id="rect2" />
> +    </clipPath>
> +  </defs>
> +  <g
> +     clip-path="url(#clip0)"
> +     id="g59">
> +    <rect
> +       x="0"
> +       y="0"
> +       width="485"
> +       height="535"
> +       fill="#FFFFFF"
> +       id="rect7" />
> +    <rect
> +       x="0.500053"
> +       y="79.5001"
> +       width="482"
> +       height="59"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#A6A6A6"
> +       id="rect9" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="24"
> +       transform="translate(121.6 116)"
> +       id="text13">
> +         rte_eth_dev_configure
> +         <tspan
> +   font-size="24"
> +   x="224.007"
> +   y="0"
> +   id="tspan11">()</tspan></text>
> +    <rect
> +       x="0.500053"
> +       y="158.5"
> +       width="482"
> +       height="59"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect15" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="24"
> +       transform="translate(140.273 195)"
> +       id="text17">
> +         rte_flow_configure()
> +      </text>
> +    <rect
> +       x="0.500053"
> +       y="236.5"
> +       width="482"
> +       height="60"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect19" />
> +    <text
> +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> +       font-weight="400"
> +       font-size="24px"
> +       id="text21"
> +       x="63.425903"
> +       y="274">rte_flow_pattern_template_create()</text>
> +    <rect
> +       x="0.500053"
> +       y="316.5"
> +       width="482"
> +       height="59"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect23" />
> +    <text
> +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> +       font-weight="400"
> +       font-size="24px"
> +       id="text27"
> +       x="69.379204"
> +       y="353">rte_flow_actions_template_create()</text>
> +    <rect
> +       x="0.500053"
> +       y="0.500053"
> +       width="482"
> +       height="60"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#A6A6A6"
> +       id="rect29" />
> +    <text
> +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> +       font-weight="400"
> +       font-size="24px"
> +       transform="translate(177.233,37)"
> +       id="text33">rte_eal_init()</text>
> +    <path
> +       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
> +       transform="matrix(-1 0 0 1 241 60)"
> +       id="path35" />
> +    <path
> +       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
> +       transform="matrix(-1 0 0 1 241 138)"
> +       id="path37" />
> +    <path
> +       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
> +       transform="matrix(-1 0 0 1 241 217)"
> +       id="path39" />
> +    <rect
> +       x="0.500053"
> +       y="395.5"
> +       width="482"
> +       height="59"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect41" />
> +    <text
> +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> +       font-weight="400"
> +       font-size="24px"
> +       id="text47"
> +       x="76.988998"
> +       y="432">rte_flow_template_table_create()</text>
> +    <path
> +       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
> +       transform="matrix(-1 0 0 1 241 296)"
> +       id="path49" />
> +    <path
> +       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
> +       id="path51" />
> +    <rect
> +       x="0.500053"
> +       y="473.5"
> +       width="482"
> +       height="60"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#A6A6A6"
> +       id="rect53" />
> +    <text
> +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> +       font-weight="400"
> +       font-size="24px"
> +       id="text55"
> +       x="149.30299"
> +       y="511">rte_eth_dev_start()</text>
> +    <path
> +       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
> +       id="path57" />
> +  </g>
> +</svg>
> diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
> new file mode 100644
> index 0000000000..a1f6c0a0a8
> --- /dev/null
> +++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
> @@ -0,0 +1,351 @@
> +<?xml version="1.0" encoding="UTF-8" standalone="no"?>
> +<!-- SPDX-License-Identifier: BSD-3-Clause -->
> +
> +<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
> +
> +<svg
> +   width="880"
> +   height="610"
> +   overflow="hidden"
> +   version="1.1"
> +   id="svg103"
> +   sodipodi:docname="rte_flow_q_usage.svg"
> +   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
> +   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
> +   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
> +   xmlns="http://www.w3.org/2000/svg"
> +   xmlns:svg="http://www.w3.org/2000/svg">
> +  <sodipodi:namedview
> +     id="namedview105"
> +     pagecolor="#ffffff"
> +     bordercolor="#666666"
> +     borderopacity="1.0"
> +     inkscape:pageshadow="2"
> +     inkscape:pageopacity="0.0"
> +     inkscape:pagecheckerboard="0"
> +     showgrid="false"
> +     inkscape:zoom="1.3311475"
> +     inkscape:cx="439.84606"
> +     inkscape:cy="305.37562"
> +     inkscape:window-width="2400"
> +     inkscape:window-height="1271"
> +     inkscape:window-x="2391"
> +     inkscape:window-y="-9"
> +     inkscape:window-maximized="1"
> +     inkscape:current-layer="g101" />
> +  <defs
> +     id="defs5">
> +    <clipPath
> +       id="clip0">
> +      <rect
> +         x="0"
> +         y="0"
> +         width="880"
> +         height="610"
> +         id="rect2" />
> +    </clipPath>
> +  </defs>
> +  <g
> +     clip-path="url(#clip0)"
> +     id="g101">
> +    <rect
> +       x="0"
> +       y="0"
> +       width="880"
> +       height="610"
> +       fill="#FFFFFF"
> +       id="rect7" />
> +    <rect
> +       x="333.5"
> +       y="0.500053"
> +       width="234"
> +       height="45"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#A6A6A6"
> +       id="rect9" />
> +    <text
> +       font-family="Consolas, Consolas_MSFontService, sans-serif"
> +       font-weight="400"
> +       font-size="19px"
> +       transform="translate(357.196,29)"
> +       id="text11">rte_eth_rx_burst()</text>
> +    <rect
> +       x="333.5"
> +       y="63.5001"
> +       width="234"
> +       height="45"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       id="rect13" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(394.666 91)"
> +       id="text17">analyze <tspan
> +   font-size="19"
> +   x="60.9267"
> +   y="0"
> +   id="tspan15">packet </tspan></text>
> +    <rect
> +       x="572.5"
> +       y="279.5"
> +       width="234"
> +       height="46"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect19" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(591.429 308)"
> +       id="text21">rte_flow_q_flow_create()</text>
> +    <path
> +       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       fill-rule="evenodd"
> +       id="path23" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(430.069 378)"
> +       id="text27">more <tspan
> +   font-size="19"
> +   x="-12.94"
> +   y="23"
> +   id="tspan25">packets?</tspan></text>
> +    <path
> +       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
> +       id="path29" />
> +    <path
> +       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
> +       id="path31" />
> +    <path
> +       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
> +       id="path33" />
> +    <path
> +       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
> +       id="path35" />
> +    <path
> +       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
> +       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
> +       id="path37" />
> +    <path
> +       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       fill-rule="evenodd"
> +       id="path39" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(417.576 155)"
> +       id="text43">add new <tspan
> +   font-size="19"
> +   x="13.2867"
> +   y="23"
> +   id="tspan41">rule?</tspan></text>
> +    <path
> +       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
> +       id="path45" />
> +    <rect
> +       x="602.5"
> +       y="127.5"
> +       width="46"
> +       height="30"
> +       stroke="#000000"
> +       stroke-width="0.666667"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       id="rect47" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(611.34 148)"
> +       id="text49">yes</text>
> +    <rect
> +       x="254.5"
> +       y="126.5"
> +       width="46"
> +       height="31"
> +       stroke="#000000"
> +       stroke-width="0.666667"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       id="rect51" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(267.182 147)"
> +       id="text53">no</text>
> +    <path
> +       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
> +       transform="matrix(1 0 0 -1 567.5 383.495)"
> +       id="path55" />
> +    <path
> +       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       fill-rule="evenodd"
> +       id="path57" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(159.155 208)"
> +       id="text61">destroy the <tspan
> +   font-size="19"
> +   x="24.0333"
> +   y="23"
> +   id="tspan59">rule?</tspan></text>
> +    <path
> +       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
> +       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
> +       id="path63" />
> +    <rect
> +       x="81.5001"
> +       y="280.5"
> +       width="234"
> +       height="45"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect65" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(96.2282 308)"
> +       id="text67">rte_flow_q_flow_destroy()</text>
> +    <path
> +       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
> +       transform="matrix(-1 0 0 1 319.915 213.5)"
> +       id="path69" />
> +    <path
> +       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
> +       id="path71" />
> +    <path
> +       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
> +       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
> +       id="path73" />
> +    <rect
> +       x="334.5"
> +       y="540.5"
> +       width="234"
> +       height="45"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect75" />
> +    <text
> +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> +       font-weight="400"
> +       font-size="19px"
> +       id="text77"
> +       x="385.08301"
> +       y="569">rte_flow_q_pull()</text>
> +    <rect
> +       x="334.5"
> +       y="462.5"
> +       width="234"
> +       height="45"
> +       stroke="#000000"
> +       stroke-width="1.33333"
> +       stroke-miterlimit="8"
> +       fill="#FFFFFF"
> +       id="rect79" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(379.19 491)"
> +       id="text81">rte_flow_q_push()</text>
> +    <path
> +       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
> +       id="path83" />
> +    <rect
> +       x="0.500053"
> +       y="287.5"
> +       width="46"
> +       height="30"
> +       stroke="#000000"
> +       stroke-width="0.666667"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       id="rect85" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(12.8617 308)"
> +       id="text87">no</text>
> +    <rect
> +       x="357.5"
> +       y="223.5"
> +       width="47"
> +       height="31"
> +       stroke="#000000"
> +       stroke-width="0.666667"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       id="rect89" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(367.001 244)"
> +       id="text91">yes</text>
> +    <rect
> +       x="469.5"
> +       y="421.5"
> +       width="46"
> +       height="30"
> +       stroke="#000000"
> +       stroke-width="0.666667"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       id="rect93" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(481.872 442)"
> +       id="text95">no</text>
> +    <rect
> +       x="832.5"
> +       y="223.5"
> +       width="46"
> +       height="31"
> +       stroke="#000000"
> +       stroke-width="0.666667"
> +       stroke-miterlimit="8"
> +       fill="#D9D9D9"
> +       id="rect97" />
> +    <text
> +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> +       font-weight="400"
> +       font-size="19"
> +       transform="translate(841.777 244)"
> +       id="text99">yes</text>
> +  </g>
> +</svg>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 5391648833..5d47f3bd21 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3607,12 +3607,16 @@ Expected number of counters or meters in an application, for example,
>   allow PMD to prepare and optimize NIC memory layout in advance.
>   ``rte_flow_configure()`` must be called before any flow rule is created,
>   but after an Ethernet device is configured.
> +It also creates flow queues for asynchronous flow rules operations via
> +queue-based API, see `Asynchronous operations`_ section.
>   
>   .. code-block:: c
>   
>      int
>      rte_flow_configure(uint16_t port_id,
>                        const struct rte_flow_port_attr *port_attr,
> +                     uint16_t nb_queue,
> +                     const struct rte_flow_queue_attr *queue_attr[],
>                        struct rte_flow_error *error);
>   
>   Information about resources that can benefit from pre-allocation can be
> @@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
>   
>   .. code-block:: c
>   
> -	rte_flow_configure(port, *port_attr, *error);
> +	rte_flow_configure(port, *port_attr, nb_queue, *queue_attr, *error);

The * dereference before queue_attr in this example looks strange.

>   
>   	struct rte_flow_pattern_template *pattern_templates[0] =
>   		rte_flow_pattern_template_create(port, &itr, &pattern, &error);
> @@ -3750,6 +3754,167 @@ and pattern and actions templates are created.
>   				*actions_templates, nb_actions_templates,
>   				*error);
>   
> +Asynchronous operations
> +-----------------------
> +
> +Flow rules management can be done via special lockless flow management queues.
> +- Queue operations are asynchronous and not thread-safe.
> +
> +- Operations can thus be invoked by the app's datapath,
> +  packet processing can continue while queue operations are processed by NIC.
> +
> +- The queue number is configured at initialization stage.

I read "the queue number" as the index of some specific queue.
Maybe "The number of queues is configured..."

> +
> +- Available operation types: rule creation, rule destruction,
> +  indirect rule creation, indirect rule destruction, indirect rule update.
> +
> +- Operations may be reordered within a queue.

Do we want to have barriers?
E.g. create a rule, destroy the same rule -> operations are reordered ->
the destroy fails, and the rule lives forever.
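To make the concern concrete, a sketch of the hazard using the proposed API (hypothetical application code; port_id, queue_id, attr, table, pattern and actions are assumed to be set up earlier):

```c
/* Both operations target the same rule on the same queue; if the
 * queue may reorder them, the destroy can be processed before the
 * create it is meant to undo. */
struct rte_flow *flow;

flow = rte_flow_q_flow_create(port_id, queue_id, &attr, table,
                              pattern, 0, actions, 0, &error);
/* ... application changes its mind before pulling completions ... */
rte_flow_q_flow_destroy(port_id, queue_id, &attr, flow, &error);
/* If reordered: the destroy fails (rule not yet created), then the
 * create completes and the rule is installed with no handle left
 * to remove it. */
```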

> +
> +- Operations can be postponed and pushed to NIC in batches.
> +
> +- Results pulling must be done on time to avoid queue overflows.

polling? (as in libc poll(), which checks the status of file descriptors)
it is not pulling a door to open it :)
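Either way the usage pattern is poll-like; a minimal drain loop with the proposed API might look like this (BURST and handle_completion() are illustrative, not part of the API):

```c
/* Illustrative completion drain loop for one flow queue. */
#define BURST 32
struct rte_flow_q_op_res res[BURST];
int n;

do {
        n = rte_flow_q_pull(port_id, queue_id, res, BURST, &error);
        for (int i = 0; i < n; i++)
                /* user_data identifies which enqueued op completed */
                handle_completion(res[i].user_data, res[i].status);
} while (n == BURST); /* a full burst may mean more results pending */
```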

> +
> +- User data is returned as part of the result to identify an operation.
> +
> +- Flow handle is valid once the creation operation is enqueued and must be
> +  destroyed even if the operation is not successful and the rule is not inserted.
> +
> +The asynchronous flow rule insertion logic can be broken into two phases.
> +
> +1. Initialization stage as shown here:
> +
> +.. _figure_rte_flow_q_init:
> +
> +.. figure:: img/rte_flow_q_init.*
> +
> +2. Main loop as presented on a datapath application example:
> +
> +.. _figure_rte_flow_q_usage:
> +
> +.. figure:: img/rte_flow_q_usage.*
> +
> +Enqueue creation operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Enqueueing a flow rule creation operation is similar to simple creation.
> +
> +.. code-block:: c
> +
> +	struct rte_flow *
> +	rte_flow_q_flow_create(uint16_t port_id,
> +				uint32_t queue_id,
> +				const struct rte_flow_q_ops_attr *q_ops_attr,
> +				struct rte_flow_template_table *template_table,
> +				const struct rte_flow_item pattern[],
> +				uint8_t pattern_template_index,
> +				const struct rte_flow_action actions[],
> +				uint8_t actions_template_index,
> +				struct rte_flow_error *error);
> +
> +A valid handle in case of success is returned. It must be destroyed later
> +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
> +
> +Enqueue destruction operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Enqueueing a flow rule destruction operation is similar to simple destruction.
> +
> +.. code-block:: c
> +
> +	int
> +	rte_flow_q_flow_destroy(uint16_t port_id,
> +				uint32_t queue_id,
> +				const struct rte_flow_q_ops_attr *q_ops_attr,
> +				struct rte_flow *flow,
> +				struct rte_flow_error *error);
> +
> +Push enqueued operations
> +~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Pushing all internally stored rules from a queue to the NIC.
> +
> +.. code-block:: c
> +
> +	int
> +	rte_flow_q_push(uint16_t port_id,
> +			uint32_t queue_id,
> +			struct rte_flow_error *error);
> +
> +There is the postpone attribute in the queue operation attributes.
> +When it is set, multiple operations can be bulked together and not sent to HW
> +right away to save SW/HW interactions and prioritize throughput over latency.
> +The application must invoke this function to actually push all outstanding
> +operations to HW in this case.
> +
> +Pull enqueued operations
> +~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Pulling asynchronous operations results.
> +
> +The application must invoke this function in order to complete asynchronous
> +flow rule operations and to receive flow rule operations statuses.
> +
> +.. code-block:: c
> +
> +	int
> +	rte_flow_q_pull(uint16_t port_id,
> +			uint32_t queue_id,
> +			struct rte_flow_q_op_res res[],
> +			uint16_t n_res,
> +			struct rte_flow_error *error);
> +
> +Multiple outstanding operation results can be pulled simultaneously.
> +User data may be provided during a flow creation/destruction in order
> +to distinguish between multiple operations. User data is returned as part
> +of the result to provide a method to detect which operation is completed.
> +
> +Enqueue indirect action creation operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Asynchronous version of indirect action creation API.
> +
> +.. code-block:: c
> +
> +	struct rte_flow_action_handle *
> +	rte_flow_q_action_handle_create(uint16_t port_id,
> +			uint32_t queue_id,
> +			const struct rte_flow_q_ops_attr *q_ops_attr,
> +			const struct rte_flow_indir_action_conf *indir_action_conf,
> +			const struct rte_flow_action *action,
> +			struct rte_flow_error *error);
> +
> +A valid handle in case of success is returned. It must be destroyed later by
> +calling ``rte_flow_q_action_handle_destroy()`` even if the rule is rejected.
> +
> +Enqueue indirect action destruction operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Asynchronous version of indirect action destruction API.
> +
> +.. code-block:: c
> +
> +	int
> +	rte_flow_q_action_handle_destroy(uint16_t port_id,
> +			uint32_t queue_id,
> +			const struct rte_flow_q_ops_attr *q_ops_attr,
> +			struct rte_flow_action_handle *action_handle,
> +			struct rte_flow_error *error);
> +
> +Enqueue indirect action update operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Asynchronous version of indirect action update API.
> +
> +.. code-block:: c
> +
> +	int
> +	rte_flow_q_action_handle_update(uint16_t port_id,
> +			uint32_t queue_id,
> +			const struct rte_flow_q_ops_attr *q_ops_attr,
> +			struct rte_flow_action_handle *action_handle,
> +			const void *update,
> +			struct rte_flow_error *error);
> +
>   .. _flow_isolated_mode:
>   
>   Flow isolated mode
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index 6656b35295..87cea8a966 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -83,6 +83,14 @@ New Features
>       ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
>       and ``rte_flow_actions_template_destroy``.
>   
> +  * ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy``
> +    API to enqueue flow creaion/destruction operations asynchronously as well
> +    as ``rte_flow_q_pull`` to poll and retrieve results of these operations
> +    and ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
> +    Introduced asynchronous API for indirect actions management as well:
> +    ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy``
> +    and ``rte_flow_q_action_handle_update``.
> +
>   * **Updated AF_XDP PMD**
>   
>     * Added support for libxdp >=v1.2.2.
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index b53f8c9b89..aca5bac2da 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -1415,6 +1415,8 @@ rte_flow_info_get(uint16_t port_id,
>   int
>   rte_flow_configure(uint16_t port_id,
>   		   const struct rte_flow_port_attr *port_attr,
> +		   uint16_t nb_queue,
> +		   const struct rte_flow_queue_attr *queue_attr[],
>   		   struct rte_flow_error *error)
>   {
>   	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> @@ -1424,7 +1426,8 @@ rte_flow_configure(uint16_t port_id,
>   		return -rte_errno;
>   	if (likely(!!ops->configure)) {
>   		return flow_err(port_id,
> -				ops->configure(dev, port_attr, error),
> +				ops->configure(dev, port_attr,
> +					       nb_queue, queue_attr, error),
>   				error);
>   	}
>   	return rte_flow_error_set(error, ENOTSUP,
> @@ -1578,3 +1581,173 @@ rte_flow_template_table_destroy(uint16_t port_id,
>   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>   				  NULL, rte_strerror(ENOTSUP));
>   }
> +
> +struct rte_flow *
> +rte_flow_q_flow_create(uint16_t port_id,
> +		       uint32_t queue_id,
> +		       const struct rte_flow_q_ops_attr *q_ops_attr,
> +		       struct rte_flow_template_table *template_table,
> +		       const struct rte_flow_item pattern[],
> +		       uint8_t pattern_template_index,
> +		       const struct rte_flow_action actions[],
> +		       uint8_t actions_template_index,
> +		       struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow *flow;
> +
> +	if (unlikely(!ops))
> +		return NULL;
> +	if (likely(!!ops->q_flow_create)) {
> +		flow = ops->q_flow_create(dev, queue_id,
> +					  q_ops_attr, template_table,
> +					  pattern, pattern_template_index,
> +					  actions, actions_template_index,
> +					  error);
> +		if (flow == NULL)
> +			flow_err(port_id, -rte_errno, error);
> +		return flow;
> +	}
> +	rte_flow_error_set(error, ENOTSUP,
> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			   NULL, rte_strerror(ENOTSUP));
> +	return NULL;
> +}
> +
> +int
> +rte_flow_q_flow_destroy(uint16_t port_id,
> +			uint32_t queue_id,
> +			const struct rte_flow_q_ops_attr *q_ops_attr,
> +			struct rte_flow *flow,
> +			struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->q_flow_destroy)) {
> +		return flow_err(port_id,
> +				ops->q_flow_destroy(dev, queue_id,
> +						    q_ops_attr, flow, error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +struct rte_flow_action_handle *
> +rte_flow_q_action_handle_create(uint16_t port_id,
> +		uint32_t queue_id,
> +		const struct rte_flow_q_ops_attr *q_ops_attr,
> +		const struct rte_flow_indir_action_conf *indir_action_conf,
> +		const struct rte_flow_action *action,
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow_action_handle *handle;
> +
> +	if (unlikely(!ops))
> +		return NULL;
> +	if (unlikely(!ops->q_action_handle_create)) {
> +		rte_flow_error_set(error, ENOSYS,
> +				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
> +				   rte_strerror(ENOSYS));
> +		return NULL;
> +	}
> +	handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
> +					     indir_action_conf, action, error);
> +	if (handle == NULL)
> +		flow_err(port_id, -rte_errno, error);
> +	return handle;
> +}
> +
> +int
> +rte_flow_q_action_handle_destroy(uint16_t port_id,
> +		uint32_t queue_id,
> +		const struct rte_flow_q_ops_attr *q_ops_attr,
> +		struct rte_flow_action_handle *action_handle,
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (unlikely(!ops->q_action_handle_destroy))
> +		return rte_flow_error_set(error, ENOSYS,
> +					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +					  NULL, rte_strerror(ENOSYS));
> +	ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
> +					   action_handle, error);
> +	return flow_err(port_id, ret, error);
> +}
> +
> +int
> +rte_flow_q_action_handle_update(uint16_t port_id,
> +		uint32_t queue_id,
> +		const struct rte_flow_q_ops_attr *q_ops_attr,
> +		struct rte_flow_action_handle *action_handle,
> +		const void *update,
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (unlikely(!ops->q_action_handle_update))
> +		return rte_flow_error_set(error, ENOSYS,
> +					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +					  NULL, rte_strerror(ENOSYS));
> +	ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
> +					  action_handle, update, error);
> +	return flow_err(port_id, ret, error);
> +}
> +
> +int
> +rte_flow_q_push(uint16_t port_id,
> +		uint32_t queue_id,
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->q_push)) {
> +		return flow_err(port_id,
> +				ops->q_push(dev, queue_id, error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_q_pull(uint16_t port_id,
> +		uint32_t queue_id,
> +		struct rte_flow_q_op_res res[],
> +		uint16_t n_res,
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
> +
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->q_pull)) {
> +		ret = ops->q_pull(dev, queue_id, res, n_res, error);
> +		return ret ? ret : flow_err(port_id, ret, error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index e87db5a540..b0d4f33bfd 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4862,6 +4862,10 @@ rte_flow_flex_item_release(uint16_t port_id,
>    *
>    */
>   struct rte_flow_port_info {
> +	/**
> +	 * Number of queues for asynchronous operations.

Is it the maximum number of queues?

> +	 */
> +	uint32_t nb_queues;
>   	/**
>   	 * Number of pre-configurable counter actions.
>   	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> @@ -4879,6 +4883,17 @@ struct rte_flow_port_info {
>   	uint32_t nb_meters;
>   };
>   
> +/**
> + * Flow engine queue configuration.
> + */
> +__extension__
> +struct rte_flow_queue_attr {
> +	/**
> +	 * Number of flow rule operations a queue can hold.
> +	 */
> +	uint32_t size;

What are the min/max sizes? Is 0 the default size? If yes, do we need
an API to discover the actual size?

> +};
> +
>   /**
>    * @warning
>    * @b EXPERIMENTAL: this API may change without prior notice.
> @@ -4948,6 +4963,11 @@ struct rte_flow_port_attr {
>    *   Port identifier of Ethernet device.
>    * @param[in] port_attr
>    *   Port configuration attributes.
> + * @param[in] nb_queue
> + *   Number of flow queues to be configured.
> + * @param[in] queue_attr
> + *   Array that holds attributes for each flow queue.
> + *   Number of elements is set in @p port_attr.nb_queues.
>    * @param[out] error
>    *   Perform verbose error reporting if not NULL.
>    *   PMDs initialize this structure in case of error only.
> @@ -4959,6 +4979,8 @@ __rte_experimental
>   int
>   rte_flow_configure(uint16_t port_id,
>   		   const struct rte_flow_port_attr *port_attr,
> +		   uint16_t nb_queue,
> +		   const struct rte_flow_queue_attr *queue_attr[],
>   		   struct rte_flow_error *error);
>   
>   /**
> @@ -5221,6 +5243,318 @@ rte_flow_template_table_destroy(uint16_t port_id,
>   		struct rte_flow_template_table *template_table,
>   		struct rte_flow_error *error);
>   
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Queue operation attributes.
> + */
> +struct rte_flow_q_ops_attr {
> +	/**
> +	 * The user data that will be returned on the completion events.
> +	 */
> +	void *user_data;

IMHO it must not be hidden in the attrs. It is key information
which is used to understand the operation result. It should
be passed separately.
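i.e. something along these lines (just a sketch of the alternative prototype, not a proposal text):

```c
/* Hypothetical alternative: user_data as an explicit parameter
 * instead of a field of struct rte_flow_q_ops_attr. */
struct rte_flow *
rte_flow_q_flow_create(uint16_t port_id,
                       uint32_t queue_id,
                       const struct rte_flow_q_ops_attr *q_ops_attr,
                       void *user_data, /* echoed in rte_flow_q_op_res */
                       struct rte_flow_template_table *template_table,
                       const struct rte_flow_item pattern[],
                       uint8_t pattern_template_index,
                       const struct rte_flow_action actions[],
                       uint8_t actions_template_index,
                       struct rte_flow_error *error);
```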

> +	 /**
> +	  * When set, the requested action will not be sent to the HW immediately.
> +	  * The application must call the rte_flow_queue_push to actually send it.

Will the next operation without the attribute set implicitly push it?
Is it mandatory for the driver to respect it? Or is it just a possible
optimization hint?
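For example, is the following the intended usage? (a sketch assuming postpone is a strict "do not flush" request; flows, patterns and actions_list are hypothetical application arrays):

```c
/* Batch several operations, then flush them with one doorbell. */
struct rte_flow_q_ops_attr attr = { .postpone = 1 };

for (int i = 0; i < n_rules; i++)
        flows[i] = rte_flow_q_flow_create(port_id, queue_id, &attr,
                                          table, patterns[i], 0,
                                          actions_list[i], 0, &error);
rte_flow_q_push(port_id, queue_id, &error); /* single SW/HW interaction */
```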

> +	  */
> +	uint32_t postpone:1;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue rule creation operation.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue_id
> + *   Flow queue used to insert the rule.
> + * @param[in] q_ops_attr
> + *   Rule creation operation attributes.
> + * @param[in] template_table
> + *   Template table to select templates from.

IMHO it should be optional, i.e. NULL should be allowed.
If NULL, the indices are ignored and pattern+actions are a full
specification as in rte_flow_create(). The only missing bit
is the attributes.

Basically I'm not sure that hardwiring queue-based flow rule control
to templates is the right solution. It should be possible without
templates. Maybe it should be a separate API to be added later
if/when required.
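i.e. allow something like this (hypothetical NULL-table variant of the proposed call):

```c
/* Hypothetical non-template usage if NULL template_table were allowed:
 * pattern and actions are a full specification as in rte_flow_create(),
 * and the template indices are ignored. The flow attributes would
 * still need to be passed somehow. */
flow = rte_flow_q_flow_create(port_id, queue_id, &attr,
                              NULL /* no template table */,
                              pattern, 0 /* ignored */,
                              actions, 0 /* ignored */,
                              &error);
```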

> + * @param[in] pattern
> + *   List of pattern items to be used.
> + *   The list order should match the order in the pattern template.
> + *   The spec is the only relevant member of the item that is being used.
> + * @param[in] pattern_template_index
> + *   Pattern template index in the table.
> + * @param[in] actions
> + *   List of actions to be used.
> + *   The list order should match the order in the actions template.
> + * @param[in] actions_template_index
> + *   Actions template index in the table.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + *   The rule handle doesn't mean that the rule was offloaded.

"was offloaded" sounds ambiguous. API says nothing about any kind
of offloading before. "has been populated" or "has been
created" (since API says "create").

> + *   Only completion result indicates that the rule was offloaded.
> + */
> +__rte_experimental
> +struct rte_flow *
> +rte_flow_q_flow_create(uint16_t port_id,

flow_q_flow does not sound like good naming; consider:
rte_flow_q_rule_create(), which is <subsystem>_<subtype>_<object>_<action>

> +		       uint32_t queue_id,
> +		       const struct rte_flow_q_ops_attr *q_ops_attr,
> +		       struct rte_flow_template_table *template_table,
> +		       const struct rte_flow_item pattern[],
> +		       uint8_t pattern_template_index,
> +		       const struct rte_flow_action actions[],
> +		       uint8_t actions_template_index,
> +		       struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue rule destruction operation.
> + *
> + * This function enqueues a destruction operation on the queue.
> + * Application should assume that after calling this function
> + * the rule handle is not valid anymore.
> + * Completion indicates the full removal of the rule from the HW.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue_id
> + *   Flow queue which is used to destroy the rule.
> + *   This must match the queue on which the rule was created.
> + * @param[in] q_ops_attr
> + *   Rule destroy operation attributes.
> + * @param[in] flow
> + *   Flow handle to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_flow_destroy(uint16_t port_id,
> +			uint32_t queue_id,
> +			const struct rte_flow_q_ops_attr *q_ops_attr,
> +			struct rte_flow *flow,
> +			struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue indirect action creation operation.
> + * @see rte_flow_action_handle_create
> + *
> + * @param[in] port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] queue_id
> + *   Flow queue which is used to create the rule.
> + * @param[in] q_ops_attr
> + *   Queue operation attributes.
> + * @param[in] indir_action_conf
> + *   Action configuration for the indirect action object creation.
> + * @param[in] action
> + *   Specific configuration of the indirect action object.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   - (0) if success.

Hold on. A pointer is returned by the function, not an int.

> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-ENOSYS) if underlying device does not support this functionality.
> + *   - (-EIO) if underlying device is removed.
> + *   - (-ENOENT) if action pointed by *action* handle was not found.
> + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> + *   rte_errno is also set.

Which error code should be used if too many ops are enqueued (overflow)?

> + */
> +__rte_experimental
> +struct rte_flow_action_handle *
> +rte_flow_q_action_handle_create(uint16_t port_id,
> +		uint32_t queue_id,
> +		const struct rte_flow_q_ops_attr *q_ops_attr,
> +		const struct rte_flow_indir_action_conf *indir_action_conf,
> +		const struct rte_flow_action *action,

I don't understand why it differs so much from rule creation.
Why is an actions template not used?
IMHO indirect actions should be dropped from this patch
and added separately, since they are a separate feature.

> +		struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue indirect action destruction operation.
> + * The destroy queue must be the same
> + * as the queue on which the action was created.
> + *
> + * @param[in] port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] queue_id
> + *   Flow queue which is used to destroy the rule.
> + * @param[in] q_ops_attr
> + *   Queue operation attributes.
> + * @param[in] action_handle
> + *   Handle for the indirect action object to be destroyed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   - (0) if success.
> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-ENOSYS) if underlying device does not support this functionality.
> + *   - (-EIO) if underlying device is removed.
> + *   - (-ENOENT) if action pointed by *action* handle was not found.
> + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> + *   rte_errno is also set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_action_handle_destroy(uint16_t port_id,
> +		uint32_t queue_id,
> +		const struct rte_flow_q_ops_attr *q_ops_attr,
> +		struct rte_flow_action_handle *action_handle,
> +		struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue indirect action update operation.
> + * @see rte_flow_action_handle_create
> + *
> + * @param[in] port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] queue_id
> + *   Flow queue which is used to update the rule.
> + * @param[in] q_ops_attr
> + *   Queue operation attributes.
> + * @param[in] action_handle
> + *   Handle for the indirect action object to be updated.
> + * @param[in] update
> + *   Update profile specification used to modify the action pointed by handle.
> + *   *update* could be with the same type of the immediate action corresponding
> + *   to the *handle* argument when creating, or a wrapper structure includes
> + *   action configuration to be updated and bit fields to indicate the member
> + *   of fields inside the action to update.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   - (0) if success.
> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-ENOSYS) if underlying device does not support this functionality.
> + *   - (-EIO) if underlying device is removed.
> + *   - (-ENOENT) if action pointed by *action* handle was not found.
> + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> + *   rte_errno is also set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_action_handle_update(uint16_t port_id,
> +		uint32_t queue_id,
> +		const struct rte_flow_q_ops_attr *q_ops_attr,
> +		struct rte_flow_action_handle *action_handle,
> +		const void *update,
> +		struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Push all internally stored rules to the HW.
> + * Postponed rules are rules that were inserted with the postpone flag set.
> + * Can be used to notify the HW about batch of rules prepared by the SW to
> + * reduce the number of communications between the HW and SW.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue_id
> + *   Flow queue to be pushed.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *    0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_q_push(uint16_t port_id,
> +		uint32_t queue_id,
> +		struct rte_flow_error *error);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Queue operation status.
> + */
> +enum rte_flow_q_op_status {
> +	/**
> +	 * The operation was completed successfully.
> +	 */
> +	RTE_FLOW_Q_OP_SUCCESS,
> +	/**
> +	 * The operation was not completed successfully.
> +	 */
> +	RTE_FLOW_Q_OP_ERROR,
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Queue operation results.
> + */
> +__extension__
> +struct rte_flow_q_op_res {
> +	/**
> +	 * Returns the status of the operation that this completion signals.
> +	 */
> +	enum rte_flow_q_op_status status;
> +	/**
> +	 * The user data that will be returned on the completion events.
> +	 */
> +	void *user_data;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Pull a rte flow operation.
> + * The application must invoke this function in order to complete
> + * the flow rule offloading and to retrieve the flow rule operation status.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param queue_id
> + *   Flow queue which is used to pull the operation.
> + * @param[out] res
> + *   Array of results that will be set.
> + * @param[in] n_res
> + *   Maximum number of results that can be returned.
> + *   This value is equal to the size of the res array.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Number of results that were pulled,
> + *   a negative errno value otherwise and rte_errno is set.

Don't we want to define negative error code meaning?

> + */
> +__rte_experimental
> +int
> +rte_flow_q_pull(uint16_t port_id,
> +		uint32_t queue_id,
> +		struct rte_flow_q_op_res res[],
> +		uint16_t n_res,
> +		struct rte_flow_error *error);
> +
>   #ifdef __cplusplus
>   }
>   #endif

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-11 10:16       ` Andrew Rybchenko
@ 2022-02-11 18:47         ` Alexander Kozyrev
  2022-02-16 13:03           ` Andrew Rybchenko
  0 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11 18:47 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On Friday, February 11, 2022 5:17 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
> 
> On 2/11/22 05:26, Alexander Kozyrev wrote:
> > The flow rules creation/destruction at a large scale incurs a performance
> > penalty and may negatively impact the packet processing when used
> > as part of the datapath logic. This is mainly because software/hardware
> > resources are allocated and prepared during the flow rule creation.
> >
> > In order to optimize the insertion rate, PMD may use some hints provided
> > by the application at the initialization phase. The rte_flow_configure()
> > function allows to pre-allocate all the needed resources beforehand.
> > These resources can be used at a later stage without costly allocations.
> > Every PMD may use only the subset of hints and ignore unused ones or
> > fail in case the requested configuration is not supported.
> >
> > The rte_flow_info_get() is available to retrieve the information about
> > supported pre-configurable resources. Both these functions must be called
> > before any other usage of the flow API engine.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Acked-by: Ori Kam <orika@nvidia.com>
> > ---
> >   doc/guides/prog_guide/rte_flow.rst     |  37 +++++++++
> >   doc/guides/rel_notes/release_22_03.rst |   6 ++
> >   lib/ethdev/rte_flow.c                  |  40 +++++++++
> >   lib/ethdev/rte_flow.h                  | 108 +++++++++++++++++++++++++
> >   lib/ethdev/rte_flow_driver.h           |  10 +++
> >   lib/ethdev/version.map                 |   2 +
> >   6 files changed, 203 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index b4aa9c47c2..72fb1132ac 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -3589,6 +3589,43 @@ Return values:
> >
> >   - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
> >
> > +Flow engine configuration
> > +-------------------------
> > +
> > +Configure flow API management.
> > +
> > +An application may provide some parameters at the initialization phase about
> > +rules engine configuration and/or expected flow rules characteristics.
> > +These parameters may be used by PMD to preallocate resources and
> configure NIC.
> > +
> > +Configuration
> > +~~~~~~~~~~~~~
> > +
> > +This function performs the flow API management configuration and
> > +pre-allocates needed resources beforehand to avoid costly allocations later.
> > +Expected number of counters or meters in an application, for example,
> > +allow PMD to prepare and optimize NIC memory layout in advance.
> > +``rte_flow_configure()`` must be called before any flow rule is created,
> > +but after an Ethernet device is configured.
> > +
> > +.. code-block:: c
> > +
> > +   int
> > +   rte_flow_configure(uint16_t port_id,
> > +                     const struct rte_flow_port_attr *port_attr,
> > +                     struct rte_flow_error *error);
> > +
> > +Information about resources that can benefit from pre-allocation can be
> > +retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
> > +of pre-configurable resources for a given port on a system.
> > +
> > +.. code-block:: c
> > +
> > +   int
> > +   rte_flow_info_get(uint16_t port_id,
> > +                     struct rte_flow_port_info *port_info,
> > +                     struct rte_flow_error *error);
> > +
> >   .. _flow_isolated_mode:
> >
> >   Flow isolated mode
> > diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> > index f03183ee86..2a47a37f0a 100644
> > --- a/doc/guides/rel_notes/release_22_03.rst
> > +++ b/doc/guides/rel_notes/release_22_03.rst
> > @@ -69,6 +69,12 @@ New Features
> >     New APIs, ``rte_eth_dev_priority_flow_ctrl_queue_info_get()`` and
> >     ``rte_eth_dev_priority_flow_ctrl_queue_configure()``, was added.
> >
> > +* ** Added functions to configure Flow API engine
> > +
> > +  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
> > +    engine, allowing to pre-allocate some resources for better performance.
> > +    Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
> > +
> >   * **Updated AF_XDP PMD**
> >
> >     * Added support for libxdp >=v1.2.2.
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index a93f68abbc..66614ae29b 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
> >   	ret = ops->flex_item_release(dev, handle, error);
> >   	return flow_err(port_id, ret, error);
> >   }
> > +
> > +int
> > +rte_flow_info_get(uint16_t port_id,
> > +		  struct rte_flow_port_info *port_info,
> > +		  struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->info_get)) {
> 
> expected ethdev state must be validated. Just configured?
> 
> > +		return flow_err(port_id,
> > +				ops->info_get(dev, port_info, error),
> 
> port_info must be checked vs NULL

We don’t have any NULL checks for parameters in the whole rte flow API library.
See rte_flow_create() for example. attributes, pattern and actions are passed to PMD unchecked.

> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +int
> > +rte_flow_configure(uint16_t port_id,
> > +		   const struct rte_flow_port_attr *port_attr,
> > +		   struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->configure)) {
> 
> The API must validate ethdev state. configured and not started?
Again, we have no such validation for any rte flow API today.

> 
> > +		return flow_err(port_id,
> > +				ops->configure(dev, port_attr, error),
> 
> port_attr must be checked vs NULL
Same.

> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index 1031fb246b..92be2a9a89 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t port_id,
> >   			   const struct rte_flow_item_flex_handle *handle,
> >   			   struct rte_flow_error *error);
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Information about available pre-configurable resources.
> > + * The zero value means a resource cannot be pre-allocated.
> > + *
> > + */
> > +struct rte_flow_port_info {
> > +	/**
> > +	 * Number of pre-configurable counter actions.
> > +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> > +	 */
> > +	uint32_t nb_counters;
> 
> Name says that it is a number of counters, but description
> says that it is about actions.
> Also I don't understand what does "pre-configurable" mean.
> Isn't it a maximum number of available counters?
> If no, how can I find a maximum?
It is the number of pre-allocated and pre-configured actions.
How they are pre-configured is up to the PMD driver.
But let's change to "pre-configured" everywhere.
Configuration includes some memory allocation anyway.

> 
> > +	/**
> > +	 * Number of pre-configurable aging flows actions.
> > +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> > +	 */
> > +	uint32_t nb_aging_flows;
> 
> Same
Ditto.
 
> > +	/**
> > +	 * Number of pre-configurable traffic metering actions.
> > +	 * @see RTE_FLOW_ACTION_TYPE_METER
> > +	 */
> > +	uint32_t nb_meters;
> 
> Same
Ditto.

> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Retrieve configuration attributes supported by the port.
> 
> Description should be a bit more flow API aware.
> Right now it sounds too generic.
Ok, how about
"Get information about flow engine pre-configurable resources."
 
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[out] port_info
> > + *   A pointer to a structure of type *rte_flow_port_info*
> > + *   to be filled with the contextual information of the port.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_info_get(uint16_t port_id,
> > +		  struct rte_flow_port_info *port_info,
> > +		  struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Resource pre-allocation and pre-configuration settings.
> 
> What is the difference between pre-allocation and pre-configuration?
> Why are both mentioned above, but just pre-configured actions are
> mentioned below?
Please see answer to this question above.
 
> > + * The zero value means on demand resource allocations only.
> > + *
> > + */
> > +struct rte_flow_port_attr {
> > +	/**
> > +	 * Number of counter actions pre-configured.
> > +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> > +	 */
> > +	uint32_t nb_counters;
> > +	/**
> > +	 * Number of aging flows actions pre-configured.
> > +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> > +	 */
> > +	uint32_t nb_aging_flows;
> > +	/**
> > +	 * Number of traffic metering actions pre-configured.
> > +	 * @see RTE_FLOW_ACTION_TYPE_METER
> > +	 */
> > +	uint32_t nb_meters;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Configure the port's flow API engine.
> > + *
> > + * This API can only be invoked before the application
> > + * starts using the rest of the flow library functions.
> > + *
> > + * The API can be invoked multiple times to change the
> > + * settings. The port, however, may reject the changes.
> > + *
> > + * Parameters in configuration attributes must not exceed
> > + * numbers of resources returned by the rte_flow_info_get API.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] port_attr
> > + *   Port configuration attributes.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_configure(uint16_t port_id,
> > +		   const struct rte_flow_port_attr *port_attr,
> > +		   struct rte_flow_error *error);
> > +
> >   #ifdef __cplusplus
> >   }
> >   #endif
> > diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> > index f691b04af4..7c29930d0f 100644
> > --- a/lib/ethdev/rte_flow_driver.h
> > +++ b/lib/ethdev/rte_flow_driver.h
> > @@ -152,6 +152,16 @@ struct rte_flow_ops {
> >   		(struct rte_eth_dev *dev,
> >   		 const struct rte_flow_item_flex_handle *handle,
> >   		 struct rte_flow_error *error);
> > +	/** See rte_flow_info_get() */
> > +	int (*info_get)
> > +		(struct rte_eth_dev *dev,
> > +		 struct rte_flow_port_info *port_info,
> > +		 struct rte_flow_error *err);
> > +	/** See rte_flow_configure() */
> > +	int (*configure)
> > +		(struct rte_eth_dev *dev,
> > +		 const struct rte_flow_port_attr *port_attr,
> > +		 struct rte_flow_error *err);
> >   };
> >
> >   /**
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index cd0c4c428d..f1235aa913 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -260,6 +260,8 @@ EXPERIMENTAL {
> >   	# added in 22.03
> >   	rte_eth_dev_priority_flow_ctrl_queue_configure;
> >   	rte_eth_dev_priority_flow_ctrl_queue_info_get;
> > +	rte_flow_info_get;
> > +	rte_flow_configure;
> >   };
> >
> >   INTERNAL {


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 02/10] ethdev: add flow item/action templates
  2022-02-11 11:27       ` Andrew Rybchenko
@ 2022-02-11 22:25         ` Alexander Kozyrev
  2022-02-16 13:14           ` Andrew Rybchenko
  0 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-11 22:25 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On Fri, Feb 11, 2022 6:27 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
> On 2/11/22 05:26, Alexander Kozyrev wrote:
> > Treating every single flow rule as a completely independent and separate
> > entity negatively impacts the flow rules insertion rate. Oftentimes in an
> > application, many flow rules share a common structure (the same item mask
> > and/or action list) so they can be grouped and classified together.
> > This knowledge may be used as a source of optimization by a PMD/HW.
> >
> > The pattern template defines common matching fields (the item mask) without
> > values. The actions template holds a list of action types that will be used
> > together in the same rule. The specific values for items and actions will
> > be given only during the rule creation.
> >
> > A table combines pattern and actions templates along with shared flow rule
> > attributes (group ID, priority and traffic direction). This way a PMD/HW
> > can prepare all the resources needed for efficient flow rules creation in
> > the datapath. To avoid any hiccups due to memory reallocation, the maximum
> > number of flow rules is defined at the table creation time.
> >
> > The flow rule creation is done by selecting a table, a pattern template
> > and an actions template (which are bound to the table), and setting unique
> > values for the items and actions.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Acked-by: Ori Kam <orika@nvidia.com>
> > ---
> >   doc/guides/prog_guide/rte_flow.rst     | 124 ++++++++++++
> >   doc/guides/rel_notes/release_22_03.rst |   8 +
> >   lib/ethdev/rte_flow.c                  | 147 ++++++++++++++
> >   lib/ethdev/rte_flow.h                  | 260 +++++++++++++++++++++++++
> >   lib/ethdev/rte_flow_driver.h           |  37 ++++
> >   lib/ethdev/version.map                 |   6 +
> >   6 files changed, 582 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index 72fb1132ac..5391648833 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -3626,6 +3626,130 @@ of pre-configurable resources for a given port on
> a system.
> >                        struct rte_flow_port_info *port_info,
> >                        struct rte_flow_error *error);
> >
> > +Flow templates
> > +~~~~~~~~~~~~~~
> > +
> > +Oftentimes in an application, many flow rules share a common structure
> > +(the same pattern and/or action list) so they can be grouped and classified
> > +together. This knowledge may be used as a source of optimization by a
> PMD/HW.
> > +The flow rule creation is done by selecting a table, a pattern template
> > +and an actions template (which are bound to the table), and setting unique
> > +values for the items and actions. This API is not thread-safe.
> > +
> > +Pattern templates
> > +^^^^^^^^^^^^^^^^^
> > +
> > +The pattern template defines a common pattern (the item mask) without
> values.
> > +The mask value is used to select a field to match on, spec/last are ignored.
> > +The pattern template may be used by multiple tables and must not be
> destroyed
> > +until all these tables are destroyed first.
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow_pattern_template *
> > +	rte_flow_pattern_template_create(uint16_t port_id,
> > +				const struct rte_flow_pattern_template_attr
> *template_attr,
> > +				const struct rte_flow_item pattern[],
> > +				struct rte_flow_error *error);
> > +
> > +For example, to create a pattern template to match on the destination MAC:
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow_item pattern[2] = {{0}};
> > +	struct rte_flow_item_eth eth_m = {0};
> > +	pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
> > +	eth_m.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff";
> > +	pattern[0].mask = &eth_m;
> > +	pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> > +
> > +	struct rte_flow_pattern_template *pattern_template =
> > +		rte_flow_pattern_template_create(port, &itr, &pattern,
> &error);
> 
> itr?

Will add its declaration for clarity.

> > +
> > +The concrete value to match on will be provided at the rule creation.
> > +
> > +Actions templates
> > +^^^^^^^^^^^^^^^^^
> > +
> > +The actions template holds a list of action types to be used in flow rules.
> > +The mask parameter allows specifying a shared constant value for every rule.
> > +The actions template may be used by multiple tables and must not be
> destroyed
> > +until all these tables are destroyed first.
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow_actions_template *
> > +	rte_flow_actions_template_create(uint16_t port_id,
> > +				const struct rte_flow_actions_template_attr
> *template_attr,
> > +				const struct rte_flow_action actions[],
> > +				const struct rte_flow_action masks[],
> > +				struct rte_flow_error *error);
> > +
> > +For example, to create an actions template with the same Mark ID
> > +but different Queue Index for every rule:
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow_action actions[] = {
> > +		/* Mark ID is constant (4) for every rule, Queue Index is unique
> */
> > +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> > +			   .conf = &(struct rte_flow_action_mark){.id = 4}},
> > +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> > +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> > +	};
> > +	struct rte_flow_action masks[] = {
> > +		/* Assign to MARK mask any non-zero value to make it constant
> */
> > +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> > +			   .conf = &(struct rte_flow_action_mark){.id = 1}},
> > +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> > +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> > +	};
> > +
> > +	struct rte_flow_actions_template *at =
> > +		rte_flow_actions_template_create(port, &atr, &actions,
> &masks, &error);
> 
> atr?

Same

> > +
> > +The concrete value for Queue Index will be provided at the rule creation.
> > +
> > +Template table
> > +^^^^^^^^^^^^^^
> > +
> > +A template table combines a number of pattern and actions templates along
> with
> > +shared flow rule attributes (group ID, priority and traffic direction).
> > +This way a PMD/HW can prepare all the resources needed for efficient flow
> rules
> > +creation in the datapath. To avoid any hiccups due to memory reallocation,
> > +the maximum number of flow rules is defined at table creation time.
> > +Any flow rule creation beyond the maximum table size is rejected.
> > +Application may create another table to accommodate more rules in this
> case.
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow_template_table *
> > +	rte_flow_template_table_create(uint16_t port_id,
> > +				const struct rte_flow_template_table_attr
> *table_attr,
> > +				struct rte_flow_pattern_template
> *pattern_templates[],
> 
> const?

No, the pattern template refcount is updated by the table API.

> > +				uint8_t nb_pattern_templates,
> > +				struct rte_flow_actions_template
> *actions_templates[],
> 
> const?

Again, refcount is updated inside.

> 
> > +				uint8_t nb_actions_templates,
> > +				struct rte_flow_error *error);
> > +
> > +A table can be created only after the Flow Rules management is configured
> > +and pattern and actions templates are created.
> > +
> > +.. code-block:: c
> > +
> > +	rte_flow_configure(port, *port_attr, *error);
> 
> 
> Why do you have '*' before port_attr and error above?

Typo, thanks for noticing.

> 
> > +
> > +	struct rte_flow_pattern_template *pattern_templates[0] =
> 
> Definition of zero size array looks wrong.
> 
> > +		rte_flow_pattern_template_create(port, &itr, &pattern,
> &error);
> 
> itr?
> 
> > +	struct rte_flow_actions_template *actions_templates[0] =
> 
> Zero size array?
> 
> > +		rte_flow_actions_template_create(port, &atr, &actions,
> &masks, &error);
> 
> atr?
> 
> > +
> > +	struct rte_flow_template_table *table =
> > +		rte_flow_template_table_create(port, *table_attr,
> > +				*pattern_templates, nb_pattern_templates,
> > +				*actions_templates, nb_actions_templates,
> > +				*error);
> 
> Similar question here.

Rewriting this snippet to fix everything.

> 
> > +
> >   .. _flow_isolated_mode:
> >
> >   Flow isolated mode
> > diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> > index 2a47a37f0a..6656b35295 100644
> > --- a/doc/guides/rel_notes/release_22_03.rst
> > +++ b/doc/guides/rel_notes/release_22_03.rst
> > @@ -75,6 +75,14 @@ New Features
> >       engine, allowing to pre-allocate some resources for better performance.
> >       Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
> >
> > +  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
> > +    with the same flow attributes and common matching patterns and actions
> > +    defined by ``rte_flow_pattern_template_create`` and
> > +    ``rte_flow_actions_template_create`` respectively.
> > +    Corresponding functions to destroy these entities are:
> > +    ``rte_flow_template_table_destroy``,
> ``rte_flow_pattern_template_destroy``
> > +    and ``rte_flow_actions_template_destroy``.
> > +
> >   * **Updated AF_XDP PMD**
> >
> >     * Added support for libxdp >=v1.2.2.
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index 66614ae29b..b53f8c9b89 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -1431,3 +1431,150 @@ rte_flow_configure(uint16_t port_id,
> >   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >   				  NULL, rte_strerror(ENOTSUP));
> >   }
> > +
> > +struct rte_flow_pattern_template *
> > +rte_flow_pattern_template_create(uint16_t port_id,
> > +		const struct rte_flow_pattern_template_attr *template_attr,
> > +		const struct rte_flow_item pattern[],
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow_pattern_template *template;
> > +
> > +	if (unlikely(!ops))
> > +		return NULL;
> > +	if (likely(!!ops->pattern_template_create)) {
> 
> Don't we need any state checks?
> 
> Check pattern vs NULL?

Still the same situation, no NULL checks elsewhere in rte flow API.

> 
> > +		template = ops->pattern_template_create(dev, template_attr,
> > +						     pattern, error);
> > +		if (template == NULL)
> > +			flow_err(port_id, -rte_errno, error);
> > +		return template;
> > +	}
> > +	rte_flow_error_set(error, ENOTSUP,
> > +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			   NULL, rte_strerror(ENOTSUP));
> > +	return NULL;
> > +}
> > +
> > +int
> > +rte_flow_pattern_template_destroy(uint16_t port_id,
> > +		struct rte_flow_pattern_template *pattern_template,
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->pattern_template_destroy)) {
> 
> IMHO we should return success here if pattern_template is NULL

Just like in rte_flow_destroy(), it is up to the PMD driver to decide.

> > +		return flow_err(port_id,
> > +				ops->pattern_template_destroy(dev,
> > +							      pattern_template,
> > +							      error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +struct rte_flow_actions_template *
> > +rte_flow_actions_template_create(uint16_t port_id,
> > +			const struct rte_flow_actions_template_attr
> *template_attr,
> > +			const struct rte_flow_action actions[],
> > +			const struct rte_flow_action masks[],
> > +			struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow_actions_template *template;
> > +
> > +	if (unlikely(!ops))
> > +		return NULL;
> > +	if (likely(!!ops->actions_template_create)) {
> 
> State checks?
> 
> Check actions and masks vs NULL?

No, sorry.

> 
> > +		template = ops->actions_template_create(dev, template_attr,
> > +							actions, masks, error);
> > +		if (template == NULL)
> > +			flow_err(port_id, -rte_errno, error);
> > +		return template;
> > +	}
> > +	rte_flow_error_set(error, ENOTSUP,
> > +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			   NULL, rte_strerror(ENOTSUP));
> > +	return NULL;
> > +}
> > +
> > +int
> > +rte_flow_actions_template_destroy(uint16_t port_id,
> > +			struct rte_flow_actions_template *actions_template,
> > +			struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->actions_template_destroy)) {
> 
> IMHO we should return success here if actions_template is NULL

Just like in rte_flow_destroy(), it is up to the PMD driver to decide.

> 
> > +		return flow_err(port_id,
> > +				ops->actions_template_destroy(dev,
> > +							      actions_template,
> > +							      error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +struct rte_flow_template_table *
> > +rte_flow_template_table_create(uint16_t port_id,
> > +			const struct rte_flow_template_table_attr *table_attr,
> > +			struct rte_flow_pattern_template
> *pattern_templates[],
> > +			uint8_t nb_pattern_templates,
> > +			struct rte_flow_actions_template
> *actions_templates[],
> > +			uint8_t nb_actions_templates,
> > +			struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow_template_table *table;
> > +
> > +	if (unlikely(!ops))
> > +		return NULL;
> > +	if (likely(!!ops->template_table_create)) {
> 
> Argument sanity checks here. array NULL when size is not 0.

Hate to say no so many times, but I cannot help it.

> 
> > +		table = ops->template_table_create(dev, table_attr,
> > +					pattern_templates,
> nb_pattern_templates,
> > +					actions_templates,
> nb_actions_templates,
> > +					error);
> > +		if (table == NULL)
> > +			flow_err(port_id, -rte_errno, error);
> > +		return table;
> > +	}
> > +	rte_flow_error_set(error, ENOTSUP,
> > +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			   NULL, rte_strerror(ENOTSUP));
> > +	return NULL;
> > +}
> > +
> > +int
> > +rte_flow_template_table_destroy(uint16_t port_id,
> > +				struct rte_flow_template_table
> *template_table,
> > +				struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->template_table_destroy)) {
> 
> Return success if template_table is NULL

Just like in rte_flow_destroy(), it is up to the PMD driver to decide.
 
> > +		return flow_err(port_id,
> > +				ops->template_table_destroy(dev,
> > +							    template_table,
> > +							    error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index 92be2a9a89..e87db5a540 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4961,6 +4961,266 @@ rte_flow_configure(uint16_t port_id,
> >   		   const struct rte_flow_port_attr *port_attr,
> >   		   struct rte_flow_error *error);
> >
> > +/**
> > + * Opaque type returned after successful creation of pattern template.
> > + * This handle can be used to manage the created pattern template.
> > + */
> > +struct rte_flow_pattern_template;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Flow pattern template attributes.
> > + */
> > +__extension__
> > +struct rte_flow_pattern_template_attr {
> > +	/**
> > +	 * Relaxed matching policy.
> > +	 * - PMD may match only on items with mask member set and skip
> > +	 * matching on protocol layers specified without any masks.
> > +	 * - If not set, PMD will match on protocol layers
> > +	 * specified without any masks as well.
> > +	 * - Packet data must be stacked in the same order as the
> > +	 * protocol layers to match inside packets, starting from the lowest.
> > +	 */
> > +	uint32_t relaxed_matching:1;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Create pattern template.
> 
> Create flow pattern template.

Ok.

> > + *
> > + * The pattern template defines common matching fields without values.
> > + * For example, matching on 5 tuple TCP flow, the template will be
> > + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> > + * while values for each rule will be set during the flow rule creation.
> > + * The number and order of items in the template must be the same
> > + * at the rule creation.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] template_attr
> > + *   Pattern template attributes.
> > + * @param[in] pattern
> > + *   Pattern specification (list terminated by the END pattern item).
> > + *   The spec member of an item is not used unless the end member is used.
> 
> Interpretation of the pattern may depend on transfer vs non-transfer
> rule to be used. It is essential information and we should provide it
> when pattern template is created.
> 
> The information is provided on table stage, but it is too late.

Why is it too late? The application knows which template goes to which table,
and the pattern is generic enough to accommodate anything; the user just needs
to put it into the right table.

> 
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +struct rte_flow_pattern_template *
> > +rte_flow_pattern_template_create(uint16_t port_id,
> > +		const struct rte_flow_pattern_template_attr *template_attr,
> > +		const struct rte_flow_item pattern[],
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Destroy pattern template.
> 
> Destroy flow pattern template.

Ok.

> > + *
> > + * This function may be called only when
> > + * there are no more tables referencing this template.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] pattern_template
> > + *   Handle of the template to be destroyed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_pattern_template_destroy(uint16_t port_id,
> > +		struct rte_flow_pattern_template *pattern_template,
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * Opaque type returned after successful creation of actions template.
> > + * This handle can be used to manage the created actions template.
> > + */
> > +struct rte_flow_actions_template;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Flow actions template attributes.
> > + */
> > +struct rte_flow_actions_template_attr;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Create actions template.
> 
> Create flow rule actions template.

Yes, finally compensating for multiple no's.

> > + *
> > + * The actions template holds a list of action types without values.
> > + * For example, the template to change TCP ports is TCP(s_port + d_port),
> > + * while values for each rule will be set during the flow rule creation.
> > + * The number and order of actions in the template must be the same
> > + * at the rule creation.
> 
> Again, it highly depends on transfer vs non-transfer. Moreover,
> application definitely know it. So, it should say if the action
> is intended for transfer or non-transfer flow rule.

It is up to the application to define which pattern it is going to use in different tables.

> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] template_attr
> > + *   Template attributes.
> > + * @param[in] actions
> > + *   Associated actions (list terminated by the END action).
> > + *   The spec member is only used if @p masks spec is non-zero.
> > + * @param[in] masks
> > + *   List of actions that marks which of the action's member is constant.
> > + *   A mask has the same format as the corresponding action.
> > + *   If the action field in @p masks is not 0,
> > + *   the corresponding value in an action from @p actions will be the part
> > + *   of the template and used in all flow rules.
> > + *   The order of actions in @p masks is the same as in @p actions.
> > + *   In case of indirect actions present in @p actions,
> > + *   the actual action type should be present in @p mask.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +struct rte_flow_actions_template *
> > +rte_flow_actions_template_create(uint16_t port_id,
> > +		const struct rte_flow_actions_template_attr *template_attr,
> > +		const struct rte_flow_action actions[],
> > +		const struct rte_flow_action masks[],
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Destroy actions template.
> 
> Destroy flow rule actions template.

Yes, again.

> 
> > + *
> > + * This function may be called only when
> > + * there are no more tables referencing this template.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] actions_template
> > + *   Handle to the template to be destroyed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_actions_template_destroy(uint16_t port_id,
> > +		struct rte_flow_actions_template *actions_template,
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * Opaque type returned after successful creation of a template table.
> > + * This handle can be used to manage the created template table.
> > + */
> > +struct rte_flow_template_table;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Table attributes.
> > + */
> > +struct rte_flow_template_table_attr {
> > +	/**
> > +	 * Flow attributes to be used in each rule generated from this table.
> > +	 */
> > +	struct rte_flow_attr flow_attr;
> > +	/**
> > +	 * Maximum number of flow rules that this table holds.
> > +	 */
> > +	uint32_t nb_flows;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Create template table.
> > + *
> > + * A template table consists of multiple pattern templates and actions
> > + * templates associated with a single set of rule attributes (group ID,
> > + * priority and traffic direction).
> > + *
> > + * Each rule is free to use any combination of pattern and actions templates
> > + * and specify particular values for items and actions it would like to change.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] table_attr
> > + *   Template table attributes.
> > + * @param[in] pattern_templates
> > + *   Array of pattern templates to be used in this table.
> > + * @param[in] nb_pattern_templates
> > + *   The number of pattern templates in the pattern_templates array.
> > + * @param[in] actions_templates
> > + *   Array of actions templates to be used in this table.
> > + * @param[in] nb_actions_templates
> > + *   The number of actions templates in the actions_templates array.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +struct rte_flow_template_table *
> > +rte_flow_template_table_create(uint16_t port_id,
> > +		const struct rte_flow_template_table_attr *table_attr,
> > +		struct rte_flow_pattern_template *pattern_templates[],
> > +		uint8_t nb_pattern_templates,
> > +		struct rte_flow_actions_template *actions_templates[],
> > +		uint8_t nb_actions_templates,
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Destroy template table.
> > + *
> > + * This function may be called only when
> > + * there are no more flow rules referencing this table.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] template_table
> > + *   Handle to the table to be destroyed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_template_table_destroy(uint16_t port_id,
> > +		struct rte_flow_template_table *template_table,
> > +		struct rte_flow_error *error);
> > +
> >   #ifdef __cplusplus
> >   }
> >   #endif
> > diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> > index 7c29930d0f..2d96db1dc7 100644
> > --- a/lib/ethdev/rte_flow_driver.h
> > +++ b/lib/ethdev/rte_flow_driver.h
> > @@ -162,6 +162,43 @@ struct rte_flow_ops {
> >   		(struct rte_eth_dev *dev,
> >   		 const struct rte_flow_port_attr *port_attr,
> >   		 struct rte_flow_error *err);
> > +	/** See rte_flow_pattern_template_create() */
> > +	struct rte_flow_pattern_template *(*pattern_template_create)
> > +		(struct rte_eth_dev *dev,
> > +		 const struct rte_flow_pattern_template_attr *template_attr,
> > +		 const struct rte_flow_item pattern[],
> > +		 struct rte_flow_error *err);
> > +	/** See rte_flow_pattern_template_destroy() */
> > +	int (*pattern_template_destroy)
> > +		(struct rte_eth_dev *dev,
> > +		 struct rte_flow_pattern_template *pattern_template,
> > +		 struct rte_flow_error *err);
> > +	/** See rte_flow_actions_template_create() */
> > +	struct rte_flow_actions_template *(*actions_template_create)
> > +		(struct rte_eth_dev *dev,
> > +		 const struct rte_flow_actions_template_attr *template_attr,
> > +		 const struct rte_flow_action actions[],
> > +		 const struct rte_flow_action masks[],
> > +		 struct rte_flow_error *err);
> > +	/** See rte_flow_actions_template_destroy() */
> > +	int (*actions_template_destroy)
> > +		(struct rte_eth_dev *dev,
> > +		 struct rte_flow_actions_template *actions_template,
> > +		 struct rte_flow_error *err);
> > +	/** See rte_flow_template_table_create() */
> > +	struct rte_flow_template_table *(*template_table_create)
> > +		(struct rte_eth_dev *dev,
> > +		 const struct rte_flow_template_table_attr *table_attr,
> > +		 struct rte_flow_pattern_template *pattern_templates[],
> > +		 uint8_t nb_pattern_templates,
> > +		 struct rte_flow_actions_template *actions_templates[],
> > +		 uint8_t nb_actions_templates,
> > +		 struct rte_flow_error *err);
> > +	/** See rte_flow_template_table_destroy() */
> > +	int (*template_table_destroy)
> > +		(struct rte_eth_dev *dev,
> > +		 struct rte_flow_template_table *template_table,
> > +		 struct rte_flow_error *err);
> >   };
> >
> >   /**
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index f1235aa913..5fd2108895 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -262,6 +262,12 @@ EXPERIMENTAL {
> >   	rte_eth_dev_priority_flow_ctrl_queue_info_get;
> >   	rte_flow_info_get;
> >   	rte_flow_configure;
> > +	rte_flow_pattern_template_create;
> > +	rte_flow_pattern_template_destroy;
> > +	rte_flow_actions_template_create;
> > +	rte_flow_actions_template_destroy;
> > +	rte_flow_template_table_create;
> > +	rte_flow_template_table_destroy;
> >   };
> >
> >   INTERNAL {


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-11 12:42       ` Andrew Rybchenko
@ 2022-02-12  2:19         ` Alexander Kozyrev
  2022-02-12  9:25           ` Thomas Monjalon
  2022-02-16 13:34           ` Andrew Rybchenko
  0 siblings, 2 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  2:19 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> On 2/11/22 05:26, Alexander Kozyrev wrote:
> > A new, faster, queue-based flow rules management mechanism is needed for
> > applications offloading rules inside the datapath. This asynchronous
> > and lockless mechanism frees the CPU for further packet processing and
> > reduces the performance impact of the flow rules creation/destruction
> > on the datapath. Note that queues are not thread-safe and the queue
> > should be accessed from the same thread for all queue operations.
> > It is the responsibility of the app to sync the queue functions in case
> > of multi-threaded access to the same queue.
> >
> > The rte_flow_q_flow_create() function enqueues a flow creation to the
> > requested queue. It benefits from already configured resources and sets
> > unique values on top of item and action templates. A flow rule is enqueued
> > on the specified flow queue and offloaded asynchronously to the hardware.
> > The function returns immediately to spare CPU for further packet
> > processing. The application must invoke the rte_flow_q_pull() function
> > to complete the flow rule operation offloading, to clear the queue, and to
> > receive the operation status. The rte_flow_q_flow_destroy() function
> > enqueues a flow destruction to the requested queue.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Acked-by: Ori Kam <orika@nvidia.com>
> > ---
> >   doc/guides/prog_guide/img/rte_flow_q_init.svg | 205 ++++++++++
> >   .../prog_guide/img/rte_flow_q_usage.svg       | 351 ++++++++++++++++++
> >   doc/guides/prog_guide/rte_flow.rst            | 167 ++++++++-
> >   doc/guides/rel_notes/release_22_03.rst        |   8 +
> >   lib/ethdev/rte_flow.c                         | 175 ++++++++-
> >   lib/ethdev/rte_flow.h                         | 334 +++++++++++++++++
> >   lib/ethdev/rte_flow_driver.h                  |  55 +++
> >   lib/ethdev/version.map                        |   7 +
> >   8 files changed, 1300 insertions(+), 2 deletions(-)
> >   create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
> >   create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg
> >
> > diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
> > new file mode 100644
> > index 0000000000..96160bde42
> > --- /dev/null
> > +++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
> > @@ -0,0 +1,205 @@
> > +<?xml version="1.0" encoding="UTF-8" standalone="no"?>
> > +<!-- SPDX-License-Identifier: BSD-3-Clause -->
> > +
> > +<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
> > +
> > +<svg
> > +   width="485"
> > +   height="535"
> > +   overflow="hidden"
> > +   version="1.1"
> > +   id="svg61"
> > +   sodipodi:docname="rte_flow_q_init.svg"
> > +   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
> > +   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
> > +   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
> > +   xmlns="http://www.w3.org/2000/svg"
> > +   xmlns:svg="http://www.w3.org/2000/svg">
> > +  <sodipodi:namedview
> > +     id="namedview63"
> > +     pagecolor="#ffffff"
> > +     bordercolor="#666666"
> > +     borderopacity="1.0"
> > +     inkscape:pageshadow="2"
> > +     inkscape:pageopacity="0.0"
> > +     inkscape:pagecheckerboard="0"
> > +     showgrid="false"
> > +     inkscape:zoom="1.517757"
> > +     inkscape:cx="242.79249"
> > +     inkscape:cy="267.17057"
> > +     inkscape:window-width="2400"
> > +     inkscape:window-height="1271"
> > +     inkscape:window-x="2391"
> > +     inkscape:window-y="-9"
> > +     inkscape:window-maximized="1"
> > +     inkscape:current-layer="g59" />
> > +  <defs
> > +     id="defs5">
> > +    <clipPath
> > +       id="clip0">
> > +      <rect
> > +         x="0"
> > +         y="0"
> > +         width="485"
> > +         height="535"
> > +         id="rect2" />
> > +    </clipPath>
> > +  </defs>
> > +  <g
> > +     clip-path="url(#clip0)"
> > +     id="g59">
> > +    <rect
> > +       x="0"
> > +       y="0"
> > +       width="485"
> > +       height="535"
> > +       fill="#FFFFFF"
> > +       id="rect7" />
> > +    <rect
> > +       x="0.500053"
> > +       y="79.5001"
> > +       width="482"
> > +       height="59"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#A6A6A6"
> > +       id="rect9" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="24"
> > +       transform="translate(121.6 116)"
> > +       id="text13">
> > +         rte_eth_dev_configure
> > +         <tspan
> > +   font-size="24"
> > +   x="224.007"
> > +   y="0"
> > +   id="tspan11">()</tspan></text>
> > +    <rect
> > +       x="0.500053"
> > +       y="158.5"
> > +       width="482"
> > +       height="59"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect15" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="24"
> > +       transform="translate(140.273 195)"
> > +       id="text17">
> > +         rte_flow_configure()
> > +      </text>
> > +    <rect
> > +       x="0.500053"
> > +       y="236.5"
> > +       width="482"
> > +       height="60"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect19" />
> > +    <text
> > +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> > +       font-weight="400"
> > +       font-size="24px"
> > +       id="text21"
> > +       x="63.425903"
> > +       y="274">rte_flow_pattern_template_create()</text>
> > +    <rect
> > +       x="0.500053"
> > +       y="316.5"
> > +       width="482"
> > +       height="59"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect23" />
> > +    <text
> > +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> > +       font-weight="400"
> > +       font-size="24px"
> > +       id="text27"
> > +       x="69.379204"
> > +       y="353">rte_flow_actions_template_create()</text>
> > +    <rect
> > +       x="0.500053"
> > +       y="0.500053"
> > +       width="482"
> > +       height="60"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#A6A6A6"
> > +       id="rect29" />
> > +    <text
> > +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> > +       font-weight="400"
> > +       font-size="24px"
> > +       transform="translate(177.233,37)"
> > +       id="text33">rte_eal_init()</text>
> > +    <path
> > +       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
> > +       transform="matrix(-1 0 0 1 241 60)"
> > +       id="path35" />
> > +    <path
> > +       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
> > +       transform="matrix(-1 0 0 1 241 138)"
> > +       id="path37" />
> > +    <path
> > +       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
> > +       transform="matrix(-1 0 0 1 241 217)"
> > +       id="path39" />
> > +    <rect
> > +       x="0.500053"
> > +       y="395.5"
> > +       width="482"
> > +       height="59"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect41" />
> > +    <text
> > +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> > +       font-weight="400"
> > +       font-size="24px"
> > +       id="text47"
> > +       x="76.988998"
> > +       y="432">rte_flow_template_table_create()</text>
> > +    <path
> > +       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
> > +       transform="matrix(-1 0 0 1 241 296)"
> > +       id="path49" />
> > +    <path
> > +       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
> > +       id="path51" />
> > +    <rect
> > +       x="0.500053"
> > +       y="473.5"
> > +       width="482"
> > +       height="60"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#A6A6A6"
> > +       id="rect53" />
> > +    <text
> > +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> > +       font-weight="400"
> > +       font-size="24px"
> > +       id="text55"
> > +       x="149.30299"
> > +       y="511">rte_eth_dev_start()</text>
> > +    <path
> > +       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243
> 473.191 237 461.191Z"
> > +       id="path57" />
> > +  </g>
> > +</svg>
> > diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
> > new file mode 100644
> > index 0000000000..a1f6c0a0a8
> > --- /dev/null
> > +++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
> > @@ -0,0 +1,351 @@
> > +<?xml version="1.0" encoding="UTF-8" standalone="no"?>
> > +<!-- SPDX-License-Identifier: BSD-3-Clause -->
> > +
> > +<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
> > +
> > +<svg
> > +   width="880"
> > +   height="610"
> > +   overflow="hidden"
> > +   version="1.1"
> > +   id="svg103"
> > +   sodipodi:docname="rte_flow_q_usage.svg"
> > +   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
> > +   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
> > +   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
> > +   xmlns="http://www.w3.org/2000/svg"
> > +   xmlns:svg="http://www.w3.org/2000/svg">
> > +  <sodipodi:namedview
> > +     id="namedview105"
> > +     pagecolor="#ffffff"
> > +     bordercolor="#666666"
> > +     borderopacity="1.0"
> > +     inkscape:pageshadow="2"
> > +     inkscape:pageopacity="0.0"
> > +     inkscape:pagecheckerboard="0"
> > +     showgrid="false"
> > +     inkscape:zoom="1.3311475"
> > +     inkscape:cx="439.84606"
> > +     inkscape:cy="305.37562"
> > +     inkscape:window-width="2400"
> > +     inkscape:window-height="1271"
> > +     inkscape:window-x="2391"
> > +     inkscape:window-y="-9"
> > +     inkscape:window-maximized="1"
> > +     inkscape:current-layer="g101" />
> > +  <defs
> > +     id="defs5">
> > +    <clipPath
> > +       id="clip0">
> > +      <rect
> > +         x="0"
> > +         y="0"
> > +         width="880"
> > +         height="610"
> > +         id="rect2" />
> > +    </clipPath>
> > +  </defs>
> > +  <g
> > +     clip-path="url(#clip0)"
> > +     id="g101">
> > +    <rect
> > +       x="0"
> > +       y="0"
> > +       width="880"
> > +       height="610"
> > +       fill="#FFFFFF"
> > +       id="rect7" />
> > +    <rect
> > +       x="333.5"
> > +       y="0.500053"
> > +       width="234"
> > +       height="45"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#A6A6A6"
> > +       id="rect9" />
> > +    <text
> > +       font-family="Consolas, Consolas_MSFontService, sans-serif"
> > +       font-weight="400"
> > +       font-size="19px"
> > +       transform="translate(357.196,29)"
> > +       id="text11">rte_eth_rx_burst()</text>
> > +    <rect
> > +       x="333.5"
> > +       y="63.5001"
> > +       width="234"
> > +       height="45"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       id="rect13" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(394.666 91)"
> > +       id="text17">analyze <tspan
> > +   font-size="19"
> > +   x="60.9267"
> > +   y="0"
> > +   id="tspan15">packet </tspan></text>
> > +    <rect
> > +       x="572.5"
> > +       y="279.5"
> > +       width="234"
> > +       height="46"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect19" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(591.429 308)"
> > +       id="text21">rte_flow_q_flow_create()</text>
> > +    <path
> > +       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       fill-rule="evenodd"
> > +       id="path23" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(430.069 378)"
> > +       id="text27">more <tspan
> > +   font-size="19"
> > +   x="-12.94"
> > +   y="23"
> > +   id="tspan25">packets?</tspan></text>
> > +    <path
> > +       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
> > +       id="path29" />
> > +    <path
> > +       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
> > +       id="path31" />
> > +    <path
> > +       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
> > +       id="path33" />
> > +    <path
> > +       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
> > +       id="path35" />
> > +    <path
> > +       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
> > +       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
> > +       id="path37" />
> > +    <path
> > +       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       fill-rule="evenodd"
> > +       id="path39" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(417.576 155)"
> > +       id="text43">add new <tspan
> > +   font-size="19"
> > +   x="13.2867"
> > +   y="23"
> > +   id="tspan41">rule?</tspan></text>
> > +    <path
> > +       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
> > +       id="path45" />
> > +    <rect
> > +       x="602.5"
> > +       y="127.5"
> > +       width="46"
> > +       height="30"
> > +       stroke="#000000"
> > +       stroke-width="0.666667"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       id="rect47" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(611.34 148)"
> > +       id="text49">yes</text>
> > +    <rect
> > +       x="254.5"
> > +       y="126.5"
> > +       width="46"
> > +       height="31"
> > +       stroke="#000000"
> > +       stroke-width="0.666667"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       id="rect51" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(267.182 147)"
> > +       id="text53">no</text>
> > +    <path
> > +       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328
> 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0
> 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
> > +       transform="matrix(1 0 0 -1 567.5 383.495)"
> > +       id="path55" />
> > +    <path
> > +       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       fill-rule="evenodd"
> > +       id="path57" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(159.155 208)"
> > +       id="text61">destroy the <tspan
> > +   font-size="19"
> > +   x="24.0333"
> > +   y="23"
> > +   id="tspan59">rule?</tspan></text>
> > +    <path
> > +       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778
> 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445
> 126.696 11.6445Z"
> > +       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
> > +       id="path63" />
> > +    <rect
> > +       x="81.5001"
> > +       y="280.5"
> > +       width="234"
> > +       height="45"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect65" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(96.2282 308)"
> > +       id="text67">rte_flow_q_flow_destroy()</text>
> > +    <path
> > +       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001
> 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498
> 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415
> 58.6247 121.415 66.6247 117.415 58.6247Z"
> > +       transform="matrix(-1 0 0 1 319.915 213.5)"
> > +       id="path69" />
> > +    <path
> > +       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95
> 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668
> 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
> > +       id="path71" />
> > +    <path
> > +       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778
> 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0
> 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
> > +       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
> > +       id="path73" />
> > +    <rect
> > +       x="334.5"
> > +       y="540.5"
> > +       width="234"
> > +       height="45"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect75" />
> > +    <text
> > +       font-family="Calibri, Calibri_MSFontService, sans-serif"
> > +       font-weight="400"
> > +       font-size="19px"
> > +       id="text77"
> > +       x="385.08301"
> > +       y="569">rte_flow_q_pull()</text>
> > +    <rect
> > +       x="334.5"
> > +       y="462.5"
> > +       width="234"
> > +       height="45"
> > +       stroke="#000000"
> > +       stroke-width="1.33333"
> > +       stroke-miterlimit="8"
> > +       fill="#FFFFFF"
> > +       id="rect79" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(379.19 491)"
> > +       id="text81">rte_flow_q_push()</text>
> > +    <path
> > +       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167
> 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
> > +       id="path83" />
> > +    <rect
> > +       x="0.500053"
> > +       y="287.5"
> > +       width="46"
> > +       height="30"
> > +       stroke="#000000"
> > +       stroke-width="0.666667"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       id="rect85" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(12.8617 308)"
> > +       id="text87">no</text>
> > +    <rect
> > +       x="357.5"
> > +       y="223.5"
> > +       width="47"
> > +       height="31"
> > +       stroke="#000000"
> > +       stroke-width="0.666667"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       id="rect89" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(367.001 244)"
> > +       id="text91">yes</text>
> > +    <rect
> > +       x="469.5"
> > +       y="421.5"
> > +       width="46"
> > +       height="30"
> > +       stroke="#000000"
> > +       stroke-width="0.666667"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       id="rect93" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(481.872 442)"
> > +       id="text95">no</text>
> > +    <rect
> > +       x="832.5"
> > +       y="223.5"
> > +       width="46"
> > +       height="31"
> > +       stroke="#000000"
> > +       stroke-width="0.666667"
> > +       stroke-miterlimit="8"
> > +       fill="#D9D9D9"
> > +       id="rect97" />
> > +    <text
> > +       font-family="Calibri,Calibri_MSFontService,sans-serif"
> > +       font-weight="400"
> > +       font-size="19"
> > +       transform="translate(841.777 244)"
> > +       id="text99">yes</text>
> > +  </g>
> > +</svg>
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index 5391648833..5d47f3bd21 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -3607,12 +3607,16 @@ Expected number of counters or meters in an
> application, for example,
> >   allow PMD to prepare and optimize NIC memory layout in advance.
> >   ``rte_flow_configure()`` must be called before any flow rule is created,
> >   but after an Ethernet device is configured.
> > +It also creates flow queues for asynchronous flow rules operations via
> > +queue-based API, see `Asynchronous operations`_ section.
> >
> >   .. code-block:: c
> >
> >      int
> >      rte_flow_configure(uint16_t port_id,
> >                        const struct rte_flow_port_attr *port_attr,
> > +                     uint16_t nb_queue,
> > +                     const struct rte_flow_queue_attr *queue_attr[],
> >                        struct rte_flow_error *error);
> >
> >   Information about resources that can benefit from pre-allocation can be
> > @@ -3737,7 +3741,7 @@ and pattern and actions templates are created.
> >
> >   .. code-block:: c
> >
> > -	rte_flow_configure(port, *port_attr, *error);
> > +	rte_flow_configure(port, *port_attr, nb_queue, *queue_attr,
> *error);
> 
> * before queue_attr looks strange

Yes, it is a typo.

> >
> >   	struct rte_flow_pattern_template *pattern_templates[0] =
> >   		rte_flow_pattern_template_create(port, &itr, &pattern,
> &error);
> > @@ -3750,6 +3754,167 @@ and pattern and actions templates are created.
> >   				*actions_templates, nb_actions_templates,
> >   				*error);
> >
> > +Asynchronous operations
> > +-----------------------
> > +
> > +Flow rules management can be done via special lockless flow
> management queues.
> > +- Queue operations are asynchronous and not thread-safe.
> > +
> > +- Operations can thus be invoked by the app's datapath,
> > +  packet processing can continue while queue operations are processed by
> NIC.
> > +
> > +- The queue number is configured at initialization stage.
> 
> I read "the queue number" as some number for a specific queue.
> Maybe "Number of queues is configured..."

No problem.

> > +
> > +- Available operation types: rule creation, rule destruction,
> > +  indirect rule creation, indirect rule destruction, indirect rule update.
> > +
> > +- Operations may be reordered within a queue.
> 
> Do we want to have barriers?
> E.g. create rule, destroy the same rule -> reoder -> destroy fails, rule
> lives forever.

API design is crafted with throughput as the main goal in mind.
We allow the user to enforce any ordering outside these functions.
Another point is that not all PMDs/NICs will have this out-of-order execution.


> > +
> > +- Operations can be postponed and pushed to NIC in batches.
> > +
> > +- Results pulling must be done on time to avoid queue overflows.
> 
> polling? (as libc poll() which checks status of file descriptors)
> it is not pulling the door to open it :)

poll waits for some event on a file descriptor, as its title says.
The user then has to invoke read() to actually get any info from the fd.
The point of our function is to return the result immediately, thus pulling.
Many names have appeared in this thread for these functions.
As we know, naming things is the second hardest problem in programming.
I wanted this pull for pulling results to be a counterpart of the push for
pushing operations to a NIC. Another idea is a pop/push pair, but those fit
operations only, not results.
Having said that, I'm at the point of accepting any name here.

> > +
> > +- User data is returned as part of the result to identify an operation.
> > +
> > +- Flow handle is valid once the creation operation is enqueued and must
> be
> > +  destroyed even if the operation is not successful and the rule is not
> inserted.
> > +
> > +The asynchronous flow rule insertion logic can be broken into two phases.
> > +
> > +1. Initialization stage as shown here:
> > +
> > +.. _figure_rte_flow_q_init:
> > +
> > +.. figure:: img/rte_flow_q_init.*
> > +
> > +2. Main loop as presented on a datapath application example:
> > +
> > +.. _figure_rte_flow_q_usage:
> > +
> > +.. figure:: img/rte_flow_q_usage.*
> > +
> > +Enqueue creation operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Enqueueing a flow rule creation operation is similar to simple creation.
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow *
> > +	rte_flow_q_flow_create(uint16_t port_id,
> > +				uint32_t queue_id,
> > +				const struct rte_flow_q_ops_attr
> *q_ops_attr,
> > +				struct rte_flow_template_table
> *template_table,
> > +				const struct rte_flow_item pattern[],
> > +				uint8_t pattern_template_index,
> > +				const struct rte_flow_action actions[],
> > +				uint8_t actions_template_index,
> > +				struct rte_flow_error *error);
> > +
> > +A valid handle in case of success is returned. It must be destroyed later
> > +by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by
> HW.
> > +
> > +Enqueue destruction operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Enqueueing a flow rule destruction operation is similar to simple
> destruction.
> > +
> > +.. code-block:: c
> > +
> > +	int
> > +	rte_flow_q_flow_destroy(uint16_t port_id,
> > +				uint32_t queue_id,
> > +				const struct rte_flow_q_ops_attr
> *q_ops_attr,
> > +				struct rte_flow *flow,
> > +				struct rte_flow_error *error);
> > +
> > +Push enqueued operations
> > +~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Pushing all internally stored rules from a queue to the NIC.
> > +
> > +.. code-block:: c
> > +
> > +	int
> > +	rte_flow_q_push(uint16_t port_id,
> > +			uint32_t queue_id,
> > +			struct rte_flow_error *error);
> > +
> > +There is the postpone attribute in the queue operation attributes.
> > +When it is set, multiple operations can be bulked together and not sent to
> HW
> > +right away to save SW/HW interactions and prioritize throughput over
> latency.
> > +The application must invoke this function to actually push all outstanding
> > +operations to HW in this case.
> > +
> > +Pull enqueued operations
> > +~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Pulling asynchronous operations results.
> > +
> > +The application must invoke this function in order to complete
> asynchronous
> > +flow rule operations and to receive flow rule operations statuses.
> > +
> > +.. code-block:: c
> > +
> > +	int
> > +	rte_flow_q_pull(uint16_t port_id,
> > +			uint32_t queue_id,
> > +			struct rte_flow_q_op_res res[],
> > +			uint16_t n_res,
> > +			struct rte_flow_error *error);
> > +
> > +Multiple outstanding operation results can be pulled simultaneously.
> > +User data may be provided during a flow creation/destruction in order
> > +to distinguish between multiple operations. User data is returned as part
> > +of the result to provide a method to detect which operation is completed.
> > +
> > +Enqueue indirect action creation operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Asynchronous version of indirect action creation API.
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow_action_handle *
> > +	rte_flow_q_action_handle_create(uint16_t port_id,
> > +			uint32_t queue_id,
> > +			const struct rte_flow_q_ops_attr *q_ops_attr,
> > +			const struct rte_flow_indir_action_conf
> *indir_action_conf,
> > +			const struct rte_flow_action *action,
> > +			struct rte_flow_error *error);
> > +
> > +A valid handle in case of success is returned. It must be destroyed later by
> > +calling ``rte_flow_q_action_handle_destroy()`` even if the rule is
> rejected.
> > +
> > +Enqueue indirect action destruction operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Asynchronous version of indirect action destruction API.
> > +
> > +.. code-block:: c
> > +
> > +	int
> > +	rte_flow_q_action_handle_destroy(uint16_t port_id,
> > +			uint32_t queue_id,
> > +			const struct rte_flow_q_ops_attr *q_ops_attr,
> > +			struct rte_flow_action_handle *action_handle,
> > +			struct rte_flow_error *error);
> > +
> > +Enqueue indirect action update operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Asynchronous version of indirect action update API.
> > +
> > +.. code-block:: c
> > +
> > +	int
> > +	rte_flow_q_action_handle_update(uint16_t port_id,
> > +			uint32_t queue_id,
> > +			const struct rte_flow_q_ops_attr *q_ops_attr,
> > +			struct rte_flow_action_handle *action_handle,
> > +			const void *update,
> > +			struct rte_flow_error *error);
> > +
> >   .. _flow_isolated_mode:
> >
> >   Flow isolated mode
> > diff --git a/doc/guides/rel_notes/release_22_03.rst
> b/doc/guides/rel_notes/release_22_03.rst
> > index 6656b35295..87cea8a966 100644
> > --- a/doc/guides/rel_notes/release_22_03.rst
> > +++ b/doc/guides/rel_notes/release_22_03.rst
> > @@ -83,6 +83,14 @@ New Features
> >       ``rte_flow_template_table_destroy``,
> ``rte_flow_pattern_template_destroy``
> >       and ``rte_flow_actions_template_destroy``.
> >
> > +  * ethdev: Added ``rte_flow_q_flow_create`` and
> ``rte_flow_q_flow_destroy``
> > +    API to enqueue flow creation/destruction operations asynchronously as
> well
> > +    as ``rte_flow_q_pull`` to poll and retrieve results of these operations
> > +    and ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
> > +    Introduced asynchronous API for indirect actions management as well:
> > +    ``rte_flow_q_action_handle_create``,
> ``rte_flow_q_action_handle_destroy``
> > +    and ``rte_flow_q_action_handle_update``.
> > +
> >   * **Updated AF_XDP PMD**
> >
> >     * Added support for libxdp >=v1.2.2.
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index b53f8c9b89..aca5bac2da 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -1415,6 +1415,8 @@ rte_flow_info_get(uint16_t port_id,
> >   int
> >   rte_flow_configure(uint16_t port_id,
> >   		   const struct rte_flow_port_attr *port_attr,
> > +		   uint16_t nb_queue,
> > +		   const struct rte_flow_queue_attr *queue_attr[],
> >   		   struct rte_flow_error *error)
> >   {
> >   	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > @@ -1424,7 +1426,8 @@ rte_flow_configure(uint16_t port_id,
> >   		return -rte_errno;
> >   	if (likely(!!ops->configure)) {
> >   		return flow_err(port_id,
> > -				ops->configure(dev, port_attr, error),
> > +				ops->configure(dev, port_attr,
> > +					       nb_queue, queue_attr, error),
> >   				error);
> >   	}
> >   	return rte_flow_error_set(error, ENOTSUP,
> > @@ -1578,3 +1581,173 @@ rte_flow_template_table_destroy(uint16_t
> port_id,
> >   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >   				  NULL, rte_strerror(ENOTSUP));
> >   }
> > +
> > +struct rte_flow *
> > +rte_flow_q_flow_create(uint16_t port_id,
> > +		       uint32_t queue_id,
> > +		       const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		       struct rte_flow_template_table *template_table,
> > +		       const struct rte_flow_item pattern[],
> > +		       uint8_t pattern_template_index,
> > +		       const struct rte_flow_action actions[],
> > +		       uint8_t actions_template_index,
> > +		       struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow *flow;
> > +
> > +	if (unlikely(!ops))
> > +		return NULL;
> > +	if (likely(!!ops->q_flow_create)) {
> > +		flow = ops->q_flow_create(dev, queue_id,
> > +					  q_ops_attr, template_table,
> > +					  pattern, pattern_template_index,
> > +					  actions, actions_template_index,
> > +					  error);
> > +		if (flow == NULL)
> > +			flow_err(port_id, -rte_errno, error);
> > +		return flow;
> > +	}
> > +	rte_flow_error_set(error, ENOTSUP,
> > +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			   NULL, rte_strerror(ENOTSUP));
> > +	return NULL;
> > +}
> > +
> > +int
> > +rte_flow_q_flow_destroy(uint16_t port_id,
> > +			uint32_t queue_id,
> > +			const struct rte_flow_q_ops_attr *q_ops_attr,
> > +			struct rte_flow *flow,
> > +			struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->q_flow_destroy)) {
> > +		return flow_err(port_id,
> > +				ops->q_flow_destroy(dev, queue_id,
> > +						    q_ops_attr, flow, error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +struct rte_flow_action_handle *
> > +rte_flow_q_action_handle_create(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		const struct rte_flow_indir_action_conf *indir_action_conf,
> > +		const struct rte_flow_action *action,
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow_action_handle *handle;
> > +
> > +	if (unlikely(!ops))
> > +		return NULL;
> > +	if (unlikely(!ops->q_action_handle_create)) {
> > +		rte_flow_error_set(error, ENOSYS,
> > +				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> NULL,
> > +				   rte_strerror(ENOSYS));
> > +		return NULL;
> > +	}
> > +	handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
> > +					     indir_action_conf, action, error);
> > +	if (handle == NULL)
> > +		flow_err(port_id, -rte_errno, error);
> > +	return handle;
> > +}
> > +
> > +int
> > +rte_flow_q_action_handle_destroy(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		struct rte_flow_action_handle *action_handle,
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	int ret;
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (unlikely(!ops->q_action_handle_destroy))
> > +		return rte_flow_error_set(error, ENOSYS,
> > +
> RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +					  NULL, rte_strerror(ENOSYS));
> > +	ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
> > +					   action_handle, error);
> > +	return flow_err(port_id, ret, error);
> > +}
> > +
> > +int
> > +rte_flow_q_action_handle_update(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		struct rte_flow_action_handle *action_handle,
> > +		const void *update,
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	int ret;
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (unlikely(!ops->q_action_handle_update))
> > +		return rte_flow_error_set(error, ENOSYS,
> > +
> RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +					  NULL, rte_strerror(ENOSYS));
> > +	ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
> > +					  action_handle, update, error);
> > +	return flow_err(port_id, ret, error);
> > +}
> > +
> > +int
> > +rte_flow_q_push(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->q_push)) {
> > +		return flow_err(port_id,
> > +				ops->q_push(dev, queue_id, error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +int
> > +rte_flow_q_pull(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		struct rte_flow_q_op_res res[],
> > +		uint16_t n_res,
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	int ret;
> > +
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->q_pull)) {
> > +		ret = ops->q_pull(dev, queue_id, res, n_res, error);
> > +		return ret ? ret : flow_err(port_id, ret, error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index e87db5a540..b0d4f33bfd 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4862,6 +4862,10 @@ rte_flow_flex_item_release(uint16_t port_id,
> >    *
> >    */
> >   struct rte_flow_port_info {
> > +	/**
> > +	 * Number of queues for asynchronous operations.
> 
> Is it a maximum number of queues?

Yes, it is the maximum supported number of flow queues. Will rename.

> 
> > +	 */
> > +	uint32_t nb_queues;
> >   	/**
> >   	 * Number of pre-configurable counter actions.
> >   	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> > @@ -4879,6 +4883,17 @@ struct rte_flow_port_info {
> >   	uint32_t nb_meters;
> >   };
> >
> > +/**
> > + * Flow engine queue configuration.
> > + */
> > +__extension__
> > +struct rte_flow_queue_attr {
> > +	/**
> > +	 * Number of flow rule operations a queue can hold.
> > +	 */
> > +	uint32_t size;
> 
> What are the min/max sizes? Is 0 the default size? If yes, do we need
> an API to find the actual size?

Good catch, will extend rte_flow_info_get() to obtain this number.

> 
> > +};
> > +
> >   /**
> >    * @warning
> >    * @b EXPERIMENTAL: this API may change without prior notice.
> > @@ -4948,6 +4963,11 @@ struct rte_flow_port_attr {
> >    *   Port identifier of Ethernet device.
> >    * @param[in] port_attr
> >    *   Port configuration attributes.
> > + * @param[in] nb_queue
> > + *   Number of flow queues to be configured.
> > + * @param[in] queue_attr
> > + *   Array that holds attributes for each flow queue.
> > + *   Number of elements is set in @p port_attr.nb_queues.
> >    * @param[out] error
> >    *   Perform verbose error reporting if not NULL.
> >    *   PMDs initialize this structure in case of error only.
> > @@ -4959,6 +4979,8 @@ __rte_experimental
> >   int
> >   rte_flow_configure(uint16_t port_id,
> >   		   const struct rte_flow_port_attr *port_attr,
> > +		   uint16_t nb_queue,
> > +		   const struct rte_flow_queue_attr *queue_attr[],
> >   		   struct rte_flow_error *error);
> >
> >   /**
> > @@ -5221,6 +5243,318 @@ rte_flow_template_table_destroy(uint16_t
> port_id,
> >   		struct rte_flow_template_table *template_table,
> >   		struct rte_flow_error *error);
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Queue operation attributes.
> > + */
> > +struct rte_flow_q_ops_attr {
> > +	/**
> > +	 * The user data that will be returned on the completion events.
> > +	 */
> > +	void *user_data;
> 
> IMHO it must not be hidden in attrs. It is key information
> which is used to understand the operation result. It should
> be passed separately.

Maybe; on the other hand, it is optional and may not be needed by an application.

> > +	 /**
> > +	  * When set, the requested action will not be sent to the HW
> immediately.
> > +	  * The application must call the rte_flow_queue_push to actually
> send it.
> 
> Will the next operation without the attribute set implicitly push it?
> Is it mandatory for the driver to respect it? Or is it just a possible
> optimization hint?

Yes, it will be pushed along with all the operations in a queue once the postpone bit is cleared.
It is not mandatory to respect this bit; a PMD can use other optimization techniques.

> 
> > +	  */
> > +	uint32_t postpone:1;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue rule creation operation.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param queue_id
> > + *   Flow queue used to insert the rule.
> > + * @param[in] q_ops_attr
> > + *   Rule creation operation attributes.
> > + * @param[in] template_table
> > + *   Template table to select templates from.
> 
> IMHO it should be made optional, i.e. NULL allowed.
> If NULL, indices are ignored and pattern+actions are the full
> specification as in rte_flow_create(). The only missing bit
> is attributes.
> Basically I'm not sure that hardwiring queue-based flow rule control
> to templates is the right solution. It should be possible without
> templates. Maybe it should be a separate API to be added later
> if/when required.

That ruins the whole point of templates: using pre-existing hardware paths.
But I agree a less performant API may be added if the need arises.

 
> > + * @param[in] pattern
> > + *   List of pattern items to be used.
> > + *   The list order should match the order in the pattern template.
> > + *   The spec is the only relevant member of the item that is being used.
> > + * @param[in] pattern_template_index
> > + *   Pattern template index in the table.
> > + * @param[in] actions
> > + *   List of actions to be used.
> > + *   The list order should match the order in the actions template.
> > + * @param[in] actions_template_index
> > + *   Actions template index in the table.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> > + *   The rule handle doesn't mean that the rule was offloaded.
> 
> "was offloaded" sounds ambiguous. API says nothing about any kind
> of offloading before. "has been populated" or "has been
> created" (since API says "create").

Ok.


> > + *   Only completion result indicates that the rule was offloaded.
> > + */
> > +__rte_experimental
> > +struct rte_flow *
> > +rte_flow_q_flow_create(uint16_t port_id,
> 
> flow_q_flow does not sound like a good nameing, consider:
> rte_flow_q_rule_create() is <subsystem>_<subtype>_<object>_<action>

More like:
<subsystem>_<subtype>_<object>_<action>
<rte>_<flow>_<rule_create_operation>_<queue>
Which is a pretty lengthy name, in my opinion.


> > +		       uint32_t queue_id,
> > +		       const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		       struct rte_flow_template_table *template_table,
> > +		       const struct rte_flow_item pattern[],
> > +		       uint8_t pattern_template_index,
> > +		       const struct rte_flow_action actions[],
> > +		       uint8_t actions_template_index,
> > +		       struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue rule destruction operation.
> > + *
> > + * This function enqueues a destruction operation on the queue.
> > + * Application should assume that after calling this function
> > + * the rule handle is not valid anymore.
> > + * Completion indicates the full removal of the rule from the HW.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param queue_id
> > + *   Flow queue which is used to destroy the rule.
> > + *   This must match the queue on which the rule was created.
> > + * @param[in] q_ops_attr
> > + *   Rule destroy operation attributes.
> > + * @param[in] flow
> > + *   Flow handle to be destroyed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_q_flow_destroy(uint16_t port_id,
> > +			uint32_t queue_id,
> > +			const struct rte_flow_q_ops_attr *q_ops_attr,
> > +			struct rte_flow *flow,
> > +			struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue indirect action creation operation.
> > + * @see rte_flow_action_handle_create
> > + *
> > + * @param[in] port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] queue_id
> > + *   Flow queue which is used to create the rule.
> > + * @param[in] q_ops_attr
> > + *   Queue operation attributes.
> > + * @param[in] indir_action_conf
> > + *   Action configuration for the indirect action object creation.
> > + * @param[in] action
> > + *   Specific configuration of the indirect action object.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   - (0) if success.
> 
> Hold on. Pointer is returned by the function.

That is an error.

> 
> > + *   - (-ENODEV) if *port_id* invalid.
> > + *   - (-ENOSYS) if underlying device does not support this functionality.
> > + *   - (-EIO) if underlying device is removed.
> > + *   - (-ENOENT) if action pointed by *action* handle was not found.
> > + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> > + *   rte_errno is also set.
> 
> Which error code should be used if too many ops are enqueued (overflow)?

EAGAIN
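
To illustrate, an application could recover from an overflow roughly like
this (a sketch against the proposed rte_flow_q_* names, assuming the enqueue
returns -EAGAIN when the queue is full; not a definitive implementation):

	struct rte_flow_q_op_res res[32];

	while (rte_flow_q_flow_destroy(port, queue, &attr,
				       flow, &error) == -EAGAIN) {
		/* Queue is full: push pending ops to HW and pull
		 * completions to free up space before retrying. */
		rte_flow_q_push(port, queue, &error);
		rte_flow_q_pull(port, queue, res, 32, &error);
	}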

> 
> > + */
> > +__rte_experimental
> > +struct rte_flow_action_handle *
> > +rte_flow_q_action_handle_create(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		const struct rte_flow_indir_action_conf *indir_action_conf,
> > +		const struct rte_flow_action *action,
> 
> I don't understand why it differs so much from rule creation.
> Why is action template not used?
> IMHO indirect actions should be dropped from the patch
> and added separately since it is a separate feature.

I agree, they deserve a separate patch since they are rather resource creations.
But, I'm afraid it is too late for RC1.

> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue indirect action destruction operation.
> > + * The destroy queue must be the same
> > + * as the queue on which the action was created.
> > + *
> > + * @param[in] port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] queue_id
> > + *   Flow queue which is used to destroy the rule.
> > + * @param[in] q_ops_attr
> > + *   Queue operation attributes.
> > + * @param[in] action_handle
> > + *   Handle for the indirect action object to be destroyed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   - (0) if success.
> > + *   - (-ENODEV) if *port_id* invalid.
> > + *   - (-ENOSYS) if underlying device does not support this functionality.
> > + *   - (-EIO) if underlying device is removed.
> > + *   - (-ENOENT) if action pointed by *action* handle was not found.
> > + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> > + *   rte_errno is also set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_q_action_handle_destroy(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		struct rte_flow_action_handle *action_handle,
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Enqueue indirect action update operation.
> > + * @see rte_flow_action_handle_create
> > + *
> > + * @param[in] port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] queue_id
> > + *   Flow queue which is used to update the rule.
> > + * @param[in] q_ops_attr
> > + *   Queue operation attributes.
> > + * @param[in] action_handle
> > + *   Handle for the indirect action object to be updated.
> > + * @param[in] update
> > + *   Update profile specification used to modify the action pointed by
> > + *   *handle*. *update* could be of the same type as the immediate action
> > + *   corresponding to the *handle* argument when creating, or a wrapper
> > + *   structure that includes the action configuration to be updated and
> > + *   bit fields indicating which fields inside the action to update.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   - (0) if success.
> > + *   - (-ENODEV) if *port_id* invalid.
> > + *   - (-ENOSYS) if underlying device does not support this functionality.
> > + *   - (-EIO) if underlying device is removed.
> > + *   - (-ENOENT) if action pointed by *action* handle was not found.
> > + *   - (-EBUSY) if action pointed by *action* handle still used by some rules
> > + *   rte_errno is also set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_q_action_handle_update(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		struct rte_flow_action_handle *action_handle,
> > +		const void *update,
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Push all internally stored rules to the HW.
> > + * Postponed rules are rules that were inserted with the postpone flag set.
> > + * Can be used to notify the HW about a batch of rules prepared by the SW to
> > + * reduce the number of communications between the HW and SW.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param queue_id
> > + *   Flow queue to be pushed.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *    0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_q_push(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		struct rte_flow_error *error);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Queue operation status.
> > + */
> > +enum rte_flow_q_op_status {
> > +	/**
> > +	 * The operation was completed successfully.
> > +	 */
> > +	RTE_FLOW_Q_OP_SUCCESS,
> > +	/**
> > +	 * The operation was not completed successfully.
> > +	 */
> > +	RTE_FLOW_Q_OP_ERROR,
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Queue operation results.
> > + */
> > +__extension__
> > +struct rte_flow_q_op_res {
> > +	/**
> > +	 * Returns the status of the operation that this completion signals.
> > +	 */
> > +	enum rte_flow_q_op_status status;
> > +	/**
> > +	 * The user data that will be returned on the completion events.
> > +	 */
> > +	void *user_data;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Pull a rte flow operation.
> > + * The application must invoke this function in order to complete
> > + * the flow rule offloading and to retrieve the flow rule operation status.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param queue_id
> > + *   Flow queue which is used to pull the operation.
> > + * @param[out] res
> > + *   Array of results that will be set.
> > + * @param[in] n_res
> > + *   Maximum number of results that can be returned.
> > + *   This value is equal to the size of the res array.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Number of results that were pulled,
> > + *   a negative errno value otherwise and rte_errno is set.
> 
> Don't we want to define negative error code meaning?

They are all standard; I don't think we need another copy-paste here.

> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_q_pull(uint16_t port_id,
> > +		uint32_t queue_id,
> > +		struct rte_flow_q_op_res res[],
> > +		uint16_t n_res,
> > +		struct rte_flow_error *error);
> > +
> >   #ifdef __cplusplus
> >   }
> >   #endif

^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 00/10] ethdev: datapath-focused flow rules management
  2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
                       ` (9 preceding siblings ...)
  2022-02-11  2:26     ` [PATCH v5 10/10] app/testpmd: add async indirect actions creation/destruction Alexander Kozyrev
@ 2022-02-12  4:19     ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
                         ` (10 more replies)
  10 siblings, 11 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Three major changes to a generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
Application may give us some hints on what type of resources are needed.
Introduce the configuration routine to prepare all the needed resources
inside a PMD/HW before any flow rules are created at the init stage.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.
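
This enqueue/push/pull usage can be sketched as follows (names are from this
series; the burst size and the failure handler are illustrative):

	struct rte_flow_q_op_res res[32];
	struct rte_flow_error error;
	int ret, i;

	/* ... enqueue create/destroy operations on (port, queue) ... */

	/* One doorbell for the whole batch instead of one per rule. */
	rte_flow_q_push(port, queue, &error);

	/* Poll for completions and check per-operation status. */
	ret = rte_flow_q_pull(port, queue, res, 32, &error);
	for (i = 0; i < ret; i++)
		if (res[i].status == RTE_FLOW_Q_OP_ERROR)
			handle_failure(res[i].user_data); /* app-defined */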

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

---
v6: addressed more review comments
- fixed typos
- rewrote code snippets
- add a way to get queue size
- renamed port/queue attributes parameters

v5: changed titles for testpmd commits

v4: 
- removed structures versioning
- introduced new rte_flow_port_info structure for rte_flow_info_get API
- renamed rte_flow_table_create to rte_flow_template_table_create

v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread

Alexander Kozyrev (10):
  ethdev: introduce flow pre-configuration hints
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  app/testpmd: add flow engine configuration
  app/testpmd: add flow template management
  app/testpmd: add flow table management
  app/testpmd: add async flow create/destroy operations
  app/testpmd: add flow queue push operation
  app/testpmd: add flow queue pull operation
  app/testpmd: add async indirect actions creation/destruction

 app/test-pmd/cmdline_flow.c                   | 1496 ++++++++++++++++-
 app/test-pmd/config.c                         |  778 +++++++++
 app/test-pmd/testpmd.h                        |   66 +
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  205 +++
 .../prog_guide/img/rte_flow_q_usage.svg       |  351 ++++
 doc/guides/prog_guide/rte_flow.rst            |  338 ++++
 doc/guides/rel_notes/release_22_03.rst        |   22 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  379 ++++-
 lib/ethdev/rte_flow.c                         |  361 ++++
 lib/ethdev/rte_flow.h                         |  728 ++++++++
 lib/ethdev/rte_flow_driver.h                  |  103 ++
 lib/ethdev/version.map                        |   15 +
 12 files changed, 4822 insertions(+), 20 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 02/10] ethdev: add flow item/action templates Alexander Kozyrev
                         ` (9 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, PMD may use some hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows to pre-allocate all the needed resources beforehand.
These resources can be used at a later stage without costly allocations.
Every PMD may use only a subset of the hints, ignore unused ones, or
fail in case the requested configuration is not supported.

The rte_flow_info_get() is available to retrieve the information about
supported pre-configurable resources. Both these functions must be called
before any other usage of the flow API engine.
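
The expected calling sequence can be sketched as follows (the requested
resource counts are illustrative; error handling trimmed):

	struct rte_flow_port_info info;
	struct rte_flow_port_attr attr = {0};
	struct rte_flow_error error;

	/* Query the maximum pre-configurable resources first. */
	rte_flow_info_get(port_id, &info, &error);

	/* Request a subset of what the port can pre-allocate. */
	attr.nb_counter_actions = RTE_MIN(info.max_nb_counter_actions, 1000);
	attr.nb_aging_actions = RTE_MIN(info.max_nb_aging_actions, 100);
	rte_flow_configure(port_id, &attr, &error);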

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  37 +++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/rte_flow.c                  |  40 +++++++++
 lib/ethdev/rte_flow.h                  | 108 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 6 files changed, 203 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index b4aa9c47c2..72fb1132ac 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3589,6 +3589,43 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These parameters may be used by PMD to preallocate resources and configure NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API management configuration and
+pre-allocates needed resources beforehand to avoid costly allocations later.
+The expected number of counters or meters in an application, for example,
+allows the PMD to prepare and optimize the NIC memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                     const struct rte_flow_port_attr *port_attr,
+                     struct rte_flow_error *error);
+
+Information about resources that can benefit from pre-allocation can be
+retrieved via ``rte_flow_info_get()`` API. It returns the maximum number
+of pre-configurable resources for a given port on a system.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index f03183ee86..2a47a37f0a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -69,6 +69,12 @@ New Features
   New APIs, ``rte_eth_dev_priority_flow_ctrl_queue_info_get()`` and
   ``rte_eth_dev_priority_flow_ctrl_queue_configure()``, was added.
 
+* **Added functions to configure Flow API engine**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index a93f68abbc..66614ae29b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		return flow_err(port_id,
+				ops->configure(dev, port_attr, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 1031fb246b..c25d46b866 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine pre-configurable resources.
+ * The zero value means a resource cannot be pre-configured.
+ *
+ */
+struct rte_flow_port_info {
+	/**
+	 * Maximum number of pre-configurable counter actions.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t max_nb_counter_actions;
+	/**
+	 * Maximum number of pre-configurable aging flows actions.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t max_nb_aging_actions;
+	/**
+	 * Maximum number of pre-configurable traffic metering actions.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meter_actions;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get information about flow engine pre-configurable resources.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the resources information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine pre-configurable resources settings.
+ * The zero value means on demand resource allocations only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counter actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counter_actions;
+	/**
+	 * Number of aging flows actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_actions;
+	/**
+	 * Number of traffic metering actions pre-configured.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meter_actions;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings. The port, however, may reject the changes.
+ *
+ * Parameters in configuration attributes must not exceed
+ * numbers of resources returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index cd0c4c428d..f1235aa913 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -260,6 +260,8 @@ EXPERIMENTAL {
 	# added in 22.03
 	rte_eth_dev_priority_flow_ctrl_queue_configure;
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 02/10] ethdev: add flow item/action templates
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                         ` (8 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.
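
Putting it together, rule creation could look roughly like this (the
rte_flow_q_flow_create() signature is introduced in patch 3 of this series
and is abbreviated here; treat it as a sketch with illustrative values):

	/* Concrete spec values only; the mask comes from the pattern
	 * template, the action types from the actions template. */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	struct rte_flow *flow =
		rte_flow_q_flow_create(port, queue, &ops_attr, table,
				       pattern, 0 /* pattern template index */,
				       actions, 0 /* actions template index */,
				       &error);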

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 135 +++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 147 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 260 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 593 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 72fb1132ac..8aa33300a3 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3626,6 +3626,141 @@ of pre-configurable resources for a given port on a system.
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on, spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+				const struct rte_flow_pattern_template_attr *template_attr,
+				const struct rte_flow_item pattern[],
+				struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	const struct rte_flow_pattern_template_attr attr = {0};
+	struct rte_flow_item_eth eth_m = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	};
+	struct rte_flow_item pattern[] = {
+		[0] = {.type = RTE_FLOW_ITEM_TYPE_ETH,
+			   .mask = &eth_m},
+		[1] = {.type = RTE_FLOW_ITEM_TYPE_END,},
+	};
+	struct rte_flow_error error;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &attr, pattern, &error);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+				const struct rte_flow_actions_template_attr *template_attr,
+				const struct rte_flow_action actions[],
+				const struct rte_flow_action masks[],
+				struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_actions_template_attr attr = {0};
+	struct rte_flow_action actions[] = {
+		/* Mark ID is constant (4) for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action masks[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+			   .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_error error;
+
+	struct rte_flow_actions_template *actions_template =
+		rte_flow_actions_template_create(port, &attr, actions, masks, &error);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid any hiccups due to memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+Application may create another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+	struct rte_flow_template_table *
+	rte_flow_template_table_create(uint16_t port_id,
+				const struct rte_flow_template_table_attr *table_attr,
+				struct rte_flow_pattern_template *pattern_templates[],
+				uint8_t nb_pattern_templates,
+				struct rte_flow_actions_template *actions_templates[],
+				uint8_t nb_actions_templates,
+				struct rte_flow_error *error);
+
+A table can be created only after the Flow Rules management is configured
+and pattern and actions templates are created.
+
+.. code-block:: c
+
+	struct rte_flow_template_table_attr table_attr = {
+		.flow_attr.ingress = 1,
+		.nb_flows = 10000,
+	};
+	uint8_t nb_pattern_templates = 1;
+	struct rte_flow_pattern_template *pattern_templates[nb_pattern_templates];
+	pattern_templates[0] = pattern_template;
+	uint8_t nb_actions_templates = 1;
+	struct rte_flow_actions_template *actions_templates[nb_actions_templates];
+	actions_templates[0] = actions_template;
+	struct rte_flow_error error;
+
+	struct rte_flow_template_table *table =
+		rte_flow_template_table_create(port, &table_attr,
+				pattern_templates, nb_pattern_templates,
+				actions_templates, nb_actions_templates,
+				&error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 2a47a37f0a..6656b35295 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -75,6 +75,14 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve pre-configurable resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 66614ae29b..b53f8c9b89 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1431,3 +1431,150 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+						     pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev,
+							      actions_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+			const struct rte_flow_template_table_attr *table_attr,
+			struct rte_flow_pattern_template *pattern_templates[],
+			uint8_t nb_pattern_templates,
+			struct rte_flow_actions_template *actions_templates[],
+			uint8_t nb_actions_templates,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_template_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->template_table_create)) {
+		table = ops->template_table_create(dev, table_attr,
+					pattern_templates, nb_pattern_templates,
+					actions_templates, nb_actions_templates,
+					error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+				struct rte_flow_template_table *template_table,
+				struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->template_table_destroy)) {
+		return flow_err(port_id,
+				ops->template_table_destroy(dev,
+							    template_table,
+							    error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index c25d46b866..b4c5e3cd9d 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4961,6 +4961,266 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Relaxed matching policy.
+	 * - PMD may match only on items with mask member set and skip
+	 * matching on protocol layers specified without any masks.
+	 * - If not set, PMD will match on protocol layers
+	 * specified without any masks as well.
+	 * - Packet data must be stacked in the same order as the
+	 * protocol layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, to match a TCP 5-tuple flow, the template would be
+ * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of items in the template must match those
+ * supplied at flow rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec member of an item is not used unless the end member is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+struct rte_flow_actions_template_attr;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, a template to change TCP ports is TCP(s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of actions in the template must match those
+ * supplied at flow rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that mark which members of the corresponding
+ *   @p actions entries are constant.
+ *   A mask has the same format as the corresponding action.
+ *   If an action field in @p masks is non-zero,
+ *   the corresponding value in an action from @p actions becomes part
+ *   of the template and is used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   If indirect actions are present in @p actions,
+ *   the actual action type must be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of a template table.
+ * This handle can be used to manage the created template table.
+ */
+struct rte_flow_template_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_template_table_attr {
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow template table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Template table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow template table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+		struct rte_flow_template_table *template_table,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 7c29930d0f..2d96db1dc7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_create() */
+	struct rte_flow_template_table *(*template_table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_destroy() */
+	int (*template_table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index f1235aa913..5fd2108895 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -262,6 +262,12 @@ EXPERIMENTAL {
 	rte_eth_dev_priority_flow_ctrl_queue_info_get;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2



* [PATCH v6 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 02/10] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 04/10] app/testpmd: add flow engine configuration Alexander Kozyrev
                         ` (7 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe: all operations
on a given queue must be issued from the same thread. It is the
application's responsibility to synchronize access if multiple threads
share a queue.

The rte_flow_q_flow_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare the CPU for further packet
processing. The application must invoke the rte_flow_q_pull() function
to complete the flow rule operation offloading, clear the queue, and
receive the operation status. The rte_flow_q_flow_destroy() function
enqueues a flow destruction to the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg | 205 ++++++++++
 .../prog_guide/img/rte_flow_q_usage.svg       | 351 +++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 166 ++++++++
 doc/guides/rel_notes/release_22_03.rst        |   8 +
 lib/ethdev/rte_flow.c                         | 178 ++++++++-
 lib/ethdev/rte_flow.h                         | 360 ++++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  56 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 1329 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..96160bde42
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
@@ -0,0 +1,205 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485"
+   height="535"
+   overflow="hidden"
+   version="1.1"
+   id="svg61"
+   sodipodi:docname="rte_flow_q_init.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview63"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.517757"
+     inkscape:cx="242.79249"
+     inkscape:cy="267.17057"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g59" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="485"
+         height="535"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g59">
+    <rect
+       x="0"
+       y="0"
+       width="485"
+       height="535"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="0.500053"
+       y="79.5001"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(121.6 116)"
+       id="text13">
+         rte_eth_dev_configure
+         <tspan
+   font-size="24"
+   x="224.007"
+   y="0"
+   id="tspan11">()</tspan></text>
+    <rect
+       x="0.500053"
+       y="158.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect15" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(140.273 195)"
+       id="text17">
+         rte_flow_configure()
+      </text>
+    <rect
+       x="0.500053"
+       y="236.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text21"
+       x="63.425903"
+       y="274">rte_flow_pattern_template_create()</text>
+    <rect
+       x="0.500053"
+       y="316.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect23" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text27"
+       x="69.379204"
+       y="353">rte_flow_actions_template_create()</text>
+    <rect
+       x="0.500053"
+       y="0.500053"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect29" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       transform="translate(177.233,37)"
+       id="text33">rte_eal_init()</text>
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 60)"
+       id="path35" />
+    <path
+       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
+       transform="matrix(-1 0 0 1 241 138)"
+       id="path37" />
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 217)"
+       id="path39" />
+    <rect
+       x="0.500053"
+       y="395.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect41" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text47"
+       x="76.988998"
+       y="432">rte_flow_template_table_create()</text>
+    <path
+       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
+       transform="matrix(-1 0 0 1 241 296)"
+       id="path49" />
+    <path
+       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
+       id="path51" />
+    <rect
+       x="0.500053"
+       y="473.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect53" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text55"
+       x="149.30299"
+       y="511">rte_eth_dev_start()</text>
+    <path
+       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
+       id="path57" />
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..a1f6c0a0a8
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
@@ -0,0 +1,351 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880"
+   height="610"
+   overflow="hidden"
+   version="1.1"
+   id="svg103"
+   sodipodi:docname="rte_flow_q_usage.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview105"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.3311475"
+     inkscape:cx="439.84606"
+     inkscape:cy="305.37562"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g101" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="880"
+         height="610"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g101">
+    <rect
+       x="0"
+       y="0"
+       width="880"
+       height="610"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="333.5"
+       y="0.500053"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Consolas, Consolas_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       transform="translate(357.196,29)"
+       id="text11">rte_eth_rx_burst()</text>
+    <rect
+       x="333.5"
+       y="63.5001"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect13" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(394.666 91)"
+       id="text17">analyze <tspan
+   font-size="19"
+   x="60.9267"
+   y="0"
+   id="tspan15">packet </tspan></text>
+    <rect
+       x="572.5"
+       y="279.5"
+       width="234"
+       height="46"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(591.429 308)"
+       id="text21">rte_flow_q_flow_create()</text>
+    <path
+       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path23" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(430.069 378)"
+       id="text27">more <tspan
+   font-size="19"
+   x="-12.94"
+   y="23"
+   id="tspan25">packets?</tspan></text>
+    <path
+       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
+       id="path29" />
+    <path
+       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
+       id="path31" />
+    <path
+       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
+       id="path33" />
+    <path
+       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
+       id="path35" />
+    <path
+       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
+       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
+       id="path37" />
+    <path
+       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path39" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(417.576 155)"
+       id="text43">add new <tspan
+   font-size="19"
+   x="13.2867"
+   y="23"
+   id="tspan41">rule?</tspan></text>
+    <path
+       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
+       id="path45" />
+    <rect
+       x="602.5"
+       y="127.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect47" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(611.34 148)"
+       id="text49">yes</text>
+    <rect
+       x="254.5"
+       y="126.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect51" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(267.182 147)"
+       id="text53">no</text>
+    <path
+       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
+       transform="matrix(1 0 0 -1 567.5 383.495)"
+       id="path55" />
+    <path
+       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path57" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(159.155 208)"
+       id="text61">destroy the <tspan
+   font-size="19"
+   x="24.0333"
+   y="23"
+   id="tspan59">rule?</tspan></text>
+    <path
+       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
+       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
+       id="path63" />
+    <rect
+       x="81.5001"
+       y="280.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect65" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(96.2282 308)"
+       id="text67">rte_flow_q_flow_destroy()</text>
+    <path
+       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
+       transform="matrix(-1 0 0 1 319.915 213.5)"
+       id="path69" />
+    <path
+       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
+       id="path71" />
+    <path
+       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
+       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
+       id="path73" />
+    <rect
+       x="334.5"
+       y="540.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect75" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text77"
+       x="385.08301"
+       y="569">rte_flow_q_pull()</text>
+    <rect
+       x="334.5"
+       y="462.5"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect79" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(379.19 491)"
+       id="text81">rte_flow_q_push()</text>
+    <path
+       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
+       id="path83" />
+    <rect
+       x="0.500053"
+       y="287.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect85" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(12.8617 308)"
+       id="text87">no</text>
+    <rect
+       x="357.5"
+       y="223.5"
+       width="47"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect89" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(367.001 244)"
+       id="text91">yes</text>
+    <rect
+       x="469.5"
+       y="421.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect93" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(481.872 442)"
+       id="text95">no</text>
+    <rect
+       x="832.5"
+       y="223.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect97" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(841.777 244)"
+       id="text99">yes</text>
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 8aa33300a3..e4648b60ea 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,12 +3607,16 @@ Expected number of counters or meters in an application, for example,
 allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via the
+queue-based API; see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                      const struct rte_flow_port_attr *port_attr,
+                     uint16_t nb_queue,
+                     const struct rte_flow_queue_attr *queue_attr[],
                      struct rte_flow_error *error);
 
 Information about resources that can benefit from pre-allocation can be
@@ -3624,6 +3628,7 @@ of pre-configurable resources for a given port on a system.
    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+					 struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);
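+
+As a usage sketch (illustrative only; ``port_attr`` is assumed to be filled in
+as described above and error handling is omitted), an application may query the
+queue capabilities first and size the queues accordingly:
+
+.. code-block:: c
+
+   struct rte_flow_port_info port_info;
+   struct rte_flow_queue_info queue_info;
+   struct rte_flow_error error;
+
+   /* Query the number of queues and the maximum queue depth. */
+   rte_flow_info_get(port_id, &port_info, &queue_info, &error);
+
+   /* Request two queues, each sized to the maximum supported depth. */
+   struct rte_flow_queue_attr q_attr = { .size = queue_info.max_size };
+   const struct rte_flow_queue_attr *queue_attr[] = { &q_attr, &q_attr };
+   rte_flow_configure(port_id, &port_attr, 2, queue_attr, &error);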
 
 Flow templates
@@ -3761,6 +3766,167 @@ and pattern and actions templates are created.
 				&actions_templates, nb_actions_templates,
 				&error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the application's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of flow queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to NIC in batches.
+
+- Results must be pulled in a timely manner to avoid queue overflow.
+
+- User data is returned as part of the result to identify an operation.
+
+- A flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation fails and the rule is not inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:
+
+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_q_flow_create(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow_template_table *template_table,
+				const struct rte_flow_item pattern[],
+				uint8_t pattern_template_index,
+				const struct rte_flow_action actions[],
+				uint8_t actions_template_index,
+				struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_flow_destroy(uint16_t port_id,
+				uint32_t queue_id,
+				const struct rte_flow_q_ops_attr *q_ops_attr,
+				struct rte_flow *flow,
+				struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_push(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_error *error);
+
+The queue operation attributes include a postpone flag.
+When it is set, multiple operations can be batched together and not sent to HW
+right away, saving SW/HW interactions and prioritizing throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
+
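+For illustration (a sketch only; ``n``, ``flows``, ``patterns`` and
+``actions`` are hypothetical application variables), a batch of postponed
+insertions followed by a single push could look like:
+
+.. code-block:: c
+
+	struct rte_flow_q_ops_attr attr = { .postpone = 1 };
+
+	/* Enqueue rules without ringing the HW doorbell for each one. */
+	for (i = 0; i < n; i++)
+		flows[i] = rte_flow_q_flow_create(port_id, queue_id, &attr,
+						  table, patterns[i], 0,
+						  actions[i], 0, &error);
+
+	/* Notify HW about the whole batch at once. */
+	rte_flow_q_push(port_id, queue_id, &error);
+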
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling the results of asynchronous operations.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operations statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_pull(uint16_t port_id,
+			uint32_t queue_id,
+			struct rte_flow_q_op_res res[],
+			uint16_t n_res,
+			struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
+
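+A minimal completion-processing loop might look as follows (an illustrative
+sketch; ``BURST`` and ``handle_failure()`` are hypothetical application-side
+names):
+
+.. code-block:: c
+
+	struct rte_flow_q_op_res res[BURST];
+	int i, n;
+
+	/* Retrieve up to BURST completed operations from the queue. */
+	n = rte_flow_q_pull(port_id, queue_id, res, BURST, &error);
+	for (i = 0; i < n; i++)
+		if (res[i].status == RTE_FLOW_Q_OP_ERROR)
+			handle_failure(res[i].user_data);
+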
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_q_action_handle_create(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			const struct rte_flow_indir_action_conf *indir_action_conf,
+			const struct rte_flow_action *action,
+			struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the action is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_q_action_handle_update(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow_action_handle *action_handle,
+			const void *update,
+			struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6656b35295..87cea8a966 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -83,6 +83,14 @@ New Features
     ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
 
+  * ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy``
+    API to enqueue flow creation/destruction operations asynchronously as well
+    as ``rte_flow_q_pull`` to poll and retrieve results of these operations
+    and ``rte_flow_q_push`` to push all the in-flight operations to the NIC.
+    Introduced asynchronous API for indirect actions management as well:
+    ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy``
+    and ``rte_flow_q_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index b53f8c9b89..a3b6547281 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1395,6 +1395,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1404,7 +1405,7 @@ rte_flow_info_get(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->info_get)) {
 		return flow_err(port_id,
-				ops->info_get(dev, port_info, error),
+				ops->info_get(dev, port_info, queue_info, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1415,6 +1416,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1424,7 +1427,8 @@ rte_flow_configure(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
 		return flow_err(port_id,
-				ops->configure(dev, port_attr, error),
+				ops->configure(dev, port_attr,
+					       nb_queue, queue_attr, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1578,3 +1582,173 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_template_table *template_table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->q_flow_create)) {
+		flow = ops->q_flow_create(dev, queue_id,
+					  q_ops_attr, template_table,
+					  pattern, pattern_template_index,
+					  actions, actions_template_index,
+					  error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_flow_destroy)) {
+		return flow_err(port_id,
+				ops->q_flow_destroy(dev, queue_id,
+						    q_ops_attr, flow, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->q_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
+					     indir_action_conf, action, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
+					   action_handle, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->q_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
+					  action_handle, update, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_push)) {
+		return flow_err(port_id,
+				ops->q_push(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->q_pull)) {
+		ret = ops->q_pull(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index b4c5e3cd9d..ec5637f42d 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4862,6 +4862,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
	 * Maximum number of queues for asynchronous operations.
+	 */
+	uint32_t max_nb_queues;
 	/**
 	 * Maximum number of pre-configurable counter actions.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4879,6 +4883,20 @@ struct rte_flow_port_info {
 	uint32_t max_nb_meter_actions;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine asynchronous queues.
+ * The value is only valid if @p port_info.max_nb_queues is not zero.
+ */
+struct rte_flow_queue_info {
+	/**
+	 * Maximum number of flow rule operations a queue can hold.
+	 */
+	uint32_t max_size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4890,6 +4908,9 @@ struct rte_flow_port_info {
  * @param[out] port_info
  *   A pointer to a structure of type *rte_flow_port_info*
  *   to be filled with the resources information of the port.
+ * @param[out] queue_info
+ *   A pointer to a structure of type *rte_flow_queue_info*
+ *   to be filled with the asynchronous queues information.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4901,6 +4922,7 @@ __rte_experimental
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error);
 
 /**
@@ -4929,6 +4951,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meter_actions;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine asynchronous queues settings.
+ * A zero value means the default value is picked by the PMD.
+ *
+ */
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4948,6 +4985,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements must be equal to @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4959,6 +5001,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5221,6 +5265,322 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+__extension__
+struct rte_flow_q_ops_attr {
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+	 /**
+	  * When set, the requested action will not be sent to the HW immediately.
+	  * The application must call rte_flow_q_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The returned handle does not mean that the rule has been populated in HW.
+ *   Only the completion result indicates whether the operation succeeded or failed.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow_template_table *template_table,
+		       const struct rte_flow_item pattern[],
+		       uint8_t pattern_template_index,
+		       const struct rte_flow_action actions[],
+		       uint8_t actions_template_index,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+			uint32_t queue_id,
+			const struct rte_flow_q_ops_attr *q_ops_attr,
+			struct rte_flow *flow,
+			struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *action* invalid.
+ *   - (ENOTSUP) if *action* valid but unsupported.
+ *   - (EAGAIN) if *queue* is full.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action_handle* is still used by some rules.
+ *   rte_errno is also set.
+ *   - (-EAGAIN) if *queue* is full.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed by handle.
+ *   *update* could be with the same type of the immediate action corresponding
+ *   to the *handle* argument when creating, or a wrapper structure includes
+ *   action configuration to be updated and bit fields to indicate the member
+ *   of fields inside the action to update.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action_handle* is still used by some rules.
+ *   rte_errno is also set.
+ *   - (-EAGAIN) if *queue* is full.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were inserted with the postpone flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW to
+ * reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *    0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_push(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull completed flow rule operations from a queue.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_pull(uint16_t port_id,
+		uint32_t queue_id,
+		struct rte_flow_q_op_res res[],
+		uint16_t n_res,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..88c214ed33 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,11 +156,14 @@ struct rte_flow_ops {
 	int (*info_get)
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_queue_info *queue_info,
 		 struct rte_flow_error *err);
 	/** See rte_flow_configure() */
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +202,59 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_create() */
+	struct rte_flow *(*q_flow_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_flow_destroy() */
+	int (*q_flow_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_create() */
+	struct rte_flow_action_handle *(*q_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_action_handle_destroy() */
+	int (*q_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_action_handle_update() */
+	int (*q_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 struct rte_flow_error *error);
+	/** See rte_flow_q_push() */
+	int (*q_push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_q_pull() */
+	int (*q_pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 5fd2108895..46a4151053 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -268,6 +268,13 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_q_flow_create;
+	rte_flow_q_flow_destroy;
+	rte_flow_q_action_handle_create;
+	rte_flow_q_action_handle_destroy;
+	rte_flow_q_action_handle_update;
+	rte_flow_q_push;
+	rte_flow_q_pull;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 04/10] app/testpmd: add flow engine configuration
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (2 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 05/10] app/testpmd: add flow template management Alexander Kozyrev
                         ` (6 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow engine configuration.
Usage example: flow configure 0 queues_number 8 queues_size 256

Call the rte_flow_info_get API to query available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  61 ++++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  61 +++++++++-
 4 files changed, 252 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7b56b1b0ff..f1f0076853 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -847,6 +856,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -928,6 +942,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_COUNTERS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1964,6 +1988,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2189,7 +2216,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2204,6 +2233,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counter actions",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counter_actions)),
+	},
+	[CONFIG_AGING_COUNTERS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging actions",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_actions)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meter actions",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meter_actions)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7480,6 +7568,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8708,6 +8823,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e812f57151..bcf1944bb7 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1609,6 +1609,67 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_info port_info;
+	struct rte_flow_queue_info queue_info;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	memset(&port_info, 0, sizeof(port_info));
+	memset(&queue_info, 0, sizeof(queue_info));
+	if (rte_flow_info_get(port_id, &port_info, &queue_info, &error))
+		return port_flow_complain(&error);
+	printf("Flow engine resources on port %u:\n"
+	       "Number of queues: %d\n"
+	       "Size of queues: %d\n"
+	       "Number of counter actions: %d\n"
+	       "Number of aging actions: %d\n"
+	       "Number of meter actions: %d\n",
+	       port_id, port_info.max_nb_queues,
+	       queue_info.max_size,
+	       port_info.max_nb_counter_actions,
+	       port_info.max_nb_aging_actions,
+	       port_info.max_nb_meter_actions);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index b2e98df6e1..5de6a1c9ef 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3308,8 +3308,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3391,6 +3403,51 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves info on pre-configurable resources in the underlying
+device to give a hint of possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Flow engine resources on port #[...]:
+   Number of queues: #[...]
+   Size of queues: #[...]
+   Number of counter actions: #[...]
+   Number of aging actions: #[...]
+   Number of meter actions: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the needed resources in the underlying
+device to be used later at flow creation. Flow queues are allocated as well
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 05/10] app/testpmd: add flow template management
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (3 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 04/10] app/testpmd: add flow engine configuration Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 06/10] app/testpmd: add flow table management Alexander Kozyrev
                         ` (5 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 376 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++++
 app/test-pmd/testpmd.h                      |  23 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  97 +++++
 4 files changed, 697 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f1f0076853..4655dd13e0 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,22 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -861,6 +881,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -869,10 +893,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -952,6 +979,43 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -1991,6 +2055,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2060,6 +2130,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2210,6 +2284,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2218,6 +2306,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2292,6 +2382,112 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meter_actions)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2614,7 +2810,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5731,7 +5927,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7595,6 +7793,114 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8564,6 +8870,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -8832,6 +9186,24 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port, in->args.vc.pat_templ_id,
+				in->args.vc.attr.reserved, in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port, in->args.vc.act_templ_id,
+				in->args.vc.actions, in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index bcf1944bb7..a576af6bf3 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1609,6 +1609,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
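template_alloc() keeps each template list sorted by descending ID, so the list head always holds the highest ID in use and "head ID + 1" is a fresh ID in O(1). The same allocation scheme, reduced to a self-contained plain-C sketch (names here are illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct node {
	struct node *next;
	uint32_t id;
};

/* Allocate a list entry, auto-assigning the next free ID when id == UINT32_MAX.
 * The list stays sorted by descending ID, mirroring template_alloc(): the head
 * always holds the highest ID in use, so "head ID + 1" is always free. */
static int id_alloc(uint32_t id, struct node **list, struct node **out)
{
	struct node **pp = list;
	struct node *n;

	*out = NULL;
	if (id == UINT32_MAX)
		id = *list ? (*list)->id + 1 : 0;
	while (*pp && (*pp)->id > id)
		pp = &(*pp)->next;
	if (*pp && (*pp)->id == id)
		return -1; /* ID already taken */
	n = calloc(1, sizeof(*n));
	if (n == NULL)
		return -1;
	n->id = id;
	n->next = *pp;
	*pp = n;
	*out = n;
	return 0;
}
```

The real helper additionally reports the duplicate-ID and "highest ID already assigned" cases with diagnostics before failing.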
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2085,6 +2128,166 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id, bool relaxed,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_pattern_template_attr attr = {
+					.relaxed_matching = relaxed };
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						&attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
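The destroy loop above walks the list through a pointer-to-pointer (`tmp`), so unlinking a match is a single `*tmp = pit->next` with no special case for the head node. That traversal idiom, as a minimal stand-alone sketch (illustrative names, no DPDK dependencies):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct node {
	struct node *next;
	uint32_t id;
};

/* Free every node whose ID appears in ids[]; return how many were freed.
 * Traversing via a pointer-to-pointer lets head and interior removals share
 * one code path: unlinking is always "*tmp = cur->next". */
static unsigned int remove_ids(struct node **list,
			       const uint32_t *ids, uint32_t n)
{
	struct node **tmp = list;
	unsigned int freed = 0;

	while (*tmp) {
		uint32_t i;

		for (i = 0; i != n; ++i) {
			struct node *cur = *tmp;

			if (ids[i] != cur->id)
				continue;
			*tmp = cur->next; /* unlink without a "prev" pointer */
			free(cur);
			++freed;
			break;
		}
		if (i == n)
			tmp = &(*tmp)->next; /* no match here: advance */
	}
	return freed;
}
```

In the patch the same shape also calls the PMD destroy hook first and only unlinks the entry when that succeeds.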
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						NULL, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..c70b1fa4e8 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,16 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      bool relaxed,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 5de6a1c9ef..86bebe8b06 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3344,6 +3344,24 @@ following sections.
        [aging_counters_number {number}]
        [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3448,6 +3466,85 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+from their template ID (as returned by ``flow pattern_template create``),
+this command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+from their template ID (as returned by ``flow actions_template create``),
+this command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v6 06/10] app/testpmd: add flow table management
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (4 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 05/10] app/testpmd: add flow template management Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 07/10] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
                         ` (4 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow template table API.
Provide the command line interface for the template
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 171 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 556 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4655dd13e0..f41f242e41 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -112,6 +114,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -885,6 +901,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_template_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1016,6 +1044,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2061,6 +2115,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2134,6 +2193,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2298,6 +2359,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2308,6 +2376,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2488,6 +2557,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "template_table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage template tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create template table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy template table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7901,6 +8068,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
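parse_table() stores the variable-length template-ID arrays in the spare space right after the fixed `struct buffer`, aligning the start of each array to `sizeof(double)` so any element type is safely addressable. The alignment arithmetic, shown with a plain-C stand-in for `RTE_ALIGN_CEIL` (the macro below mimics the DPDK one and is not its actual definition; the struct is a cut-down stand-in as well):

```c
#include <assert.h>
#include <stdint.h>

/* Round p up to the next multiple of align (a power of two); a plain-C
 * stand-in for DPDK's RTE_ALIGN_CEIL, not the actual DPDK definition. */
#define ALIGN_CEIL(p, align) \
	((((uintptr_t)(p)) + ((uintptr_t)(align) - 1)) & \
	 ~((uintptr_t)(align) - 1))

/* Cut-down stand-in for testpmd's struct buffer. */
struct buffer {
	uint32_t command;
	uint16_t port;
};

/* Return the start of an ID array packed just past *out, aligned the way
 * parse_table() aligns args.table.pat_templ_id. */
static uint32_t *ids_after(struct buffer *out)
{
	return (uint32_t *)ALIGN_CEIL(out + 1, sizeof(double));
}
```

The bounds check in parse_table() (`(uint8_t *)template_id > (uint8_t *)out + size`) then guards against the packed array overrunning the caller-provided buffer.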
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8918,6 +9198,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
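The comp_* callbacks above implement a dual contract: called with `buf == NULL` they return the number of completion candidates, and called with a buffer they format only the `ent`-th candidate. A self-contained sketch of that contract over a plain array (illustrative, not the testpmd code, which walks the per-port lists instead):

```c
#include <assert.h>
#include <stdio.h>

/* Completion helper in the style of cmdline_flow.c's comp_* callbacks:
 * with buf == NULL, return the candidate count; otherwise format the
 * ent-th candidate into buf and return snprintf()'s result, or -1 when
 * ent is out of range. */
static int comp_ids(const unsigned int *ids, unsigned int n_ids,
		    unsigned int ent, char *buf, unsigned int size)
{
	unsigned int i;

	for (i = 0; i < n_ids; ++i) {
		if (buf && i == ent)
			return snprintf(buf, size, "%u", ids[i]);
	}
	if (buf)
		return -1; /* ent out of range */
	return (int)n_ids;
}
```

The command-line layer first calls with a NULL buffer to size the candidate set, then once per entry to fetch each completion string.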
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9204,6 +9508,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_template_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_template_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index a576af6bf3..5f7b7a801c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1652,6 +1652,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2288,6 +2331,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] =
+					temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_template_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_template_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Template table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_template_table_destroy(portid_t port_id,
+				 uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_template_table_destroy(port_id,
+							    pt->table,
+							    &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Template table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index c70b1fa4e8..4c6e775bad 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -915,6 +926,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_template_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 86bebe8b06..48d6ebaddd 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3362,6 +3362,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a template table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a template table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3545,6 +3558,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating template table
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying template table
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+from their table ID (as returned by ``flow template_table create``);
+this command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 07/10] app/testpmd: add async flow create/destroy operations
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (5 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 06/10] app/testpmd: add flow table management Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 08/10] app/testpmd: add flow queue push operation Alexander Kozyrev
                         ` (3 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_flow_create/rte_flow_q_flow_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index f41f242e41..5b96578ff8 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -114,6 +116,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -891,6 +909,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool postpone; /**< Postpone async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -921,6 +941,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1070,6 +1091,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2120,6 +2153,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2195,6 +2234,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2366,6 +2407,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2388,7 +2436,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2655,6 +2704,84 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TEMPLATE_TABLE] = {
+		.name = "template_table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8181,6 +8308,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TEMPLATE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9222,6 +9454,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9519,6 +9773,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 5f7b7a801c..be8eb9a485 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2459,6 +2459,172 @@ port_flow_template_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue a flow rule creation operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr,
+		pt->table, pattern, pattern_idx, actions, actions_idx, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue a number of flow rule destruction operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMD
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_q_flow_destroy(port_id, queue_id, &op_attr,
+						    pf->flow, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMD
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_q_pull(port_id, queue_id,
+							 &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 4c6e775bad..d0e1e3eeec 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -932,6 +932,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 48d6ebaddd..f767137d3c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3382,6 +3382,20 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3704,6 +3718,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_q_flow_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4419,6 +4457,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` enqueues destruction operations for one or more rules
+from their rule ID (as returned by ``flow queue create``);
+this command calls ``rte_flow_q_flow_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 08/10] app/testpmd: add flow queue push operation
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (6 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 07/10] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 09/10] app/testpmd: add flow queue pull operation Alexander Kozyrev
                         ` (2 subsequent siblings)
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5b96578ff8..62eca1271f 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -132,6 +133,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2159,6 +2163,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2437,7 +2444,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2782,6 +2790,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8413,6 +8436,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9784,6 +9835,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index be8eb9a485..05cb7cbe83 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2625,6 +2625,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all outstanding operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_q_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index d0e1e3eeec..03f135ff46 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index f767137d3c..bcd8fd43b5 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3396,6 +3396,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3612,6 +3616,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_q_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 09/10] app/testpmd: add flow queue pull operation
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (7 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 08/10] app/testpmd: add flow queue push operation Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-12  4:19       ` [PATCH v6 10/10] app/testpmd: add async indirect actions creation/destruction Alexander Kozyrev
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_pull API.
Provide the command line interface for pulling the results of operations.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 62eca1271f..5aeb2d1dce 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -136,6 +137,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2166,6 +2170,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2445,7 +2452,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2805,6 +2813,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8464,6 +8487,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9838,6 +9889,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 05cb7cbe83..e49b171086 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2468,14 +2468,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2538,16 +2536,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_q_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2562,7 +2550,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2598,21 +2585,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_q_pull(port_id, queue_id,
-							 &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2653,6 +2625,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_q_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 03f135ff46..6fe829edab 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index bcd8fd43b5..81d1b464b3 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3400,6 +3400,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3633,6 +3637,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device about flow queue operations
+results and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_q_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3763,6 +3784,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4497,6 +4520,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v6 10/10] app/testpmd: add async indirect actions creation/destruction
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (8 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 09/10] app/testpmd: add flow queue pull operation Alexander Kozyrev
@ 2022-02-12  4:19       ` Alexander Kozyrev
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  10 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-12  4:19 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_q_action_handle API.
Provide the command line interface for enqueueing indirect action operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9
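
The ``postpone`` flag shown in the usage above can be modeled with a toy operation queue; everything here (``op_queue``, ``enqueue``, ``push``, ``pull``) is a sketch of the intended semantics, not the real implementation:

```c
#include <assert.h>

/* Toy model of a flow operation queue with "postpone" semantics: a
 * postponed operation stays buffered until an explicit push (or until a
 * later non-postponed operation flushes the queue); pull only reports
 * operations that have been flushed. Names and layout are illustrative. */
enum op_kind { OP_CREATE, OP_UPDATE, OP_DESTROY };
struct op { enum op_kind kind; int postponed; };

struct op_queue {
	struct op ops[16];
	int enqueued; /* operations accepted so far */
	int flushed;  /* operations made visible to pull */
};

static void enqueue(struct op_queue *q, enum op_kind kind, int postpone)
{
	q->ops[q->enqueued++] = (struct op){ kind, postpone };
	if (!postpone) /* a non-postponed op flushes everything pending */
		q->flushed = q->enqueued;
}

static void push(struct op_queue *q) { q->flushed = q->enqueued; }

static int pull(const struct op_queue *q) { return q->flushed; }
```

Batching several postponed operations and pushing them at once is the intended way to amortize the doorbell cost.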

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5aeb2d1dce..733e646bbc 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -121,6 +121,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -134,6 +135,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1102,6 +1123,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1111,6 +1133,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2167,6 +2219,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2744,6 +2802,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2797,6 +2862,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6209,6 +6358,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -9892,6 +10145,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e49b171086..ab2d7cbdf2 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2597,6 +2597,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_q_action_handle_create(port_id, queue_id, &attr,
+						      conf, action, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_q_action_handle_destroy(port_id, queue_id,
+						&attr, pia->handle, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_q_action_handle_update(port_id, queue_id, &attr,
+					    action_handle, action, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 6fe829edab..167f1741dc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -939,6 +939,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 81d1b464b3..9526d62d3c 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4781,6 +4781,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to
+``rte_flow_q_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+       action_id {indirect_action_id}
+       [ingress] [egress] [transfer]
+       [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4810,6 +4835,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+action to a queue. It is bound to ``rte_flow_q_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4833,6 +4877,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds to a queue a destruction
+operation for one or more indirect actions, given their indirect action IDs
+(as returned by ``flow queue {port_id} indirect_action {queue_id} create``).
+It is bound to ``rte_flow_q_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-12  2:19         ` Alexander Kozyrev
@ 2022-02-12  9:25           ` Thomas Monjalon
  2022-02-16 22:49             ` Alexander Kozyrev
  2022-02-16 13:34           ` Andrew Rybchenko
  1 sibling, 1 reply; 220+ messages in thread
From: Thomas Monjalon @ 2022-02-12  9:25 UTC (permalink / raw)
  To: Andrew Rybchenko, dev, Alexander Kozyrev
  Cc: Ori Kam, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

12/02/2022 03:19, Alexander Kozyrev:
> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> > On 2/11/22 05:26, Alexander Kozyrev wrote:
> > > +__rte_experimental
> > > +struct rte_flow *
> > > +rte_flow_q_flow_create(uint16_t port_id,
> > 
> > flow_q_flow does not sound like a good naming, consider:
> > rte_flow_q_rule_create() is <subsystem>_<subtype>_<object>_<action>
> 
> More like:
> <subsystem>_<subtype>_<object>_<action>
>  <rte>_<flow>_<rule_create_operation>_<queue>
> Which is pretty lengthy name as for me.

Naming :)
This one may be improved I think.
What is the problem with replacing "flow" with "rule"?
Is it the right meaning?

> > > +__rte_experimental
> > > +struct rte_flow_action_handle *
> > > +rte_flow_q_action_handle_create(uint16_t port_id,
> > > +		uint32_t queue_id,
> > > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > > +		const struct rte_flow_indir_action_conf *indir_action_conf,
> > > +		const struct rte_flow_action *action,
> > 
> > I don't understand why it differs so much from rule creation.
> > Why is action template not used?
> > IMHO indirect actions should be dropped from the patch
> > and added separately since it is a separate feature.
> 
> I agree, they deserve a separate patch since they are rather resource creations.
> But, I'm afraid it is too late for RC1.

I think it could be done for RC2.

> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Pull a rte flow operation.
> > > + * The application must invoke this function in order to complete
> > > + * the flow rule offloading and to retrieve the flow rule operation status.
> > > + *
> > > + * @param port_id
> > > + *   Port identifier of Ethernet device.
> > > + * @param queue_id
> > > + *   Flow queue which is used to pull the operation.
> > > + * @param[out] res
> > > + *   Array of results that will be set.
> > > + * @param[in] n_res
> > > + *   Maximum number of results that can be returned.
> > > + *   This value is equal to the size of the res array.
> > > + * @param[out] error
> > > + *   Perform verbose error reporting if not NULL.
> > > + *   PMDs initialize this structure in case of error only.
> > > + *
> > > + * @return
> > > + *   Number of results that were pulled,
> > > + *   a negative errno value otherwise and rte_errno is set.
> > 
> > Don't we want to define negative error code meaning?
> 
> They are all standard, don't think we need another copy-paste here.

That's an API, it needs to be all explicit.
I missed it before, we should add the error codes here.
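
One possible shape for that list, with conventional errno values shown purely as an assumption (the final documented set may differ):

```c
/**
 * @return
 *   Number of results that were pulled,
 *   a negative errno value otherwise and rte_errno is set to:
 *   - (-ENODEV) if *port_id* is invalid.
 *   - (-ENOSYS) if the operation is not supported by the device.
 *   - (-EINVAL) if *queue_id* is not configured for this port.
 */
```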




^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-11 18:47         ` Alexander Kozyrev
@ 2022-02-16 13:03           ` Andrew Rybchenko
  2022-02-16 22:17             ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-16 13:03 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/11/22 21:47, Alexander Kozyrev wrote:
> On Friday, February 11, 2022 5:17 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
>> Sent: Friday, February 11, 2022 5:17
>> To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org
>> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL)
>> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru; ferruh.yigit@intel.com;
>> mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com; jerinj@marvell.com;
>> ajit.khaparde@broadcom.com; bruce.richardson@intel.com
>> Subject: Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
>>
>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>> The flow rules creation/destruction at a large scale incurs a performance
>>> penalty and may negatively impact the packet processing when used
>>> as part of the datapath logic. This is mainly because software/hardware
>>> resources are allocated and prepared during the flow rule creation.
>>>
>>> In order to optimize the insertion rate, PMD may use some hints provided
>>> by the application at the initialization phase. The rte_flow_configure()
>>> function allows to pre-allocate all the needed resources beforehand.
>>> These resources can be used at a later stage without costly allocations.
>>> Every PMD may use only the subset of hints and ignore unused ones or
>>> fail in case the requested configuration is not supported.
>>>
>>> The rte_flow_info_get() is available to retrieve the information about
>>> supported pre-configurable resources. Both these functions must be called
>>> before any other usage of the flow API engine.
>>>
>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>> Acked-by: Ori Kam <orika@nvidia.com>

[snip]

>>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
>>> index a93f68abbc..66614ae29b 100644
>>> --- a/lib/ethdev/rte_flow.c
>>> +++ b/lib/ethdev/rte_flow.c
>>> @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
>>>    	ret = ops->flex_item_release(dev, handle, error);
>>>    	return flow_err(port_id, ret, error);
>>>    }
>>> +
>>> +int
>>> +rte_flow_info_get(uint16_t port_id,
>>> +		  struct rte_flow_port_info *port_info,
>>> +		  struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>> +	if (likely(!!ops->info_get)) {
>>
>> expected ethdev state must be validated. Just configured?
>>
>>> +		return flow_err(port_id,
>>> +				ops->info_get(dev, port_info, error),
>>
>> port_info must be checked vs NULL
> 
> We don’t have any NULL checks for parameters in the whole rte flow API library.
> See rte_flow_create() for example: attributes, pattern and actions are passed to the PMD unchecked.

IMHO it is hardly a good reason to have no such check here.
The API is pure control path. So, it must validate all input
arguments and it is better to do it in a generic place.
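To make the suggestion concrete, here is a minimal sketch (not the actual ethdev code; the structures are stand-ins for the real DPDK types) of how a generic NULL check in the rte_flow_configure() wrapper could reject bad input before any PMD callback runs:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stubs standing in for the real DPDK types -- illustration only. */
struct rte_flow_port_attr { unsigned int nb_counters; };
struct rte_flow_error { int type; const char *message; };

static int pmd_configure_called;

/* Hypothetical PMD callback. */
static int
pmd_configure(const struct rte_flow_port_attr *attr, struct rte_flow_error *err)
{
	(void)attr;
	(void)err;
	pmd_configure_called = 1;
	return 0;
}

/* Generic wrapper validating arguments before the PMD callback runs. */
static int
flow_configure(const struct rte_flow_port_attr *port_attr,
	       struct rte_flow_error *error)
{
	if (port_attr == NULL) {
		if (error != NULL)
			error->message = "port_attr must not be NULL";
		return -EINVAL;	/* rejected generically, PMD never sees it */
	}
	return pmd_configure(port_attr, error);
}
```

With a check like this in the generic layer, every PMD inherits the same defined behaviour for NULL arguments instead of each driver deciding on its own.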

>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> +
>>> +int
>>> +rte_flow_configure(uint16_t port_id,
>>> +		   const struct rte_flow_port_attr *port_attr,
>>> +		   struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>> +	if (likely(!!ops->configure)) {
>>
>> The API must validate ethdev state. configured and not started?
> Again, we have no such validation for any rte flow API today.

Same here. If the documentation defines in which state the API
should be called, the generic code must guarantee it.
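A sketch of the kind of state gate being asked for, with a stub device structure in place of the real rte_eth_dev (the field names are illustrative):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative stub: the real rte_eth_dev tracks such flags in dev->data. */
struct eth_dev_state {
	int dev_configured;
	int dev_started;
};

/*
 * Generic gate for rte_flow_configure(): per the contract under
 * discussion, the port must already be configured but not yet started.
 */
static int
flow_configure_state_ok(const struct eth_dev_state *s)
{
	if (!s->dev_configured || s->dev_started)
		return -EBUSY;	/* wrong state: reject before the PMD callback */
	return 0;
}
```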

>>
>>> +		return flow_err(port_id,
>>> +				ops->configure(dev, port_attr, error),
>>
>> port_attr must be checked vs NULL
> Same.
> 
>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>>> index 1031fb246b..92be2a9a89 100644
>>> --- a/lib/ethdev/rte_flow.h
>>> +++ b/lib/ethdev/rte_flow.h
>>> @@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t port_id,
>>>    			   const struct rte_flow_item_flex_handle *handle,
>>>    			   struct rte_flow_error *error);
>>>
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Information about available pre-configurable resources.
>>> + * The zero value means a resource cannot be pre-allocated.
>>> + *
>>> + */
>>> +struct rte_flow_port_info {
>>> +	/**
>>> +	 * Number of pre-configurable counter actions.
>>> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
>>> +	 */
>>> +	uint32_t nb_counters;
>>
>> Name says that it is a number of counters, but description
>> says that it is about actions.
>> Also I don't understand what does "pre-configurable" mean.
>> Isn't it a maximum number of available counters?
>> If no, how can I find a maximum?
> It is the number of pre-allocated and pre-configured actions.
> How they are pre-configured is up to the PMD driver.
> But let's change to "pre-configured" everywhere.
> Configuration includes some memory allocation anyway.

Sorry, but I still don't understand. I guess HW has
a hard limit on the number of counters. How can I get
that information?

>>
>>> +	/**
>>> +	 * Number of pre-configurable aging flows actions.
>>> +	 * @see RTE_FLOW_ACTION_TYPE_AGE
>>> +	 */
>>> +	uint32_t nb_aging_flows;
>>
>> Same
> Ditto.
>   
>>> +	/**
>>> +	 * Number of pre-configurable traffic metering actions.
>>> +	 * @see RTE_FLOW_ACTION_TYPE_METER
>>> +	 */
>>> +	uint32_t nb_meters;
>>
>> Same
> Ditto.
> 
>>> +};
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Retrieve configuration attributes supported by the port.
>>
>> Description should be a bit more flow API aware.
>> Right now it sounds too generic.
> Ok, how about
> "Get information about flow engine pre-configurable resources."
>   
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[out] port_info
>>> + *   A pointer to a structure of type *rte_flow_port_info*
>>> + *   to be filled with the contextual information of the port.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +int
>>> +rte_flow_info_get(uint16_t port_id,
>>> +		  struct rte_flow_port_info *port_info,
>>> +		  struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Resource pre-allocation and pre-configuration settings.
>>
>> What is the difference between pre-allocation and pre-configuration?
>> Why are both mentioned above, but just pre-configured actions are
>> mentioned below?
> Please see answer to this question above.
>   
>>> + * The zero value means on demand resource allocations only.
>>> + *
>>> + */
>>> +struct rte_flow_port_attr {
>>> +	/**
>>> +	 * Number of counter actions pre-configured.
>>> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
>>> +	 */
>>> +	uint32_t nb_counters;
>>> +	/**
>>> +	 * Number of aging flows actions pre-configured.
>>> +	 * @see RTE_FLOW_ACTION_TYPE_AGE
>>> +	 */
>>> +	uint32_t nb_aging_flows;
>>> +	/**
>>> +	 * Number of traffic metering actions pre-configured.
>>> +	 * @see RTE_FLOW_ACTION_TYPE_METER
>>> +	 */
>>> +	uint32_t nb_meters;
>>> +};
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Configure the port's flow API engine.
>>> + *
>>> + * This API can only be invoked before the application
>>> + * starts using the rest of the flow library functions.
>>> + *
>>> + * The API can be invoked multiple times to change the
>>> + * settings. The port, however, may reject the changes.
>>> + *
>>> + * Parameters in configuration attributes must not exceed
>>> + * numbers of resources returned by the rte_flow_info_get API.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] port_attr
>>> + *   Port configuration attributes.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +int
>>> +rte_flow_configure(uint16_t port_id,
>>> +		   const struct rte_flow_port_attr *port_attr,
>>> +		   struct rte_flow_error *error);
>>> +
>>>    #ifdef __cplusplus
>>>    }
>>>    #endif

[snip]
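Stepping back from the review details, the contract documented above is: query the limits with rte_flow_info_get(), then pass attributes to rte_flow_configure() that do not exceed them. A minimal sketch of that rule with stand-in structures (the real types live in rte_flow.h):

```c
#include <assert.h>
#include <errno.h>

/* Stand-ins for rte_flow_port_info / rte_flow_port_attr. */
struct port_info { unsigned int nb_counters, nb_aging_flows, nb_meters; };
struct port_attr { unsigned int nb_counters, nb_aging_flows, nb_meters; };

/*
 * The documented rule: configuration attributes must not exceed the
 * numbers of resources returned by rte_flow_info_get().
 */
static int
attr_within_limits(const struct port_attr *a, const struct port_info *i)
{
	if (a->nb_counters > i->nb_counters ||
	    a->nb_aging_flows > i->nb_aging_flows ||
	    a->nb_meters > i->nb_meters)
		return -EINVAL;
	return 0;
}
```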


* Re: [PATCH v5 02/10] ethdev: add flow item/action templates
  2022-02-11 22:25         ` Alexander Kozyrev
@ 2022-02-16 13:14           ` Andrew Rybchenko
  2022-02-16 14:18             ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-16 13:14 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/12/22 01:25, Alexander Kozyrev wrote:
> On Fri, Feb 11, 2022 6:27 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>> Treating every single flow rule as a completely independent and separate
>>> entity negatively impacts the flow rules insertion rate. Oftentimes in an
>>> application, many flow rules share a common structure (the same item mask
>>> and/or action list) so they can be grouped and classified together.
>>> This knowledge may be used as a source of optimization by a PMD/HW.
>>>
>>> The pattern template defines common matching fields (the item mask) without
>>> values. The actions template holds a list of action types that will be used
>>> together in the same rule. The specific values for items and actions will
>>> be given only during the rule creation.
>>>
>>> A table combines pattern and actions templates along with shared flow rule
>>> attributes (group ID, priority and traffic direction). This way a PMD/HW
>>> can prepare all the resources needed for efficient flow rules creation in
>>> the datapath. To avoid any hiccups due to memory reallocation, the maximum
>>> number of flow rules is defined at the table creation time.
>>>
>>> The flow rule creation is done by selecting a table, a pattern template
>>> and an actions template (which are bound to the table), and setting unique
>>> values for the items and actions.
>>>
>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>> Acked-by: Ori Kam <orika@nvidia.com>

[snip]

>>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
>>> index 66614ae29b..b53f8c9b89 100644
>>> --- a/lib/ethdev/rte_flow.c
>>> +++ b/lib/ethdev/rte_flow.c
>>> @@ -1431,3 +1431,150 @@ rte_flow_configure(uint16_t port_id,
>>>    				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>    				  NULL, rte_strerror(ENOTSUP));
>>>    }
>>> +
>>> +struct rte_flow_pattern_template *
>>> +rte_flow_pattern_template_create(uint16_t port_id,
>>> +		const struct rte_flow_pattern_template_attr *template_attr,
>>> +		const struct rte_flow_item pattern[],
>>> +		struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +	struct rte_flow_pattern_template *template;
>>> +
>>> +	if (unlikely(!ops))
>>> +		return NULL;
>>> +	if (likely(!!ops->pattern_template_create)) {
>>
>> Don't we need any state checks?
>>
>> Check pattern vs NULL?
> 
> Still the same situation, no NULL checks elsewhere in rte flow API.

I still think that it is wrong as explained.
Same is applicable to many review notes below.

> 
>>
>>> +		template = ops->pattern_template_create(dev, template_attr,
>>> +						     pattern, error);
>>> +		if (template == NULL)
>>> +			flow_err(port_id, -rte_errno, error);
>>> +		return template;
>>> +	}
>>> +	rte_flow_error_set(error, ENOTSUP,
>>> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +			   NULL, rte_strerror(ENOTSUP));
>>> +	return NULL;
>>> +}
>>> +
>>> +int
>>> +rte_flow_pattern_template_destroy(uint16_t port_id,
>>> +		struct rte_flow_pattern_template *pattern_template,
>>> +		struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>> +	if (likely(!!ops->pattern_template_destroy)) {
>>
>> IMHO we should return success here if pattern_template is NULL
> 
> Just like in rte_flow_destroy(), it is up to the PMD driver to decide.

Why? We must define behaviour in the case of NULL and guarantee
it.

>>> +		return flow_err(port_id,
>>> +				ops->pattern_template_destroy(dev,
>>> +							      pattern_template,
>>> +							      error),
>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> +
>>> +struct rte_flow_actions_template *
>>> +rte_flow_actions_template_create(uint16_t port_id,
>>> +			const struct rte_flow_actions_template_attr
>> *template_attr,
>>> +			const struct rte_flow_action actions[],
>>> +			const struct rte_flow_action masks[],
>>> +			struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +	struct rte_flow_actions_template *template;
>>> +
>>> +	if (unlikely(!ops))
>>> +		return NULL;
>>> +	if (likely(!!ops->actions_template_create)) {
>>
>> State checks?
>>
>> Check actions and masks vs NULL?
> 
> No, sorry.
> 
>>
>>> +		template = ops->actions_template_create(dev, template_attr,
>>> +							actions, masks, error);
>>> +		if (template == NULL)
>>> +			flow_err(port_id, -rte_errno, error);
>>> +		return template;
>>> +	}
>>> +	rte_flow_error_set(error, ENOTSUP,
>>> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +			   NULL, rte_strerror(ENOTSUP));
>>> +	return NULL;
>>> +}
>>> +
>>> +int
>>> +rte_flow_actions_template_destroy(uint16_t port_id,
>>> +			struct rte_flow_actions_template *actions_template,
>>> +			struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>> +	if (likely(!!ops->actions_template_destroy)) {
>>
>> IMHO we should return success here if actions_template is NULL
> 
> Just like in rte_flow_destroy(), it is up to the PMD driver to decide.

Same

>>
>>> +		return flow_err(port_id,
>>> +				ops->actions_template_destroy(dev,
>>> +							      actions_template,
>>> +							      error),
>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> +
>>> +struct rte_flow_template_table *
>>> +rte_flow_template_table_create(uint16_t port_id,
>>> +			const struct rte_flow_template_table_attr *table_attr,
>>> +			struct rte_flow_pattern_template
>> *pattern_templates[],
>>> +			uint8_t nb_pattern_templates,
>>> +			struct rte_flow_actions_template
>> *actions_templates[],
>>> +			uint8_t nb_actions_templates,
>>> +			struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +	struct rte_flow_template_table *table;
>>> +
>>> +	if (unlikely(!ops))
>>> +		return NULL;
>>> +	if (likely(!!ops->template_table_create)) {
>>
>> Argument sanity checks here. array NULL when size is not 0.
> 
> Hate to say no so many times, but I cannot help it.
> 
>>
>>> +		table = ops->template_table_create(dev, table_attr,
>>> +					pattern_templates,
>> nb_pattern_templates,
>>> +					actions_templates,
>> nb_actions_templates,
>>> +					error);
>>> +		if (table == NULL)
>>> +			flow_err(port_id, -rte_errno, error);
>>> +		return table;
>>> +	}
>>> +	rte_flow_error_set(error, ENOTSUP,
>>> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +			   NULL, rte_strerror(ENOTSUP));
>>> +	return NULL;
>>> +}
>>> +
>>> +int
>>> +rte_flow_template_table_destroy(uint16_t port_id,
>>> +				struct rte_flow_template_table
>> *template_table,
>>> +				struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>> +	if (likely(!!ops->template_table_destroy)) {
>>
>> Return success if template_table is NULL
> 
> Just like in rte_flow_destroy(), it is up to the PMD driver to decide.

Same

>   
>>> +		return flow_err(port_id,
>>> +				ops->template_table_destroy(dev,
>>> +							    template_table,
>>> +							    error),
>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>>> index 92be2a9a89..e87db5a540 100644
>>> --- a/lib/ethdev/rte_flow.h
>>> +++ b/lib/ethdev/rte_flow.h
>>> @@ -4961,6 +4961,266 @@ rte_flow_configure(uint16_t port_id,
>>>    		   const struct rte_flow_port_attr *port_attr,
>>>    		   struct rte_flow_error *error);
>>>
>>> +/**
>>> + * Opaque type returned after successful creation of pattern template.
>>> + * This handle can be used to manage the created pattern template.
>>> + */
>>> +struct rte_flow_pattern_template;
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Flow pattern template attributes.
>>> + */
>>> +__extension__
>>> +struct rte_flow_pattern_template_attr {
>>> +	/**
>>> +	 * Relaxed matching policy.
>>> +	 * - PMD may match only on items with mask member set and skip
>>> +	 * matching on protocol layers specified without any masks.
>>> +	 * - If not set, PMD will match on protocol layers
>>> +	 * specified without any masks as well.
>>> +	 * - Packet data must be stacked in the same order as the
>>> +	 * protocol layers to match inside packets, starting from the lowest.
>>> +	 */
>>> +	uint32_t relaxed_matching:1;
>>> +};
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Create pattern template.
>>
>> Create flow pattern template.
> 
> Ok.
> 
>>> + *
>>> + * The pattern template defines common matching fields without values.
>>> + * For example, matching on 5 tuple TCP flow, the template will be
>>> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
>>> + * while values for each rule will be set during the flow rule creation.
>>> + * The number and order of items in the template must be the same
>>> + * at the rule creation.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] template_attr
>>> + *   Pattern template attributes.
>>> + * @param[in] pattern
>>> + *   Pattern specification (list terminated by the END pattern item).
>>> + *   The spec member of an item is not used unless the end member is used.
>>
>> Interpretation of the pattern may depend on transfer vs non-transfer
>> rule to be used. It is essential information and we should provide it
>> when pattern template is created.
>>
>> The information is provided on table stage, but it is too late.
> 
> Why is it too late? The application knows which template goes to which table.
> And the pattern is generic to accommodate anything, the user just needs to put it
> into the right table.

Because it is more convenient to handle it when an individual
template is processed. Otherwise error reporting will be
complicated since it could be just one template which is
wrong.

Otherwise, I see no point in having driver callbacks for the
template creation API. I can do nothing here since I don't
have enough context. What's the problem with adding the
context?
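To keep the quoted 5-tuple example concrete, this is a sketch of what such a pattern template (masks only, values deferred to rule creation) could look like; the stub types here stand in for the real rte_flow_item definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-ins for the rte_flow item types -- illustration only. */
enum item_type { ITEM_ETH, ITEM_IPV4, ITEM_TCP, ITEM_END };
struct ipv4_mask { uint32_t src, dst; };
struct tcp_mask { uint16_t sport, dport; };
struct flow_item { enum item_type type; const void *mask; };

/*
 * The 5-tuple template from the text: ETH with no mask (matches the
 * layer only), IPv4 masking source/destination, TCP masking both ports.
 * The values are supplied later, at flow rule creation time.
 */
static const struct ipv4_mask ip_m = { UINT32_MAX, UINT32_MAX };
static const struct tcp_mask tcp_m = { UINT16_MAX, UINT16_MAX };

static const struct flow_item tmpl[] = {
	{ ITEM_ETH, NULL },	/* layer match only, no field masks */
	{ ITEM_IPV4, &ip_m },
	{ ITEM_TCP, &tcp_m },
	{ ITEM_END, NULL },
};
```

The number and order of items fixed here must then be preserved by every rule created from the template, per the API description above.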

> 
>>
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   Handle on success, NULL otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +struct rte_flow_pattern_template *
>>> +rte_flow_pattern_template_create(uint16_t port_id,
>>> +		const struct rte_flow_pattern_template_attr *template_attr,
>>> +		const struct rte_flow_item pattern[],
>>> +		struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Destroy pattern template.
>>
>> Destroy flow pattern template.
> 
> Ok.
> 
>>> + *
>>> + * This function may be called only when
>>> + * there are no more tables referencing this template.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] pattern_template
>>> + *   Handle of the template to be destroyed.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +int
>>> +rte_flow_pattern_template_destroy(uint16_t port_id,
>>> +		struct rte_flow_pattern_template *pattern_template,
>>> +		struct rte_flow_error *error);
>>> +
>>> +/**
>>> + * Opaque type returned after successful creation of actions template.
>>> + * This handle can be used to manage the created actions template.
>>> + */
>>> +struct rte_flow_actions_template;
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Flow actions template attributes.
>>> + */
>>> +struct rte_flow_actions_template_attr;
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Create actions template.
>>
>> Create flow rule actions template.
> 
> Yes, finally compensating for multiple no's.
> 
>>> + *
>>> + * The actions template holds a list of action types without values.
>>> + * For example, the template to change TCP ports is TCP(s_port + d_port),
>>> + * while values for each rule will be set during the flow rule creation.
>>> + * The number and order of actions in the template must be the same
>>> + * at the rule creation.
>>
>> Again, it highly depends on transfer vs non-transfer. Moreover,
>> application definitely know it. So, it should say if the action
>> is intended for transfer or non-transfer flow rule.
> 
> It is up to application to define which pattern it is going to use in different tables.

Same as above.

> 
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] template_attr
>>> + *   Template attributes.
>>> + * @param[in] actions
>>> + *   Associated actions (list terminated by the END action).
>>> + *   The spec member is only used if @p masks spec is non-zero.
>>> + * @param[in] masks
>>> + *   List of actions that marks which of the action's member is constant.
>>> + *   A mask has the same format as the corresponding action.
>>> + *   If the action field in @p masks is not 0,
>>> + *   the corresponding value in an action from @p actions will be the part
>>> + *   of the template and used in all flow rules.
>>> + *   The order of actions in @p masks is the same as in @p actions.
>>> + *   In case of indirect actions present in @p actions,
>>> + *   the actual action type should be present in @p mask.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   Handle on success, NULL otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +struct rte_flow_actions_template *
>>> +rte_flow_actions_template_create(uint16_t port_id,
>>> +		const struct rte_flow_actions_template_attr *template_attr,
>>> +		const struct rte_flow_action actions[],
>>> +		const struct rte_flow_action masks[],
>>> +		struct rte_flow_error *error);

[snip]
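The masks semantics documented above (a non-zero field in a mask action marks the corresponding field in the actions template as a constant shared by all rules) can be sketched like this; `set_tp` is a hypothetical stand-in for one action's configuration:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for one action's configuration (illustrative). */
struct set_tp { uint16_t port; };

/*
 * Documented rule: if the field in the mask action is non-zero, the
 * template's value is part of the template and used in all flow rules;
 * if zero, the value is taken from the rule at creation time.
 */
static uint16_t
effective_port(const struct set_tp *tmpl, const struct set_tp *mask,
	       const struct set_tp *rule)
{
	return mask->port ? tmpl->port : rule->port;
}
```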


* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-12  2:19         ` Alexander Kozyrev
  2022-02-12  9:25           ` Thomas Monjalon
@ 2022-02-16 13:34           ` Andrew Rybchenko
  2022-02-16 14:53             ` Ori Kam
  2022-02-16 15:15             ` Ori Kam
  1 sibling, 2 replies; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-16 13:34 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/12/22 05:19, Alexander Kozyrev wrote:
> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>> A new, faster, queue-based flow rules management mechanism is needed
>> for
>>> applications offloading rules inside the datapath. This asynchronous
>>> and lockless mechanism frees the CPU for further packet processing and
>>> reduces the performance impact of the flow rules creation/destruction
>>> on the datapath. Note that queues are not thread-safe and the queue
>>> should be accessed from the same thread for all queue operations.
>>> It is the responsibility of the app to sync the queue functions in case
>>> of multi-threaded access to the same queue.
>>>
>>> The rte_flow_q_flow_create() function enqueues a flow creation to the
>>> requested queue. It benefits from already configured resources and sets
>>> unique values on top of item and action templates. A flow rule is enqueued
>>> on the specified flow queue and offloaded asynchronously to the
>> hardware.
>>> The function returns immediately to spare CPU for further packet
>>> processing. The application must invoke the rte_flow_q_pull() function
>>> to complete the flow rule operation offloading, to clear the queue, and to
>>> receive the operation status. The rte_flow_q_flow_destroy() function
>>> enqueues a flow destruction to the requested queue.
>>>
>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>> Acked-by: Ori Kam <orika@nvidia.com>

[snip]

>>> +
>>> +- Available operation types: rule creation, rule destruction,
>>> +  indirect rule creation, indirect rule destruction, indirect rule update.
>>> +
>>> +- Operations may be reordered within a queue.
>>
>> Do we want to have barriers?
>> E.g. create rule, destroy the same rule -> reoder -> destroy fails, rule
>> lives forever.
> 
> The API design is crafted with throughput as the main goal in mind.
> We allow the user to enforce any ordering outside these functions.
> Another point is that not all PMDs/NICs will have this out-of-order execution.

Throughput is nice, but there are more important requirements
which must be satisfied before talking about performance.
Could you explain to me what I should do, based on which
information from the NIC, in order to solve the above problem?

>>> +
>>> +- Operations can be postponed and pushed to NIC in batches.
>>> +
>>> +- Results pulling must be done on time to avoid queue overflows.
>>
>> polling? (as libc poll() which checks status of file descriptors)
>> it is not pulling the door to open it :)
> 
> poll waits for some event on a file descriptor as its title says.
> And then user has to invoke read() to actually get any info from the fd.
> The point of our function is to return the result immediately, thus pulling.
> We had many names appearing in the thread for these functions.
> As we know, naming variables is the second hardest thing in programming.
> I wanted this pull for results pulling be a counterpart for the push for
> pushing the operations to a NIC. Another idea is pop/push pair, but they are
> more like for operations only, not for results.
> Having said that I'm at the point of accepting any name here.

I agree that it is hard to choose good naming.
Just want to say that polling is not always waiting.

poll - check the status of (a device), especially as part of a repeated 
cycle.

Here we're checking status of flow engine requests and yes,
finally in a repeated cycle.

[snip]

>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Queue operation attributes.
>>> + */
>>> +struct rte_flow_q_ops_attr {
>>> +	/**
>>> +	 * The user data that will be returned on the completion events.
>>> +	 */
>>> +	void *user_data;
>>
>> IMHO it must not be hidden in attrs. It is key information
>> which is used to understand the operation result. It should
>> be passed separately.
> 
> Maybe, on the other hand it is optional and may not be needed by an application.

I don't understand how it is possible. Without it the application
doesn't know the fate of its requests.

>>> +	 /**
>>> +	  * When set, the requested action will not be sent to the HW
>> immediately.
>>> +	  * The application must call the rte_flow_queue_push to actually
>> send it.
>>
>> Will the next operation without the attribute set implicitly push it?
>> Is it mandatory for the driver to respect it? Or is it just a possible
>> optimization hint?
> 
> Yes, it will be pushed with all the operations in a queue once the postpone is cleared.
> It is not mandatory to respect this bit; the PMD can use other optimization techniques.

Could you clarify it in the description.

>>
>>> +	  */
>>> +	uint32_t postpone:1;
>>> +};
[snip]
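The enqueue/push/pull flow and the user_data correlation discussed above can be sketched with a toy queue; the names here are illustrative stand-ins for the rte_flow_q_flow_create()/rte_flow_q_push()/rte_flow_q_pull() calls in this patch version:

```c
#include <assert.h>
#include <stddef.h>

#define QSZ 8

/*
 * Toy model of the queue semantics: postponed operations stay pending
 * until pushed; pull drains completions and hands back user_data so the
 * application can correlate results with its own state.
 */
struct op { void *user_data; int postpone; };

struct flow_q {
	struct op pending[QSZ];
	int n_pending;
	struct op done[QSZ];
	int n_done;
};

static int
q_enqueue(struct flow_q *q, void *user_data, int postpone)
{
	if (q->n_pending == QSZ)
		return -1;	/* queue full: results must be pulled first */
	q->pending[q->n_pending++] = (struct op){ user_data, postpone };
	if (!postpone)	/* no postpone: "sent to HW" right away in this model */
		q->done[q->n_done++] = q->pending[--q->n_pending];
	return 0;
}

static int
q_push(struct flow_q *q)
{
	int i;

	for (i = 0; i < q->n_pending; i++)	/* "HW" completes everything */
		q->done[q->n_done++] = q->pending[i];
	q->n_pending = 0;
	return 0;
}

static int
q_pull(struct flow_q *q, void **user_data, int max)
{
	int n = q->n_done < max ? q->n_done : max;
	int i;

	for (i = 0; i < n; i++)
		user_data[i] = q->done[i].user_data;
	for (i = n; i < q->n_done; i++)	/* keep any remaining completions */
		q->done[i - n] = q->done[i];
	q->n_done -= n;
	return n;	/* number of results pulled */
}
```

Note the queue-overflow case in q_enqueue(): it is the toy equivalent of the "results pulling must be done on time" requirement above.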


* RE: [PATCH v5 02/10] ethdev: add flow item/action templates
  2022-02-16 13:14           ` Andrew Rybchenko
@ 2022-02-16 14:18             ` Ori Kam
  2022-02-17 10:44               ` Andrew Rybchenko
  0 siblings, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-02-16 14:18 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Subject: Re: [PATCH v5 02/10] ethdev: add flow item/action templates
> 
> On 2/12/22 01:25, Alexander Kozyrev wrote:
> > On Fri, Feb 11, 2022 6:27 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
> >> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>> Treating every single flow rule as a completely independent and separate
> >>> entity negatively impacts the flow rules insertion rate. Oftentimes in an
> >>> application, many flow rules share a common structure (the same item mask
> >>> and/or action list) so they can be grouped and classified together.
> >>> This knowledge may be used as a source of optimization by a PMD/HW.
> >>>
> >>> The pattern template defines common matching fields (the item mask) without
> >>> values. The actions template holds a list of action types that will be used
> >>> together in the same rule. The specific values for items and actions will
> >>> be given only during the rule creation.
> >>>
> >>> A table combines pattern and actions templates along with shared flow rule
> >>> attributes (group ID, priority and traffic direction). This way a PMD/HW
> >>> can prepare all the resources needed for efficient flow rules creation in
> >>> the datapath. To avoid any hiccups due to memory reallocation, the maximum
> >>> number of flow rules is defined at the table creation time.
> >>>
> >>> The flow rule creation is done by selecting a table, a pattern template
> >>> and an actions template (which are bound to the table), and setting unique
> >>> values for the items and actions.
> >>>
> >>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >>> Acked-by: Ori Kam <orika@nvidia.com>
> 
> [snip]
> 

[Snip]

> >>> + *
> >>> + * The pattern template defines common matching fields without values.
> >>> + * For example, matching on 5 tuple TCP flow, the template will be
> >>> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> >>> + * while values for each rule will be set during the flow rule creation.
> >>> + * The number and order of items in the template must be the same
> >>> + * at the rule creation.
> >>> + *
> >>> + * @param port_id
> >>> + *   Port identifier of Ethernet device.
> >>> + * @param[in] template_attr
> >>> + *   Pattern template attributes.
> >>> + * @param[in] pattern
> >>> + *   Pattern specification (list terminated by the END pattern item).
> >>> + *   The spec member of an item is not used unless the end member is used.
> >>
> >> Interpretation of the pattern may depend on transfer vs non-transfer
> >> rule to be used. It is essential information and we should provide it
> >> when pattern template is created.
> >>
> >> The information is provided on table stage, but it is too late.
> >
> > Why is it too late? Application knows which template goes to which table.
> > And the pattern is generic to accommodate anything, the user just needs to put it
> > into the right table.
> 
> Because it is more convenient to handle it when individual
> template is processed. Otherwise error reporting will be
> complicated since it could be just one template which is
> wrong.
> 
> Otherwise, I see no point in having driver callbacks in the
> template creation API. I can do nothing here since
> I don't have enough context. What's the problem with adding
> the context?
> 

The idea is that the same template can be used in different
domains (ingress/egress and transfer).
Maybe we can add on which domains this template is expected to be used.
What do you think?
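As an illustration of the split described in the quoted text (the mask lives in the pattern template, the values come at rule creation, and the number/order of items must match), here is a toy 5-tuple matcher. All structure and function names below are made up for this sketch; they are not part of the rte_flow API:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a pattern template: the mask fixes WHICH 5-tuple
 * fields participate in matching; the per-rule values are supplied
 * only at rule creation time. Names are illustrative only. */
struct tuple5 {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

struct toy_pattern_template {
    struct tuple5 mask; /* non-zero bits => field is matched */
};

/* A "rule" binds per-rule values to a template; matching applies the
 * template mask to both the packet and the rule values. */
struct toy_rule {
    const struct toy_pattern_template *tmpl;
    struct tuple5 value;
};

static int toy_rule_match(const struct toy_rule *r, const struct tuple5 *pkt)
{
    const struct tuple5 *m = &r->tmpl->mask;
    return ((pkt->src_ip & m->src_ip) == (r->value.src_ip & m->src_ip)) &&
           ((pkt->dst_ip & m->dst_ip) == (r->value.dst_ip & m->dst_ip)) &&
           ((pkt->src_port & m->src_port) == (r->value.src_port & m->src_port)) &&
           ((pkt->dst_port & m->dst_port) == (r->value.dst_port & m->dst_port));
}
```

Under this model, one template (the mask) is shared by many rules that differ only in values, which is exactly what lets a PMD/HW pre-build the matching structures once per template.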

> >
> >>
> >>> + * @param[out] error
> >>> + *   Perform verbose error reporting if not NULL.
> >>> + *   PMDs initialize this structure in case of error only.
> >>> + *
> >>> + * @return
> >>> + *   Handle on success, NULL otherwise and rte_errno is set.
> >>> + */
> >>> +__rte_experimental
> >>> +struct rte_flow_pattern_template *
> >>> +rte_flow_pattern_template_create(uint16_t port_id,
> >>> +		const struct rte_flow_pattern_template_attr *template_attr,
> >>> +		const struct rte_flow_item pattern[],
> >>> +		struct rte_flow_error *error);
> >>> +
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Destroy pattern template.
> >>
> >> Destroy flow pattern template.
> >
> > Ok.
> >
> >>> + *
> >>> + * This function may be called only when
> >>> + * there are no more tables referencing this template.
> >>> + *
> >>> + * @param port_id
> >>> + *   Port identifier of Ethernet device.
> >>> + * @param[in] pattern_template
> >>> + *   Handle of the template to be destroyed.
> >>> + * @param[out] error
> >>> + *   Perform verbose error reporting if not NULL.
> >>> + *   PMDs initialize this structure in case of error only.
> >>> + *
> >>> + * @return
> >>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> >>> + */
> >>> +__rte_experimental
> >>> +int
> >>> +rte_flow_pattern_template_destroy(uint16_t port_id,
> >>> +		struct rte_flow_pattern_template *pattern_template,
> >>> +		struct rte_flow_error *error);
> >>> +
> >>> +/**
> >>> + * Opaque type returned after successful creation of actions template.
> >>> + * This handle can be used to manage the created actions template.
> >>> + */
> >>> +struct rte_flow_actions_template;
> >>> +
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Flow actions template attributes.
> >>> + */
> >>> +struct rte_flow_actions_template_attr;
> >>> +
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Create actions template.
> >>
> >> Create flow rule actions template.
> >
> > Yes, finally compensating for multiple no's.
> >
> >>> + *
> >>> + * The actions template holds a list of action types without values.
> >>> + * For example, the template to change TCP ports is TCP(s_port + d_port),
> >>> + * while values for each rule will be set during the flow rule creation.
> >>> + * The number and order of actions in the template must be the same
> >>> + * at the rule creation.
> >>
> >> Again, it highly depends on transfer vs non-transfer. Moreover,
> >> application definitely know it. So, it should say if the action
> >> is intended for transfer or non-transfer flow rule.
> >
> > It is up to application to define which pattern it is going to use in different tables.
> 
> Same as above.
> 
Same comment as above, what do you think?

> >
> >>> + *
> >>> + * @param port_id
> >>> + *   Port identifier of Ethernet device.
> >>> + * @param[in] template_attr
> >>> + *   Template attributes.
> >>> + * @param[in] actions
> >>> + *   Associated actions (list terminated by the END action).
> >>> + *   The spec member is only used if @p masks spec is non-zero.
> >>> + * @param[in] masks
> >>> + *   List of actions that marks which of the action's member is constant.
> >>> + *   A mask has the same format as the corresponding action.
> >>> + *   If the action field in @p masks is not 0,
> >>> + *   the corresponding value in an action from @p actions will be the part
> >>> + *   of the template and used in all flow rules.
> >>> + *   The order of actions in @p masks is the same as in @p actions.
> >>> + *   In case of indirect actions present in @p actions,
> >>> + *   the actual action type should be present in @p mask.
> >>> + * @param[out] error
> >>> + *   Perform verbose error reporting if not NULL.
> >>> + *   PMDs initialize this structure in case of error only.
> >>> + *
> >>> + * @return
> >>> + *   Handle on success, NULL otherwise and rte_errno is set.
> >>> + */
> >>> +__rte_experimental
> >>> +struct rte_flow_actions_template *
> >>> +rte_flow_actions_template_create(uint16_t port_id,
> >>> +		const struct rte_flow_actions_template_attr *template_attr,
> >>> +		const struct rte_flow_action actions[],
> >>> +		const struct rte_flow_action masks[],
> >>> +		struct rte_flow_error *error);
> 
> [snip]
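The @p masks semantics documented above (a non-zero field in the mask action makes the corresponding template value a constant shared by all rules; a zero field is filled in per rule) can be sketched with a single "set TCP ports" action. The names below are hypothetical, not rte_flow API:

```c
#include <assert.h>
#include <stdint.h>

/* Toy "modify TCP ports" action used to illustrate the actions
 * template mask semantics. Illustrative names only. */
struct toy_set_tp { uint16_t src_port, dst_port; };

/* Resolve the effective action for one rule: a field whose mask is
 * non-zero is frozen to the template value; a zero-mask field takes
 * the value supplied at rule creation. */
static struct toy_set_tp
toy_resolve_action(struct toy_set_tp tmpl_val, struct toy_set_tp tmpl_mask,
                   struct toy_set_tp rule_val)
{
    struct toy_set_tp out;
    out.src_port = tmpl_mask.src_port ? tmpl_val.src_port : rule_val.src_port;
    out.dst_port = tmpl_mask.dst_port ? tmpl_val.dst_port : rule_val.dst_port;
    return out;
}
```

This is why the mask "has the same format as the corresponding action": it selects, field by field, what becomes part of the template versus what remains per-rule input.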

Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-16 13:34           ` Andrew Rybchenko
@ 2022-02-16 14:53             ` Ori Kam
  2022-02-17 10:52               ` Andrew Rybchenko
  2022-02-16 15:15             ` Ori Kam
  1 sibling, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-02-16 14:53 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, February 16, 2022 3:34 PM
> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> 
> On 2/12/22 05:19, Alexander Kozyrev wrote:
> > On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> >> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>> A new, faster, queue-based flow rules management mechanism is needed
> >> for
> >>> applications offloading rules inside the datapath. This asynchronous
> >>> and lockless mechanism frees the CPU for further packet processing and
> >>> reduces the performance impact of the flow rules creation/destruction
> >>> on the datapath. Note that queues are not thread-safe and the queue
> >>> should be accessed from the same thread for all queue operations.
> >>> It is the responsibility of the app to sync the queue functions in case
> >>> of multi-threaded access to the same queue.
> >>>
> >>> The rte_flow_q_flow_create() function enqueues a flow creation to the
> >>> requested queue. It benefits from already configured resources and sets
> >>> unique values on top of item and action templates. A flow rule is enqueued
> >>> on the specified flow queue and offloaded asynchronously to the
> >> hardware.
> >>> The function returns immediately to spare CPU for further packet
> >>> processing. The application must invoke the rte_flow_q_pull() function
> >>> to complete the flow rule operation offloading, to clear the queue, and to
> >>> receive the operation status. The rte_flow_q_flow_destroy() function
> >>> enqueues a flow destruction to the requested queue.
> >>>
> >>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >>> Acked-by: Ori Kam <orika@nvidia.com>
> 
> [snip]
> 
> >>> +
> >>> +- Available operation types: rule creation, rule destruction,
> >>> +  indirect rule creation, indirect rule destruction, indirect rule update.
> >>> +
> >>> +- Operations may be reordered within a queue.
> >>
> >> Do we want to have barriers?
> >> E.g. create rule, destroy the same rule -> reorder -> destroy fails, rule
> >> lives forever.
> >
> > API design is crafted with throughput as the main goal in mind.
> > We allow the user to enforce any ordering outside these functions.
> > Another point is that not all PMDs/NICs will have this out-of-order execution.
> 
> Throughput is nice, but there are more important requirements
> which must be satisfied before talking about performance.
> Could you explain to me what I should do, based on which
> information from the NIC, in order to solve the above problem?
> 

The idea is that if the application has a dependency between rules or rule
operations, it should wait for the completion of the operation before sending
the dependent operation. In the example you provided above, according to the
documentation, the application should wait for the completion of the flow
creation before destroying it.
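The create-then-wait-then-destroy contract described here can be illustrated with a toy completion queue in which results carry the caller's user_data. Everything below (ring, function names, single-producer assumption) is a simplified sketch, not the rte_flow queue implementation:

```c
#include <assert.h>

/* Toy flow queue: the app enqueues operations, then pulls completed
 * results (tagged with its user_data) before submitting a dependent
 * operation. Illustrative names only; here every op completes on pull. */
enum toy_op { TOY_CREATE, TOY_DESTROY };

struct toy_result { enum toy_op op; void *user_data; };

#define TOY_QDEPTH 8
static struct toy_result toy_q[TOY_QDEPTH];
static int toy_head, toy_tail;

static int toy_enqueue(enum toy_op op, void *user_data)
{
    if ((toy_tail + 1) % TOY_QDEPTH == toy_head)
        return -1; /* queue full: the app pulled too rarely */
    toy_q[toy_tail].op = op;
    toy_q[toy_tail].user_data = user_data;
    toy_tail = (toy_tail + 1) % TOY_QDEPTH;
    return 0;
}

/* Pull at most n completed results; frees queue space as it goes. */
static int toy_pull(struct toy_result *res, int n)
{
    int cnt = 0;
    while (cnt < n && toy_head != toy_tail) {
        res[cnt++] = toy_q[toy_head];
        toy_head = (toy_head + 1) % TOY_QDEPTH;
    }
    return cnt;
}
```

The dependent-destroy pattern is then: enqueue the create, pull until a result with the matching user_data appears, and only then enqueue the destroy.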

> >>> +
> >>> +- Operations can be postponed and pushed to NIC in batches.
> >>> +
> >>> +- Results pulling must be done on time to avoid queue overflows.
> >>
> >> polling? (as libc poll() which checks status of file descriptors)
> >> it is not pulling the door to open it :)
> >
> > poll waits for some event on a file descriptor as its title says.
> > And then user has to invoke read() to actually get any info from the fd.
> > The point of our function is to return the result immediately, thus pulling.
> > We had many names appearing in the thread for these functions.
> > As we know, naming variables is the second hardest thing in programming.
> > I wanted this pull for results pulling be a counterpart for the push for
> > pushing the operations to a NIC. Another idea is pop/push pair, but they are
> > more like for operations only, not for results.
> > Having said that I'm at the point of accepting any name here.
> 
> I agree that it is hard to choose good naming.
> Just want to say that polling is not always waiting.
> 
> poll - check the status of (a device), especially as part of a repeated
> cycle.
> 
> Here we're checking status of flow engine requests and yes,
> finally in a repeated cycle.
> 
> [snip]
> 
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Queue operation attributes.
> >>> + */
> >>> +struct rte_flow_q_ops_attr {
> >>> +	/**
> >>> +	 * The user data that will be returned on the completion events.
> >>> +	 */
> >>> +	void *user_data;
> >>
> >> IMHO it must not be hiddne in attrs. It is a key information
> >> which is used to understand the opration result. It should
> >> be passed separately.
> >
> > Maybe, on the other hand it is optional and may not be needed by an application.
> 
> I don't understand how it is possible. Without it application
> don't know fate of its requests.
> 
IMHO, since user_data should appear in all the related operation APIs
along with the attr, splitting out the user_data would just add an extra
parameter to each function call. Since we have a number of functions and will
add more in the future, I think it is best to keep it in this location.

> >>> +	 /**
> >>> +	  * When set, the requested action will not be sent to the HW
> >> immediately.
> >>> +	  * The application must call the rte_flow_queue_push to actually
> >> send it.
> >>
> >> Will the next operation without the attribute set implicitly push it?
> >> Is it mandatory for the driver to respect it? Or is it just a possible
> >> optimization hint?
> >
> > Yes, it will be pushed with all the operations in a queue once the postpone is cleared.
> > It is not mandatory to respect this bit; the PMD can use other optimization techniques.
> 
> Could you clarify it in the description.
> 
> >>
> >>> +	  */
> >>> +	uint32_t postpone:1;
> >>> +};
> [snip]

Best,
Ori
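The postpone semantics debated above (postponed operations stay queued until a push; an operation without the postpone bit implies a push of the whole batch) can be sketched in a few lines. Hypothetical names, not the rte_flow API, and ignoring the point that a PMD is free not to honor the hint:

```c
#include <assert.h>

/* Toy model of postpone/push batching. */
static int toy_pending;   /* operations buffered in the SW queue */
static int toy_submitted; /* operations handed to the "HW" */

static void toy_op_enqueue(int postpone)
{
    toy_pending++;
    if (!postpone) { /* a non-postponed op flushes the batch */
        toy_submitted += toy_pending;
        toy_pending = 0;
    }
}

/* Explicit push: submit everything accumulated so far. */
static void toy_push(void)
{
    toy_submitted += toy_pending;
    toy_pending = 0;
}
```

Batching like this is the point of the postpone bit: several rule operations can be written to the hardware in one doorbell/DMA transaction instead of one per call.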

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-16 13:34           ` Andrew Rybchenko
  2022-02-16 14:53             ` Ori Kam
@ 2022-02-16 15:15             ` Ori Kam
  2022-02-17 11:10               ` Andrew Rybchenko
  1 sibling, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-02-16 15:15 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

I missed some comments, PSB.

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> 
> On 2/12/22 05:19, Alexander Kozyrev wrote:
> > On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> >> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>> A new, faster, queue-based flow rules management mechanism is needed

[Snip]


> >>> +
> >>> +- Operations can be postponed and pushed to NIC in batches.
> >>> +
> >>> +- Results pulling must be done on time to avoid queue overflows.
> >>
> >> polling? (as libc poll() which checks status of file descriptors)
> >> it is not pulling the door to open it :)
> >
> > poll waits for some event on a file descriptor as its title says.
> > And then user has to invoke read() to actually get any info from the fd.
> > The point of our function is to return the result immediately, thus pulling.
> > We had many names appearing in the thread for these functions.
> > As we know, naming variables is the second hardest thing in programming.
> > I wanted this pull for results pulling be a counterpart for the push for
> > pushing the operations to a NIC. Another idea is pop/push pair, but they are
> > more like for operations only, not for results.
> > Having said that I'm at the point of accepting any name here.
> 
> I agree that it is hard to choose good naming.
> Just want to say that polling is not always waiting.
> 
> poll - check the status of (a device), especially as part of a repeated
> cycle.
> 
> Here we're checking status of flow engine requests and yes,
> finally in a repeated cycle.
> 
I think the best name would be dequeue, since it means that
the calling app gets back info and also frees space in the queue.
My second option is pull, since again it implies that we are getting back
something from the queue and not just waiting for an event.

Best,
Ori


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-16 13:03           ` Andrew Rybchenko
@ 2022-02-16 22:17             ` Alexander Kozyrev
  2022-02-17 10:35               ` Andrew Rybchenko
  0 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-16 22:17 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On Wed, Feb 16, 2022 8:03 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> On 2/11/22 21:47, Alexander Kozyrev wrote:
> > On Friday, February 11, 2022 5:17 Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru> wrote:
> >> Sent: Friday, February 11, 2022 5:17
> >> To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org
> >> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL)
> >> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
> ferruh.yigit@intel.com;
> >> mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com;
> jerinj@marvell.com;
> >> ajit.khaparde@broadcom.com; bruce.richardson@intel.com
> >> Subject: Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration
> hints
> >>
> >> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>> The flow rules creation/destruction at a large scale incurs a performance
> >>> penalty and may negatively impact the packet processing when used
> >>> as part of the datapath logic. This is mainly because software/hardware
> >>> resources are allocated and prepared during the flow rule creation.
> >>>
> >>> In order to optimize the insertion rate, PMD may use some hints
> provided
> >>> by the application at the initialization phase. The rte_flow_configure()
> >>> function allows to pre-allocate all the needed resources beforehand.
> >>> These resources can be used at a later stage without costly allocations.
> >>> Every PMD may use only the subset of hints and ignore unused ones or
> >>> fail in case the requested configuration is not supported.
> >>>
> >>> The rte_flow_info_get() is available to retrieve the information about
> >>> supported pre-configurable resources. Both these functions must be
> called
> >>> before any other usage of the flow API engine.
> >>>
> >>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >>> Acked-by: Ori Kam <orika@nvidia.com>
> 
> [snip]
> 
> >>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> >>> index a93f68abbc..66614ae29b 100644
> >>> --- a/lib/ethdev/rte_flow.c
> >>> +++ b/lib/ethdev/rte_flow.c
> >>> @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
> >>>    	ret = ops->flex_item_release(dev, handle, error);
> >>>    	return flow_err(port_id, ret, error);
> >>>    }
> >>> +
> >>> +int
> >>> +rte_flow_info_get(uint16_t port_id,
> >>> +		  struct rte_flow_port_info *port_info,
> >>> +		  struct rte_flow_error *error)
> >>> +{
> >>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> >>> +
> >>> +	if (unlikely(!ops))
> >>> +		return -rte_errno;
> >>> +	if (likely(!!ops->info_get)) {
> >>
> >> expected ethdev state must be validated. Just configured?
> >>
> >>> +		return flow_err(port_id,
> >>> +				ops->info_get(dev, port_info, error),
> >>
> >> port_info must be checked vs NULL
> >
> > We don’t have any NULL checks for parameters in the whole ret flow API
> library.
> > See rte_flow_create() for example. attributes, pattern and actions are
> passed to PMD unchecked.
> 
> IMHO it is hardly a good reason to have no such check here.
> The API is pure control path. So, it must validate all input
> arguments and it is better to do it in a generic place.

Agree, I have no objections to introducing these validation checks on the control path.
My only concern is data-path performance, so I'm reluctant to add them to the
rte_flow_q_create/destroy functions. But let's add NULL checks to the configuration
routines, ok?
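The agreement above (validate arguments once in the generic control-path layer, not in each PMD and not on the datapath) could look roughly like this. Structure and function names are simplified stand-ins for the real rte_flow_configure() path:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Sketch of generic control-path validation: the library wrapper
 * rejects NULL configuration arguments before dispatching to the
 * driver callback. Hypothetical, simplified signatures. */
struct toy_port_attr { unsigned int nb_counters; };

static int toy_flow_configure(const struct toy_port_attr *attr)
{
    if (attr == NULL)
        return -EINVAL; /* checked once, in the generic layer */
    /* ... here the real API would also validate the ethdev state
     * (configured, not started) and then call ops->configure() ... */
    return 0;
}
```

Since configuration runs once at startup, these branches cost nothing where it matters, unlike the per-packet-rate enqueue functions.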

> >>> +				error);
> >>> +	}
> >>> +	return rte_flow_error_set(error, ENOTSUP,
> >>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >>> +				  NULL, rte_strerror(ENOTSUP));
> >>> +}
> >>> +
> >>> +int
> >>> +rte_flow_configure(uint16_t port_id,
> >>> +		   const struct rte_flow_port_attr *port_attr,
> >>> +		   struct rte_flow_error *error)
> >>> +{
> >>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> >>> +
> >>> +	if (unlikely(!ops))
> >>> +		return -rte_errno;
> >>> +	if (likely(!!ops->configure)) {
> >>
> >> The API must validate ethdev state. configured and not started?
> > Again, we have no such validation for any rte flow API today.
> 
> Same here. If documentation defines in which state the API
> should be called, generic code must guarantee it.

Ok, as long as it stays in the configuration phase only.

> >>
> >>> +		return flow_err(port_id,
> >>> +				ops->configure(dev, port_attr, error),
> >>
> >> port_attr must be checked vs NULL
> > Same.
> >
> >>> +				error);
> >>> +	}
> >>> +	return rte_flow_error_set(error, ENOTSUP,
> >>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >>> +				  NULL, rte_strerror(ENOTSUP));
> >>> +}
> >>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> >>> index 1031fb246b..92be2a9a89 100644
> >>> --- a/lib/ethdev/rte_flow.h
> >>> +++ b/lib/ethdev/rte_flow.h
> >>> @@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t
> port_id,
> >>>    			   const struct rte_flow_item_flex_handle *handle,
> >>>    			   struct rte_flow_error *error);
> >>>
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Information about available pre-configurable resources.
> >>> + * The zero value means a resource cannot be pre-allocated.
> >>> + *
> >>> + */
> >>> +struct rte_flow_port_info {
> >>> +	/**
> >>> +	 * Number of pre-configurable counter actions.
> >>> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> >>> +	 */
> >>> +	uint32_t nb_counters;
> >>
> >> Name says that it is a number of counters, but description
> >> says that it is about actions.
> >> Also I don't understand what does "pre-configurable" mean.
> >> Isn't it a maximum number of available counters?
> >> If no, how can I find a maximum?
> > It is the number of pre-allocated and pre-configured actions.
> > How they are pre-configured is up to the PMD driver.
> > But let's change to "pre-configured" everywhere.
> > Configuration includes some memory allocation anyway.
> 
> Sorry, but I still don't understand. I guess HW has
> a hard limit on a number of counters. How can I get
> the information?

Sorry for not being clear. These are resource/object limitations.
It may be the hard HW limit on the number of counter objects, for example.
Or, as another example, the system has little memory and the NIC is constrained
in its attempt to create these counter objects. In any case, the info_get() API
should return the limit to the user.
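The contract being described (rte_flow_info_get() reports the limits, and the attributes passed to rte_flow_configure() must not exceed them) amounts to a simple per-field check. The toy structures below mirror rte_flow_port_info/rte_flow_port_attr but are only an illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Toy mirrors of the info/attr structures from the patch. */
struct toy_port_info { uint32_t nb_counters, nb_aging_flows, nb_meters; };
struct toy_port_attr { uint32_t nb_counters, nb_aging_flows, nb_meters; };

/* Reject a configuration request that asks for more pre-configured
 * resources than the port reported as available. */
static int toy_check_attr(const struct toy_port_info *info,
                          const struct toy_port_attr *attr)
{
    if (attr->nb_counters > info->nb_counters ||
        attr->nb_aging_flows > info->nb_aging_flows ||
        attr->nb_meters > info->nb_meters)
        return -EINVAL;
    return 0;
}
```

A zero in the info structure then naturally means "cannot be pre-allocated at all", since any non-zero request would exceed it.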

> >>
> >>> +	/**
> >>> +	 * Number of pre-configurable aging flows actions.
> >>> +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> >>> +	 */
> >>> +	uint32_t nb_aging_flows;
> >>
> >> Same
> > Ditto.
> >
> >>> +	/**
> >>> +	 * Number of pre-configurable traffic metering actions.
> >>> +	 * @see RTE_FLOW_ACTION_TYPE_METER
> >>> +	 */
> >>> +	uint32_t nb_meters;
> >>
> >> Same
> > Ditto.
> >
> >>> +};
> >>> +
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Retrieve configuration attributes supported by the port.
> >>
> >> Description should be a bit more flow API aware.
> >> Right now it sounds too generic.
> > Ok, how about
> > "Get information about flow engine pre-configurable resources."
> >
> >>> + *
> >>> + * @param port_id
> >>> + *   Port identifier of Ethernet device.
> >>> + * @param[out] port_info
> >>> + *   A pointer to a structure of type *rte_flow_port_info*
> >>> + *   to be filled with the contextual information of the port.
> >>> + * @param[out] error
> >>> + *   Perform verbose error reporting if not NULL.
> >>> + *   PMDs initialize this structure in case of error only.
> >>> + *
> >>> + * @return
> >>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> >>> + */
> >>> +__rte_experimental
> >>> +int
> >>> +rte_flow_info_get(uint16_t port_id,
> >>> +		  struct rte_flow_port_info *port_info,
> >>> +		  struct rte_flow_error *error);
> >>> +
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Resource pre-allocation and pre-configuration settings.
> >>
> >> What is the difference between pre-allocation and pre-configuration?
> >> Why are both mentioned above, but just pre-configured actions are
> >> mentioned below?
> > Please see answer to this question above.
> >
> >>> + * The zero value means on demand resource allocations only.
> >>> + *
> >>> + */
> >>> +struct rte_flow_port_attr {
> >>> +	/**
> >>> +	 * Number of counter actions pre-configured.
> >>> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> >>> +	 */
> >>> +	uint32_t nb_counters;
> >>> +	/**
> >>> +	 * Number of aging flows actions pre-configured.
> >>> +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> >>> +	 */
> >>> +	uint32_t nb_aging_flows;
> >>> +	/**
> >>> +	 * Number of traffic metering actions pre-configured.
> >>> +	 * @see RTE_FLOW_ACTION_TYPE_METER
> >>> +	 */
> >>> +	uint32_t nb_meters;
> >>> +};
> >>> +
> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Configure the port's flow API engine.
> >>> + *
> >>> + * This API can only be invoked before the application
> >>> + * starts using the rest of the flow library functions.
> >>> + *
> >>> + * The API can be invoked multiple times to change the
> >>> + * settings. The port, however, may reject the changes.
> >>> + *
> >>> + * Parameters in configuration attributes must not exceed
> >>> + * numbers of resources returned by the rte_flow_info_get API.
> >>> + *
> >>> + * @param port_id
> >>> + *   Port identifier of Ethernet device.
> >>> + * @param[in] port_attr
> >>> + *   Port configuration attributes.
> >>> + * @param[out] error
> >>> + *   Perform verbose error reporting if not NULL.
> >>> + *   PMDs initialize this structure in case of error only.
> >>> + *
> >>> + * @return
> >>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> >>> + */
> >>> +__rte_experimental
> >>> +int
> >>> +rte_flow_configure(uint16_t port_id,
> >>> +		   const struct rte_flow_port_attr *port_attr,
> >>> +		   struct rte_flow_error *error);
> >>> +
> >>>    #ifdef __cplusplus
> >>>    }
> >>>    #endif
> 
> [snip]

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-12  9:25           ` Thomas Monjalon
@ 2022-02-16 22:49             ` Alexander Kozyrev
  2022-02-17  8:18               ` Thomas Monjalon
  0 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-16 22:49 UTC (permalink / raw)
  To: NBU-Contact-Thomas Monjalon (EXTERNAL), Andrew Rybchenko, dev
  Cc: Ori Kam, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On Sat, Feb 12, 2022 4:25 Thomas Monjalon <thomas@monjalon.net> wrote:
> 12/02/2022 03:19, Alexander Kozyrev:
> > On Fri, Feb 11, 2022 7:42 Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>:
> > > On 2/11/22 05:26, Alexander Kozyrev wrote:
> > > > +__rte_experimental
> > > > +struct rte_flow *
> > > > +rte_flow_q_flow_create(uint16_t port_id,
> > >
> > > flow_q_flow does not sound like good naming, consider:
> > > rte_flow_q_rule_create() is
> <subsystem>_<subtype>_<object>_<action>
> >
> > More like:
> > <subsystem>_<subtype>_<object>_<action>
> >  <rte>_<flow>_<rule_create_operation>_<queue>
> > Which is pretty lengthy name as for me.
> 
> Naming :)
> This one may be improved I think.
> What is the problem with replacing "flow" with "rule"?
> Is it the right meaning?

I've got a better naming for all the functions. What do you think about this?
Asynchronous rte_flow_async_create and rte_flow_async_destroy functions
as an extension of the synchronous rte_flow_create/rte_flow_destroy API.
The same is true for asynchronous API for indirect actions:
	rte_flow_async_action_handle_create;
	rte_flow_async_action_handle_destroy;
	rte_flow_async_action_handle_update;
And rte_flow_push/rte_flow_pull without "_q_" part to make them clearer.
And yes, I still think pull is better than poll, since we are actually retrieving
something, not just checking whether there is something we can retrieve.
Let me know if we can agree on this scheme. It looks pretty close to the existing one.

> > > > +__rte_experimental
> > > > +struct rte_flow_action_handle *
> > > > +rte_flow_q_action_handle_create(uint16_t port_id,
> > > > +		uint32_t queue_id,
> > > > +		const struct rte_flow_q_ops_attr *q_ops_attr,
> > > > +		const struct rte_flow_indir_action_conf *indir_action_conf,
> > > > +		const struct rte_flow_action *action,
> > >
> > > I don't understand why it differs so much from rule creation.
> > > Why is action template not used?
> > > IMHO indirect actions should be dropped from the patch
> > > and added separately since it is a separate feature.
> >
> > I agree, they deserve a separate patch since they are rather resource
> > creations.
> > But, I'm afraid it is too late for RC1.
> 
> I think it could be done for RC2.

No problem, I'll create a separate commit for indirect actions.

> 
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > > + *
> > > > + * Pull a rte flow operation.
> > > > + * The application must invoke this function in order to complete
> > > > + * the flow rule offloading and to retrieve the flow rule operation
> status.
> > > > + *
> > > > + * @param port_id
> > > > + *   Port identifier of Ethernet device.
> > > > + * @param queue_id
> > > > + *   Flow queue which is used to pull the operation.
> > > > + * @param[out] res
> > > > + *   Array of results that will be set.
> > > > + * @param[in] n_res
> > > > + *   Maximum number of results that can be returned.
> > > > + *   This value is equal to the size of the res array.
> > > > + * @param[out] error
> > > > + *   Perform verbose error reporting if not NULL.
> > > > + *   PMDs initialize this structure in case of error only.
> > > > + *
> > > > + * @return
> > > > + *   Number of results that were pulled,
> > > > + *   a negative errno value otherwise and rte_errno is set.
> > >
> > > Don't we want to define negative error code meaning?
> >
> > They are all standard, don't think we need another copy-paste here.
> 
> That's an API, it needs to be all explicit.
> I missed it before, we should add the error codes here.

I'll add them if you want to see them listed.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-16 22:49             ` Alexander Kozyrev
@ 2022-02-17  8:18               ` Thomas Monjalon
  2022-02-17 11:02                 ` Andrew Rybchenko
  0 siblings, 1 reply; 220+ messages in thread
From: Thomas Monjalon @ 2022-02-17  8:18 UTC (permalink / raw)
  To: Ori Kam
  Cc: Andrew Rybchenko, dev, ivan.malov, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson, Alexander Kozyrev

16/02/2022 23:49, Alexander Kozyrev:
> On Sat, Feb 12, 2022 4:25 Thomas Monjalon <thomas@monjalon.net> wrote:
> > 12/02/2022 03:19, Alexander Kozyrev:
> > > On Fri, Feb 11, 2022 7:42 Andrew Rybchenko
> > <andrew.rybchenko@oktetlabs.ru>:
> > > > On 2/11/22 05:26, Alexander Kozyrev wrote:
> > > > > +__rte_experimental
> > > > > +struct rte_flow *
> > > > > +rte_flow_q_flow_create(uint16_t port_id,
> > > >
> > > > flow_q_flow does not sound like good naming, consider:
> > > > rte_flow_q_rule_create() is
> > <subsystem>_<subtype>_<object>_<action>
> > >
> > > More like:
> > > <subsystem>_<subtype>_<object>_<action>
> > >  <rte>_<flow>_<rule_create_operation>_<queue>
> > > Which is pretty lengthy name as for me.
> > 
> > Naming :)
> > This one may be improved I think.
> > What is the problem with replacing "flow" with "rule"?
> > Is it the right meaning?
> 
> I've got a better naming for all the functions. What do you think about this?
> Asynchronous rte_flow_async_create and rte_flow_async_destroy functions
> as an extension of synchronous rte_flow_create/ rte_flow_destroy API.
> The same is true for asynchronous API for indirect actions:
> 	rte_flow_async_action_handle_create;
> 	rte_flow_async_action_handle_destroy;
> 	rte_flow_async_action_handle_update;
> And rte_flow_push/rte_flow_pull without "_q_" part to make them clearer.
> And yes, I'm still thinking pull is better than poll since we are actually retrieving
> something, not just checking if it has something we can retrieve.
> Let me know if we can agree on this scheme? Look pretty close to existing one.

I like the "async" word.

In summary, you propose this change for the functions of this patch:

	rte_flow_q_flow_create           -> rte_flow_async_create
	rte_flow_q_flow_destroy          -> rte_flow_async_destroy
	rte_flow_q_action_handle_create  -> rte_flow_async_action_handle_create
	rte_flow_q_action_handle_destroy -> rte_flow_async_action_handle_destroy
	rte_flow_q_action_handle_update  -> rte_flow_async_action_handle_update
	rte_flow_q_push                  -> rte_flow_push
	rte_flow_q_pull                  -> rte_flow_pull

They are close to the existing synchronous function names:

	rte_flow_create
	rte_flow_destroy
	rte_flow_action_handle_create
	rte_flow_action_handle_destroy
	rte_flow_action_handle_update

I think it is a good naming scheme.



^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-16 22:17             ` Alexander Kozyrev
@ 2022-02-17 10:35               ` Andrew Rybchenko
  2022-02-17 10:57                 ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-17 10:35 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/17/22 01:17, Alexander Kozyrev wrote:
> On Wed, Feb 16, 2022 8:03 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
>> On 2/11/22 21:47, Alexander Kozyrev wrote:
>>> On Friday, February 11, 2022 5:17 Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>> Sent: Friday, February 11, 2022 5:17
>>>> To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org
>>>> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
>> (EXTERNAL)
>>>> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
>> ferruh.yigit@intel.com;
>>>> mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com;
>> jerinj@marvell.com;
>>>> ajit.khaparde@broadcom.com; bruce.richardson@intel.com
>>>> Subject: Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration
>> hints
>>>>
>>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>>>> The flow rules creation/destruction at a large scale incurs a performance
>>>>> penalty and may negatively impact the packet processing when used
>>>>> as part of the datapath logic. This is mainly because software/hardware
>>>>> resources are allocated and prepared during the flow rule creation.
>>>>>
>>>>> In order to optimize the insertion rate, PMD may use some hints
>> provided
>>>>> by the application at the initialization phase. The rte_flow_configure()
>>>>> function allows to pre-allocate all the needed resources beforehand.
>>>>> These resources can be used at a later stage without costly allocations.
>>>>> Every PMD may use only the subset of hints and ignore unused ones or
>>>>> fail in case the requested configuration is not supported.
>>>>>
>>>>> The rte_flow_info_get() is available to retrieve the information about
>>>>> supported pre-configurable resources. Both these functions must be
>> called
>>>>> before any other usage of the flow API engine.
>>>>>
>>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>>>> Acked-by: Ori Kam <orika@nvidia.com>
>>
>> [snip]
>>
>>>>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
>>>>> index a93f68abbc..66614ae29b 100644
>>>>> --- a/lib/ethdev/rte_flow.c
>>>>> +++ b/lib/ethdev/rte_flow.c
>>>>> @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
>>>>>     	ret = ops->flex_item_release(dev, handle, error);
>>>>>     	return flow_err(port_id, ret, error);
>>>>>     }
>>>>> +
>>>>> +int
>>>>> +rte_flow_info_get(uint16_t port_id,
>>>>> +		  struct rte_flow_port_info *port_info,
>>>>> +		  struct rte_flow_error *error)
>>>>> +{
>>>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>>>> +
>>>>> +	if (unlikely(!ops))
>>>>> +		return -rte_errno;
>>>>> +	if (likely(!!ops->info_get)) {
>>>>
>>>> expected ethdev state must be validated. Just configured?
>>>>
>>>>> +		return flow_err(port_id,
>>>>> +				ops->info_get(dev, port_info, error),
>>>>
>>>> port_info must be checked vs NULL
>>>
>>> We don’t have any NULL checks for parameters in the whole rte flow API
>> library.
>>> See rte_flow_create() for example. attributes, pattern and actions are
>> passed to PMD unchecked.
>>
>> IMHO it is hardly a good reason to have no such check here.
>> The API is pure control path. So, it must validate all input
>> arguments and it is better to do it in a generic place.
> 
> Agree, I have no objections to introduce these validation checks on control path.

Good, we have a progress.

> My only concern is the data-path performance, so I'm reluctant to add them to
> rte_flow_q_create/destroy functions. But let's add NULL checks to configuration
> routines, ok?

My opinion is not that strong on the aspect, but, personally,
I'd have sanity checks in the case of flow create/destroy as
well. First of all it is not a true datapath. Second, these
checks are very lightweight.

Anyway, if nobody supports me, I'm OK to go without these
checks in generic functions, but it would be very useful to
highlight it in the parameters description.

>>>>> +				error);
>>>>> +	}
>>>>> +	return rte_flow_error_set(error, ENOTSUP,
>>>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>>> +				  NULL, rte_strerror(ENOTSUP));
>>>>> +}
>>>>> +
>>>>> +int
>>>>> +rte_flow_configure(uint16_t port_id,
>>>>> +		   const struct rte_flow_port_attr *port_attr,
>>>>> +		   struct rte_flow_error *error)
>>>>> +{
>>>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>>>> +
>>>>> +	if (unlikely(!ops))
>>>>> +		return -rte_errno;
>>>>> +	if (likely(!!ops->configure)) {
>>>>
>>>> The API must validate ethdev state. configured and not started?
>>> Again, we have no such validation for any rte flow API today.
>>
>> Same here. If documentation defines in which state the API
>> should be called, generic code must guarantee it.
> 
> Ok, as long as it stays in the configuration phase only.
> 
>>>>
>>>>> +		return flow_err(port_id,
>>>>> +				ops->configure(dev, port_attr, error),
>>>>
>>>> port_attr must be checked vs NULL
>>> Same.
>>>
>>>>> +				error);
>>>>> +	}
>>>>> +	return rte_flow_error_set(error, ENOTSUP,
>>>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>>> +				  NULL, rte_strerror(ENOTSUP));
>>>>> +}
>>>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>>>>> index 1031fb246b..92be2a9a89 100644
>>>>> --- a/lib/ethdev/rte_flow.h
>>>>> +++ b/lib/ethdev/rte_flow.h
>>>>> @@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t
>> port_id,
>>>>>     			   const struct rte_flow_item_flex_handle *handle,
>>>>>     			   struct rte_flow_error *error);
>>>>>
>>>>> +/**
>>>>> + * @warning
>>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>>> + *
>>>>> + * Information about available pre-configurable resources.
>>>>> + * The zero value means a resource cannot be pre-allocated.
>>>>> + *
>>>>> + */
>>>>> +struct rte_flow_port_info {
>>>>> +	/**
>>>>> +	 * Number of pre-configurable counter actions.
>>>>> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
>>>>> +	 */
>>>>> +	uint32_t nb_counters;
>>>>
>>>> Name says that it is a number of counters, but description
>>>> says that it is about actions.
>>>> Also I don't understand what does "pre-configurable" mean.
>>>> Isn't it a maximum number of available counters?
>>>> If no, how can I find a maximum?
>>> It is the number of pre-allocated and pre-configured actions.
>>> How they are pre-configured is up to the PMD driver.
>>> But let's change to "pre-configured" everywhere.
>>> Configuration includes some memory allocation anyway.
>>
>> Sorry, but I still don't understand. I guess HW has
>> a hard limit on a number of counters. How can I get
>> the information?
> 
> Sorry for not being clear. These are resources/objects limitation.
> It may be the hard HW limit on number of counter objects, for example.
> Or the system has a little of memory and NIC is constrained in memory
> in its attempt to create these counter objects as another example.
> In any case, the info_get() API should return the limit to a user.

Look. First of all it is confusing that description says
"counter actions". I remember that we have no shared
counters now (just shared actions), but it does not matter
a lot. IMHO it is a bit more clear to say that it is
a limit on a number of flow counters. I guess it better
expresses the nature of the limitation. Maybe I'm missing
something. If so, I'd like to understand what.

Second, "pre-configurable" is confusing. Maybe it is better
just to drop it? I.e. "Information about available resources."
Otherwise it is necessary to explain who and when
pre-configures these resources. Is it really pre-configured?

"The zero value means a resource cannot be pre-allocated."
Does it mean that the action cannot be used at all?
I think it must be explicitly clarified in the case of any
answer.


* Re: [PATCH v5 02/10] ethdev: add flow item/action templates
  2022-02-16 14:18             ` Ori Kam
@ 2022-02-17 10:44               ` Andrew Rybchenko
  2022-02-17 11:11                 ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-17 10:44 UTC (permalink / raw)
  To: Ori Kam, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Ori,

On 2/16/22 17:18, Ori Kam wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Subject: Re: [PATCH v5 02/10] ethdev: add flow item/action templates
>>
>> On 2/12/22 01:25, Alexander Kozyrev wrote:
>>> On Fri, Feb 11, 2022 6:27 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
>>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>>>> Treating every single flow rule as a completely independent and separate
>>>>> entity negatively impacts the flow rules insertion rate. Oftentimes in an
>>>>> application, many flow rules share a common structure (the same item mask
>>>>> and/or action list) so they can be grouped and classified together.
>>>>> This knowledge may be used as a source of optimization by a PMD/HW.
>>>>>
>>>>> The pattern template defines common matching fields (the item mask) without
>>>>> values. The actions template holds a list of action types that will be used
>>>>> together in the same rule. The specific values for items and actions will
>>>>> be given only during the rule creation.
>>>>>
>>>>> A table combines pattern and actions templates along with shared flow rule
>>>>> attributes (group ID, priority and traffic direction). This way a PMD/HW
>>>>> can prepare all the resources needed for efficient flow rules creation in
>>>>> the datapath. To avoid any hiccups due to memory reallocation, the maximum
>>>>> number of flow rules is defined at the table creation time.
>>>>>
>>>>> The flow rule creation is done by selecting a table, a pattern template
>>>>> and an actions template (which are bound to the table), and setting unique
>>>>> values for the items and actions.
>>>>>
>>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>>>> Acked-by: Ori Kam <orika@nvidia.com>
>>
>> [snip]
>>
> 
> [Snip]
> 
>>>>> + *
>>>>> + * The pattern template defines common matching fields without values.
>>>>> + * For example, matching on 5 tuple TCP flow, the template will be
>>>>> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
>>>>> + * while values for each rule will be set during the flow rule creation.
>>>>> + * The number and order of items in the template must be the same
>>>>> + * at the rule creation.
>>>>> + *
>>>>> + * @param port_id
>>>>> + *   Port identifier of Ethernet device.
>>>>> + * @param[in] template_attr
>>>>> + *   Pattern template attributes.
>>>>> + * @param[in] pattern
>>>>> + *   Pattern specification (list terminated by the END pattern item).
>>>>> + *   The spec member of an item is not used unless the end member is used.
>>>>
>>>> Interpretation of the pattern may depend on transfer vs non-transfer
>>>> rule to be used. It is essential information and we should provide it
>>>> when pattern template is created.
>>>>
>>>> The information is provided on table stage, but it is too late.
>>>
>>> Why is it too late? Application knows which template goes to which table.
>>> And the pattern is generic to accommodate anything, user just need to put it
>>> into the right table.
>>
>> Because it is more convenient to handle it when individual
>> template is processed. Otherwise error reporting will be
>> complicated since it could be just one template which is
>> wrong.
>>
>> Otherwise, I see no point in having driver callbacks in the
>> template creation API. I can do nothing here since
>> I don't have enough context. What's the problem with adding
>> the context?
>>
> 
> The idea is that the same template can be used in different
> domains (ingress/egress and transfer).
> Maybe we can add on which domains this template is expected to be used.
> What do you think?

I see. IMHO if application is going to use the same template
in transfer and non-transfer rules, it is not a problem to
register it twice. Otherwise, if PMD needs the information and
template handling differs a lot in transfer and non-transfer
case, handling should be postponed and should be done on
table definition. In this case, we cannot provide feedback
to application which template it cannot handle. Even if the
information is somehow encoded in flow error, encoding must
be defined and it still could be inconvenient for the
application to handle it.

Yes, I agree that it is better to fully specify domain
including ingress and egress, not just transfer/non-transfer.

Andrew.


* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-16 14:53             ` Ori Kam
@ 2022-02-17 10:52               ` Andrew Rybchenko
  2022-02-17 11:08                 ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-17 10:52 UTC (permalink / raw)
  To: Ori Kam, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Ori,

On 2/16/22 17:53, Ori Kam wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Wednesday, February 16, 2022 3:34 PM
>> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
>>
>> On 2/12/22 05:19, Alexander Kozyrev wrote:
>>> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
>>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>>>> A new, faster, queue-based flow rules management mechanism is needed
>>>> for
>>>>> applications offloading rules inside the datapath. This asynchronous
>>>>> and lockless mechanism frees the CPU for further packet processing and
>>>>> reduces the performance impact of the flow rules creation/destruction
>>>>> on the datapath. Note that queues are not thread-safe and the queue
>>>>> should be accessed from the same thread for all queue operations.
>>>>> It is the responsibility of the app to sync the queue functions in case
>>>>> of multi-threaded access to the same queue.
>>>>>
>>>>> The rte_flow_q_flow_create() function enqueues a flow creation to the
>>>>> requested queue. It benefits from already configured resources and sets
>>>>> unique values on top of item and action templates. A flow rule is enqueued
>>>>> on the specified flow queue and offloaded asynchronously to the
>>>> hardware.
>>>>> The function returns immediately to spare CPU for further packet
>>>>> processing. The application must invoke the rte_flow_q_pull() function
>>>>> to complete the flow rule operation offloading, to clear the queue, and to
>>>>> receive the operation status. The rte_flow_q_flow_destroy() function
>>>>> enqueues a flow destruction to the requested queue.
>>>>>
>>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>>>> Acked-by: Ori Kam <orika@nvidia.com>
>>
>> [snip]
>>
>>>>> +
>>>>> +- Available operation types: rule creation, rule destruction,
>>>>> +  indirect rule creation, indirect rule destruction, indirect rule update.
>>>>> +
>>>>> +- Operations may be reordered within a queue.
>>>>
>>>> Do we want to have barriers?
>>>> E.g. create rule, destroy the same rule -> reoder -> destroy fails, rule
>>>> lives forever.
>>>
>>> API design is crafted with throughput as the main goal in mind.
>>> We allow user to enforce any ordering outside these functions.
>>> Another point that not all PMDs/NIC will have this out-of-order execution.
>>
>> Throughput is nice, but there are more important requirements
>> which must be satisfied before talking about performance.
>> Could you explain me what I should do based on which
>> information from NIC in order to solve above problem?
>>
> 
> The idea is that if the application has a dependency between rules or rule operations,
> it should wait for the completion of the operation before sending the dependent operation.
> In the example you provided above, according to the documentation the application should wait
> for the completion of the flow creation before destroying it.

I see, thanks. May be I read documentation not that attentive.
I'll reread on the next version review cycle.

>>>>> +
>>>>> +- Operations can be postponed and pushed to NIC in batches.
>>>>> +
>>>>> +- Results pulling must be done on time to avoid queue overflows.
>>>>
>>>> polling? (as libc poll() which checks status of file descriptors)
>>>> it is not pulling the door to open it :)
>>>
>>> poll waits for some event on a file descriptor, as its title says.
>>> And then user has to invoke read() to actually get any info from the fd.
>>> The point of our function is to return the result immediately, thus pulling.
>>> We had many names appearing in the thread for these functions.
>>> As we know, naming variables is the second hardest thing in programming.
>>> I wanted this pull for results pulling be a counterpart for the push for
>>> pushing the operations to a NIC. Another idea is pop/push pair, but they are
>>> more like for operations only, not for results.
>>> Having said that I'm at the point of accepting any name here.
>>
>> I agree that it is hard to choose good naming.
>> Just want to say that polling is not always waiting.
>>
>> poll - check the status of (a device), especially as part of a repeated
>> cycle.
>>
>> Here we're checking status of flow engine requests and yes,
>> finally in a repeated cycle.
>>
>> [snip]
>>
>>>>> +/**
>>>>> + * @warning
>>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>>> + *
>>>>> + * Queue operation attributes.
>>>>> + */
>>>>> +struct rte_flow_q_ops_attr {
>>>>> +	/**
>>>>> +	 * The user data that will be returned on the completion events.
>>>>> +	 */
>>>>> +	void *user_data;
>>>>
>>>> IMHO it must not be hidden in attrs. It is key information
>>>> which is used to understand the operation result. It should
>>>> be passed separately.
>>>
>>> Maybe, on the other hand it is optional and may not be needed by an application.
>>
>> I don't understand how it is possible. Without it application
>> don't know fate of its requests.
>>
> IMHO since user_data should be in all related operations API
> along with the attr, splitting the user_data will just add extra parameter
> to each function call. Since we have number of functions and will add
> more in future I think it will be best to keep it in this location.

My problem with hiding user_data inside attr is that
'user_data' is not an auxiliary attribute defining extra
properties of the request. It is a key information.
May be attr is not an ideal name for such grouping
of parameters. Unfortunately I have no better ideas right now.

Andrew.


* RE: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-17 10:35               ` Andrew Rybchenko
@ 2022-02-17 10:57                 ` Ori Kam
  2022-02-17 11:04                   ` Andrew Rybchenko
  0 siblings, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-02-17 10:57 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, February 17, 2022 12:35 PM
> Subject: Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
> 
> On 2/17/22 01:17, Alexander Kozyrev wrote:
> > On Wed, Feb 16, 2022 8:03 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> >> On 2/11/22 21:47, Alexander Kozyrev wrote:
> >>> On Friday, February 11, 2022 5:17 Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru> wrote:
> >>>> Sent: Friday, February 11, 2022 5:17
> >>>> To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org
> >>>> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> >> (EXTERNAL)
> >>>> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
> >> ferruh.yigit@intel.com;
> >>>> mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com;
> >> jerinj@marvell.com;
> >>>> ajit.khaparde@broadcom.com; bruce.richardson@intel.com
> >>>> Subject: Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration
> >> hints
> >>>>
> >>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>>>> The flow rules creation/destruction at a large scale incurs a performance
> >>>>> penalty and may negatively impact the packet processing when used
> >>>>> as part of the datapath logic. This is mainly because software/hardware
> >>>>> resources are allocated and prepared during the flow rule creation.
> >>>>>
> >>>>> In order to optimize the insertion rate, PMD may use some hints
> >> provided
> >>>>> by the application at the initialization phase. The rte_flow_configure()
> >>>>> function allows to pre-allocate all the needed resources beforehand.
> >>>>> These resources can be used at a later stage without costly allocations.
> >>>>> Every PMD may use only the subset of hints and ignore unused ones or
> >>>>> fail in case the requested configuration is not supported.
> >>>>>
> >>>>> The rte_flow_info_get() is available to retrieve the information about
> >>>>> supported pre-configurable resources. Both these functions must be
> >> called
> >>>>> before any other usage of the flow API engine.
> >>>>>
> >>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >>>>> Acked-by: Ori Kam <orika@nvidia.com>
> >>
> >> [snip]
> >>
> >>>>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> >>>>> index a93f68abbc..66614ae29b 100644
> >>>>> --- a/lib/ethdev/rte_flow.c
> >>>>> +++ b/lib/ethdev/rte_flow.c
> >>>>> @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
> >>>>>     	ret = ops->flex_item_release(dev, handle, error);
> >>>>>     	return flow_err(port_id, ret, error);
> >>>>>     }
> >>>>> +
> >>>>> +int
> >>>>> +rte_flow_info_get(uint16_t port_id,
> >>>>> +		  struct rte_flow_port_info *port_info,
> >>>>> +		  struct rte_flow_error *error)
> >>>>> +{
> >>>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >>>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> >>>>> +
> >>>>> +	if (unlikely(!ops))
> >>>>> +		return -rte_errno;
> >>>>> +	if (likely(!!ops->info_get)) {
> >>>>
> >>>> expected ethdev state must be validated. Just configured?
> >>>>
> >>>>> +		return flow_err(port_id,
> >>>>> +				ops->info_get(dev, port_info, error),
> >>>>
> >>>> port_info must be checked vs NULL
> >>>
> >>> We don’t have any NULL checks for parameters in the whole rte flow API
> >> library.
> >>> See rte_flow_create() for example. attributes, pattern and actions are
> >> passed to PMD unchecked.
> >>
> >> IMHO it is hardly a good reason to have no such check here.
> >> The API is pure control path. So, it must validate all input
> >> arguments and it is better to do it in a generic place.
> >
> > Agree, I have no objections to introduce these validation checks on control path.
> 
> Good, we have a progress.
> 
> > My only concern is the data-path performance, so I'm reluctant to add them to
> > rte_flow_q_create/destroy functions. But let's add NULL checks to configuration
> > routines, ok?
> 
> My opinion is not that strong on the aspect, but, personally,
> I'd have sanity checks in the case of flow create/destroy as
> well. First of all it is not a true datapath. Second, these
> checks are very lightweight.
> 
> Anyway, if nobody supports me, I'm OK to go without these
> checks in generic functions, but it would be very useful to
> highlight it in the parameters description.
> 
I vote for adding the checks only in the configuration functions.
From my point of view the rule functions are part of the data path
and should be treated this way.

> >>>>> +				error);
> >>>>> +	}
> >>>>> +	return rte_flow_error_set(error, ENOTSUP,
> >>>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >>>>> +				  NULL, rte_strerror(ENOTSUP));
> >>>>> +}
> >>>>> +
> >>>>> +int
> >>>>> +rte_flow_configure(uint16_t port_id,
> >>>>> +		   const struct rte_flow_port_attr *port_attr,
> >>>>> +		   struct rte_flow_error *error)
> >>>>> +{
> >>>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >>>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> >>>>> +
> >>>>> +	if (unlikely(!ops))
> >>>>> +		return -rte_errno;
> >>>>> +	if (likely(!!ops->configure)) {
> >>>>
> >>>> The API must validate ethdev state. configured and not started?
> >>> Again, we have no such validation for any rte flow API today.
> >>
> >> Same here. If documentation defines in which state the API
> >> should be called, generic code must guarantee it.
> >
> > Ok, as long as it stays in the configuration phase only.
> >
> >>>>
> >>>>> +		return flow_err(port_id,
> >>>>> +				ops->configure(dev, port_attr, error),
> >>>>
> >>>> port_attr must be checked vs NULL
> >>> Same.
> >>>
> >>>>> +				error);
> >>>>> +	}
> >>>>> +	return rte_flow_error_set(error, ENOTSUP,
> >>>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >>>>> +				  NULL, rte_strerror(ENOTSUP));
> >>>>> +}
> >>>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> >>>>> index 1031fb246b..92be2a9a89 100644
> >>>>> --- a/lib/ethdev/rte_flow.h
> >>>>> +++ b/lib/ethdev/rte_flow.h
> >>>>> @@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t
> >> port_id,
> >>>>>     			   const struct rte_flow_item_flex_handle *handle,
> >>>>>     			   struct rte_flow_error *error);
> >>>>>
> >>>>> +/**
> >>>>> + * @warning
> >>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>>>> + *
> >>>>> + * Information about available pre-configurable resources.
> >>>>> + * The zero value means a resource cannot be pre-allocated.
> >>>>> + *
> >>>>> + */
> >>>>> +struct rte_flow_port_info {
> >>>>> +	/**
> >>>>> +	 * Number of pre-configurable counter actions.
> >>>>> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> >>>>> +	 */
> >>>>> +	uint32_t nb_counters;
> >>>>
> >>>> Name says that it is a number of counters, but description
> >>>> says that it is about actions.
> >>>> Also I don't understand what does "pre-configurable" mean.
> >>>> Isn't it a maximum number of available counters?
> >>>> If no, how can I find a maximum?
> >>> It is the number of pre-allocated and pre-configured actions.
> >>> How they are pre-configured is up to the PMD driver.
> >>> But let's change to "pre-configured" everywhere.
> >>> Configuration includes some memory allocation anyway.
> >>
> >> Sorry, but I still don't understand. I guess HW has
> >> a hard limit on a number of counters. How can I get
> >> the information?
> >
> > Sorry for not being clear. These are resources/objects limitation.
> > It may be the hard HW limit on number of counter objects, for example.
> > Or the system has a little of memory and NIC is constrained in memory
> > in its attempt to create these counter objects as another example.
> > In any case, the info_get() API should return the limit to a user.
> 
> Look. First of all it is confusing that description says
> "counter actions". I remember that we have no shared
> counters now (just shared actions), but it does not matter
> a lot. IMHO it is a bit more clear to say that it is
> a limit on a number of flow counters. I guess it better
> express the nature of the limitation. May be I'm missing
> something. If so, I'd like to understand what.
> 
From my viewpoint, this should be the number of resources/objects that
the HW can allocate (in an ideal system).
For example, the HW may be able to allocate 1M counters, but due to limited
memory on the system the actual number can be less.

Like you said, we also have the handle action, which means that
the same object can be shared between any number of rules.
As a result, the limitation is not on the number of rules but on the number of
resources allocated.

In addition, and even more importantly, at this stage there is no knowledge
of the number of rules that will be inserted.

So can we agree to say resources?




> Second, "pre-configurable" is confusing. Maybe it is better
> just to drop it? I.e. "Information about available resources."
> Otherwise it is necessary to explain who and when
> pre-configures these resources. Is it really pre-configured?
> 
I'm OK with dropping the configuration part.
It should just say the number of counter objects.

> "The zero value means a resource cannot be pre-allocated."
> Does it mean that the action cannot be used at all?
> I think it must be explicitly clarified in the case of any
> answer.

Agree, it should state that if the PMD reports 0 it means that
it doesn’t support such an object.

Best,
Ori


* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-17  8:18               ` Thomas Monjalon
@ 2022-02-17 11:02                 ` Andrew Rybchenko
  0 siblings, 0 replies; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-17 11:02 UTC (permalink / raw)
  To: Thomas Monjalon, Ori Kam
  Cc: dev, ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson, Alexander Kozyrev

On 2/17/22 11:18, Thomas Monjalon wrote:
> 16/02/2022 23:49, Alexander Kozyrev:
>> On Sat, Feb 12, 2022 4:25 Thomas Monjalon <thomas@monjalon.net> wrote:
>>> 12/02/2022 03:19, Alexander Kozyrev:
>>>> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko
>>> <andrew.rybchenko@oktetlabs.ru>:
>>>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>>>>> +__rte_experimental
>>>>>> +struct rte_flow *
>>>>>> +rte_flow_q_flow_create(uint16_t port_id,
>>>>>
>>>>> flow_q_flow does not sound like a good naming, consider:
>>>>> rte_flow_q_rule_create() is
>>> <subsystem>_<subtype>_<object>_<action>
>>>>
>>>> More like:
>>>> <subsystem>_<subtype>_<object>_<action>
>>>>   <rte>_<flow>_<rule_create_operation>_<queue>
>>>> Which is a pretty lengthy name, as for me.
>>>
>>> Naming :)
>>> This one may be improved I think.
>>> What is the problem with replacing "flow" with "rule"?
>>> Is it the right meaning?
>>
>> I've got a better naming for all the functions. What do you think about this?
>> Asynchronous rte_flow_async_create and rte_flow_async_destroy functions
>> as an extension of synchronous rte_flow_create/ rte_flow_destroy API.
>> The same is true for asynchronous API for indirect actions:
>> 	rte_flow_async_action_handle_create;
>> 	rte_flow_async_action_handle_destroy;
>> 	rte_flow_async_action_handle_update;
>> And rte_flow_push/rte_flow_pull without "_q_" part to make them clearer.
>> And yes, I'm still thinking pull is better than poll since we are actually retrieving
>> something, not just checking if it has something we can retrieve.
>> Let me know if we can agree on this scheme? Looks pretty close to the existing one.
> 
> I like the "async" word.
> 
> In summary, you propose this change for the functions of this patch:
> 
> 	rte_flow_q_flow_create           -> rte_flow_async_create
> 	rte_flow_q_flow_destroy          -> rte_flow_async_destroy
> 	rte_flow_q_action_handle_create  -> rte_flow_async_action_handle_create
> 	rte_flow_q_action_handle_destroy -> rte_flow_async_action_handle_destroy
> 	rte_flow_q_action_handle_update  -> rte_flow_async_action_handle_update
> 	rte_flow_q_push                  -> rte_flow_push
> 	rte_flow_q_pull                  -> rte_flow_pull
> 
> They are close to the existing synchronous function names:
> 
> 	rte_flow_create
> 	rte_flow_destroy
> 	rte_flow_action_handle_create
> 	rte_flow_action_handle_destroy
> 	rte_flow_action_handle_update
> 
> I think it is a good naming scheme.

+1
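As a concrete illustration of the agreed scheme, the toy sketch below exercises the proposed call order (enqueue, push, pull). The bodies are stubs that only track a pending-request counter; none of this is the real DPDK implementation, and the signatures are simplified stand-ins rather than the ones in the patch.

```c
/* Toy model of the proposed rte_flow_async_* naming scheme.
 * The stub bodies only track a pending-request counter so the
 * enqueue -> push -> pull call order can be exercised standalone;
 * they are NOT the real DPDK implementations. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct rte_flow { int id; };

static int pending;                   /* requests enqueued, not yet pulled */
static struct rte_flow stub_flow = { 1 };

/* Async counterpart of rte_flow_create(): enqueue and return at once. */
static struct rte_flow *
rte_flow_async_create(uint16_t port_id, uint32_t queue_id)
{
	(void)port_id;
	(void)queue_id;
	pending++;
	return &stub_flow;            /* handle is returned immediately */
}

/* rte_flow_push(): flush postponed operations to the NIC in a batch. */
static void
rte_flow_push(uint16_t port_id, uint32_t queue_id)
{
	(void)port_id;
	(void)queue_id;
}

/* rte_flow_pull(): retrieve results of completed operations. */
static int
rte_flow_pull(uint16_t port_id, uint32_t queue_id)
{
	int done = pending;           /* pretend everything completed */

	(void)port_id;
	(void)queue_id;
	pending = 0;
	return done;                  /* number of results retrieved */
}
```

The point of the rename is visible in the call sites: the synchronous and asynchronous variants read the same except for the "async" word and the queue argument.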


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
  2022-02-17 10:57                 ` Ori Kam
@ 2022-02-17 11:04                   ` Andrew Rybchenko
  0 siblings, 0 replies; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-17 11:04 UTC (permalink / raw)
  To: Ori Kam, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/17/22 13:57, Ori Kam wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Thursday, February 17, 2022 12:35 PM
>> Subject: Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints
>>
>> On 2/17/22 01:17, Alexander Kozyrev wrote:
>>> On Wed, Feb 16, 2022 8:03 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
>>>> On 2/11/22 21:47, Alexander Kozyrev wrote:
>>>>> On Friday, February 11, 2022 5:17 Andrew Rybchenko
>>>> <andrew.rybchenko@oktetlabs.ru> wrote:
>>>>>> Sent: Friday, February 11, 2022 5:17
>>>>>> To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org
>>>>>> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>> (EXTERNAL)
>>>>>> <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
>>>> ferruh.yigit@intel.com;
>>>>>> mohammad.abdul.awal@intel.com; qi.z.zhang@intel.com;
>>>> jerinj@marvell.com;
>>>>>> ajit.khaparde@broadcom.com; bruce.richardson@intel.com
>>>>>> Subject: Re: [PATCH v5 01/10] ethdev: introduce flow pre-configuration
>>>> hints
>>>>>>
>>>>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>>>>>> The flow rules creation/destruction at a large scale incurs a performance
>>>>>>> penalty and may negatively impact the packet processing when used
>>>>>>> as part of the datapath logic. This is mainly because software/hardware
>>>>>>> resources are allocated and prepared during the flow rule creation.
>>>>>>>
>>>>>>> In order to optimize the insertion rate, PMD may use some hints
>>>> provided
>>>>>>> by the application at the initialization phase. The rte_flow_configure()
>>>>>>> function allows to pre-allocate all the needed resources beforehand.
>>>>>>> These resources can be used at a later stage without costly allocations.
>>>>>>> Every PMD may use only the subset of hints and ignore unused ones or
>>>>>>> fail in case the requested configuration is not supported.
>>>>>>>
>>>>>>> The rte_flow_info_get() is available to retrieve the information about
>>>>>>> supported pre-configurable resources. Both these functions must be
>>>> called
>>>>>>> before any other usage of the flow API engine.
>>>>>>>
>>>>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>>>>>> Acked-by: Ori Kam <orika@nvidia.com>
>>>>
>>>> [snip]
>>>>
>>>>>>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
>>>>>>> index a93f68abbc..66614ae29b 100644
>>>>>>> --- a/lib/ethdev/rte_flow.c
>>>>>>> +++ b/lib/ethdev/rte_flow.c
>>>>>>> @@ -1391,3 +1391,43 @@ rte_flow_flex_item_release(uint16_t port_id,
>>>>>>>      	ret = ops->flex_item_release(dev, handle, error);
>>>>>>>      	return flow_err(port_id, ret, error);
>>>>>>>      }
>>>>>>> +
>>>>>>> +int
>>>>>>> +rte_flow_info_get(uint16_t port_id,
>>>>>>> +		  struct rte_flow_port_info *port_info,
>>>>>>> +		  struct rte_flow_error *error)
>>>>>>> +{
>>>>>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>>>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>>>>>> +
>>>>>>> +	if (unlikely(!ops))
>>>>>>> +		return -rte_errno;
>>>>>>> +	if (likely(!!ops->info_get)) {
>>>>>>
>>>>>> expected ethdev state must be validated. Just configured?
>>>>>>
>>>>>>> +		return flow_err(port_id,
>>>>>>> +				ops->info_get(dev, port_info, error),
>>>>>>
>>>>>> port_info must be checked vs NULL
>>>>>
>>>>> We don’t have any NULL checks for parameters in the whole rte flow API
>>>> library.
>>>>> See rte_flow_create() for example. attributes, pattern and actions are
>>>> passed to PMD unchecked.
>>>>
>>>> IMHO it is hardly a good reason to have no such check here.
>>>> The API is pure control path. So, it must validate all input
>>>> arguments and it is better to do it in a generic place.
>>>
>>> Agree, I have no objections to introduce these validation checks on control path.
>>
>> Good, we have a progress.
>>
>>> My only concern is the data-path performance, so I'm reluctant to add them to
>>> rte_flow_q_create/destroy functions. But let's add NULL checks to configuration
>>> routines, ok?
>>
>> My opinion is not that strong on the aspect, but, personally,
>> I'd have sanity checks in the case of flow create/destroy as
>> well. First of all it is not a true datapath. Second, these
>> checks are very lightweight.
>>
>> Anyway, if nobody supports me, I'm OK to go without these
>> checks in generic functions, but it would be very useful to
>> highlight it in the parameters description.
>>
> I vote for adding the checks only on the configuration functions.
>  From my point of view the rule functions are part of the data path
> and should be treated this way.
> 
>>>>>>> +				error);
>>>>>>> +	}
>>>>>>> +	return rte_flow_error_set(error, ENOTSUP,
>>>>>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>>>>> +				  NULL, rte_strerror(ENOTSUP));
>>>>>>> +}
>>>>>>> +
>>>>>>> +int
>>>>>>> +rte_flow_configure(uint16_t port_id,
>>>>>>> +		   const struct rte_flow_port_attr *port_attr,
>>>>>>> +		   struct rte_flow_error *error)
>>>>>>> +{
>>>>>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>>>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>>>>>> +
>>>>>>> +	if (unlikely(!ops))
>>>>>>> +		return -rte_errno;
>>>>>>> +	if (likely(!!ops->configure)) {
>>>>>>
>>>>>> The API must validate ethdev state. configured and not started?
>>>>> Again, we have no such validation for any rte flow API today.
>>>>
>>>> Same here. If documentation defines in which state the API
>>>> should be called, generic code must guarantee it.
>>>
>>> Ok, as long as it stays in the configuration phase only.
>>>
>>>>>>
>>>>>>> +		return flow_err(port_id,
>>>>>>> +				ops->configure(dev, port_attr, error),
>>>>>>
>>>>>> port_attr must be checked vs NULL
>>>>> Same.
>>>>>
>>>>>>> +				error);
>>>>>>> +	}
>>>>>>> +	return rte_flow_error_set(error, ENOTSUP,
>>>>>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>>>>> +				  NULL, rte_strerror(ENOTSUP));
>>>>>>> +}
>>>>>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>>>>>>> index 1031fb246b..92be2a9a89 100644
>>>>>>> --- a/lib/ethdev/rte_flow.h
>>>>>>> +++ b/lib/ethdev/rte_flow.h
>>>>>>> @@ -4853,6 +4853,114 @@ rte_flow_flex_item_release(uint16_t
>>>> port_id,
>>>>>>>      			   const struct rte_flow_item_flex_handle *handle,
>>>>>>>      			   struct rte_flow_error *error);
>>>>>>>
>>>>>>> +/**
>>>>>>> + * @warning
>>>>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>>>>> + *
>>>>>>> + * Information about available pre-configurable resources.
>>>>>>> + * The zero value means a resource cannot be pre-allocated.
>>>>>>> + *
>>>>>>> + */
>>>>>>> +struct rte_flow_port_info {
>>>>>>> +	/**
>>>>>>> +	 * Number of pre-configurable counter actions.
>>>>>>> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
>>>>>>> +	 */
>>>>>>> +	uint32_t nb_counters;
>>>>>>
>>>>>> Name says that it is a number of counters, but description
>>>>>> says that it is about actions.
>>>>>> Also I don't understand what "pre-configurable" means.
>>>>>> Isn't it a maximum number of available counters?
>>>>>> If no, how can I find a maximum?
>>>>> It is the number of pre-allocated and pre-configured actions.
>>>>> How they are pre-configured is up to the PMD driver.
>>>>> But let's change to "pre-configured" everywhere.
>>>>> Configuration includes some memory allocation anyway.
>>>>
>>>> Sorry, but I still don't understand. I guess HW has
>>>> a hard limit on a number of counters. How can I get
>>>> the information?
>>>
>>> Sorry for not being clear. These are resource/object limitations.
>>> It may be the hard HW limit on number of counter objects, for example.
>>> Or the system has little memory and the NIC is constrained in memory
>>> in its attempt to create these counter objects, as another example.
>>> In any case, the info_get() API should return the limit to a user.
>>
>> Look. First of all it is confusing that description says
>> "counter actions". I remember that we have no shared
>> counters now (just shared actions), but it does not matter
>> a lot. IMHO it is a bit more clear to say that it is
>> a limit on the number of flow counters. I guess it better
>> expresses the nature of the limitation. Maybe I'm missing
>> something. If so, I'd like to understand what.
>>
>  From my viewpoint, this should be the number of resources/objects that
> the HW can allocate (in an ideal system).
> For example, the HW may be able to allocate 1M counters, but due to limited
> memory on the system the actual number can be less.
> 
> Like you said we also have the handle action, this means that
> the same object can be shared between any number of rules.
> As a result, the limitation is not on the number of rules but on the number of
> resources allocated.
> 
> In addition, and even more importantly, during this stage there is no knowledge of the
> number of rules that will be inserted.
> 
> So can we agree to say resources?

Yes

>> Second, "pre-configurable" is confusing. Maybe it is better
>> just to drop it? I.e. "Information about available resources."
>> Otherwise it is necessary to explain who and when
>> pre-configures these resources. Is it really pre-configured?
>>
> I'm O.K. with dropping the configuration part.
> It should just say the number of counter objects.
> 
>> "The zero value means a resource cannot be pre-allocated."
>> Does it mean that the action cannot be used at all?
>> I think it must be explicitly clarified in the case of any
>> answer.
> 
> Agree, it should state that if the PMD reports 0 it means that
> it doesn’t support such an object.

Good.
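The two points agreed in this message, NULL checks in the generic control-path wrapper and "zero means the object is unsupported", can be sketched as below. The struct layout, the stub callback, and the 1M figure are illustrative assumptions, not the real ethdev code.

```c
/* Sketch of the agreed control-path behaviour: the generic layer
 * rejects NULL arguments before reaching the PMD, and a zero
 * resource count in the info structure means the object type is
 * unsupported. All names and values here are illustrative. */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct rte_flow_port_info {
	uint32_t nb_counters;  /* 0 => counter objects unsupported */
};

/* Stand-in for the PMD info_get callback. */
static int
stub_info_get(struct rte_flow_port_info *info)
{
	info->nb_counters = 1 << 20;  /* e.g. 1M counter objects */
	return 0;
}

/* Generic wrapper: cheap sanity check on the control path. */
static int
flow_info_get(struct rte_flow_port_info *info)
{
	if (info == NULL)
		return -EINVAL;
	return stub_info_get(info);
}

/* Application side: decide whether COUNT actions are usable at all. */
static bool
counters_supported(void)
{
	struct rte_flow_port_info info;

	if (flow_info_get(&info) != 0)
		return false;
	return info.nb_counters > 0;
}
```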


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-17 10:52               ` Andrew Rybchenko
@ 2022-02-17 11:08                 ` Ori Kam
  2022-02-17 14:16                   ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-02-17 11:08 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, February 17, 2022 12:53 PM
> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> 
> Hi Ori,
> 
> On 2/16/22 17:53, Ori Kam wrote:
> > Hi Andrew,
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Wednesday, February 16, 2022 3:34 PM
> >> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> >>
> >> On 2/12/22 05:19, Alexander Kozyrev wrote:
> >>> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> >>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>>>> A new, faster, queue-based flow rules management mechanism is needed
> >>>> for
> >>>>> applications offloading rules inside the datapath. This asynchronous
> >>>>> and lockless mechanism frees the CPU for further packet processing and
> >>>>> reduces the performance impact of the flow rules creation/destruction
> >>>>> on the datapath. Note that queues are not thread-safe and the queue
> >>>>> should be accessed from the same thread for all queue operations.
> >>>>> It is the responsibility of the app to sync the queue functions in case
> >>>>> of multi-threaded access to the same queue.
> >>>>>
> >>>>> The rte_flow_q_flow_create() function enqueues a flow creation to the
> >>>>> requested queue. It benefits from already configured resources and sets
> >>>>> unique values on top of item and action templates. A flow rule is enqueued
> >>>>> on the specified flow queue and offloaded asynchronously to the
> >>>> hardware.
> >>>>> The function returns immediately to spare CPU for further packet
> >>>>> processing. The application must invoke the rte_flow_q_pull() function
> >>>>> to complete the flow rule operation offloading, to clear the queue, and to
> >>>>> receive the operation status. The rte_flow_q_flow_destroy() function
> >>>>> enqueues a flow destruction to the requested queue.
> >>>>>
> >>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >>>>> Acked-by: Ori Kam <orika@nvidia.com>
> >>
> >> [snip]
> >>
> >>>>> +
> >>>>> +- Available operation types: rule creation, rule destruction,
> >>>>> +  indirect rule creation, indirect rule destruction, indirect rule update.
> >>>>> +
> >>>>> +- Operations may be reordered within a queue.
> >>>>
> >>>> Do we want to have barriers?
> >>>> E.g. create rule, destroy the same rule -> reorder -> destroy fails, rule
> >>>> lives forever.
> >>>
> >>> API design is crafted with the throughput as the main goal in mind.
> >>> We allow the user to enforce any ordering outside these functions.
> >>> Another point is that not all PMDs/NICs will have this out-of-order execution.
> >>
> >> Throughput is nice, but there are more important requirements
> >> which must be satisfied before talking about performance.
> >> Could you explain to me what I should do, based on which
> >> information from the NIC, in order to solve the above problem?
> >>
> >
> > The idea is that if the application has a dependency between rules/rule operations,
> > it should wait for the completion of the operation before sending the dependent operation.
> > In the example you provided above, according to the documentation the application should wait
> > for the completion of the flow creation before destroying it.
> 
> I see, thanks. Maybe I did not read the documentation attentively enough.
> I'll reread on the next version review cycle.
> 
> >>>>> +
> >>>>> +- Operations can be postponed and pushed to NIC in batches.
> >>>>> +
> >>>>> +- Results pulling must be done on time to avoid queue overflows.
> >>>>
> >>>> polling? (as libc poll() which checks status of file descriptors)
> >>>> it is not pulling the door to open it :)
> >>>
> >>> poll waits for some event on a file descriptor, as its title says.
> >>> And then user has to invoke read() to actually get any info from the fd.
> >>> The point of our function is to return the result immediately, thus pulling.
> >>> We had many names appearing in the thread for these functions.
> >>> As we know, naming variables is the second hardest thing in programming.
> >>> I wanted this pull for results pulling to be a counterpart of the push for
> >>> pushing the operations to a NIC. Another idea is a pop/push pair, but they are
> >>> more like for operations only, not for results.
> >>> Having said that I'm at the point of accepting any name here.
> >>
> >> I agree that it is hard to choose good naming.
> >> Just want to say that polling is not always waiting.
> >>
> >> poll - check the status of (a device), especially as part of a repeated
> >> cycle.
> >>
> >> Here we're checking status of flow engine requests and yes,
> >> finally in a repeated cycle.
> >>
> >> [snip]
> >>
> >>>>> +/**
> >>>>> + * @warning
> >>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>>>> + *
> >>>>> + * Queue operation attributes.
> >>>>> + */
> >>>>> +struct rte_flow_q_ops_attr {
> >>>>> +	/**
> >>>>> +	 * The user data that will be returned on the completion events.
> >>>>> +	 */
> >>>>> +	void *user_data;
> >>>>
> >>>> IMHO it must not be hidden in attrs. It is key information
> >>>> which is used to understand the operation result. It should
> >>>> be passed separately.
> >>>
> >>> Maybe, on the other hand it is optional and may not be needed by an application.
> >>
> >> I don't understand how it is possible. Without it the application
> >> doesn't know the fate of its requests.
> >>
> > IMHO since user_data should be in all related operations' APIs
> > along with the attr, splitting the user_data out will just add an extra parameter
> > to each function call. Since we have a number of functions and will add
> > more in the future, I think it will be best to keep it in this location.
> 
> My problem with hiding user_data inside attr is that
> 'user_data' is not an auxiliary attribute defining extra
> properties of the request. It is key information.
> Maybe attr is not an ideal name for such a grouping
> of parameters. Unfortunately I have no better ideas right now.
> 
I understand your point; if you don't have objections let's keep the current one,
and if needed we will modify it later.
Is that O.K.?

> Andrew.
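The ordering contract described above (wait for the creation completion, matched via user_data, before enqueueing a dependent destroy) can be modelled with a toy completion store. Every name here is a stand-in, and the pretend NIC completes requests instantly; a real application would also dispatch the other completions it pulls instead of discarding them.

```c
/* Toy illustration of the create-before-destroy dependency rule:
 * operations within a queue may be reordered, so an application
 * must see the creation result (identified by user_data) before
 * it enqueues the destruction of the same rule. */
#include <assert.h>
#include <stddef.h>

#define QUEUE_DEPTH 8

static void *completed[QUEUE_DEPTH];  /* pretend NIC: completes at once */
static int n_completed;

static void
enqueue_create(void *user_data)
{
	completed[n_completed++] = user_data;
}

/* Pull completions; returns how many results were retrieved. */
static int
pull(void *results[], int max)
{
	int n = 0;

	while (n_completed > 0 && n < max)
		results[n++] = completed[--n_completed];
	return n;
}

/* Returns 1 once the rule tagged by user_data may be destroyed.
 * Other pulled completions are discarded here for brevity only. */
static int
wait_created(void *user_data)
{
	void *res[QUEUE_DEPTH];
	int i, n = pull(res, QUEUE_DEPTH);

	for (i = 0; i < n; i++)
		if (res[i] == user_data)
			return 1;  /* safe to enqueue the destroy now */
	return 0;
}
```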

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-16 15:15             ` Ori Kam
@ 2022-02-17 11:10               ` Andrew Rybchenko
  2022-02-17 11:19                 ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-17 11:10 UTC (permalink / raw)
  To: Ori Kam, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/16/22 18:15, Ori Kam wrote:
> Hi Andrew,
> 
> I missed some comments, PSB.
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
>>
>> On 2/12/22 05:19, Alexander Kozyrev wrote:
>>> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
>>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
>>>>> A new, faster, queue-based flow rules management mechanism is needed
> 
> [Snip]
> 
> 
>>>>> +
>>>>> +- Operations can be postponed and pushed to NIC in batches.
>>>>> +
>>>>> +- Results pulling must be done on time to avoid queue overflows.
>>>>
>>>> polling? (as libc poll() which checks status of file descriptors)
>>>> it is not pulling the door to open it :)
>>>
>>> poll waits for some event on a file descriptor, as its title says.
>>> And then user has to invoke read() to actually get any info from the fd.
>>> The point of our function is to return the result immediately, thus pulling.
>>> We had many names appearing in the thread for these functions.
>>> As we know, naming variables is the second hardest thing in programming.
>>> I wanted this pull for results pulling to be a counterpart of the push for
>>> pushing the operations to a NIC. Another idea is a pop/push pair, but they are
>>> more like for operations only, not for results.
>>> Having said that I'm at the point of accepting any name here.
>>
>> I agree that it is hard to choose good naming.
>> Just want to say that polling is not always waiting.
>>
>> poll - check the status of (a device), especially as part of a repeated
>> cycle.
>>
>> Here we're checking status of flow engine requests and yes,
>> finally in a repeated cycle.
>>
> I think the best name should be dequeue since it means that
> the calling app gets back info and also frees space in the queue.

Dequeue is bad since it is not a queue because of out-of-order
completions. So, if it is a ring, completion of one request
does not always free space in the ring. Maybe it should not be
treated as a ring.

> My second option is the pull, since again it implies that we are getting back
> something from the queue and not just waiting for an event.

I'll think a bit more about it.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 02/10] ethdev: add flow item/action templates
  2022-02-17 10:44               ` Andrew Rybchenko
@ 2022-02-17 11:11                 ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-17 11:11 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, February 17, 2022 12:45 PM
> Subject: Re: [PATCH v5 02/10] ethdev: add flow item/action templates
> 
> Hi Ori,
> 
> On 2/16/22 17:18, Ori Kam wrote:
> > Hi Andrew,
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Subject: Re: [PATCH v5 02/10] ethdev: add flow item/action templates
> >>
> >> On 2/12/22 01:25, Alexander Kozyrev wrote:
> >>> On Fri, Feb 11, 2022 6:27 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> wrote:
> >>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>>>> Treating every single flow rule as a completely independent and separate
> >>>>> entity negatively impacts the flow rules insertion rate. Oftentimes in an
> >>>>> application, many flow rules share a common structure (the same item mask
> >>>>> and/or action list) so they can be grouped and classified together.
> >>>>> This knowledge may be used as a source of optimization by a PMD/HW.
> >>>>>
> >>>>> The pattern template defines common matching fields (the item mask) without
> >>>>> values. The actions template holds a list of action types that will be used
> >>>>> together in the same rule. The specific values for items and actions will
> >>>>> be given only during the rule creation.
> >>>>>
> >>>>> A table combines pattern and actions templates along with shared flow rule
> >>>>> attributes (group ID, priority and traffic direction). This way a PMD/HW
> >>>>> can prepare all the resources needed for efficient flow rules creation in
> >>>>> the datapath. To avoid any hiccups due to memory reallocation, the maximum
> >>>>> number of flow rules is defined at the table creation time.
> >>>>>
> >>>>> The flow rule creation is done by selecting a table, a pattern template
> >>>>> and an actions template (which are bound to the table), and setting unique
> >>>>> values for the items and actions.
> >>>>>
> >>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >>>>> Acked-by: Ori Kam <orika@nvidia.com>
> >>
> >> [snip]
> >>
> >
> > [Snip]
> >
> >>>>> + *
> >>>>> + * The pattern template defines common matching fields without values.
> >>>>> + * For example, matching on 5 tuple TCP flow, the template will be
> >>>>> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> >>>>> + * while values for each rule will be set during the flow rule creation.
> >>>>> + * The number and order of items in the template must be the same
> >>>>> + * at the rule creation.
> >>>>> + *
> >>>>> + * @param port_id
> >>>>> + *   Port identifier of Ethernet device.
> >>>>> + * @param[in] template_attr
> >>>>> + *   Pattern template attributes.
> >>>>> + * @param[in] pattern
> >>>>> + *   Pattern specification (list terminated by the END pattern item).
> >>>>> + *   The spec member of an item is not used unless the end member is used.
> >>>>
> >>>> Interpretation of the pattern may depend on transfer vs non-transfer
> >>>> rule to be used. It is essential information and we should provide it
> >>>> when pattern template is created.
> >>>>
> >>>> The information is provided on table stage, but it is too late.
> >>>
> >>> Why is it too late? The application knows which template goes to which table.
> >>> And the pattern is generic to accommodate anything; the user just needs to put it
> >>> into the right table.
> >>
> >> Because it is more convenient to handle it when an individual
> >> template is processed. Otherwise error reporting will be
> >> complicated since it could be just one template which is
> >> wrong.
> >>
> >> Otherwise, I see no point in having driver callbacks in the
> >> template creation API. I can do nothing here since
> >> I don't have enough context. What's the problem with adding
> >> the context?
> >>
> >
> > The idea is that the same template can be used in different
> > domains (ingress/egress and transfer).
> > Maybe we can add on which domains this template is expected to be used.
> > What do you think?
> 
> I see. IMHO if the application is going to use the same template
> in transfer and non-transfer rules, it is not a problem to
> register it twice. Otherwise, if the PMD needs the information and
> template handling differs a lot in the transfer and non-transfer
> cases, handling should be postponed and should be done at
> table definition. In this case, we cannot provide feedback
> to the application about which template it cannot handle. Even if the
> information is somehow encoded in flow error, encoding must
> be defined and it still could be inconvenient for the
> application to handle it.
> 
> Yes, I agree that it is better to fully specify domain
> including ingress and egress, not just transfer/non-transfer.
> 
So let's add the 3 domains.

> Andrew.

Thanks,
Ori
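The outcome of this sub-thread, pattern template attributes carrying the three domain flags so a PMD can validate a template at creation time rather than at table definition, might look roughly like the sketch below. The field and function names are illustrative only, not the real ethdev structures.

```c
/* Illustrative sketch: pattern template attributes carry the three
 * domain flags (ingress, egress, transfer). A table may then only
 * use templates registered for its own domain, and a PMD that
 * handles transfer templates differently can reject a template at
 * creation time instead of failing later at table definition. */
#include <assert.h>
#include <stdint.h>

struct pattern_template_attr {
	uint32_t ingress:1;
	uint32_t egress:1;
	uint32_t transfer:1;
};

struct table_attr {
	uint32_t transfer:1;  /* transfer vs non-transfer table */
};

/* Check that a template's registered domains cover the table's. */
static int
template_fits_table(const struct pattern_template_attr *t,
		    const struct table_attr *tbl)
{
	if (tbl->transfer)
		return t->transfer;
	return t->ingress || t->egress;
}
```

An application that wants the same match structure in both domains simply registers the template twice, once per domain, as suggested above.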

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-17 11:10               ` Andrew Rybchenko
@ 2022-02-17 11:19                 ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-17 11:19 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, February 17, 2022 1:11 PM
> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> 
> On 2/16/22 18:15, Ori Kam wrote:
> > Hi Andrew,
> >
> > I missed some comments, PSB.
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> >>
> >> On 2/12/22 05:19, Alexander Kozyrev wrote:
> >>> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> >>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
> >>>>> A new, faster, queue-based flow rules management mechanism is needed
> >
> > [Snip]
> >
> >
> >>>>> +
> >>>>> +- Operations can be postponed and pushed to NIC in batches.
> >>>>> +
> >>>>> +- Results pulling must be done on time to avoid queue overflows.
> >>>>
> >>>> polling? (as libc poll() which checks status of file descriptors)
> >>>> it is not pulling the door to open it :)
> >>>
> >>> poll waits for some event on a file descriptor, as its title says.
> >>> And then user has to invoke read() to actually get any info from the fd.
> >>> The point of our function is to return the result immediately, thus pulling.
> >>> We had many names appearing in the thread for these functions.
> >>> As we know, naming variables is the second hardest thing in programming.
> >>> I wanted this pull for results pulling to be a counterpart of the push for
> >>> pushing the operations to a NIC. Another idea is a pop/push pair, but they are
> >>> more like for operations only, not for results.
> >>> Having said that I'm at the point of accepting any name here.
> >>
> >> I agree that it is hard to choose good naming.
> >> Just want to say that polling is not always waiting.
> >>
> >> poll - check the status of (a device), especially as part of a repeated
> >> cycle.
> >>
> >> Here we're checking status of flow engine requests and yes,
> >> finally in a repeated cycle.
> >>
> > I think the best name should be dequeue since it means that
> > the calling app gets back info and also frees space in the queue.
> 
> Dequeue is bad since it is not a queue because of out-of-order
> completions. So, if it is a ring, completion of one request
> does not always free space in the ring. Maybe it should not be
> treated as a ring.
> 
Like I said, I'm O.K. with the pull version.
I have had many discussions about the queue. I was also thinking about it the
way you do, that saying queue implies it is ordered, but that is not true by definition:
you can have an unordered queue (for example, priority queues), or just
your everyday queue in the store that at the end splits into counters,
where each customer can finish before the one he was behind in the queue.

An important thing to notice: this function does free space in the
queue.

> > My second option is the pull, since again it implies that we are getting back
> > something from the queue and not just waiting for event.
> 
> I'll think a bit more about it.

Thanks, 

Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-17 11:08                 ` Ori Kam
@ 2022-02-17 14:16                   ` Ori Kam
  2022-02-17 14:34                     ` Thomas Monjalon
  0 siblings, 1 reply; 220+ messages in thread
From: Ori Kam @ 2022-02-17 14:16 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Ori Kam
> Sent: Thursday, February 17, 2022 1:09 PM
> Subject: RE: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> 
> Hi Andrew,
> 
> > -----Original Message-----
> > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Sent: Thursday, February 17, 2022 12:53 PM
> > Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> >
> > Hi Ori,
> >
> > On 2/16/22 17:53, Ori Kam wrote:
> > > Hi Andrew,
> > >
> > >> -----Original Message-----
> > >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > >> Sent: Wednesday, February 16, 2022 3:34 PM
> > >> Subject: Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
> > >>
> > >> On 2/12/22 05:19, Alexander Kozyrev wrote:
> > >>> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> > >>>> On 2/11/22 05:26, Alexander Kozyrev wrote:
> > >>>>> A new, faster, queue-based flow rules management mechanism is needed
> > >>>> for
> > >>>>> applications offloading rules inside the datapath. This asynchronous
> > >>>>> and lockless mechanism frees the CPU for further packet processing and
> > >>>>> reduces the performance impact of the flow rules creation/destruction
> > >>>>> on the datapath. Note that queues are not thread-safe and the queue
> > >>>>> should be accessed from the same thread for all queue operations.
> > >>>>> It is the responsibility of the app to sync the queue functions in case
> > >>>>> of multi-threaded access to the same queue.
> > >>>>>
> > >>>>> The rte_flow_q_flow_create() function enqueues a flow creation to the
> > >>>>> requested queue. It benefits from already configured resources and sets
> > >>>>> unique values on top of item and action templates. A flow rule is enqueued
> > >>>>> on the specified flow queue and offloaded asynchronously to the
> > >>>> hardware.
> > >>>>> The function returns immediately to spare CPU for further packet
> > >>>>> processing. The application must invoke the rte_flow_q_pull() function
> > >>>>> to complete the flow rule operation offloading, to clear the queue, and to
> > >>>>> receive the operation status. The rte_flow_q_flow_destroy() function
> > >>>>> enqueues a flow destruction to the requested queue.
> > >>>>>
> > >>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > >>>>> Acked-by: Ori Kam <orika@nvidia.com>
> > >>
> > >> [snip]
> > >>
> > >>>>> +
> > >>>>> +- Available operation types: rule creation, rule destruction,
> > >>>>> +  indirect rule creation, indirect rule destruction, indirect rule update.
> > >>>>> +
> > >>>>> +- Operations may be reordered within a queue.
> > >>>>
> > >>>> Do we want to have barriers?
> > >>>> E.g. create rule, destroy the same rule -> reorder -> destroy fails, rule
> > >>>> lives forever.
> > >>>
> > >>> The API design is crafted with throughput as the main goal in mind.
> > >>> We allow the user to enforce any ordering outside these functions.
> > >>> Another point is that not all PMDs/NICs will have this out-of-order execution.
> > >>
> > >> Throughput is nice, but there are more important requirements
> > >> which must be satisfied before talking about performance.
> > >> Could you explain to me what I should do, based on which
> > >> information from the NIC, in order to solve the above problem?
> > >>
> > >
> > > The idea is that if the application has a dependency between rules/rule operations,
> > > it should wait for the completion of an operation before sending the dependent one.
> > > In the example you provided above, according to the documentation the application
> > > should wait for the completion of the flow creation before destroying it.
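[Editor's note] The dependency rule stated above can be sketched as a toy model: the application must pull the completion of a rule-create before enqueueing the destroy of the same rule, because operations may be reordered within a queue. All names below are hypothetical stand-ins, not the real rte_flow async API.

```c
#include <assert.h>
#include <stddef.h>

#define Q_DEPTH 8

/* Completion ring: one slot per in-flight operation's user_data. */
static void *completions[Q_DEPTH];
static unsigned int head, tail;

/* Enqueue a create; its completion will carry back user_data. */
static void q_flow_create(void *user_data)
{
	completions[tail % Q_DEPTH] = user_data;
	tail++;
}

/* Pull one completion; NULL when nothing has completed yet. */
static void *q_pull_one(void)
{
	void *res;

	if (head == tail)
		return NULL;
	res = completions[head % Q_DEPTH];
	head++;
	return res;
}

/* Destroy is only safe once the create completion was observed;
 * otherwise it could be reordered ahead of the in-flight create. */
static int q_flow_destroy(int create_completed)
{
	return create_completed ? 0 : -1;
}
```

The point is purely the ordering contract: the app, not the PMD, serializes dependent operations by observing completions.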
> >
> > I see, thanks. Maybe I did not read the documentation attentively enough.
> > I'll reread it on the next version review cycle.
> >
> > >>>>> +
> > >>>>> +- Operations can be postponed and pushed to NIC in batches.
> > >>>>> +
> > >>>>> +- Results pulling must be done on time to avoid queue overflows.
> > >>>>
> > >>>> polling? (as libc poll() which checks status of file descriptors)
> > >>>> it is not pulling the door to open it :)
> > >>>
> > >>> poll waits for some event on a file descriptor, as its title says.
> > >>> And then the user has to invoke read() to actually get any info from the fd.
> > >>> The point of our function is to return the result immediately, thus pulling.
> > >>> We had many names appearing in the thread for these functions.
> > >>> As we know, naming variables is the second hardest thing in programming.
> > >>> I wanted this pull for results pulling to be a counterpart of the push for
> > >>> pushing the operations to a NIC. Another idea is a pop/push pair, but those
> > >>> are more suited to operations only, not to results.
> > >>> Having said that, I'm at the point of accepting any name here.
> > >>
> > >> I agree that it is hard to choose good naming.
> > >> Just want to say that polling is not always waiting.
> > >>
> > >> poll - check the status of (a device), especially as part of a repeated
> > >> cycle.
> > >>
> > >> Here we're checking status of flow engine requests and yes,
> > >> finally in a repeated cycle.
> > >>
> > >> [snip]
> > >>
> > >>>>> +/**
> > >>>>> + * @warning
> > >>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
> > >>>>> + *
> > >>>>> + * Queue operation attributes.
> > >>>>> + */
> > >>>>> +struct rte_flow_q_ops_attr {
> > >>>>> +	/**
> > >>>>> +	 * The user data that will be returned on the completion events.
> > >>>>> +	 */
> > >>>>> +	void *user_data;
> > >>>>
> > >>>> IMHO it must not be hidden in attrs. It is key information
> > >>>> which is used to understand the operation result. It should
> > >>>> be passed separately.
> > >>>
> > >>> Maybe, on the other hand it is optional and may not be needed by an application.
> > >>
> > >> I don't understand how that is possible. Without it the application
> > >> doesn't know the fate of its requests.
> > >>
> > > IMHO, since user_data should be in all related operations' API
> > > along with the attr, splitting out the user_data will just add an extra
> > > parameter to each function call. Since we have a number of functions and
> > > will add more in the future, I think it is best to keep it in this location.
> >
> > My problem with hiding user_data inside attr is that
> > 'user_data' is not an auxiliary attribute defining extra
> > properties of the request. It is key information.
> > Maybe attr is not an ideal name for such a grouping
> > of parameters. Unfortunately I have no better ideas right now.
> >
> I understand your point. If you don't have objections, let's keep the current one
> and modify it if needed.
> Is that OK?
> 

Thinking about it again,
let's move it to a dedicated parameter.

Ori
> > Andrew.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations
  2022-02-17 14:16                   ` Ori Kam
@ 2022-02-17 14:34                     ` Thomas Monjalon
  0 siblings, 0 replies; 220+ messages in thread
From: Thomas Monjalon @ 2022-02-17 14:34 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev, Ori Kam
  Cc: ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

17/02/2022 15:16, Ori Kam:
> From: Ori Kam
> > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > > On 2/16/22 17:53, Ori Kam wrote:
> > > > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > > >> On 2/12/22 05:19, Alexander Kozyrev wrote:
> > > >>> On Fri, Feb 11, 2022 7:42 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> > > >>>>> +/**
> > > >>>>> + * @warning
> > > >>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
> > > >>>>> + *
> > > >>>>> + * Queue operation attributes.
> > > >>>>> + */
> > > >>>>> +struct rte_flow_q_ops_attr {
> > > >>>>> +	/**
> > > >>>>> +	 * The user data that will be returned on the completion events.
> > > >>>>> +	 */
> > > >>>>> +	void *user_data;
> > > >>>>
> > > >>>> IMHO it must not be hidden in attrs. It is key information
> > > >>>> which is used to understand the operation result. It should
> > > >>>> be passed separately.
> > > >>>
> > > >>> Maybe, on the other hand it is optional and may not be needed by an application.
> > > >>
> > > >> I don't understand how that is possible. Without it the application
> > > >> doesn't know the fate of its requests.
> > > >>
> > > > IMHO, since user_data should be in all related operations' API
> > > > along with the attr, splitting out the user_data will just add an extra
> > > > parameter to each function call. Since we have a number of functions and
> > > > will add more in the future, I think it is best to keep it in this location.
> > >
> > > My problem with hiding user_data inside attr is that
> > > 'user_data' is not an auxiliary attribute defining extra
> > > properties of the request. It is key information.
> > > Maybe attr is not an ideal name for such a grouping
> > > of parameters. Unfortunately I have no better ideas right now.
> > >
> > I understand your point. If you don't have objections, let's keep the current one
> > and modify it if needed.
> > Is that OK?
> 
> Thinking about it again,
> let's move it to a dedicated parameter.

I'm OK with the decision of moving user_data to a function parameter.



^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 00/10] ethdev: datapath-focused flow rules management
  2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                         ` (9 preceding siblings ...)
  2022-02-12  4:19       ` [PATCH v6 10/10] app/testpmd: add async indirect actions creation/destruction Alexander Kozyrev
@ 2022-02-19  4:11       ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
                           ` (11 more replies)
  10 siblings, 12 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Three major changes to the generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
The application may give hints about what types of resources are needed.
Introduce a configuration routine to prepare all the needed resources
inside a PMD/HW at the init stage, before any flow rules are created.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.
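[Editor's note] The enqueue/push/pull flow described in item 3 can be sketched as a toy model: operations are buffered (postponed), pushed to the NIC in one batch via a doorbell, and their results pulled later. Counters stand in for real queues; the names are illustrative only, not the rte_flow async API.

```c
#include <assert.h>

static int nb_postponed; /* enqueued but not yet pushed to HW */
static int nb_in_flight; /* pushed, completion not yet pulled  */

/* Enqueue one postponed operation; no doorbell, no blocking. */
static void q_enqueue_postponed(void)
{
	nb_postponed++;
}

/* Doorbell: submit all postponed operations to the NIC in one batch. */
static void q_push(void)
{
	nb_in_flight += nb_postponed;
	nb_postponed = 0;
}

/* Pull up to max_res completed results; returns how many were pulled.
 * Results pulling must be done on time to avoid queue overflows. */
static int q_pull(int max_res)
{
	int n = nb_in_flight < max_res ? nb_in_flight : max_res;

	nb_in_flight -= n;
	return n;
}
```

A typical datapath loop enqueues a burst of rule operations, pushes once, keeps processing packets, and pulls completions on a later iteration.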

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

---
v7:
- added sanity checks and device state validation
- added flow engine state validation
- added ingress/egress/transfer attributes to templates
- moved user_data to a parameter list
- renamed asynchronous functions from "_q_" to "_async_"
- created a separate commit for indirect actions

v6: addressed more review comments
- fixed typos
- rewrote code snippets
- add a way to get queue size
- renamed port/queue attributes parameters

v5: changed titles for testpmd commits

v4: 
- removed structures versioning
- introduced new rte_flow_port_info structure for rte_flow_info_get API
- renamed rte_flow_table_create to rte_flow_template_table_create

v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread

Alexander Kozyrev (11):
  ethdev: introduce flow engine configuration
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  ethdev: bring in async indirect actions operations
  app/testpmd: add flow engine configuration
  app/testpmd: add flow template management
  app/testpmd: add flow table management
  app/testpmd: add async flow create/destroy operations
  app/testpmd: add flow queue push operation
  app/testpmd: add flow queue pull operation
  app/testpmd: add async indirect actions operations

 app/test-pmd/cmdline_flow.c                   | 1726 ++++++++++++++++-
 app/test-pmd/config.c                         |  778 ++++++++
 app/test-pmd/testpmd.h                        |   67 +
 .../prog_guide/img/rte_flow_async_init.svg    |  205 ++
 .../prog_guide/img/rte_flow_async_usage.svg   |  354 ++++
 doc/guides/prog_guide/rte_flow.rst            |  345 ++++
 doc/guides/rel_notes/release_22_03.rst        |   27 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  383 +++-
 lib/ethdev/ethdev_driver.h                    |    7 +-
 lib/ethdev/rte_flow.c                         |  500 +++++
 lib/ethdev/rte_flow.h                         |  766 ++++++++
 lib/ethdev/rte_flow_driver.h                  |  108 ++
 lib/ethdev/version.map                        |   15 +
 13 files changed, 5185 insertions(+), 96 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 01/11] ethdev: introduce flow engine configuration
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 02/11] ethdev: add flow item/action templates Alexander Kozyrev
                           ` (10 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, a PMD may use some hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows pre-allocating all the needed resources beforehand.
These resources can be used at a later stage without costly allocations.
Every PMD may use only a subset of the hints and ignore unused ones, or
fail in case the requested configuration is not supported.

The rte_flow_info_get() function is available to retrieve information about
the supported pre-configurable resources. Both these functions must be called
before any other usage of the flow API engine.
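[Editor's note] The contract above can be sketched as a minimal check: the resource numbers requested via the configure step must not exceed what the info query reported, and an unsupported request is rejected. The structure and function names below are simplified stand-ins, not the real rte_flow_port_info/rte_flow_port_attr definitions.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Advertised capability (as an info-get call would report it). */
struct port_info {
	uint32_t max_nb_counters;
};

/* Requested pre-allocation (as a configure call would receive it). */
struct port_attr {
	uint32_t nb_counters;
};

/* Returns 0 on success, -EINVAL when the request exceeds the advertised
 * capability, mirroring how a PMD may reject an unsupported config. */
static int flow_configure_check(const struct port_info *info,
				const struct port_attr *attr)
{
	if (attr->nb_counters > info->max_nb_counters)
		return -EINVAL;
	return 0;
}
```

An application would query first, clamp or validate its request against the reported maximums, then configure, all before creating any flow rules.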

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  36 ++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/ethdev_driver.h             |   7 +-
 lib/ethdev/rte_flow.c                  |  69 +++++++++++++++
 lib/ethdev/rte_flow.h                  | 111 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 7 files changed, 240 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 0e475019a6..c89161faef 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3606,6 +3606,42 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These parameters may be used by the PMD to preallocate resources and configure the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API engine configuration and allocates
+requested resources beforehand to avoid costly allocations later.
+The expected number of resources in an application allows the PMD to prepare
+and optimize the NIC hardware configuration and memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Information about the number of available resources can be retrieved via
+``rte_flow_info_get()`` API.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index ff3095d742..eceab07576 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -99,6 +99,12 @@ New Features
   The information of these properties is important for debug.
   As the information is private, a dump function is introduced.
 
+* **Added functions to configure the Flow API engine**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve available resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 6d697a879a..06f0896e1e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -138,7 +138,12 @@ struct rte_eth_dev_data {
 		 * Indicates whether the device is configured:
 		 * CONFIGURED(1) / NOT CONFIGURED(0)
 		 */
-		dev_configured : 1;
+		dev_configured:1,
+		/**
+		 * Indicates whether the flow engine is configured:
+		 * CONFIGURED(1) / NOT CONFIGURED(0)
+		 */
+		flow_configured:1;
 
 	/** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
 	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7f93900bc8..ffd48e40d5 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1392,3 +1392,72 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (port_info == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	dev->data->flow_configured = 0;
+	if (port_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" attributes is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_started != 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" already started.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		ret = ops->configure(dev, port_attr, error);
+		if (ret == 0)
+			dev->data->flow_configured = 1;
+		return flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 765beb3e52..cdb7b2be68 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -43,6 +43,9 @@
 extern "C" {
 #endif
 
+#define RTE_FLOW_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__)
+
 /**
  * Flow rule attributes.
  *
@@ -4872,6 +4875,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine resources.
+ * The zero value means a resource is not supported.
+ *
+ */
+struct rte_flow_port_info {
+	/**
+	 * Maximum number of counters.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t max_nb_counters;
+	/**
+	 * Maximum number of aging objects.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t max_nb_aging_objects;
+	/**
+	 * Maximum number of traffic meters.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get information about flow engine resources.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the resources information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine resources settings.
+ * The zero value means on demand resource allocations only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging objects to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_objects;
+	/**
+	 * Number of traffic meters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings. The port, however, may reject the changes.
+ *
+ * Parameters in configuration attributes must not exceed
+ * the numbers of resources returned by the rte_flow_info_get() API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index d5cc56a560..0d849c153f 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -264,6 +264,8 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_capability_get;
 	rte_eth_ip_reassembly_conf_get;
 	rte_eth_ip_reassembly_conf_set;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 02/11] ethdev: add flow item/action templates
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                           ` (9 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.
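[Editor's note] The fixed-capacity table behavior described above can be sketched as a toy model: the maximum number of rules is set at table creation time, and rule creation beyond it is rejected so the PMD never has to reallocate in the datapath. All names are illustrative only, not the real rte_flow template table API.

```c
#include <assert.h>

/* A table whose rule capacity (nb_flows) is fixed at creation time. */
struct toy_table {
	int used;
	int capacity;
};

/* Returns a pre-allocated rule slot index, or -1 when the table is
 * full; the application should create another table in that case. */
static int toy_table_rule_create(struct toy_table *t)
{
	if (t->used >= t->capacity)
		return -1;
	return t->used++;
}
```

The design choice is the same as in the patch: bounding the table up front trades flexibility for a reallocation-free, predictable insertion path.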

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 135 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   9 +
 lib/ethdev/rte_flow.c                  | 252 +++++++++++++++++++++++
 lib/ethdev/rte_flow.h                  | 274 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 713 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c89161faef..6cdfea09be 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3642,6 +3642,141 @@ Information about the number of available resources can be retrieved via
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on, spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	const struct rte_flow_pattern_template_attr attr = {.ingress = 1};
+	struct rte_flow_item_eth eth_m = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	};
+	struct rte_flow_item pattern[] = {
+		[0] = {.type = RTE_FLOW_ITEM_TYPE_ETH,
+		       .mask = &eth_m},
+		[1] = {.type = RTE_FLOW_ITEM_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &attr, pattern, &err);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_actions_template_attr attr = {.ingress = 1};
+	struct rte_flow_action act[] = {
+		/* Mark ID is 4 for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action msk[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_actions_template *actions_template =
+		rte_flow_actions_template_create(port, &attr, act, msk, &err);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid any hiccups due to memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+Application may create another table to accommodate more rules in this case.
+
+.. code-block:: c
+
+	struct rte_flow_template_table *
+	rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+A table can be created only after the Flow Rules management is configured
+and pattern and actions templates are created.
+
+.. code-block:: c
+
+	struct rte_flow_template_table_attr table_attr = {
+		.flow_attr.ingress = 1,
+		.nb_flows = 10000,
+	};
+	uint8_t nb_pattern_templ = 1;
+	struct rte_flow_pattern_template *pattern_templates[nb_pattern_templ];
+	pattern_templates[0] = pattern_template;
+	uint8_t nb_actions_templ = 1;
+	struct rte_flow_actions_template *actions_templates[nb_actions_templ];
+	actions_templates[0] = actions_template;
+	struct rte_flow_error error;
+
+	struct rte_flow_template_table *table =
+		rte_flow_template_table_create(port, &table_attr,
+				pattern_templates, nb_pattern_templ,
+				actions_templates, nb_actions_templ,
+				&error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index eceab07576..3a1c2d2d4d 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -105,6 +105,15 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve available resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``,
+    ``rte_flow_pattern_template_destroy``,
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index ffd48e40d5..e9f684eedb 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1461,3 +1461,255 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_STATE,
+				NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+							pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(pattern_template == NULL))
+		return 0;
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (masks == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" masks is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(actions_template == NULL))
+		return 0;
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev,
+							      actions_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+			const struct rte_flow_template_table_attr *table_attr,
+			struct rte_flow_pattern_template *pattern_templates[],
+			uint8_t nb_pattern_templates,
+			struct rte_flow_actions_template *actions_templates[],
+			uint8_t nb_actions_templates,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_template_table *table;
+
+	if (table_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" table attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->template_table_create)) {
+		table = ops->template_table_create(dev, table_attr,
+					pattern_templates, nb_pattern_templates,
+					actions_templates, nb_actions_templates,
+					error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+				struct rte_flow_template_table *template_table,
+				struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(template_table == NULL))
+		return 0;
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->template_table_destroy)) {
+		return flow_err(port_id,
+				ops->template_table_destroy(dev,
+							    template_table,
+							    error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index cdb7b2be68..776e8ccc11 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4983,6 +4983,280 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Relaxed matching policy.
+	 * - PMD may match only on items with mask member set and skip
+	 * matching on protocol layers specified without any masks.
+	 * - If not set, PMD will match on protocol layers
+	 * specified without any masks as well.
+	 * - Packet data must be stacked in the same order as the
+	 * protocol layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+	/** Pattern valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Pattern valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Pattern valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, to match on a 5-tuple TCP flow, the template is
+ * eth (empty) + IPv4 (source + destination) + TCP (source port + destination port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of items in the template must be the same
+ * at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec member of an item is not used unless the end member is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+__extension__
+struct rte_flow_actions_template_attr {
+	/** Action valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Action valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Action valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, the template to change TCP ports is
+ * TCP (source port + destination port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of actions in the template must be the same
+ * at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action members are constant.
+ *   A mask has the same format as the corresponding action.
+ *   If an action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions becomes part
+ *   of the template and is used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   If indirect actions are present in @p actions,
+ *   the actual action type must be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of a template table.
+ * This handle can be used to manage the created template table.
+ */
+struct rte_flow_template_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_template_table_attr {
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow template table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Template table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow template table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+		struct rte_flow_template_table *template_table,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 7c29930d0f..2d96db1dc7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_create() */
+	struct rte_flow_template_table *(*template_table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_destroy() */
+	int (*template_table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0d849c153f..62ff791261 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -266,6 +266,12 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_conf_set;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 03/11] ethdev: bring in async queue-based flow rules operations
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 02/11] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
                           ` (8 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the responsibility of the app to sync the queue functions in case
of multi-threaded access to the same queue.

The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.
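
A minimal usage sketch (parameter names are illustrative; see the
patch for the exact prototypes):

	/* Enqueue a rule creation; returns immediately with a handle. */
	flow = rte_flow_async_create(port_id, queue_id, &op_attr,
				     table, pattern, pattern_template_index,
				     actions, actions_template_index,
				     user_data, &error);
	/* ... continue processing packets ... */
	/* Later, poll the queue for completed operations and statuses. */
	ret = rte_flow_pull(port_id, queue_id, results, n_results, &error);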

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 .../prog_guide/img/rte_flow_async_init.svg    | 205 ++++++++++
 .../prog_guide/img/rte_flow_async_usage.svg   | 354 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 124 ++++++
 doc/guides/rel_notes/release_22_03.rst        |   7 +
 lib/ethdev/rte_flow.c                         | 110 +++++-
 lib/ethdev/rte_flow.h                         | 251 +++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  35 ++
 lib/ethdev/version.map                        |   4 +
 8 files changed, 1087 insertions(+), 3 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_async_init.svg b/doc/guides/prog_guide/img/rte_flow_async_init.svg
new file mode 100644
index 0000000000..f66e9c73d7
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_init.svg
@@ -0,0 +1,205 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485"
+   height="535"
+   overflow="hidden"
+   version="1.1"
+   id="svg61"
+   sodipodi:docname="rte_flow_async_init.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview63"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.517757"
+     inkscape:cx="242.79249"
+     inkscape:cy="267.17057"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g59" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="485"
+         height="535"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g59">
+    <rect
+       x="0"
+       y="0"
+       width="485"
+       height="535"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="0.500053"
+       y="79.5001"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(121.6 116)"
+       id="text13">
+         rte_eth_dev_configure
+         <tspan
+   font-size="24"
+   x="224.007"
+   y="0"
+   id="tspan11">()</tspan></text>
+    <rect
+       x="0.500053"
+       y="158.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect15" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(140.273 195)"
+       id="text17">
+         rte_flow_configure()
+      </text>
+    <rect
+       x="0.500053"
+       y="236.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text21"
+       x="63.425903"
+       y="274">rte_flow_pattern_template_create()</text>
+    <rect
+       x="0.500053"
+       y="316.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect23" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text27"
+       x="69.379204"
+       y="353">rte_flow_actions_template_create()</text>
+    <rect
+       x="0.500053"
+       y="0.500053"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect29" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       transform="translate(177.233,37)"
+       id="text33">rte_eal_init()</text>
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 60)"
+       id="path35" />
+    <path
+       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
+       transform="matrix(-1 0 0 1 241 138)"
+       id="path37" />
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 217)"
+       id="path39" />
+    <rect
+       x="0.500053"
+       y="395.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect41" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text47"
+       x="76.988998"
+       y="432">rte_flow_template_table_create()</text>
+    <path
+       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
+       transform="matrix(-1 0 0 1 241 296)"
+       id="path49" />
+    <path
+       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
+       id="path51" />
+    <rect
+       x="0.500053"
+       y="473.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect53" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text55"
+       x="149.30299"
+       y="511">rte_eth_dev_start()</text>
+    <path
+       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
+       id="path57" />
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/img/rte_flow_async_usage.svg b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
new file mode 100644
index 0000000000..bb978bca1e
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
@@ -0,0 +1,354 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880"
+   height="610"
+   overflow="hidden"
+   version="1.1"
+   id="svg103"
+   sodipodi:docname="rte_flow_async_usage.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview105"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.3311475"
+     inkscape:cx="439.84607"
+     inkscape:cy="305.37563"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="-9"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g101" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="880"
+         height="610"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g101">
+    <rect
+       x="0"
+       y="0"
+       width="880"
+       height="610"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="333.5"
+       y="0.500053"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Consolas, Consolas_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       transform="translate(357.196,29)"
+       id="text11">rte_eth_rx_burst()</text>
+    <rect
+       x="333.5"
+       y="63.5001"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect13" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(394.666 91)"
+       id="text17">analyze <tspan
+   font-size="19"
+   x="60.9267"
+   y="0"
+   id="tspan15">packet </tspan></text>
+    <rect
+       x="587.84119"
+       y="279.47534"
+       width="200.65393"
+       height="46.049305"
+       stroke="#000000"
+       stroke-width="1.20888"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text21"
+       x="595.42902"
+       y="308">rte_flow_async_create()</text>
+    <path
+       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path23" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(430.069 378)"
+       id="text27">more <tspan
+   font-size="19"
+   x="-12.94"
+   y="23"
+   id="tspan25">packets?</tspan></text>
+    <path
+       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
+       id="path29" />
+    <path
+       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
+       id="path31" />
+    <path
+       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
+       id="path33" />
+    <path
+       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
+       id="path35" />
+    <path
+       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
+       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
+       id="path37" />
+    <path
+       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path39" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(417.576 155)"
+       id="text43">add new <tspan
+   font-size="19"
+   x="13.2867"
+   y="23"
+   id="tspan41">rule?</tspan></text>
+    <path
+       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
+       id="path45" />
+    <rect
+       x="602.5"
+       y="127.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect47" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(611.34 148)"
+       id="text49">yes</text>
+    <rect
+       x="254.5"
+       y="126.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect51" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(267.182 147)"
+       id="text53">no</text>
+    <path
+       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
+       transform="matrix(1 0 0 -1 567.5 383.495)"
+       id="path55" />
+    <path
+       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path57" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(159.155 208)"
+       id="text61">destroy the <tspan
+   font-size="19"
+   x="24.0333"
+   y="23"
+   id="tspan59">rule?</tspan></text>
+    <path
+       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
+       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
+       id="path63" />
+    <rect
+       x="92.600937"
+       y="280.48242"
+       width="210.14578"
+       height="45.035149"
+       stroke="#000000"
+       stroke-width="1.24464"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect65" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text67"
+       x="100.2282"
+       y="308">rte_flow_async_destroy()</text>
+    <path
+       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
+       transform="matrix(-1 0 0 1 319.915 213.5)"
+       id="path69" />
+    <path
+       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
+       id="path71" />
+    <path
+       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
+       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
+       id="path73" />
+    <rect
+       x="357.15436"
+       y="540.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect75" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text77"
+       x="393.08301"
+       y="569">rte_flow_pull()</text>
+    <rect
+       x="357.15436"
+       y="462.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect79" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text81"
+       x="389.19"
+       y="491">rte_flow_push()</text>
+    <path
+       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
+       id="path83" />
+    <rect
+       x="0.500053"
+       y="287.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect85" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(12.8617 308)"
+       id="text87">no</text>
+    <rect
+       x="357.5"
+       y="223.5"
+       width="47"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect89" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(367.001 244)"
+       id="text91">yes</text>
+    <rect
+       x="469.5"
+       y="421.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect93" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(481.872 442)"
+       id="text95">no</text>
+    <rect
+       x="832.5"
+       y="223.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect97" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(841.777 244)"
+       id="text99">yes</text>
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 6cdfea09be..436845717f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3624,12 +3624,16 @@ Expected number of resources in an application allows PMD to prepare
 and optimize NIC hardware configuration and memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via the
+queue-based API, see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
 
 Information about the number of available resources can be retrieved via
@@ -3640,6 +3644,7 @@ Information about the number of available resources can be retrieved via
    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+                     struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);
 
 Flow templates
@@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
 				&actions_templates, nb_actions_templ,
 				&error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the application's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of flow queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to NIC in batches.
+
+- Results must be pulled on time to avoid queue overflows.
+
+- User data is returned as part of the result to identify an operation.
+
+- A flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+- The application must wait for the creation operation result before enqueueing
+  the deletion operation to make sure the creation is processed by the NIC.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_async_init:
+
+.. figure:: img/rte_flow_async_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_async_usage:
+
+.. figure:: img/rte_flow_async_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_async_create(uint16_t port_id,
+			      uint32_t queue_id,
+			      const struct rte_flow_q_ops_attr *q_ops_attr,
+			      struct rte_flow_template_table *template_table,
+			      const struct rte_flow_item pattern[],
+			      uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      uint8_t actions_template_index,
+			      void *user_data,
+			      struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_destroy(uint16_t port_id,
+			       uint32_t queue_id,
+			       const struct rte_flow_q_ops_attr *q_ops_attr,
+			       struct rte_flow *flow,
+			       void *user_data,
+			       struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_push(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_error *error);
+
+The queue operation attributes include a postpone attribute.
+When it is set, multiple operations can be batched together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling results of asynchronous operations.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operations statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_pull(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_q_op_res res[],
+		      uint16_t n_res,
+		      struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation has completed.
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 3a1c2d2d4d..e0549a2da3 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -114,6 +114,13 @@ New Features
 	``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
 
+* **Added functions for asynchronous flow rules creation/destruction.**
+
+  * ethdev: Added ``rte_flow_async_create`` and ``rte_flow_async_destroy`` API
+    to enqueue flow creation/destruction operations asynchronously as well as
+    ``rte_flow_pull`` to poll and retrieve results of these operations and
+    ``rte_flow_push`` to push all the in-flight operations to the NIC.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index e9f684eedb..4e7b202522 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->info_get)) {
 		return flow_err(port_id,
-				ops->info_get(dev, port_info, error),
+				ops->info_get(dev, port_info, queue_info, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1426,6 +1427,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1433,7 +1436,7 @@ rte_flow_configure(uint16_t port_id,
 	int ret;
 
 	dev->data->flow_configured = 0;
-	if (port_attr == NULL) {
+	if (port_attr == NULL || queue_attr == NULL) {
 		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
 		return -EINVAL;
 	}
@@ -1452,7 +1455,7 @@ rte_flow_configure(uint16_t port_id,
 	if (unlikely(!ops))
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
-		ret = ops->configure(dev, port_attr, error);
+		ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
 		if (ret == 0)
 			dev->data->flow_configured = 1;
 		return flow_err(port_id, ret, error);
@@ -1713,3 +1716,104 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_q_ops_attr *q_ops_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->async_create)) {
+		flow = ops->async_create(dev, queue_id,
+					 q_ops_attr, template_table,
+					 pattern, pattern_template_index,
+					 actions, actions_template_index,
+					 user_data, error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->async_destroy)) {
+		return flow_err(port_id,
+				ops->async_destroy(dev, queue_id,
+						   q_ops_attr, flow,
+						   user_data, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->push)) {
+		return flow_err(port_id,
+				ops->push(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_q_op_res res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pull)) {
+		ret = ops->pull(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 776e8ccc11..9e71a576f6 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
+	 * Maximum number of queues for asynchronous operations.
+	 */
+	uint32_t max_nb_queues;
 	/**
 	 * Maximum number of counters.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4901,6 +4905,21 @@ struct rte_flow_port_info {
 	uint32_t max_nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine asynchronous queues.
+ * The value is only valid if @p port_info.max_nb_queues is not zero.
+ *
+ */
+struct rte_flow_queue_info {
+	/**
+	 * Maximum number of operations a queue can hold.
+	 */
+	uint32_t max_size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4912,6 +4931,9 @@ struct rte_flow_port_info {
  * @param[out] port_info
  *   A pointer to a structure of type *rte_flow_port_info*
  *   to be filled with the resources information of the port.
+ * @param[out] queue_info
+ *   A pointer to a structure of type *rte_flow_queue_info*
+ *   to be filled with the asynchronous queues information.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4923,6 +4945,7 @@ __rte_experimental
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error);
 
 /**
@@ -4951,6 +4974,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine asynchronous queues settings.
+ * A zero value means that a default value is picked by the PMD.
+ *
+ */
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4970,6 +5008,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   The number of elements must match @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4981,6 +5024,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5257,6 +5302,212 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+__extension__
+struct rte_flow_q_ops_attr {
+	 /**
+	  * When set, the requested action will not be sent to the HW immediately.
+	  * The application must call rte_flow_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle doesn't mean that the rule has been inserted in HW yet.
+ *   Only the completion result indicates success or failure.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_q_ops_attr *q_ops_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were inserted with the postpone flag set.
+ * This function can be used to notify the HW about a batch of rules prepared
+ * by the SW to reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull completed flow operations from a queue.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_q_op_res res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..332783cf78 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,11 +156,14 @@ struct rte_flow_ops {
 	int (*info_get)
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_queue_info *queue_info,
 		 struct rte_flow_error *err);
 	/** See rte_flow_configure() */
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +202,38 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_async_create() */
+	struct rte_flow *(*async_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_destroy() */
+	int (*async_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_push() */
+	int (*push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_pull() */
+	int (*pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 62ff791261..13c1a22118 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -272,6 +272,10 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_async_create;
+	rte_flow_async_destroy;
+	rte_flow_push;
+	rte_flow_pull;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 04/11] ethdev: bring in async indirect actions operations
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (2 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
                           ` (7 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The queue-based flow rules management mechanism is suitable
not only for flow rule creation/destruction, but also for
speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions of all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  50 ++++++++++
 doc/guides/rel_notes/release_22_03.rst |   5 +
 lib/ethdev/rte_flow.c                  |  75 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 130 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  26 +++++
 lib/ethdev/version.map                 |   3 +
 6 files changed, 289 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 436845717f..ac5e2046e4 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3861,6 +3861,56 @@ Enqueueing a flow rule destruction operation is similar to simple destruction.
 			       void *user_data,
 			       struct rte_flow_error *error);
 
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later by
+``rte_flow_async_action_handle_destroy()`` even if the action was rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
+
 Push enqueued operations
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index e0549a2da3..34df3557dd 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -121,6 +121,11 @@ New Features
 	``rte_flow_pull`` to poll and retrieve results of these operations and
 	``rte_flow_push`` to push all the in-flight operations to the NIC.
 
+  * ethdev: Added asynchronous API for indirect actions management:
+    ``rte_flow_async_action_handle_create``,
+    ``rte_flow_async_action_handle_destroy`` and
+    ``rte_flow_async_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 4e7b202522..38886edb0b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1817,3 +1817,78 @@ rte_flow_pull(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->async_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->async_action_handle_create(dev, queue_id, q_ops_attr,
+					     indir_action_conf, action, user_data, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->async_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->async_action_handle_destroy(dev, queue_id, q_ops_attr,
+					   action_handle, user_data, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->async_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->async_action_handle_update(dev, queue_id, q_ops_attr,
+					  action_handle, update, user_data, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 9e71a576f6..f85f20abe6 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5508,6 +5508,136 @@ rte_flow_pull(uint16_t port_id,
 	      uint16_t n_res,
 	      struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *action* invalid.
+ *   - (ENOTSUP) if *action* valid but unsupported.
+ *   - (EAGAIN) if *queue* is full
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by *action_handle* was not found.
+ *   - (-EBUSY) if the action pointed to by *action_handle* is still used by
+ *     some rules; rte_errno is also set.
+ *   - (-EAGAIN) if *queue* is full.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the rule.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *action_handle*. It may either have the same type as the immediate action
+ *   used when the handle was created, or be a wrapper structure that contains
+ *   the action configuration to be updated along with bit fields indicating
+ *   which members of the action to update.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by *action_handle* was not found.
+ *   - (-EBUSY) if the action pointed to by *action_handle* is still used by
+ *     some rules; rte_errno is also set.
+ *   - (-EAGAIN) if *queue* is full.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 332783cf78..d660e29c6a 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -234,6 +234,32 @@ struct rte_flow_ops {
 		 struct rte_flow_q_op_res res[],
 		 uint16_t n_res,
 		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_create() */
+	struct rte_flow_action_handle *(*async_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_action_handle_destroy() */
+	int (*async_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 void *user_data,
+		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_update() */
+	int (*async_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 void *user_data,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 13c1a22118..20391ab29e 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -276,6 +276,9 @@ EXPERIMENTAL {
 	rte_flow_async_destroy;
 	rte_flow_push;
 	rte_flow_pull;
+	rte_flow_async_action_handle_create;
+	rte_flow_async_action_handle_destroy;
+	rte_flow_async_action_handle_update;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 05/11] app/testpmd: add flow engine configuration
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (3 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 06/11] app/testpmd: add flow template management Alexander Kozyrev
                           ` (6 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow engine configuration.
Usage example: flow configure 0 queues_number 8 queues_size 256

Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  61 ++++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  61 +++++++++-
 4 files changed, 252 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index c0644d678c..0533a33ca2 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -868,6 +877,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -949,6 +963,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2045,6 +2069,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2270,7 +2297,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2285,6 +2314,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_OBJECTS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging objects",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_objects)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7736,6 +7824,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8964,6 +9079,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index de1ec14bc7..33a85cd7ca 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,67 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_info port_info;
+	struct rte_flow_queue_info queue_info;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	memset(&port_info, 0, sizeof(port_info));
+	memset(&queue_info, 0, sizeof(queue_info));
+	if (rte_flow_info_get(port_id, &port_info, &queue_info, &error))
+		return port_flow_complain(&error);
+	printf("Flow engine resources on port %u:\n"
+	       "Number of queues: %d\n"
+	       "Size of queues: %d\n"
+	       "Number of counters: %d\n"
+	       "Number of aging objects: %d\n"
+	       "Number of meter actions: %d\n",
+	       port_id, port_info.max_nb_queues,
+	       queue_info.max_size,
+	       port_info.max_nb_counters,
+	       port_info.max_nb_aging_objects,
+	       port_info.max_nb_meters);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 9cc248084f..c8f048aeef 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3308,8 +3308,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3391,6 +3403,51 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves information about pre-configurable resources in the
+underlying device, hinting at possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Flow engine resources on port #[...]:
+   Number of queues: #[...]
+   Size of queues: #[...]
+   Number of counters: #[...]
+   Number of aging objects: #[...]
+   Number of meter actions: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the needed resources in the underlying
+device to be used later at flow creation time. Flow queues are allocated as
+well for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 06/11] app/testpmd: add flow template management
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (4 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 07/11] app/testpmd: add flow table management Alexander Kozyrev
                           ` (5 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line
interface for template creation and destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 456 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++
 app/test-pmd/testpmd.h                      |  24 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 101 +++++
 4 files changed, 782 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0533a33ca2..1aa32ea217 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,28 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -882,6 +908,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -890,10 +920,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -973,6 +1006,49 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2072,6 +2148,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2141,6 +2223,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2291,6 +2377,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2299,6 +2399,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2373,6 +2475,148 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute pattern to ingress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute pattern to egress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute pattern to transfer",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute actions to ingress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute actions to egress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute actions to transfer",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2695,7 +2939,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5975,7 +6219,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7851,6 +8097,132 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case PATTERN_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case PATTERN_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case PATTERN_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8820,6 +9192,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9088,6 +9508,38 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port,
+				in->args.vc.pat_templ_id,
+				&((const struct rte_flow_pattern_template_attr) {
+					.relaxed_matching = in->args.vc.attr.reserved,
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port,
+				in->args.vc.act_templ_id,
+				&((const struct rte_flow_actions_template_attr) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions,
+				in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 33a85cd7ca..ecaf4ca03c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2086,6 +2129,166 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_pattern_template_attr *attr,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_actions_template_attr *attr,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						attr, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..ce46d754a1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,17 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_pattern_template_attr *attr,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_actions_template_attr *attr,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c8f048aeef..2e6a23b12a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3344,6 +3344,27 @@ following sections.
        [aging_counters_number {number}]
        [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3448,6 +3468,87 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+from their template ID (as returned by ``flow pattern_template create``),
+this command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+from their template ID (as returned by ``flow actions_template create``),
+this command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v7 07/11] app/testpmd: add flow table management
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (5 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 06/11] app/testpmd: add flow template management Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
                           ` (4 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_template_table API.
Provide the command line interface for flow template
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 171 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 556 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1aa32ea217..5715899c95 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -118,6 +120,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -912,6 +928,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_template_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1049,6 +1077,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2154,6 +2208,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2227,6 +2286,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2391,6 +2452,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2401,6 +2469,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2617,6 +2686,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "template_table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage template tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create template table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy template table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8223,6 +8390,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9240,6 +9520,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9540,6 +9844,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_template_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_template_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ecaf4ca03c..cefbc64c0c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1653,6 +1653,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2289,6 +2332,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] =
+					temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_template_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_template_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Template table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_template_table_destroy(portid_t port_id,
+				 uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_template_table_destroy(port_id,
+							    pt->table,
+							    &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Template table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ce46d754a1..fd02498faf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -916,6 +927,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_template_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2e6a23b12a..f63eb76a3a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3364,6 +3364,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a template table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a template table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3549,6 +3562,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating template table
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying template table
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+by their table ID (as returned by ``flow template_table create``).
+This command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 08/11] app/testpmd: add async flow create/destroy operations
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (6 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 07/11] app/testpmd: add flow table management Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
                           ` (3 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_create/rte_flow_async_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5715899c95..d359127df9 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -120,6 +122,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -918,6 +936,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool postpone; /**< Postpone async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -948,6 +968,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1103,6 +1124,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2213,6 +2246,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2288,6 +2327,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2459,6 +2500,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2481,7 +2529,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2784,6 +2833,84 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TEMPLATE_TABLE] = {
+		.name = "template_table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8503,6 +8630,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TEMPLATE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9544,6 +9776,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9855,6 +10109,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cefbc64c0c..d3b3e6ca5a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2460,6 +2460,172 @@ port_flow_template_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue create flow rule operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_async_create(port_id, queue_id, &ops_attr, pt->table,
+		pattern, pattern_idx, actions, actions_idx, NULL, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue number of destroy flow rules operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMD
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_async_destroy(port_id, queue_id, &op_attr,
+						   pf->flow, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMD
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_pull(port_id, queue_id,
+						    &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index fd02498faf..62e874eaaf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -933,6 +933,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index f63eb76a3a..194b350932 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3384,6 +3384,20 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3708,6 +3722,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_async_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4430,6 +4468,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` enqueues destruction of one or more rules by their
+rule ID (as returned by ``flow queue create``). This command calls
+``rte_flow_async_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 09/11] app/testpmd: add flow queue push operation
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (7 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
                           ` (2 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d359127df9..af36975cdf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -138,6 +139,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2252,6 +2256,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2530,7 +2537,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2911,6 +2919,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8735,6 +8758,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10120,6 +10171,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index d3b3e6ca5a..e3b5e348ab 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2626,6 +2626,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all enqueued operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 62e874eaaf..24a43fd82c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 194b350932..4f1f908d4a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3398,6 +3398,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3616,6 +3620,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 10/11] app/testpmd: add flow queue pull operation
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (8 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-19  4:11         ` [PATCH v7 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operation results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index af36975cdf..d4b72724e6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -142,6 +143,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2259,6 +2263,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2538,7 +2545,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2934,6 +2942,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8786,6 +8809,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10174,6 +10225,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e3b5e348ab..2bd4359bfe 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2469,14 +2469,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2539,16 +2537,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2563,7 +2551,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2599,21 +2586,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_pull(port_id, queue_id,
-						    &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2654,6 +2626,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 24a43fd82c..5ea2408a0b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -941,6 +941,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 4f1f908d4a..5080ddb256 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3402,6 +3402,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3637,6 +3641,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device for flow queue operation
+results and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3767,6 +3788,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4508,6 +4531,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v7 11/11] app/testpmd: add async indirect actions operations
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (9 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
@ 2022-02-19  4:11         ` Alexander Kozyrev
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-19  4:11 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for enqueueing these operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d4b72724e6..b5f1191e55 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -127,6 +127,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -140,6 +141,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1135,6 +1156,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1144,6 +1166,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2260,6 +2312,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2873,6 +2931,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2926,6 +2991,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6501,6 +6650,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queued indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queued indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -10228,6 +10481,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2bd4359bfe..53a848cf84 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2598,6 +2598,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_async_action_handle_create(port_id, queue_id,
+					&attr, conf, action, NULL, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_async_action_handle_destroy(port_id,
+				queue_id, &attr, pia->handle, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_async_action_handle_update(port_id, queue_id, &attr,
+				    action_handle, action, NULL, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5ea2408a0b..31f766c965 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 5080ddb256..1083c6d538 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4792,6 +4792,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` enqueues a creation operation for an
+indirect action. It is bound to
+``rte_flow_async_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+       [postpone {boolean}] action_id {indirect_action_id}
+       ingress | egress | transfer
+       action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4821,6 +4846,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` enqueues an update operation for an
+indirect action. It is bound to ``rte_flow_async_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4844,6 +4888,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` enqueues a destruction operation for
+one or more indirect actions, specified by their indirect action IDs (as
+returned by ``flow queue {port_id} indirect_action {queue_id} create``).
+It is bound to ``rte_flow_async_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 00/10] ethdev: datapath-focused flow rules management
  2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                           ` (10 preceding siblings ...)
  2022-02-19  4:11         ` [PATCH v7 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
@ 2022-02-20  3:43         ` Alexander Kozyrev
  2022-02-20  3:43           ` [PATCH v8 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
                             ` (12 more replies)
  11 siblings, 13 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:43 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Three major changes to a generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
The application may give hints about what types of resources are needed.
Introduce the configuration routine to prepare all the needed resources
inside a PMD/HW before any flow rules are created at the init stage.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

---
v8: fixed documentation indentation

v7:
- added sanity checks and device state validation
- added flow engine state validation
- added ingress/egress/transfer attributes to templates
- moved user_data to a parameter list
- renamed asynchronous functions from "_q_" to "_async_"
- created a separate commit for indirect actions

v6: addressed more review comments
- fixed typos
- rewrote code snippets
- add a way to get queue size
- renamed port/queue attributes parameters

v5: changed titles for testpmd commits

v4: 
- removed structures versioning
- introduced new rte_flow_port_info structure for rte_flow_info_get API
- renamed rte_flow_table_create to rte_flow_template_table_create

v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread

Alexander Kozyrev (11):
  ethdev: introduce flow engine configuration
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  ethdev: bring in async indirect actions operations
  app/testpmd: add flow engine configuration
  app/testpmd: add flow template management
  app/testpmd: add flow table management
  app/testpmd: add async flow create/destroy operations
  app/testpmd: add flow queue push operation
  app/testpmd: add flow queue pull operation
  app/testpmd: add async indirect actions operations

 app/test-pmd/cmdline_flow.c                   | 1726 ++++++++++++++++-
 app/test-pmd/config.c                         |  778 ++++++++
 app/test-pmd/testpmd.h                        |   67 +
 .../prog_guide/img/rte_flow_async_init.svg    |  205 ++
 .../prog_guide/img/rte_flow_async_usage.svg   |  354 ++++
 doc/guides/prog_guide/rte_flow.rst            |  345 ++++
 doc/guides/rel_notes/release_22_03.rst        |   26 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  383 +++-
 lib/ethdev/ethdev_driver.h                    |    7 +-
 lib/ethdev/rte_flow.c                         |  500 +++++
 lib/ethdev/rte_flow.h                         |  766 ++++++++
 lib/ethdev/rte_flow_driver.h                  |  108 ++
 lib/ethdev/version.map                        |   15 +
 13 files changed, 5184 insertions(+), 96 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 01/11] ethdev: introduce flow engine configuration
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-02-20  3:43           ` Alexander Kozyrev
  2022-02-21  9:47             ` Andrew Rybchenko
  2022-02-20  3:44           ` [PATCH v8 02/11] ethdev: add flow item/action templates Alexander Kozyrev
                             ` (11 subsequent siblings)
  12 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:43 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, a PMD may use hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows the application to pre-allocate all the needed resources
beforehand. These resources can be used at a later stage without costly
allocations.
Every PMD may use only the subset of hints and ignore unused ones or
fail in case the requested configuration is not supported.

The rte_flow_info_get() function is available to retrieve information about
the supported pre-configurable resources. Both these functions must be called
before any other usage of the flow API engine.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  36 ++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/ethdev_driver.h             |   7 +-
 lib/ethdev/rte_flow.c                  |  69 +++++++++++++++
 lib/ethdev/rte_flow.h                  | 111 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 7 files changed, 240 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 0e475019a6..c89161faef 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3606,6 +3606,42 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+the rules engine configuration and/or expected flow rules characteristics.
+The PMD may use these parameters to preallocate resources and configure the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API engine configuration and allocates
+requested resources beforehand to avoid costly allocations later.
+Knowing the expected number of resources allows the PMD to prepare and
+optimize the NIC hardware configuration and memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Information about the number of available resources can be retrieved via
+``rte_flow_info_get()`` API.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index ff3095d742..eceab07576 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -99,6 +99,12 @@ New Features
   The information of these properties is important for debug.
   As the information is private, a dump function is introduced.
 
+* **Added functions to configure Flow API engine**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve available resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 6d697a879a..06f0896e1e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -138,7 +138,12 @@ struct rte_eth_dev_data {
 		 * Indicates whether the device is configured:
 		 * CONFIGURED(1) / NOT CONFIGURED(0)
 		 */
-		dev_configured : 1;
+		dev_configured:1,
+		/**
+		 * Indicates whether the flow engine is configured:
+		 * CONFIGURED(1) / NOT CONFIGURED(0)
+		 */
+		flow_configured:1;
 
 	/** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
 	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7f93900bc8..ffd48e40d5 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1392,3 +1392,72 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (port_info == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	dev->data->flow_configured = 0;
+	if (port_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" attributes are NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_started != 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" already started.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->configure)) {
+		ret = ops->configure(dev, port_attr, error);
+		if (ret == 0)
+			dev->data->flow_configured = 1;
+		return flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 765beb3e52..cdb7b2be68 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -43,6 +43,9 @@
 extern "C" {
 #endif
 
+#define RTE_FLOW_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__)
+
 /**
  * Flow rule attributes.
  *
@@ -4872,6 +4875,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine resources.
+ * A zero value means the resource is not supported.
+ *
+ */
+struct rte_flow_port_info {
+	/**
+	 * Maximum number of counters.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t max_nb_counters;
+	/**
+	 * Maximum number of aging objects.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t max_nb_aging_objects;
+	/**
+	 * Maximum number of traffic meters.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get information about flow engine resources.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the resources information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine resources settings.
+ * A zero value means on-demand resource allocation only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging objects to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_objects;
+	/**
+	 * Number of traffic meters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the
+ * settings. The port, however, may reject the changes.
+ *
+ * Parameters in configuration attributes must not exceed
+ * numbers of resources returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index d5cc56a560..0d849c153f 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -264,6 +264,8 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_capability_get;
 	rte_eth_ip_reassembly_conf_get;
 	rte_eth_ip_reassembly_conf_set;
+	rte_flow_configure;
+	rte_flow_info_get;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 02/11] ethdev: add flow item/action templates
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-20  3:43           ` [PATCH v8 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-21 10:57             ` Andrew Rybchenko
  2022-02-20  3:44           ` [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                             ` (10 subsequent siblings)
  12 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 135 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 252 +++++++++++++++++++++++
 lib/ethdev/rte_flow.h                  | 274 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 712 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c89161faef..6cdfea09be 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3642,6 +3642,141 @@ Information about the number of available resources can be retrieved via
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on, spec/last are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	const struct rte_flow_pattern_template_attr attr = {.ingress = 1};
+	struct rte_flow_item_eth eth_m = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	};
+	struct rte_flow_item pattern[] = {
+		[0] = {.type = RTE_FLOW_ITEM_TYPE_ETH,
+		       .mask = &eth_m},
+		[1] = {.type = RTE_FLOW_ITEM_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &attr, pattern, &err);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_actions_template_attr attr = {.ingress = 1};
+	struct rte_flow_action act[] = {
+		/* Mark ID is 4 for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action msk[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_actions_template *actions_template =
+		rte_flow_actions_template_create(port, &attr, act, msk, &err);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid stalls caused by memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+In this case, the application may create another table to accommodate more rules.
+
+.. code-block:: c
+
+	struct rte_flow_template_table *
+	rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+A table can be created only after the flow engine has been configured
+and the pattern and actions templates have been created.
+
+.. code-block:: c
+
+	struct rte_flow_template_table_attr table_attr = {
+		.flow_attr.ingress = 1,
+		.nb_flows = 10000,
+	};
+	uint8_t nb_pattern_templ = 1;
+	struct rte_flow_pattern_template *pattern_templates[nb_pattern_templ];
+	pattern_templates[0] = pattern_template;
+	uint8_t nb_actions_templ = 1;
+	struct rte_flow_actions_template *actions_templates[nb_actions_templ];
+	actions_templates[0] = actions_template;
+	struct rte_flow_error error;
+
+	struct rte_flow_template_table *table =
+		rte_flow_template_table_create(port, &table_attr,
+				pattern_templates, nb_pattern_templ,
+				actions_templates, nb_actions_templ,
+				&error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index eceab07576..7150d06c87 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -105,6 +105,14 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve available resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index ffd48e40d5..e9f684eedb 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1461,3 +1461,255 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_STATE,
+				NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+							pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(pattern_template == NULL))
+		return 0;
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (masks == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" masks is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(actions_template == NULL))
+		return 0;
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev,
+							      actions_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+			const struct rte_flow_template_table_attr *table_attr,
+			struct rte_flow_pattern_template *pattern_templates[],
+			uint8_t nb_pattern_templates,
+			struct rte_flow_actions_template *actions_templates[],
+			uint8_t nb_actions_templates,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_template_table *table;
+
+	if (table_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" table attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->template_table_create)) {
+		table = ops->template_table_create(dev, table_attr,
+					pattern_templates, nb_pattern_templates,
+					actions_templates, nb_actions_templates,
+					error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+				struct rte_flow_template_table *template_table,
+				struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(template_table == NULL))
+		return 0;
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->template_table_destroy)) {
+		return flow_err(port_id,
+				ops->template_table_destroy(dev,
+							    template_table,
+							    error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index cdb7b2be68..776e8ccc11 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4983,6 +4983,280 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Relaxed matching policy.
+	 * - PMD may match only on items with mask member set and skip
+	 * matching on protocol layers specified without any masks.
+	 * - If not set, PMD will match on protocol layers
+	 * specified without any masks as well.
+	 * - Packet data must be stacked in the same order as the
+	 * protocol layers to match inside packets, starting from the lowest.
+	 */
+	uint32_t relaxed_matching:1;
+	/** Pattern valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Pattern valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Pattern valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, to match on a TCP 5-tuple, the template would be
+ * eth (null) + IPv4 (source + destination) + TCP (source + destination ports),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of items in the template must be the same
+ * at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec member of an item is not used unless the end member is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+__extension__
+struct rte_flow_actions_template_attr {
+	/** Action valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Action valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Action valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while the values for each rule are set at flow rule creation.
+ * The number and order of actions in the template must be the same
+ * at rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action members are constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be the part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type should be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of a template table.
+ * This handle can be used to manage the created template table.
+ */
+struct rte_flow_template_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_template_table_attr {
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow template table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Template table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow template table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+		struct rte_flow_template_table *template_table,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 7c29930d0f..2d96db1dc7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_create() */
+	struct rte_flow_template_table *(*template_table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_destroy() */
+	int (*template_table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0d849c153f..62ff791261 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -266,6 +266,12 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_conf_set;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-20  3:43           ` [PATCH v8 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 02/11] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-21 14:49             ` Andrew Rybchenko
  2022-02-20  3:44           ` [PATCH v8 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
                             ` (9 subsequent siblings)
  12 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of flow rule creation/destruction
on the datapath. Note that queues are not thread-safe: all operations
on a given queue must be issued from the same thread. It is the
application's responsibility to synchronize access when multiple
threads use the same queue.

The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 .../prog_guide/img/rte_flow_async_init.svg    | 205 ++++++++++
 .../prog_guide/img/rte_flow_async_usage.svg   | 354 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 124 ++++++
 doc/guides/rel_notes/release_22_03.rst        |   7 +
 lib/ethdev/rte_flow.c                         | 110 +++++-
 lib/ethdev/rte_flow.h                         | 251 +++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  35 ++
 lib/ethdev/version.map                        |   4 +
 8 files changed, 1087 insertions(+), 3 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_async_init.svg b/doc/guides/prog_guide/img/rte_flow_async_init.svg
new file mode 100644
index 0000000000..f66e9c73d7
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_init.svg
@@ -0,0 +1,205 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485"
+   height="535"
+   overflow="hidden"
+   version="1.1"
+   id="svg61"
+   sodipodi:docname="rte_flow_async_init.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview63"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.517757"
+     inkscape:cx="242.79249"
+     inkscape:cy="267.17057"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g59" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="485"
+         height="535"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g59">
+    <rect
+       x="0"
+       y="0"
+       width="485"
+       height="535"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="0.500053"
+       y="79.5001"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(121.6 116)"
+       id="text13">
+         rte_eth_dev_configure
+         <tspan
+   font-size="24"
+   x="224.007"
+   y="0"
+   id="tspan11">()</tspan></text>
+    <rect
+       x="0.500053"
+       y="158.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect15" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(140.273 195)"
+       id="text17">
+         rte_flow_configure()
+      </text>
+    <rect
+       x="0.500053"
+       y="236.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text21"
+       x="63.425903"
+       y="274">rte_flow_pattern_template_create()</text>
+    <rect
+       x="0.500053"
+       y="316.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect23" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text27"
+       x="69.379204"
+       y="353">rte_flow_actions_template_create()</text>
+    <rect
+       x="0.500053"
+       y="0.500053"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect29" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       transform="translate(177.233,37)"
+       id="text33">rte_eal_init()</text>
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 60)"
+       id="path35" />
+    <path
+       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
+       transform="matrix(-1 0 0 1 241 138)"
+       id="path37" />
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 217)"
+       id="path39" />
+    <rect
+       x="0.500053"
+       y="395.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect41" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text47"
+       x="76.988998"
+       y="432">rte_flow_template_table_create()</text>
+    <path
+       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
+       transform="matrix(-1 0 0 1 241 296)"
+       id="path49" />
+    <path
+       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
+       id="path51" />
+    <rect
+       x="0.500053"
+       y="473.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect53" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text55"
+       x="149.30299"
+       y="511">rte_eth_dev_start()</text>
+    <path
+       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
+       id="path57" />
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/img/rte_flow_async_usage.svg b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
new file mode 100644
index 0000000000..bb978bca1e
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
@@ -0,0 +1,354 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880"
+   height="610"
+   overflow="hidden"
+   version="1.1"
+   id="svg103"
+   sodipodi:docname="rte_flow_async_usage.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview105"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.3311475"
+     inkscape:cx="439.84607"
+     inkscape:cy="305.37563"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="-9"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g101" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="880"
+         height="610"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g101">
+    <rect
+       x="0"
+       y="0"
+       width="880"
+       height="610"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="333.5"
+       y="0.500053"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Consolas, Consolas_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       transform="translate(357.196,29)"
+       id="text11">rte_eth_rx_burst()</text>
+    <rect
+       x="333.5"
+       y="63.5001"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect13" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(394.666 91)"
+       id="text17">analyze <tspan
+   font-size="19"
+   x="60.9267"
+   y="0"
+   id="tspan15">packet </tspan></text>
+    <rect
+       x="587.84119"
+       y="279.47534"
+       width="200.65393"
+       height="46.049305"
+       stroke="#000000"
+       stroke-width="1.20888"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text21"
+       x="595.42902"
+       y="308">rte_flow_async_create()</text>
+    <path
+       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path23" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(430.069 378)"
+       id="text27">more <tspan
+   font-size="19"
+   x="-12.94"
+   y="23"
+   id="tspan25">packets?</tspan></text>
+    <path
+       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
+       id="path29" />
+    <path
+       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
+       id="path31" />
+    <path
+       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
+       id="path33" />
+    <path
+       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
+       id="path35" />
+    <path
+       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
+       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
+       id="path37" />
+    <path
+       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path39" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(417.576 155)"
+       id="text43">add new <tspan
+   font-size="19"
+   x="13.2867"
+   y="23"
+   id="tspan41">rule?</tspan></text>
+    <path
+       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
+       id="path45" />
+    <rect
+       x="602.5"
+       y="127.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect47" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(611.34 148)"
+       id="text49">yes</text>
+    <rect
+       x="254.5"
+       y="126.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect51" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(267.182 147)"
+       id="text53">no</text>
+    <path
+       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
+       transform="matrix(1 0 0 -1 567.5 383.495)"
+       id="path55" />
+    <path
+       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path57" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(159.155 208)"
+       id="text61">destroy the <tspan
+   font-size="19"
+   x="24.0333"
+   y="23"
+   id="tspan59">rule?</tspan></text>
+    <path
+       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
+       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
+       id="path63" />
+    <rect
+       x="92.600937"
+       y="280.48242"
+       width="210.14578"
+       height="45.035149"
+       stroke="#000000"
+       stroke-width="1.24464"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect65" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text67"
+       x="100.2282"
+       y="308">rte_flow_async_destroy()</text>
+    <path
+       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
+       transform="matrix(-1 0 0 1 319.915 213.5)"
+       id="path69" />
+    <path
+       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
+       id="path71" />
+    <path
+       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
+       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
+       id="path73" />
+    <rect
+       x="357.15436"
+       y="540.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect75" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text77"
+       x="393.08301"
+       y="569">rte_flow_pull()</text>
+    <rect
+       x="357.15436"
+       y="462.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect79" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text81"
+       x="389.19"
+       y="491">rte_flow_push()</text>
+    <path
+       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
+       id="path83" />
+    <rect
+       x="0.500053"
+       y="287.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect85" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(12.8617 308)"
+       id="text87">no</text>
+    <rect
+       x="357.5"
+       y="223.5"
+       width="47"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect89" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(367.001 244)"
+       id="text91">yes</text>
+    <rect
+       x="469.5"
+       y="421.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect93" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(481.872 442)"
+       id="text95">no</text>
+    <rect
+       x="832.5"
+       y="223.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect97" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(841.777 244)"
+       id="text99">yes</text>
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 6cdfea09be..436845717f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3624,12 +3624,16 @@ Expected number of resources in an application allows PMD to prepare
 and optimize NIC hardware configuration and memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API; see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
 
 Information about the number of available resources can be retrieved via
@@ -3640,6 +3644,7 @@ Information about the number of available resources can be retrieved via
    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+                     struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);
 
 Flow templates
@@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
 				&actions_templates, nb_actions_templ,
 				&error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the app's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of flow queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to the NIC in batches.
+
+- Results must be pulled in a timely manner to avoid queue overflows.
+
+- User data is returned as part of the result to identify an operation.
+
+- A flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+- The application must wait for the creation operation result before enqueueing
+  the destruction operation to make sure the creation is processed by the NIC.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_async_init:
+
+.. figure:: img/rte_flow_async_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_async_usage:
+
+.. figure:: img/rte_flow_async_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_async_create(uint16_t port_id,
+			      uint32_t queue_id,
+			      const struct rte_flow_q_ops_attr *q_ops_attr,
+			      struct rte_flow_template_table *template_table,
+			      const struct rte_flow_item pattern[],
+			      uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      uint8_t actions_template_index,
+			      void *user_data,
+			      struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_destroy(uint16_t port_id,
+			       uint32_t queue_id,
+			       const struct rte_flow_q_ops_attr *q_ops_attr,
+			       struct rte_flow *flow,
+			       void *user_data,
+			       struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_push(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_error *error);
+
+There is a postpone attribute in the queue operation attributes.
+When it is set, multiple operations can be bulked together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling asynchronous operation results.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operation statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_pull(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_q_op_res res[],
+		      uint16_t n_res,
+		      struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 7150d06c87..cd495ef40c 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -113,6 +113,13 @@ New Features
     ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
 
+* **Added functions for asynchronous flow rules creation/destruction**
+
+  * ethdev: Added ``rte_flow_async_create`` and ``rte_flow_async_destroy`` API
+    to enqueue flow creation/destruction operations asynchronously as well as
+    ``rte_flow_pull`` to poll and retrieve results of these operations and
+    ``rte_flow_push`` to push all the in-flight operations to the NIC.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index e9f684eedb..4e7b202522 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->info_get)) {
 		return flow_err(port_id,
-				ops->info_get(dev, port_info, error),
+				ops->info_get(dev, port_info, queue_info, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1426,6 +1427,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1433,7 +1436,7 @@ rte_flow_configure(uint16_t port_id,
 	int ret;
 
 	dev->data->flow_configured = 0;
-	if (port_attr == NULL) {
+	if (port_attr == NULL || queue_attr == NULL) {
 		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
 		return -EINVAL;
 	}
@@ -1452,7 +1455,7 @@ rte_flow_configure(uint16_t port_id,
 	if (unlikely(!ops))
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
-		ret = ops->configure(dev, port_attr, error);
+		ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
 		if (ret == 0)
 			dev->data->flow_configured = 1;
 		return flow_err(port_id, ret, error);
@@ -1713,3 +1716,104 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_q_ops_attr *q_ops_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->async_create)) {
+		flow = ops->async_create(dev, queue_id,
+					 q_ops_attr, template_table,
+					 pattern, pattern_template_index,
+					 actions, actions_template_index,
+					 user_data, error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->async_destroy)) {
+		return flow_err(port_id,
+				ops->async_destroy(dev, queue_id,
+						   q_ops_attr, flow,
+						   user_data, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->push)) {
+		return flow_err(port_id,
+				ops->push(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_q_op_res res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pull)) {
+		ret = ops->pull(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 776e8ccc11..9e71a576f6 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
+	 * Maximum number of queues for asynchronous operations.
+	 */
+	uint32_t max_nb_queues;
 	/**
 	 * Maximum number of counters.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4901,6 +4905,21 @@ struct rte_flow_port_info {
 	uint32_t max_nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine asynchronous queues.
+ * The value is only valid if @p port_info.max_nb_queues is not zero.
+ *
+ */
+struct rte_flow_queue_info {
+	/**
+	 * Maximum number of operations a queue can hold.
+	 */
+	uint32_t max_size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4912,6 +4931,9 @@ struct rte_flow_port_info {
  * @param[out] port_info
  *   A pointer to a structure of type *rte_flow_port_info*
  *   to be filled with the resources information of the port.
+ * @param[out] queue_info
+ *   A pointer to a structure of type *rte_flow_queue_info*
+ *   to be filled with the asynchronous queues information.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4923,6 +4945,7 @@ __rte_experimental
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error);
 
 /**
@@ -4951,6 +4974,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine asynchronous queues settings.
+ * A zero value means the default value chosen by the PMD.
+ *
+ */
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4970,6 +5008,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   The number of elements is set in @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4981,6 +5024,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5257,6 +5302,212 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+__extension__
+struct rte_flow_q_ops_attr {
+	 /**
+	  * When set, the requested operation will not be sent to the HW immediately.
+	  * The application must call rte_flow_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   A returned rule handle doesn't mean the rule has been populated in HW.
+ *   Only the completion result indicates success or failure.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_q_ops_attr *q_ops_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were enqueued with the postpone flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW to
+ * reduce the number of interactions between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull completed flow rule operations.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_q_op_res res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..332783cf78 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,11 +156,14 @@ struct rte_flow_ops {
 	int (*info_get)
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_queue_info *queue_info,
 		 struct rte_flow_error *err);
 	/** See rte_flow_configure() */
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +202,38 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_async_create() */
+	struct rte_flow *(*async_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_destroy() */
+	int (*async_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_push() */
+	int (*push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_pull() */
+	int (*pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 62ff791261..13c1a22118 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -272,6 +272,10 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_async_create;
+	rte_flow_async_destroy;
+	rte_flow_push;
+	rte_flow_pull;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread
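A side note on the queue semantics above: ``rte_flow_pull()`` returns up to ``n_res`` completed operations, each carrying back the ``user_data`` supplied at enqueue time, and enqueue fails when the lockless queue is full. A minimal self-contained mock of that contract (the ``mock_*`` names are illustrative stand-ins, not part of the rte_flow API) could look like:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mock of one completed operation result, loosely mirroring
 * rte_flow_q_op_res: a status plus the user_data given at enqueue. */
struct mock_op_res {
	int status;      /* 0 on success, negative errno otherwise */
	void *user_data; /* opaque application cookie */
};

/* A tiny ring standing in for one flow queue.  A real PMD completes
 * operations asynchronously in hardware; here every enqueued operation
 * is immediately "completed" with status 0. */
#define MOCK_Q_SIZE 8
struct mock_q {
	struct mock_op_res ring[MOCK_Q_SIZE];
	uint16_t head; /* next slot to pull */
	uint16_t tail; /* next slot to fill */
};

/* Enqueue: 0 on success, -1 (EAGAIN-like) when the queue is full. */
static int
mock_enqueue(struct mock_q *q, void *user_data)
{
	if ((uint16_t)(q->tail - q->head) >= MOCK_Q_SIZE)
		return -1;
	q->ring[q->tail % MOCK_Q_SIZE].status = 0;
	q->ring[q->tail % MOCK_Q_SIZE].user_data = user_data;
	q->tail++;
	return 0;
}

/* Pull: copies out up to n_res completed results and, like
 * rte_flow_pull(), returns the number of results actually pulled. */
static int
mock_pull(struct mock_q *q, struct mock_op_res res[], uint16_t n_res)
{
	int n = 0;

	while (q->head != q->tail && n < n_res)
		res[n++] = q->ring[q->head++ % MOCK_Q_SIZE];
	return n;
}
```

The mock illustrates the batched pull and the full-queue case only; it has none of the multi-queue or push-to-hardware behavior of the real API.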

* [PATCH v8 04/11] ethdev: bring in async indirect actions operations
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (2 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
                             ` (8 subsequent siblings)
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The queue-based flow rules management mechanism is suitable
not only for flow rules creation/destruction, but also
for speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions of all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  50 ++++++++++
 doc/guides/rel_notes/release_22_03.rst |   5 +
 lib/ethdev/rte_flow.c                  |  75 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 130 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  26 +++++
 lib/ethdev/version.map                 |   3 +
 6 files changed, 289 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 436845717f..ac5e2046e4 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3861,6 +3861,56 @@ Enqueueing a flow rule destruction operation is similar to simple destruction.
 			       void *user_data,
 			       struct rte_flow_error *error);
 
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later by
+``rte_flow_async_action_handle_destroy()`` even if the action was rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
+
 Push enqueued operations
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index cd495ef40c..c9c9078cab 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -120,6 +120,11 @@ New Features
     ``rte_flow_pull`` to poll and retrieve results of these operations and
    ``rte_flow_push`` to push all the in-flight operations to the NIC.
 
+  * ethdev: Added asynchronous API for indirect actions management:
+    ``rte_flow_async_action_handle_create``,
+    ``rte_flow_async_action_handle_destroy`` and
+    ``rte_flow_async_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 4e7b202522..38886edb0b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1817,3 +1817,78 @@ rte_flow_pull(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->async_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->async_action_handle_create(dev, queue_id, q_ops_attr,
+					     indir_action_conf, action, user_data, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->async_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->async_action_handle_destroy(dev, queue_id, q_ops_attr,
+					   action_handle, user_data, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->async_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->async_action_handle_update(dev, queue_id, q_ops_attr,
+					  action_handle, update, user_data, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 9e71a576f6..f85f20abe6 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5508,6 +5508,136 @@ rte_flow_pull(uint16_t port_id,
 	      uint16_t n_res,
 	      struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the action object.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *action* invalid.
+ *   - (ENOTSUP) if *action* valid but unsupported.
+ *   - (EAGAIN) if *queue_id* is full.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the action object.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by *action_handle* was not found.
+ *   - (-EBUSY) if the action pointed to by *action_handle* is still in use
+ *     by some flow rules; rte_errno is also set.
+ *   - (-EAGAIN) if *queue_id* is full.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the action object.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *action_handle*. *update* is either of the same type as the immediate
+ *   action the handle was created with, or a wrapper structure containing
+ *   the action configuration to be updated together with bit fields that
+ *   indicate which members of the action to update.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if the action pointed to by *action_handle* was not found.
+ *   - (-EBUSY) if the action pointed to by *action_handle* is still in use
+ *     by some flow rules; rte_errno is also set.
+ *   - (-EAGAIN) if *queue_id* is full.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_q_ops_attr *q_ops_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 332783cf78..d660e29c6a 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -234,6 +234,32 @@ struct rte_flow_ops {
 		 struct rte_flow_q_op_res res[],
 		 uint16_t n_res,
 		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_create() */
+	struct rte_flow_action_handle *(*async_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_action_handle_destroy() */
+	int (*async_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 void *user_data,
+		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_update() */
+	int (*async_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 void *user_data,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 13c1a22118..20391ab29e 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -276,6 +276,9 @@ EXPERIMENTAL {
 	rte_flow_async_destroy;
 	rte_flow_push;
 	rte_flow_pull;
+	rte_flow_async_action_handle_create;
+	rte_flow_async_action_handle_destroy;
+	rte_flow_async_action_handle_update;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread
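The destroy-path return codes documented above (``-ENOENT`` for an unknown handle, ``-EBUSY`` while flow rules still reference the action) can be illustrated with a small self-contained mock of the handle lifecycle. All names below (``mock_action``, ``mock_action_create``, ``mock_action_destroy``) are illustrative stand-ins, not part of the rte_flow API:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Mock of an indirect action handle: a pool slot plus a reference
 * count tracking how many flow rules currently use the action. */
#define MOCK_MAX_ACTIONS 4
struct mock_action {
	int in_use; /* slot is allocated */
	int refcnt; /* number of flow rules referencing it */
};

static struct mock_action actions[MOCK_MAX_ACTIONS];

/* Allocate a handle from the pre-allocated pool, NULL when exhausted. */
static struct mock_action *
mock_action_create(void)
{
	int i;

	for (i = 0; i < MOCK_MAX_ACTIONS; i++) {
		if (!actions[i].in_use) {
			actions[i].in_use = 1;
			actions[i].refcnt = 0;
			return &actions[i];
		}
	}
	return NULL;
}

/* Mirrors the documented return codes: -ENOENT for an unknown handle,
 * -EBUSY while rules still reference the action, 0 on success. */
static int
mock_action_destroy(struct mock_action *h)
{
	if (h == NULL || !h->in_use)
		return -ENOENT;
	if (h->refcnt > 0)
		return -EBUSY;
	h->in_use = 0;
	return 0;
}
```

In the real API these transitions happen asynchronously through the flow queue; the mock only demonstrates the error-code contract, not the enqueue/pull mechanics.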

* [PATCH v8 05/11] app/testpmd: add flow engine configuration
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (3 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 06/11] app/testpmd: add flow template management Alexander Kozyrev
                             ` (7 subsequent siblings)
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256

Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  61 ++++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  61 +++++++++-
 4 files changed, 252 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index c0644d678c..0533a33ca2 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -868,6 +877,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -949,6 +963,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2045,6 +2069,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2270,7 +2297,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2285,6 +2314,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_OBJECTS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging objects",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_objects)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7736,6 +7824,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8964,6 +9079,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index de1ec14bc7..33a85cd7ca 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,67 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_info port_info;
+	struct rte_flow_queue_info queue_info;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	memset(&port_info, 0, sizeof(port_info));
+	memset(&queue_info, 0, sizeof(queue_info));
+	if (rte_flow_info_get(port_id, &port_info, &queue_info, &error))
+		return port_flow_complain(&error);
+	printf("Flow engine resources on port %u:\n"
+	       "Number of queues: %d\n"
+	       "Size of queues: %d\n"
+	       "Number of counters: %d\n"
+	       "Number of aging objects: %d\n"
+	       "Number of meters: %d\n",
+	       port_id, port_info.max_nb_queues,
+	       queue_info.max_size,
+	       port_info.max_nb_counters,
+	       port_info.max_nb_aging_objects,
+	       port_info.max_nb_meters);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 9cc248084f..c8f048aeef 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3308,8 +3308,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3391,6 +3403,51 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves information about the pre-configurable resources of
+the underlying device, hinting at possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Flow engine resources on port #[...]:
+   Number of queues: #[...]
+   Size of queues: #[...]
+   Number of counters: #[...]
+   Number of aging objects: #[...]
+   Number of meters: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the resources needed in the underlying
+device for later use at flow creation time. Flow queues are allocated as well
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread
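The ``flow configure`` syntax above is a flat sequence of keyword/value pairs consumed by the token chain in cmdline_flow.c. A hypothetical miniature of that parsing style, reduced to plain standard C (``struct flow_cfg`` and ``parse_flow_configure()`` are illustrative stand-ins, not testpmd code):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Destination struct loosely mirroring the configure arguments. */
struct flow_cfg {
	unsigned int queues_number;
	unsigned int queues_size;
	unsigned int counters_number;
};

/* Each recognized keyword consumes the unsigned value that follows it;
 * an unknown keyword or a missing value is a parse error (-1). */
static int
parse_flow_configure(const char *line, struct flow_cfg *cfg)
{
	char buf[256];
	char *tok;

	if (strlen(line) >= sizeof(buf))
		return -1;
	strcpy(buf, line);
	for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " ")) {
		unsigned int *dst;

		if (strcmp(tok, "queues_number") == 0)
			dst = &cfg->queues_number;
		else if (strcmp(tok, "queues_size") == 0)
			dst = &cfg->queues_size;
		else if (strcmp(tok, "counters_number") == 0)
			dst = &cfg->counters_number;
		else
			return -1; /* unknown keyword */
		tok = strtok(NULL, " ");
		if (tok == NULL || sscanf(tok, "%u", dst) != 1)
			return -1; /* keyword without a numeric value */
	}
	return 0;
}
```

For example, ``parse_flow_configure("queues_number 8 queues_size 256", &cfg)`` fills the struct the same way ``flow configure 0 queues_number 8 queues_size 256`` fills the testpmd argument buffer; the real parser additionally validates ranges and supports the full attribute set.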

* [PATCH v8 06/11] app/testpmd: add flow template management
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (4 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 07/11] app/testpmd: add flow table management Alexander Kozyrev
                             ` (6 subsequent siblings)
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command line interface
for the template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 456 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++
 app/test-pmd/testpmd.h                      |  24 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 101 +++++
 4 files changed, 782 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0533a33ca2..1aa32ea217 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,28 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -882,6 +908,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -890,10 +920,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -973,6 +1006,49 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2072,6 +2148,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2141,6 +2223,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2291,6 +2377,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2299,6 +2399,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2373,6 +2475,148 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute pattern to ingress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute pattern to egress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute pattern to transfer",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute actions to ingress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute actions to egress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute actions to transfer",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2695,7 +2939,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5975,7 +6219,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7851,6 +8097,132 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case PATTERN_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case PATTERN_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case PATTERN_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8820,6 +9192,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9088,6 +9508,38 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port,
+				in->args.vc.pat_templ_id,
+				&((const struct rte_flow_pattern_template_attr) {
+					.relaxed_matching = in->args.vc.attr.reserved,
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port,
+				in->args.vc.act_templ_id,
+				&((const struct rte_flow_actions_template_attr) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions,
+				in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 33a85cd7ca..ecaf4ca03c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2086,6 +2129,166 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_pattern_template_attr *attr,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_actions_template_attr *attr,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						attr, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..ce46d754a1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,17 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_pattern_template_attr *attr,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_actions_template_attr *attr,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c8f048aeef..2e6a23b12a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3344,6 +3344,27 @@ following sections.
       [aging_counters_number {number}]
       [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3448,6 +3468,87 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+by their template ID (as returned by ``flow pattern_template create``).
+This command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+by their template ID (as returned by ``flow actions_template create``).
+This command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 07/11] app/testpmd: add flow table management
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (5 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 06/11] app/testpmd: add flow template management Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
                             ` (5 subsequent siblings)
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow template table API.
Provide the command line interface for flow
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 171 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 556 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1aa32ea217..5715899c95 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -118,6 +120,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -912,6 +928,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_template_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1049,6 +1077,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2154,6 +2208,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2227,6 +2286,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2391,6 +2452,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2401,6 +2469,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2617,6 +2686,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "template_table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage template tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create template table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy template table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8223,6 +8390,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9240,6 +9520,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9540,6 +9844,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_template_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_template_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ecaf4ca03c..cefbc64c0c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1653,6 +1653,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2289,6 +2332,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] =
+					temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_template_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_template_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Template table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_template_table_destroy(portid_t port_id,
+				 uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_template_table_destroy(port_id,
+							    pt->table,
+							    &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Template table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ce46d754a1..fd02498faf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -916,6 +927,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_template_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2e6a23b12a..f63eb76a3a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3364,6 +3364,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3549,6 +3562,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating template table
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying template table
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+from their table ID (as returned by ``flow template_table create``),
+this command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 08/11] app/testpmd: add async flow create/destroy operations
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (6 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 07/11] app/testpmd: add flow table management Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
                             ` (4 subsequent siblings)
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_create/rte_flow_async_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5715899c95..d359127df9 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -120,6 +122,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -918,6 +936,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool postpone; /**< Postpone async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -948,6 +968,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1103,6 +1124,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2213,6 +2246,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2288,6 +2327,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2459,6 +2500,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2481,7 +2529,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2784,6 +2833,84 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TEMPLATE_TABLE] = {
+		.name = "template_table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8503,6 +8630,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TEMPLATE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9544,6 +9776,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9855,6 +10109,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cefbc64c0c..d3b3e6ca5a 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2460,6 +2460,172 @@ port_flow_template_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue create flow rule operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_async_create(port_id, queue_id, &ops_attr, pt->table,
+		pattern, pattern_idx, actions, actions_idx, NULL, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue number of destroy flow rules operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
+	struct rte_flow_q_op_res comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMD
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_async_destroy(port_id, queue_id, &op_attr,
+						   pf->flow, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMD
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_pull(port_id, queue_id,
+						    &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index fd02498faf..62e874eaaf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -933,6 +933,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index f63eb76a3a..194b350932 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3384,6 +3384,20 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3708,6 +3722,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_async_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4430,6 +4468,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` adds destruction operations for one or more rules
+identified by their rule ID (as returned by ``flow queue create``) to a queue.
+This command calls ``rte_flow_async_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 09/11] app/testpmd: add flow queue push operation
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (7 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
                             ` (3 subsequent siblings)
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d359127df9..af36975cdf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -138,6 +139,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2252,6 +2256,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2530,7 +2537,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2911,6 +2919,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8735,6 +8758,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10120,6 +10171,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index d3b3e6ca5a..e3b5e348ab 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2626,6 +2626,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all the queue operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 62e874eaaf..24a43fd82c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 194b350932..4f1f908d4a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3398,6 +3398,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3616,6 +3620,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 10/11] app/testpmd: add flow queue pull operation
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (8 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-20  3:44           ` [PATCH v8 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
                             ` (2 subsequent siblings)
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index af36975cdf..d4b72724e6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -142,6 +143,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2259,6 +2263,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2538,7 +2545,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2934,6 +2942,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8786,6 +8809,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10174,6 +10225,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index e3b5e348ab..2bd4359bfe 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2469,14 +2469,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_q_ops_attr ops_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2539,16 +2537,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2563,7 +2551,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_q_ops_attr op_attr = { .postpone = postpone };
-	struct rte_flow_q_op_res comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2599,21 +2586,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_pull(port_id, queue_id,
-						    &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2654,6 +2626,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_q_op_res *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_q_op_res));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 24a43fd82c..5ea2408a0b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -941,6 +941,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 4f1f908d4a..5080ddb256 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3402,6 +3402,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3637,6 +3641,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device about flow queue operations
+results and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3767,6 +3788,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4508,6 +4531,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v8 11/11] app/testpmd: add async indirect actions operations
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (9 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
@ 2022-02-20  3:44           ` Alexander Kozyrev
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-22 16:41           ` [PATCH v8 00/10] " Ferruh Yigit
  12 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-20  3:44 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for operations dequeue.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d4b72724e6..b5f1191e55 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -127,6 +127,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -140,6 +141,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1135,6 +1156,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1144,6 +1166,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2260,6 +2312,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2873,6 +2931,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2926,6 +2991,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6501,6 +6650,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -10228,6 +10481,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2bd4359bfe..53a848cf84 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2598,6 +2598,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_async_action_handle_create(port_id, queue_id,
+					&attr, conf, action, NULL, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_async_action_handle_destroy(port_id,
+				queue_id, &attr, pia->handle, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_q_ops_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_async_action_handle_update(port_id, queue_id, &attr,
+				    action_handle, action, NULL, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5ea2408a0b..31f766c965 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 5080ddb256..1083c6d538 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4792,6 +4792,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to ``rte_flow_async_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+       [postpone {boolean}]
+       action_id {indirect_action_id}
+       [ingress] [egress] [transfer]
+       action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4821,6 +4846,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+indirect action to a queue. It is bound to ``rte_flow_async_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4844,6 +4888,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` enqueues an operation to destroy
+one or more indirect actions, given their indirect action IDs (as returned
+by ``flow queue {port_id} indirect_action {queue_id} create``).
+It is bound to ``rte_flow_async_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread
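The queue-based semantics behind the ``flow queue`` commands above (lockless enqueue, optional ``postpone``, explicit push and pull) can be sketched with a toy model. This is illustrative code only, not DPDK code: all names (`toy_queue`, `toy_enqueue`, `toy_push`, `toy_pull`) are invented for this sketch and do not exist in rte_flow.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define TOY_QUEUE_DEPTH 8

/* One queued operation: created with or without "postpone". */
struct toy_op {
	int action_id;
	bool postpone;   /* postponed ops wait for an explicit push */
	bool submitted;  /* already handed to the (simulated) HW */
};

struct toy_queue {
	struct toy_op ops[TOY_QUEUE_DEPTH];
	size_t n;
};

/* Enqueue an operation; non-postponed ones are submitted at once. */
static int
toy_enqueue(struct toy_queue *q, int action_id, bool postpone)
{
	if (q->n == TOY_QUEUE_DEPTH)
		return -1;
	q->ops[q->n].action_id = action_id;
	q->ops[q->n].postpone = postpone;
	q->ops[q->n].submitted = !postpone;
	q->n++;
	return 0;
}

/* "push": submit every still-pending operation to the HW. */
static void
toy_push(struct toy_queue *q)
{
	for (size_t i = 0; i < q->n; i++)
		q->ops[i].submitted = true;
}

/* "pull": drain completed operations, return how many finished. */
static size_t
toy_pull(struct toy_queue *q)
{
	size_t done = 0, keep = 0;

	for (size_t i = 0; i < q->n; i++) {
		if (q->ops[i].submitted)
			done++;
		else
			q->ops[keep++] = q->ops[i];
	}
	q->n = keep;
	return done;
}
```

The point of the model is only the control flow: a postponed operation stays queued and invisible to ``pull`` until an explicit ``push``, which is why the testpmd docs insist that ``flow queue pull`` must be called to retrieve operation status.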

* Re: [PATCH v8 01/11] ethdev: introduce flow engine configuration
  2022-02-20  3:43           ` [PATCH v8 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
@ 2022-02-21  9:47             ` Andrew Rybchenko
  2022-02-21  9:52               ` Andrew Rybchenko
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-21  9:47 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/20/22 06:43, Alexander Kozyrev wrote:
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
> 
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> function allows to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones or
> fail in case the requested configuration is not supported.
> 
> The rte_flow_info_get() is available to retrieve the information about
> supported pre-configurable resources. Both these functions must be called
> before any other usage of the flow API engine.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

[snip]

> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 6d697a879a..06f0896e1e 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -138,7 +138,12 @@ struct rte_eth_dev_data {
>   		 * Indicates whether the device is configured:
>   		 * CONFIGURED(1) / NOT CONFIGURED(0)
>   		 */
> -		dev_configured : 1;
> +		dev_configured:1,

Above is unrelated to the patch. Moreover, it breaks style used
few lines above.

> +		/**
> +		 * Indicates whether the flow engine is configured:
> +		 * CONFIGURED(1) / NOT CONFIGURED(0)
> +		 */
> +		flow_configured:1;

I'd like to understand why we need the information. It is
unclear from the patch. Right now it is write-only. Nobody
checks it. Is flow engine configuration become a mandatory
step? Always? Just in some cases?

>   
>   	/** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
>   	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index 7f93900bc8..ffd48e40d5 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -1392,3 +1392,72 @@ rte_flow_flex_item_release(uint16_t port_id,
>   	ret = ops->flex_item_release(dev, handle, error);
>   	return flow_err(port_id, ret, error);
>   }
> +
> +int
> +rte_flow_info_get(uint16_t port_id,
> +		  struct rte_flow_port_info *port_info,
> +		  struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (port_info == NULL) {
> +		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
> +		return -EINVAL;
> +	}
> +	if (dev->data->dev_configured == 0) {
> +		RTE_FLOW_LOG(INFO,
> +			"Device with port_id=%"PRIu16" is not configured.\n",
> +			port_id);
> +		return -EINVAL;
> +	}
> +	if (unlikely(!ops))
> +		return -rte_errno;

The order of checks is not always obvious, but it should
follow at least some rules. When there is no good reason
to do otherwise, I'd suggest checking arguments in their
order, i.e. check port_id and its direct derivatives
first:
1. ops (since it is NULL if port_id is invalid)
2. dev_configured (since only port_id is required to check it)
3. port_info (since it goes after port_id)

> +	if (likely(!!ops->info_get)) {
> +		return flow_err(port_id,
> +				ops->info_get(dev, port_info, error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +int
> +rte_flow_configure(uint16_t port_id,
> +		   const struct rte_flow_port_attr *port_attr,
> +		   struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
> +
> +	dev->data->flow_configured = 0;
> +	if (port_attr == NULL) {
> +		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
> +		return -EINVAL;
> +	}
> +	if (dev->data->dev_configured == 0) {
> +		RTE_FLOW_LOG(INFO,
> +			"Device with port_id=%"PRIu16" is not configured.\n",
> +			port_id);
> +		return -EINVAL;
> +	}
> +	if (dev->data->dev_started != 0) {
> +		RTE_FLOW_LOG(INFO,
> +			"Device with port_id=%"PRIu16" already started.\n",
> +			port_id);
> +		return -EINVAL;
> +	}
> +	if (unlikely(!ops))
> +		return -rte_errno;

Same logic here:
1. ops
2. dev_configured
3. dev_started
4. port_attr
5. ops->configure since we want to be sure that state and input
    arguments are valid before calling it

> +	if (likely(!!ops->configure)) {
> +		ret = ops->configure(dev, port_attr, error);
> +		if (ret == 0)
> +			dev->data->flow_configured = 1;
> +		return flow_err(port_id, ret, error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}

[snip]

> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Get information about flow engine resources.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[out] port_info
> + *   A pointer to a structure of type *rte_flow_port_info*
> + *   to be filled with the resources information of the port.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.

If I'm not mistaken, we should be explicit about the
meaning of negative result values.

> + */
> +__rte_experimental
> +int
> +rte_flow_info_get(uint16_t port_id,
> +		  struct rte_flow_port_info *port_info,
> +		  struct rte_flow_error *error);

[snip]

> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Configure the port's flow API engine.
> + *
> + * This API can only be invoked before the application
> + * starts using the rest of the flow library functions.
> + *
> + * The API can be invoked multiple times to change the
> + * settings. The port, however, may reject the changes.
> + *
> + * Parameters in configuration attributes must not exceed
> + * numbers of resources returned by the rte_flow_info_get API.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] port_attr
> + *   Port configuration attributes.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.

Same here.

[snip]

^ permalink raw reply	[flat|nested] 220+ messages in thread
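The check ordering suggested in the review (driver ops first, then device state, then caller-supplied pointers, then the callback) can be sketched with stub types. This is a hedged illustration, not the real ethdev structures: `stub_dev`, `stub_ops`, `stub_info` and `stub_flow_info_get` are all invented names.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct stub_info { int dummy; };
struct stub_ops { int (*info_get)(struct stub_info *info); };
struct stub_dev { int configured; const struct stub_ops *ops; };

static int
noop_info_get(struct stub_info *info)
{
	(void)info;
	return 0;
}

static const struct stub_ops toy_ops = { .info_get = noop_info_get };

static int
stub_flow_info_get(struct stub_dev *dev, struct stub_info *info)
{
	/* 1. ops: NULL when the port itself is invalid */
	if (dev == NULL || dev->ops == NULL)
		return -ENODEV;
	/* 2. device state: needs only a valid port to check */
	if (!dev->configured)
		return -EINVAL;
	/* 3. caller-supplied output pointer, checked after port state */
	if (info == NULL)
		return -EINVAL;
	/* 4. driver callback, once state and arguments are known-good */
	if (dev->ops->info_get == NULL)
		return -ENOTSUP;
	return dev->ops->info_get(info);
}
```

With this ordering an invalid port is reported before a NULL output pointer, and the driver callback is only reached when both the state and the arguments have been validated.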

* Re: [PATCH v8 01/11] ethdev: introduce flow engine configuration
  2022-02-21  9:47             ` Andrew Rybchenko
@ 2022-02-21  9:52               ` Andrew Rybchenko
  2022-02-21 12:53                 ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-21  9:52 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/21/22 12:47, Andrew Rybchenko wrote:
> On 2/20/22 06:43, Alexander Kozyrev wrote:
>> The flow rules creation/destruction at a large scale incurs a performance
>> penalty and may negatively impact the packet processing when used
>> as part of the datapath logic. This is mainly because software/hardware
>> resources are allocated and prepared during the flow rule creation.
>>
>> In order to optimize the insertion rate, PMD may use some hints provided
>> by the application at the initialization phase. The rte_flow_configure()
>> function allows to pre-allocate all the needed resources beforehand.
>> These resources can be used at a later stage without costly allocations.
>> Every PMD may use only the subset of hints and ignore unused ones or
>> fail in case the requested configuration is not supported.
>>
>> The rte_flow_info_get() is available to retrieve the information about
>> supported pre-configurable resources. Both these functions must be called
>> before any other usage of the flow API engine.
>>
>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>> Acked-by: Ori Kam <orika@nvidia.com>
> 
> [snip]
> 
>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>> index 6d697a879a..06f0896e1e 100644
>> --- a/lib/ethdev/ethdev_driver.h
>> +++ b/lib/ethdev/ethdev_driver.h
>> @@ -138,7 +138,12 @@ struct rte_eth_dev_data {
>>            * Indicates whether the device is configured:
>>            * CONFIGURED(1) / NOT CONFIGURED(0)
>>            */
>> -        dev_configured : 1;
>> +        dev_configured:1,
> 
> Above is unrelated to the patch. Moreover, it breaks style used
> few lines above.
> 
>> +        /**
>> +         * Indicates whether the flow engine is configured:
>> +         * CONFIGURED(1) / NOT CONFIGURED(0)
>> +         */
>> +        flow_configured:1;
> 
> I'd like to understand why we need the information. It is
> unclear from the patch. Right now it is write-only. Nobody
> checks it. Is flow engine configuration become a mandatory
> step? Always? Just in some cases?
> 
>>       /** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
>>       uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
>> index 7f93900bc8..ffd48e40d5 100644
>> --- a/lib/ethdev/rte_flow.c
>> +++ b/lib/ethdev/rte_flow.c
>> @@ -1392,3 +1392,72 @@ rte_flow_flex_item_release(uint16_t port_id,
>>       ret = ops->flex_item_release(dev, handle, error);
>>       return flow_err(port_id, ret, error);
>>   }
>> +
>> +int
>> +rte_flow_info_get(uint16_t port_id,
>> +          struct rte_flow_port_info *port_info,
>> +          struct rte_flow_error *error)
>> +{
>> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>> +
>> +    if (port_info == NULL) {
>> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
>> +        return -EINVAL;
>> +    }
>> +    if (dev->data->dev_configured == 0) {
>> +        RTE_FLOW_LOG(INFO,
>> +            "Device with port_id=%"PRIu16" is not configured.\n",
>> +            port_id);
>> +        return -EINVAL;
>> +    }
>> +    if (unlikely(!ops))
>> +        return -rte_errno;
> 
> The order of checks is not always obvious, but it should
> follow at least some rules. When there is no good reason
> to do otherwise, I'd suggest checking arguments in their
> order, i.e. check port_id and its direct derivatives
> first:
> 1. ops (since it is NULL if port_id is invalid)
> 2. dev_configured (since only port_id is required to check it)
> 3. port_info (since it goes after port_id)
> 
>> +    if (likely(!!ops->info_get)) {
>> +        return flow_err(port_id,
>> +                ops->info_get(dev, port_info, error),
>> +                error);
>> +    }
>> +    return rte_flow_error_set(error, ENOTSUP,
>> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>> +                  NULL, rte_strerror(ENOTSUP));
>> +}
>> +
>> +int
>> +rte_flow_configure(uint16_t port_id,
>> +           const struct rte_flow_port_attr *port_attr,
>> +           struct rte_flow_error *error)
>> +{
>> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>> +    int ret;
>> +
>> +    dev->data->flow_configured = 0;
>> +    if (port_attr == NULL) {
>> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
>> +        return -EINVAL;
>> +    }
>> +    if (dev->data->dev_configured == 0) {
>> +        RTE_FLOW_LOG(INFO,
>> +            "Device with port_id=%"PRIu16" is not configured.\n",
>> +            port_id);
>> +        return -EINVAL;
>> +    }

In fact there is one more interesting question related
to device states. Requiring flow info and flow configure
to be called in the configured state allows configure to
rely on the device configuration. The question is: what
should happen to the device's flow engine configuration
if the device is reconfigured?

>> +    if (dev->data->dev_started != 0) {
>> +        RTE_FLOW_LOG(INFO,
>> +            "Device with port_id=%"PRIu16" already started.\n",
>> +            port_id);
>> +        return -EINVAL;
>> +    }
>> +    if (unlikely(!ops))
>> +        return -rte_errno;
> 
> Same logic here:
> 1. ops
> 2. dev_configured
> 3. dev_started
> 4. port_attr
> 5. ops->configure since we want to be sure that state and input
>     arguments are valid before calling it
> 
>> +    if (likely(!!ops->configure)) {
>> +        ret = ops->configure(dev, port_attr, error);
>> +        if (ret == 0)
>> +            dev->data->flow_configured = 1;
>> +        return flow_err(port_id, ret, error);
>> +    }
>> +    return rte_flow_error_set(error, ENOTSUP,
>> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>> +                  NULL, rte_strerror(ENOTSUP));
>> +}
> 
> [snip]
> 
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice.
>> + *
>> + * Get information about flow engine resources.
>> + *
>> + * @param port_id
>> + *   Port identifier of Ethernet device.
>> + * @param[out] port_info
>> + *   A pointer to a structure of type *rte_flow_port_info*
>> + *   to be filled with the resources information of the port.
>> + * @param[out] error
>> + *   Perform verbose error reporting if not NULL.
>> + *   PMDs initialize this structure in case of error only.
>> + *
>> + * @return
>> + *   0 on success, a negative errno value otherwise and rte_errno is 
>> set.
> 
> If I'm not mistaken, we should be explicit about the
> meaning of negative result values.
> 
>> + */
>> +__rte_experimental
>> +int
>> +rte_flow_info_get(uint16_t port_id,
>> +          struct rte_flow_port_info *port_info,
>> +          struct rte_flow_error *error);
> 
> [snip]
> 
>> +/**
>> + * @warning
>> + * @b EXPERIMENTAL: this API may change without prior notice.
>> + *
>> + * Configure the port's flow API engine.
>> + *
>> + * This API can only be invoked before the application
>> + * starts using the rest of the flow library functions.
>> + *
>> + * The API can be invoked multiple times to change the
>> + * settings. The port, however, may reject the changes.
>> + *
>> + * Parameters in configuration attributes must not exceed
>> + * numbers of resources returned by the rte_flow_info_get API.
>> + *
>> + * @param port_id
>> + *   Port identifier of Ethernet device.
>> + * @param[in] port_attr
>> + *   Port configuration attributes.
>> + * @param[out] error
>> + *   Perform verbose error reporting if not NULL.
>> + *   PMDs initialize this structure in case of error only.
>> + *
>> + * @return
>> + *   0 on success, a negative errno value otherwise and rte_errno is 
>> set.
> 
> Same here.
> 
> [snip]


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v8 02/11] ethdev: add flow item/action templates
  2022-02-20  3:44           ` [PATCH v8 02/11] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-21 10:57             ` Andrew Rybchenko
  2022-02-21 13:12               ` Ori Kam
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-21 10:57 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/20/22 06:44, Alexander Kozyrev wrote:
> Treating every single flow rule as a completely independent and separate
> entity negatively impacts the flow rules insertion rate. Oftentimes in an
> application, many flow rules share a common structure (the same item mask
> and/or action list) so they can be grouped and classified together.
> This knowledge may be used as a source of optimization by a PMD/HW.
> 
> The pattern template defines common matching fields (the item mask) without
> values. The actions template holds a list of action types that will be used
> together in the same rule. The specific values for items and actions will
> be given only during the rule creation.
> 
> A table combines pattern and actions templates along with shared flow rule
> attributes (group ID, priority and traffic direction). This way a PMD/HW
> can prepare all the resources needed for efficient flow rules creation in
> the datapath. To avoid any hiccups due to memory reallocation, the maximum
> number of flow rules is defined at the table creation time.
> 
> The flow rule creation is done by selecting a table, a pattern template
> and an actions template (which are bound to the table), and setting unique
> values for the items and actions.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

[snip]

> +For example, to create an actions template with the same Mark ID
> +but different Queue Index for every rule:
> +
> +.. code-block:: c
> +
> +	rte_flow_actions_template_attr attr = {.ingress = 1};
> +	struct rte_flow_action act[] = {
> +		/* Mark ID is 4 for every rule, Queue Index is unique */
> +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> +		       .conf = &(struct rte_flow_action_mark){.id = 4}},
> +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> +	};
> +	struct rte_flow_action msk[] = {
> +		/* Assign to MARK mask any non-zero value to make it constant */
> +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> +		       .conf = &(struct rte_flow_action_mark){.id = 1}},

1 looks very strange. I can understand it in the case of
integer and boolean fields, but what to do in the case of
arrays? IMHO, it would be better to use all 0xff's in value.
Anyway, it must be defined very carefully and unambiguously.

> +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> +	};
> +	struct rte_flow_error err;
> +
> +	struct rte_flow_actions_template *actions_template =
> +		rte_flow_actions_template_create(port, &attr, &act, &msk, &err);
> +
> +The concrete value for Queue Index will be provided at the rule creation.

[snip]
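The mask convention in the quoted example (a non-zero mask field pins the field to the template's value, a zero mask field is filled per rule) can be modeled with a toy resolver. This is not rte_flow code: `toy_action` and `toy_resolve` are invented for the sketch, and real templates carry full action lists rather than two scalar fields.

```c
#include <assert.h>
#include <stdint.h>

/* A two-field stand-in for an actions template entry. */
struct toy_action {
	uint32_t mark_id;
	uint32_t queue_index;
};

/* Merge template and per-rule values under the template mask:
 * non-zero mask field -> constant from the template,
 * zero mask field     -> supplied at rule creation. */
static struct toy_action
toy_resolve(struct toy_action tmpl, struct toy_action mask,
	    struct toy_action rule)
{
	struct toy_action out;

	out.mark_id = mask.mark_id ? tmpl.mark_id : rule.mark_id;
	out.queue_index = mask.queue_index ?
			  tmpl.queue_index : rule.queue_index;
	return out;
}
```

This also shows why the reviewer's question matters: "any non-zero value" works for scalar fields like the one above, but the convention needs a careful definition for array-valued action configurations.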

> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index ffd48e40d5..e9f684eedb 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -1461,3 +1461,255 @@ rte_flow_configure(uint16_t port_id,
>   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>   				  NULL, rte_strerror(ENOTSUP));
>   }
> +
> +struct rte_flow_pattern_template *
> +rte_flow_pattern_template_create(uint16_t port_id,
> +		const struct rte_flow_pattern_template_attr *template_attr,
> +		const struct rte_flow_item pattern[],
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow_pattern_template *template;
> +
> +	if (template_attr == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" template attr is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (pattern == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" pattern is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (dev->data->flow_configured == 0) {
> +		RTE_FLOW_LOG(INFO,
> +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
> +			port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_STATE,
> +				NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (unlikely(!ops))
> +		return NULL;

See notes about order of checks in previous patch review notes.

> +	if (likely(!!ops->pattern_template_create)) {
> +		template = ops->pattern_template_create(dev, template_attr,
> +							pattern, error);
> +		if (template == NULL)
> +			flow_err(port_id, -rte_errno, error);
> +		return template;
> +	}
> +	rte_flow_error_set(error, ENOTSUP,
> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			   NULL, rte_strerror(ENOTSUP));
> +	return NULL;
> +}
> +
> +int
> +rte_flow_pattern_template_destroy(uint16_t port_id,
> +		struct rte_flow_pattern_template *pattern_template,
> +		struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(pattern_template == NULL))
> +		return 0;
> +	if (unlikely(!ops))
> +		return -rte_errno;

Same here. I'm afraid it is really important here as well,
since the request should not return OK if port_id is invalid.


> +	if (likely(!!ops->pattern_template_destroy)) {
> +		return flow_err(port_id,
> +				ops->pattern_template_destroy(dev,
> +							      pattern_template,
> +							      error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +struct rte_flow_actions_template *
> +rte_flow_actions_template_create(uint16_t port_id,
> +			const struct rte_flow_actions_template_attr *template_attr,
> +			const struct rte_flow_action actions[],
> +			const struct rte_flow_action masks[],
> +			struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow_actions_template *template;
> +
> +	if (template_attr == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" template attr is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (actions == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" actions is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (masks == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" masks is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (dev->data->flow_configured == 0) {
> +		RTE_FLOW_LOG(INFO,
> +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
> +			port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_STATE,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (unlikely(!ops))
> +		return NULL;

same here

> +	if (likely(!!ops->actions_template_create)) {
> +		template = ops->actions_template_create(dev, template_attr,
> +							actions, masks, error);
> +		if (template == NULL)
> +			flow_err(port_id, -rte_errno, error);
> +		return template;
> +	}
> +	rte_flow_error_set(error, ENOTSUP,
> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			   NULL, rte_strerror(ENOTSUP));
> +	return NULL;
> +}
> +
> +int
> +rte_flow_actions_template_destroy(uint16_t port_id,
> +			struct rte_flow_actions_template *actions_template,
> +			struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(actions_template == NULL))
> +		return 0;
> +	if (unlikely(!ops))
> +		return -rte_errno;

same here

> +	if (likely(!!ops->actions_template_destroy)) {
> +		return flow_err(port_id,
> +				ops->actions_template_destroy(dev,
> +							      actions_template,
> +							      error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> +
> +struct rte_flow_template_table *
> +rte_flow_template_table_create(uint16_t port_id,
> +			const struct rte_flow_template_table_attr *table_attr,
> +			struct rte_flow_pattern_template *pattern_templates[],
> +			uint8_t nb_pattern_templates,
> +			struct rte_flow_actions_template *actions_templates[],
> +			uint8_t nb_actions_templates,
> +			struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow_template_table *table;
> +
> +	if (table_attr == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" table attr is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (pattern_templates == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" pattern templates is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (actions_templates == NULL) {
> +		RTE_FLOW_LOG(ERR,
> +			     "Port %"PRIu16" actions templates is NULL.\n",
> +			     port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_ATTR,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (dev->data->flow_configured == 0) {
> +		RTE_FLOW_LOG(INFO,
> +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
> +			port_id);
> +		rte_flow_error_set(error, EINVAL,
> +				   RTE_FLOW_ERROR_TYPE_STATE,
> +				   NULL, rte_strerror(EINVAL));
> +		return NULL;
> +	}
> +	if (unlikely(!ops))
> +		return NULL;

Order of checks

> +	if (likely(!!ops->template_table_create)) {
> +		table = ops->template_table_create(dev, table_attr,
> +					pattern_templates, nb_pattern_templates,
> +					actions_templates, nb_actions_templates,
> +					error);
> +		if (table == NULL)
> +			flow_err(port_id, -rte_errno, error);
> +		return table;
> +	}
> +	rte_flow_error_set(error, ENOTSUP,
> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +			   NULL, rte_strerror(ENOTSUP));
> +	return NULL;
> +}
> +
> +int
> +rte_flow_template_table_destroy(uint16_t port_id,
> +				struct rte_flow_template_table *template_table,
> +				struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +
> +	if (unlikely(template_table == NULL))
> +		return 0;
> +	if (unlikely(!ops))
> +		return -rte_errno;
> +	if (likely(!!ops->template_table_destroy)) {
> +		return flow_err(port_id,
> +				ops->template_table_destroy(dev,
> +							    template_table,
> +							    error),
> +				error);
> +	}
> +	return rte_flow_error_set(error, ENOTSUP,
> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				  NULL, rte_strerror(ENOTSUP));
> +}
> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index cdb7b2be68..776e8ccc11 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4983,6 +4983,280 @@ rte_flow_configure(uint16_t port_id,
>   		   const struct rte_flow_port_attr *port_attr,
>   		   struct rte_flow_error *error);
>   
> +/**
> + * Opaque type returned after successful creation of pattern template.
> + * This handle can be used to manage the created pattern template.
> + */
> +struct rte_flow_pattern_template;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Flow pattern template attributes.

Would it be useful to mention that at least one direction
bit must be set? Otherwise the request does not make sense.
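A hypothetical validation helper (not part of the proposed API, stand-in `mock_*` types) makes the point concrete:

```c
#include <assert.h>

/* Same bitfield layout as the proposed rte_flow_pattern_template_attr. */
struct mock_pattern_template_attr {
	unsigned int relaxed_matching:1;
	unsigned int ingress:1;
	unsigned int egress:1;
	unsigned int transfer:1;
};

/* At least one direction bit must be set, otherwise no flow rule could
 * ever be created from the template. */
static int
mock_attr_has_direction(const struct mock_pattern_template_attr *attr)
{
	return attr->ingress || attr->egress || attr->transfer;
}
```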

> + */
> +__extension__
> +struct rte_flow_pattern_template_attr {
> +	/**
> +	 * Relaxed matching policy.
> +	 * - PMD may match only on items with mask member set and skip
> +	 * matching on protocol layers specified without any masks.
> +	 * - If not set, PMD will match on protocol layers
> +	 * specified without any masks as well.
> +	 * - Packet data must be stacked in the same order as the
> +	 * protocol layers to match inside packets, starting from the lowest.
> +	 */
> +	uint32_t relaxed_matching:1;

I should have noticed this earlier, but this looks like a new feature
which sounds unrelated to templates. If so, it creates an asymmetry
between sync and async flow rule capabilities.
Am I missing something?

Anyway, the feature looks hidden in the patch.

> +	/** Pattern valid for rules applied to ingress traffic. */
> +	uint32_t ingress:1;
> +	/** Pattern valid for rules applied to egress traffic. */
> +	uint32_t egress:1;
> +	/** Pattern valid for rules applied to transfer traffic. */
> +	uint32_t transfer:1;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create flow pattern template.
> + *
> + * The pattern template defines common matching fields without values.
> + * For example, matching on 5 tuple TCP flow, the template will be
> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> + * while values for each rule will be set during the flow rule creation.
> + * The number and order of items in the template must be the same
> + * at the rule creation.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] template_attr
> + *   Pattern template attributes.
> + * @param[in] pattern
> + *   Pattern specification (list terminated by the END pattern item).
> + *   The spec member of an item is not used unless the end member is used.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.

Don't we want to be explicit about the negative error codes used?
The question applies to all functions.

[snip]

> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Flow actions template attributes.

Same question about no directions specified.

> + */
> +__extension__
> +struct rte_flow_actions_template_attr {
> +	/** Action valid for rules applied to ingress traffic. */
> +	uint32_t ingress:1;
> +	/** Action valid for rules applied to egress traffic. */
> +	uint32_t egress:1;
> +	/** Action valid for rules applied to transfer traffic. */
> +	uint32_t transfer:1;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Create flow actions template.
> + *
> + * The actions template holds a list of action types without values.
> + * For example, the template to change TCP ports is TCP(s_port + d_port),
> + * while values for each rule will be set during the flow rule creation.
> + * The number and order of actions in the template must be the same
> + * at the rule creation.
> + *
> + * @param port_id
> + *   Port identifier of Ethernet device.
> + * @param[in] template_attr
> + *   Template attributes.
> + * @param[in] actions
> + *   Associated actions (list terminated by the END action).
> + *   The spec member is only used if @p masks spec is non-zero.
> + * @param[in] masks
> + *   List of actions that marks which of the action's member is constant.
> + *   A mask has the same format as the corresponding action.
> + *   If the action field in @p masks is not 0,

Comparison with zero makes sense for integers only.

> + *   the corresponding value in an action from @p actions will be the part
> + *   of the template and used in all flow rules.
> + *   The order of actions in @p masks is the same as in @p actions.
> + *   In case of indirect actions present in @p actions,
> + *   the actual action type should be present in @p mask.
> + * @param[out] error
> + *   Perform verbose error reporting if not NULL.
> + *   PMDs initialize this structure in case of error only.
> + *
> + * @return
> + *   Handle on success, NULL otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +struct rte_flow_actions_template *
> +rte_flow_actions_template_create(uint16_t port_id,
> +		const struct rte_flow_actions_template_attr *template_attr,
> +		const struct rte_flow_action actions[],
> +		const struct rte_flow_action masks[],
> +		struct rte_flow_error *error);

[snip]


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v8 01/11] ethdev: introduce flow engine configuration
  2022-02-21  9:52               ` Andrew Rybchenko
@ 2022-02-21 12:53                 ` Ori Kam
  2022-02-21 14:33                   ` Alexander Kozyrev
  2022-02-21 14:53                   ` Andrew Rybchenko
  0 siblings, 2 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-21 12:53 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew and Alexander,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, February 21, 2022 11:53 AM
> Subject: Re: [PATCH v8 01/11] ethdev: introduce flow engine configuration
> 
> On 2/21/22 12:47, Andrew Rybchenko wrote:
> > On 2/20/22 06:43, Alexander Kozyrev wrote:
> >> The flow rules creation/destruction at a large scale incurs a performance
> >> penalty and may negatively impact the packet processing when used
> >> as part of the datapath logic. This is mainly because software/hardware
> >> resources are allocated and prepared during the flow rule creation.
> >>
> >> In order to optimize the insertion rate, PMD may use some hints provided
> >> by the application at the initialization phase. The rte_flow_configure()
> >> function allows to pre-allocate all the needed resources beforehand.
> >> These resources can be used at a later stage without costly allocations.
> >> Every PMD may use only the subset of hints and ignore unused ones or
> >> fail in case the requested configuration is not supported.
> >>
> >> The rte_flow_info_get() is available to retrieve the information about
> >> supported pre-configurable resources. Both these functions must be called
> >> before any other usage of the flow API engine.
> >>
> >> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> >> Acked-by: Ori Kam <orika@nvidia.com>
> >
> > [snip]
> >
> >> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> >> index 6d697a879a..06f0896e1e 100644
> >> --- a/lib/ethdev/ethdev_driver.h
> >> +++ b/lib/ethdev/ethdev_driver.h
> >> @@ -138,7 +138,12 @@ struct rte_eth_dev_data {
> >>            * Indicates whether the device is configured:
> >>            * CONFIGURED(1) / NOT CONFIGURED(0)
> >>            */
> >> -        dev_configured : 1;
> >> +        dev_configured:1,
> >
> > Above is unrelated to the patch. Moreover, it breaks style used
> > few lines above.
> >
+1
> >> +        /**
> >> +         * Indicates whether the flow engine is configured:
> >> +         * CONFIGURED(1) / NOT CONFIGURED(0)
> >> +         */
> >> +        flow_configured:1;
> >
> > I'd like to understand why we need the information. It is
> > unclear from the patch. Right now it is write-only. Nobody
> > checks it. Is flow engine configuration become a mandatory
> > step? Always? Just in some cases?
> >

See my comments below.
I can see two ways: either remove this member, or check in each control
function that the configuration function was called.
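The second option could look like the following sketch (mock state and hypothetical helper name; the real flag is dev->data->flow_configured):

```c
#include <assert.h>
#include <errno.h>

struct mock_dev_data {
	int flow_configured; /* set to 1 on successful rte_flow_configure() */
};

/* Guard executed at the start of every template/table control function. */
static int
mock_check_configured(const struct mock_dev_data *data)
{
	if (data->flow_configured == 0)
		return -EINVAL; /* flow engine not configured yet */
	return 0;
}
```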

> >>       /** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
> >>       uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> >> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> >> index 7f93900bc8..ffd48e40d5 100644
> >> --- a/lib/ethdev/rte_flow.c
> >> +++ b/lib/ethdev/rte_flow.c
> >> @@ -1392,3 +1392,72 @@ rte_flow_flex_item_release(uint16_t port_id,
> >>       ret = ops->flex_item_release(dev, handle, error);
> >>       return flow_err(port_id, ret, error);
> >>   }
> >> +
> >> +int
> >> +rte_flow_info_get(uint16_t port_id,
> >> +          struct rte_flow_port_info *port_info,
> >> +          struct rte_flow_error *error)
> >> +{
> >> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> >> +
> >> +    if (port_info == NULL) {
> >> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
> >> +        return -EINVAL;
> >> +    }
> >> +    if (dev->data->dev_configured == 0) {
> >> +        RTE_FLOW_LOG(INFO,
> >> +            "Device with port_id=%"PRIu16" is not configured.\n",
> >> +            port_id);
> >> +        return -EINVAL;
> >> +    }
> >> +    if (unlikely(!ops))
> >> +        return -rte_errno;
> >
> > Order of checks is not always obvious, but requires at
> > least some rules to follow. When there is no any good
> > reason to do otherwise, I'd suggest to check arguments
> > in their order. I.e. check port_id and its direct
> > derivatives first:
> > 1. ops (since it is NULL if port_id is invalid)
> > 2. dev_configured (since only port_id is required to check it)
> > 3. port_info (since it goes after port_id)
> >

Agree,

> >> +    if (likely(!!ops->info_get)) {
> >> +        return flow_err(port_id,
> >> +                ops->info_get(dev, port_info, error),
> >> +                error);
> >> +    }
> >> +    return rte_flow_error_set(error, ENOTSUP,
> >> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >> +                  NULL, rte_strerror(ENOTSUP));
> >> +}
> >> +
> >> +int
> >> +rte_flow_configure(uint16_t port_id,
> >> +           const struct rte_flow_port_attr *port_attr,
> >> +           struct rte_flow_error *error)
> >> +{
> >> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> >> +    int ret;
> >> +
> >> +    dev->data->flow_configured = 0;

I don't think there is any point in setting this here.
I would remove this field, unless you want to check it
in all control functions.

> >> +    if (port_attr == NULL) {
> >> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
> >> +        return -EINVAL;
> >> +    }
> >> +    if (dev->data->dev_configured == 0) {
> >> +        RTE_FLOW_LOG(INFO,
> >> +            "Device with port_id=%"PRIu16" is not configured.\n",
> >> +            port_id);
> >> +        return -EINVAL;
> >> +    }
> 
> In fact there is one more interesting question related
> to device states. Necessity to call flow info and flow
> configure in configured state allows configure to rely
> on device configuration. The question is: what should
> happen with the device flow engine configuration if
> the device is reconfigured?
> 

That depends on the PMD.
A PMD may support full reconfiguration, partial reconfiguration (for example,
changing only the number of objects but not the number of queues), or no
reconfiguration at all. It may also depend on whether the port is started.
Currently we don't plan to support reconfiguration, but in the future we may
support changing the number of objects.

> >> +    if (dev->data->dev_started != 0) {
> >> +        RTE_FLOW_LOG(INFO,
> >> +            "Device with port_id=%"PRIu16" already started.\n",
> >> +            port_id);
> >> +        return -EINVAL;
> >> +    }
> >> +    if (unlikely(!ops))
> >> +        return -rte_errno;
> >
> > Same logic here:
> > 1. ops
> > 2. dev_configured
> > 3. dev_started
> > 4. port_attr
> > 5. ops->configure since we want to be sure that state and input
> >     arguments are valid before calling it
> >
> >> +    if (likely(!!ops->configure)) {
> >> +        ret = ops->configure(dev, port_attr, error);
> >> +        if (ret == 0)
> >> +            dev->data->flow_configured = 1;
> >> +        return flow_err(port_id, ret, error);
> >> +    }
> >> +    return rte_flow_error_set(error, ENOTSUP,
> >> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >> +                  NULL, rte_strerror(ENOTSUP));
> >> +}
> >
> > [snip]
> >
> >> +/**
> >> + * @warning
> >> + * @b EXPERIMENTAL: this API may change without prior notice.
> >> + *
> >> + * Get information about flow engine resources.
> >> + *
> >> + * @param port_id
> >> + *   Port identifier of Ethernet device.
> >> + * @param[out] port_info
> >> + *   A pointer to a structure of type *rte_flow_port_info*
> >> + *   to be filled with the resources information of the port.
> >> + * @param[out] error
> >> + *   Perform verbose error reporting if not NULL.
> >> + *   PMDs initialize this structure in case of error only.
> >> + *
> >> + * @return
> >> + *   0 on success, a negative errno value otherwise and rte_errno is
> >> set.
> >
> > If I'm not mistaken, we should be explicit about the
> > meaning of negative result values
> >
I'm not sure; until now we didn't have any error values defined in rte_flow.
I don't want to constrain PMDs to specific error types.
If a PMD can give a better error code, or adds a case that may result in an
error, I don't want to have to change the API.
So I think we had better leave the error codes out of the documentation unless
they are final and can only result from the rte_flow level.

> >> + */
> >> +__rte_experimental
> >> +int
> >> +rte_flow_info_get(uint16_t port_id,
> >> +          struct rte_flow_port_info *port_info,
> >> +          struct rte_flow_error *error);
> >
> > [snip]
> >
> >> +/**
> >> + * @warning
> >> + * @b EXPERIMENTAL: this API may change without prior notice.
> >> + *
> >> + * Configure the port's flow API engine.
> >> + *
> >> + * This API can only be invoked before the application
> >> + * starts using the rest of the flow library functions.
> >> + *
> >> + * The API can be invoked multiple times to change the
> >> + * settings. The port, however, may reject the changes.
> >> + *
> >> + * Parameters in configuration attributes must not exceed
> >> + * numbers of resources returned by the rte_flow_info_get API.
> >> + *
> >> + * @param port_id
> >> + *   Port identifier of Ethernet device.
> >> + * @param[in] port_attr
> >> + *   Port configuration attributes.
> >> + * @param[out] error
> >> + *   Perform verbose error reporting if not NULL.
> >> + *   PMDs initialize this structure in case of error only.
> >> + *
> >> + * @return
> >> + *   0 on success, a negative errno value otherwise and rte_errno is
> >> set.
> >
> > Same here.
> >
Same as above.

> > [snip]

Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v8 02/11] ethdev: add flow item/action templates
  2022-02-21 10:57             ` Andrew Rybchenko
@ 2022-02-21 13:12               ` Ori Kam
  2022-02-21 15:05                 ` Andrew Rybchenko
  2022-02-21 15:14                 ` Alexander Kozyrev
  0 siblings, 2 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-21 13:12 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, February 21, 2022 12:57 PM
> Subject: Re: [PATCH v8 02/11] ethdev: add flow item/action templates
> 
> On 2/20/22 06:44, Alexander Kozyrev wrote:
> > Treating every single flow rule as a completely independent and separate
> > entity negatively impacts the flow rules insertion rate. Oftentimes in an
> > application, many flow rules share a common structure (the same item mask
> > and/or action list) so they can be grouped and classified together.
> > This knowledge may be used as a source of optimization by a PMD/HW.
> >
> > The pattern template defines common matching fields (the item mask) without
> > values. The actions template holds a list of action types that will be used
> > together in the same rule. The specific values for items and actions will
> > be given only during the rule creation.
> >
> > A table combines pattern and actions templates along with shared flow rule
> > attributes (group ID, priority and traffic direction). This way a PMD/HW
> > can prepare all the resources needed for efficient flow rules creation in
> > the datapath. To avoid any hiccups due to memory reallocation, the maximum
> > number of flow rules is defined at the table creation time.
> >
> > The flow rule creation is done by selecting a table, a pattern template
> > and an actions template (which are bound to the table), and setting unique
> > values for the items and actions.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > Acked-by: Ori Kam <orika@nvidia.com>
> 
> [snip]
> 
> > +For example, to create an actions template with the same Mark ID
> > +but different Queue Index for every rule:
> > +
> > +.. code-block:: c
> > +
> > +	rte_flow_actions_template_attr attr = {.ingress = 1};
> > +	struct rte_flow_action act[] = {
> > +		/* Mark ID is 4 for every rule, Queue Index is unique */
> > +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> > +		       .conf = &(struct rte_flow_action_mark){.id = 4}},
> > +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> > +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> > +	};
> > +	struct rte_flow_action msk[] = {
> > +		/* Assign to MARK mask any non-zero value to make it constant */
> > +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
> > +		       .conf = &(struct rte_flow_action_mark){.id = 1}},
> 
> 1 looks very strange. I can understand it in the case of
> integer and boolean fields, but what to do in the case of
> arrays? IMHO, it would be better to use all 0xff's in value.
> Anyway, it must be defined very carefully and non-ambiguous.
> 
There are some issues with all 0xff's: for example, in the case of pointers
or enums it will result in an invalid value.
So I vote for keeping it as is.
I fully agree that it should be defined very clearly.
I think that for arrays with a predefined size (I don't think we have such in
rte_flow) it should be specified that the first element must not be 0.
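The convention under discussion can be reduced to a toy check (hypothetical `mock_*` names): a field is part of the template when its mask member is non-zero, so 1 serves as well as 0xffffffff, while an all-ones sentinel could be an invalid enum value or pointer.

```c
#include <assert.h>
#include <stddef.h>

/* Mock of struct rte_flow_action_mark's conf. */
struct mock_mark_conf { unsigned int id; };

/* A mask member marks the corresponding action field as constant across
 * rules whenever it is non-zero; the specific non-zero value is irrelevant. */
static int
mock_mark_id_is_const(const struct mock_mark_conf *mask_conf)
{
	return mask_conf != NULL && mask_conf->id != 0;
}
```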

> > +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
> > +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
> > +	};
> > +	struct rte_flow_error err;
> > +
> > +	struct rte_flow_actions_template *actions_template =
> > +		rte_flow_actions_template_create(port, &attr, &act, &msk, &err);
> > +
> > +The concrete value for Queue Index will be provided at the rule creation.
> 
> [snip]
> 
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index ffd48e40d5..e9f684eedb 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -1461,3 +1461,255 @@ rte_flow_configure(uint16_t port_id,
> >   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >   				  NULL, rte_strerror(ENOTSUP));
> >   }
> > +
> > +struct rte_flow_pattern_template *
> > +rte_flow_pattern_template_create(uint16_t port_id,
> > +		const struct rte_flow_pattern_template_attr *template_attr,
> > +		const struct rte_flow_item pattern[],
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow_pattern_template *template;
> > +
> > +	if (template_attr == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" template attr is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (pattern == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" pattern is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (dev->data->flow_configured == 0) {
> > +		RTE_FLOW_LOG(INFO,
> > +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
> > +			port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				RTE_FLOW_ERROR_TYPE_STATE,
> > +				NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (unlikely(!ops))
> > +		return NULL;
> 
> See notes about order of checks in previous patch review notes.
> 
> > +	if (likely(!!ops->pattern_template_create)) {
> > +		template = ops->pattern_template_create(dev, template_attr,
> > +							pattern, error);
> > +		if (template == NULL)
> > +			flow_err(port_id, -rte_errno, error);
> > +		return template;
> > +	}
> > +	rte_flow_error_set(error, ENOTSUP,
> > +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			   NULL, rte_strerror(ENOTSUP));
> > +	return NULL;
> > +}
> > +
> > +int
> > +rte_flow_pattern_template_destroy(uint16_t port_id,
> > +		struct rte_flow_pattern_template *pattern_template,
> > +		struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(pattern_template == NULL))
> > +		return 0;
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> 
> Same here. I'm afraid it is really important here as well,
> since request should not return OK if port_id is invalid.
> 
> 
> > +	if (likely(!!ops->pattern_template_destroy)) {
> > +		return flow_err(port_id,
> > +				ops->pattern_template_destroy(dev,
> > +							      pattern_template,
> > +							      error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +struct rte_flow_actions_template *
> > +rte_flow_actions_template_create(uint16_t port_id,
> > +			const struct rte_flow_actions_template_attr *template_attr,
> > +			const struct rte_flow_action actions[],
> > +			const struct rte_flow_action masks[],
> > +			struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow_actions_template *template;
> > +
> > +	if (template_attr == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" template attr is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (actions == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" actions is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (masks == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" masks is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (dev->data->flow_configured == 0) {
> > +		RTE_FLOW_LOG(INFO,
> > +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
> > +			port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_STATE,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (unlikely(!ops))
> > +		return NULL;
> 
> same here
> 
> > +	if (likely(!!ops->actions_template_create)) {
> > +		template = ops->actions_template_create(dev, template_attr,
> > +							actions, masks, error);
> > +		if (template == NULL)
> > +			flow_err(port_id, -rte_errno, error);
> > +		return template;
> > +	}
> > +	rte_flow_error_set(error, ENOTSUP,
> > +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			   NULL, rte_strerror(ENOTSUP));
> > +	return NULL;
> > +}
> > +
> > +int
> > +rte_flow_actions_template_destroy(uint16_t port_id,
> > +			struct rte_flow_actions_template *actions_template,
> > +			struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(actions_template == NULL))
> > +		return 0;
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> 
> same here
> 
> > +	if (likely(!!ops->actions_template_destroy)) {
> > +		return flow_err(port_id,
> > +				ops->actions_template_destroy(dev,
> > +							      actions_template,
> > +							      error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +struct rte_flow_template_table *
> > +rte_flow_template_table_create(uint16_t port_id,
> > +			const struct rte_flow_template_table_attr *table_attr,
> > +			struct rte_flow_pattern_template *pattern_templates[],
> > +			uint8_t nb_pattern_templates,
> > +			struct rte_flow_actions_template *actions_templates[],
> > +			uint8_t nb_actions_templates,
> > +			struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow_template_table *table;
> > +
> > +	if (table_attr == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" table attr is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (pattern_templates == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" pattern templates is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (actions_templates == NULL) {
> > +		RTE_FLOW_LOG(ERR,
> > +			     "Port %"PRIu16" actions templates is NULL.\n",
> > +			     port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_ATTR,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (dev->data->flow_configured == 0) {
> > +		RTE_FLOW_LOG(INFO,
> > +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
> > +			port_id);
> > +		rte_flow_error_set(error, EINVAL,
> > +				   RTE_FLOW_ERROR_TYPE_STATE,
> > +				   NULL, rte_strerror(EINVAL));
> > +		return NULL;
> > +	}
> > +	if (unlikely(!ops))
> > +		return NULL;
> 
> Order of checks
> 
> > +	if (likely(!!ops->template_table_create)) {
> > +		table = ops->template_table_create(dev, table_attr,
> > +					pattern_templates, nb_pattern_templates,
> > +					actions_templates, nb_actions_templates,
> > +					error);
> > +		if (table == NULL)
> > +			flow_err(port_id, -rte_errno, error);
> > +		return table;
> > +	}
> > +	rte_flow_error_set(error, ENOTSUP,
> > +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +			   NULL, rte_strerror(ENOTSUP));
> > +	return NULL;
> > +}
> > +
> > +int
> > +rte_flow_template_table_destroy(uint16_t port_id,
> > +				struct rte_flow_template_table *template_table,
> > +				struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > +	if (unlikely(template_table == NULL))
> > +		return 0;
> > +	if (unlikely(!ops))
> > +		return -rte_errno;
> > +	if (likely(!!ops->template_table_destroy)) {
> > +		return flow_err(port_id,
> > +				ops->template_table_destroy(dev,
> > +							    template_table,
> > +							    error),
> > +				error);
> > +	}
> > +	return rte_flow_error_set(error, ENOTSUP,
> > +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +				  NULL, rte_strerror(ENOTSUP));
> > +}
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index cdb7b2be68..776e8ccc11 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4983,6 +4983,280 @@ rte_flow_configure(uint16_t port_id,
> >   		   const struct rte_flow_port_attr *port_attr,
> >   		   struct rte_flow_error *error);
> >
> > +/**
> > + * Opaque type returned after successful creation of pattern template.
> > + * This handle can be used to manage the created pattern template.
> > + */
> > +struct rte_flow_pattern_template;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Flow pattern template attributes.
> 
> Would it be useful to mentioned that at least one direction
> bit must be set? Otherwise request does not make sense.
> 
Agreed, at least one direction bit must be set.

> > + */
> > +__extension__
> > +struct rte_flow_pattern_template_attr {
> > +	/**
> > +	 * Relaxed matching policy.
> > +	 * - PMD may match only on items with mask member set and skip
> > +	 * matching on protocol layers specified without any masks.
> > +	 * - If not set, PMD will match on protocol layers
> > +	 * specified without any masks as well.
> > +	 * - Packet data must be stacked in the same order as the
> > +	 * protocol layers to match inside packets, starting from the lowest.
> > +	 */
> > +	uint32_t relaxed_matching:1;
> 
> I should notice this earlier, but it looks like a new feature
> which sounds unrelated to templates. If so, it makes asymmetry
> in sync and async flow rules capabilities.
> Am I missing something?
> 
> Anyway, the feature looks hidden in the patch.
>
No, this is not a hidden feature.
In the current API the application must specify all the preceding items.
For example, suppose an application wants to match on the UDP source port.
The rte_flow pattern will look something like eth / ipv4 / udp sport = xxx.
When the PMD gets this pattern it must enforce that after the eth item
there is IPv4 and then UDP, and only then add the match on the
source port.
This means that the PMD adds extra matching.
If the application has already validated that the packet contains UDP
in group 0 and then jumped to group 1, it can save the HW those extra matches
by enabling this bit, which means that the HW should only match on explicitly
masked fields.

> > +	/** Pattern valid for rules applied to ingress traffic. */
> > +	uint32_t ingress:1;
> > +	/** Pattern valid for rules applied to egress traffic. */
> > +	uint32_t egress:1;
> > +	/** Pattern valid for rules applied to transfer traffic. */
> > +	uint32_t transfer:1;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Create flow pattern template.
> > + *
> > + * The pattern template defines common matching fields without values.
> > + * For example, matching on 5 tuple TCP flow, the template will be
> > + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
> > + * while values for each rule will be set during the flow rule creation.
> > + * The number and order of items in the template must be the same
> > + * at the rule creation.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] template_attr
> > + *   Pattern template attributes.
> > + * @param[in] pattern
> > + *   Pattern specification (list terminated by the END pattern item).
> > + *   The spec member of an item is not used unless the end member is used.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> 
> Don't we want to be explicit about used negative error code?
> The question is applicable to all functions.
> 
Same answer as given in the other patch:
since a PMD may have different/extra error codes, I don't think we should
list them here.

> [snip]
> 
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Flow actions template attributes.
> 
> Same question about no directions specified.
> 
> > + */
> > +__extension__
> > +struct rte_flow_actions_template_attr {
> > +	/** Action valid for rules applied to ingress traffic. */
> > +	uint32_t ingress:1;
> > +	/** Action valid for rules applied to egress traffic. */
> > +	uint32_t egress:1;
> > +	/** Action valid for rules applied to transfer traffic. */
> > +	uint32_t transfer:1;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Create flow actions template.
> > + *
> > + * The actions template holds a list of action types without values.
> > + * For example, the template to change TCP ports is TCP(s_port + d_port),
> > + * while values for each rule will be set during the flow rule creation.
> > + * The number and order of actions in the template must be the same
> > + * at the rule creation.
> > + *
> > + * @param port_id
> > + *   Port identifier of Ethernet device.
> > + * @param[in] template_attr
> > + *   Template attributes.
> > + * @param[in] actions
> > + *   Associated actions (list terminated by the END action).
> > + *   The spec member is only used if @p masks spec is non-zero.
> > + * @param[in] masks
> > + *   List of actions that marks which of the action's member is constant.
> > + *   A mask has the same format as the corresponding action.
> > + *   If the action field in @p masks is not 0,
> 
> Comparison with zero makes sense for integers only.
> 

Why? The comparison with zero also works for pointers and enums.

> > + *   the corresponding value in an action from @p actions will be the part
> > + *   of the template and used in all flow rules.
> > + *   The order of actions in @p masks is the same as in @p actions.
> > + *   In case of indirect actions present in @p actions,
> > + *   the actual action type should be present in @p mask.
> > + * @param[out] error
> > + *   Perform verbose error reporting if not NULL.
> > + *   PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + *   Handle on success, NULL otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +struct rte_flow_actions_template *
> > +rte_flow_actions_template_create(uint16_t port_id,
> > +		const struct rte_flow_actions_template_attr *template_attr,
> > +		const struct rte_flow_action actions[],
> > +		const struct rte_flow_action masks[],
> > +		struct rte_flow_error *error);
> 
> [snip]

Best,
Ori


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v8 01/11] ethdev: introduce flow engine configuration
  2022-02-21 12:53                 ` Ori Kam
@ 2022-02-21 14:33                   ` Alexander Kozyrev
  2022-02-21 14:53                   ` Andrew Rybchenko
  1 sibling, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 14:33 UTC (permalink / raw)
  To: Ori Kam, Andrew Rybchenko, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On Monday, February 21, 2022 7:54 Ori Kam <orika@nvidia.com> wrote:
> Hi Andrew and Alexander,
> 
> > -----Original Message-----
> > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Sent: Monday, February 21, 2022 11:53 AM
> > Subject: Re: [PATCH v8 01/11] ethdev: introduce flow engine configuration
> >
> > On 2/21/22 12:47, Andrew Rybchenko wrote:
> > > On 2/20/22 06:43, Alexander Kozyrev wrote:
> > >> The flow rules creation/destruction at a large scale incurs a performance
> > >> penalty and may negatively impact the packet processing when used
> > >> as part of the datapath logic. This is mainly because software/hardware
> > >> resources are allocated and prepared during the flow rule creation.
> > >>
> > >> In order to optimize the insertion rate, PMD may use some hints provided
> > >> by the application at the initialization phase. The rte_flow_configure()
> > >> function allows to pre-allocate all the needed resources beforehand.
> > >> These resources can be used at a later stage without costly allocations.
> > >> Every PMD may use only the subset of hints and ignore unused ones or
> > >> fail in case the requested configuration is not supported.
> > >>
> > >> The rte_flow_info_get() is available to retrieve the information about
> > >> supported pre-configurable resources. Both these functions must be called
> > >> before any other usage of the flow API engine.
> > >>
> > >> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > >> Acked-by: Ori Kam <orika@nvidia.com>
> > >
> > > [snip]
> > >
> > >> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> > >> index 6d697a879a..06f0896e1e 100644
> > >> --- a/lib/ethdev/ethdev_driver.h
> > >> +++ b/lib/ethdev/ethdev_driver.h
> > >> @@ -138,7 +138,12 @@ struct rte_eth_dev_data {
> > >>            * Indicates whether the device is configured:
> > >>            * CONFIGURED(1) / NOT CONFIGURED(0)
> > >>            */
> > >> -        dev_configured : 1;
> > >> +        dev_configured:1,
> > >
> > > Above is unrelated to the patch. Moreover, it breaks style used
> > > few lines above.
> > >
> +1

It is related: I had to change this line to add the flow_configured member,
and there is a warning if I keep the old style:
ERROR:SPACING: space prohibited before that ':' (ctx:WxW)
Should I keep the old style with warnings or change all members to fix it?

> > >> +        /**
> > >> +         * Indicates whether the flow engine is configured:
> > >> +         * CONFIGURED(1) / NOT CONFIGURED(0)
> > >> +         */
> > >> +        flow_configured:1;
> > >
> > > I'd like to understand why we need the information. It is
> > > unclear from the patch. Right now it is write-only. Nobody
> > > checks it. Is flow engine configuration become a mandatory
> > > step? Always? Just in some cases?
> > >
> 
> See my commets below,
> I can see two ways or remove this member or check in each control function
> that the configuration function was done.

It is write-only in this patch; rte_flow_configure() sets it when configuration is done.
Then it is checked in the template/table creation functions in patch 2.
We do not allow template/table creation without invoking configure first.
 
> > >>       /** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
> > >>       uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
> > >> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > >> index 7f93900bc8..ffd48e40d5 100644
> > >> --- a/lib/ethdev/rte_flow.c
> > >> +++ b/lib/ethdev/rte_flow.c
> > >> @@ -1392,3 +1392,72 @@ rte_flow_flex_item_release(uint16_t port_id,
> > >>       ret = ops->flex_item_release(dev, handle, error);
> > >>       return flow_err(port_id, ret, error);
> > >>   }
> > >> +
> > >> +int
> > >> +rte_flow_info_get(uint16_t port_id,
> > >> +          struct rte_flow_port_info *port_info,
> > >> +          struct rte_flow_error *error)
> > >> +{
> > >> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > >> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > >> +
> > >> +    if (port_info == NULL) {
> > >> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
> > >> +        return -EINVAL;
> > >> +    }
> > >> +    if (dev->data->dev_configured == 0) {
> > >> +        RTE_FLOW_LOG(INFO,
> > >> +            "Device with port_id=%"PRIu16" is not configured.\n",
> > >> +            port_id);
> > >> +        return -EINVAL;
> > >> +    }
> > >> +    if (unlikely(!ops))
> > >> +        return -rte_errno;
> > >
> > > Order of checks is not always obvious, but requires at
> > > least some rules to follow. When there is no any good
> > > reason to do otherwise, I'd suggest to check arguments
> > > in there order. I.e. check port_id and its direct
> > > derivatives first:
> > > 1. ops (since it is NULL if port_id is invalid)
> > > 2. dev_configured (since only port_id is required to check it)
> > > 3. port_info (since it goes after port_id)
> > >
> 
> Agree,

Ok.

> > >> +    if (likely(!!ops->info_get)) {
> > >> +        return flow_err(port_id,
> > >> +                ops->info_get(dev, port_info, error),
> > >> +                error);
> > >> +    }
> > >> +    return rte_flow_error_set(error, ENOTSUP,
> > >> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > >> +                  NULL, rte_strerror(ENOTSUP));
> > >> +}
> > >> +
> > >> +int
> > >> +rte_flow_configure(uint16_t port_id,
> > >> +           const struct rte_flow_port_attr *port_attr,
> > >> +           struct rte_flow_error *error)
> > >> +{
> > >> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > >> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > >> +    int ret;
> > >> +
> > >> +    dev->data->flow_configured = 0;
> 
> I don't think there is meaning to add this set here.
> I would remove this field.
> Unless you want to check it for all control functions.

I do check it in templates/tables creation API as I mentioned above.

> > >> +    if (port_attr == NULL) {
> > >> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
> > >> +        return -EINVAL;
> > >> +    }
> > >> +    if (dev->data->dev_configured == 0) {
> > >> +        RTE_FLOW_LOG(INFO,
> > >> +            "Device with port_id=%"PRIu16" is not configured.\n",
> > >> +            port_id);
> > >> +        return -EINVAL;
> > >> +    }
> >
> > In fact there is one more interesting question related
> > to device states. Necessity to call flow info and flow
> > configure in configured state allows configure to rely
> > on device configuration. The question is: what should
> > happen with the device flow engine configuration if
> > the device is reconfigured?
> >
> 
> That depends on the PMD.
> A PMD may support full reconfiguration, partial reconfiguration (for
> example, changing only the number of objects but not the number of
> queues), or no reconfiguration at all.
> It may also depend on whether the port is started or not.
> Currently we don't plan to support reconfiguration, but in the future
> we may support changing the number of objects.
> > >> +    if (dev->data->dev_started != 0) {
> > >> +        RTE_FLOW_LOG(INFO,
> > >> +            "Device with port_id=%"PRIu16" already started.\n",
> > >> +            port_id);
> > >> +        return -EINVAL;
> > >> +    }
> > >> +    if (unlikely(!ops))
> > >> +        return -rte_errno;
> > >
> > > Same logic here:
> > > 1. ops
> > > 2. dev_configured
> > > 3. dev_started
> > > 4. port_attr
> > > 5. ops->configure since we want to be sure that state and input
> > >     arguments are valid before calling it
> > >
> > >> +    if (likely(!!ops->configure)) {
> > >> +        ret = ops->configure(dev, port_attr, error);
> > >> +        if (ret == 0)
> > >> +            dev->data->flow_configured = 1;
> > >> +        return flow_err(port_id, ret, error);
> > >> +    }
> > >> +    return rte_flow_error_set(error, ENOTSUP,
> > >> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > >> +                  NULL, rte_strerror(ENOTSUP));
> > >> +}
> > >
> > > [snip]
> > >
> > >> +/**
> > >> + * @warning
> > >> + * @b EXPERIMENTAL: this API may change without prior notice.
> > >> + *
> > >> + * Get information about flow engine resources.
> > >> + *
> > >> + * @param port_id
> > >> + *   Port identifier of Ethernet device.
> > >> + * @param[out] port_info
> > >> + *   A pointer to a structure of type *rte_flow_port_info*
> > >> + *   to be filled with the resources information of the port.
> > >> + * @param[out] error
> > >> + *   Perform verbose error reporting if not NULL.
> > >> + *   PMDs initialize this structure in case of error only.
> > >> + *
> > >> + * @return
> > >> + *   0 on success, a negative errno value otherwise and rte_errno is
> > >> set.
> > >
> > > If I'm not mistakes we should be explicit with
> > > negative result values menting
> > >
> I'm not sure; until now we didn't have any error values defined in rte_flow.
> I don't want to enforce error types on the PMD.
> If a PMD can give a better error code, or adds a case that may result in
> an error, I don't want to have to change the API.
> So I think we had better leave the error codes out of the documentation
> unless they are final and can only result from the rte_flow level.
> 
> > >> + */
> > >> +__rte_experimental
> > >> +int
> > >> +rte_flow_info_get(uint16_t port_id,
> > >> +          struct rte_flow_port_info *port_info,
> > >> +          struct rte_flow_error *error);
> > >
> > > [snip]
> > >
> > >> +/**
> > >> + * @warning
> > >> + * @b EXPERIMENTAL: this API may change without prior notice.
> > >> + *
> > >> + * Configure the port's flow API engine.
> > >> + *
> > >> + * This API can only be invoked before the application
> > >> + * starts using the rest of the flow library functions.
> > >> + *
> > >> + * The API can be invoked multiple times to change the
> > >> + * settings. The port, however, may reject the changes.
> > >> + *
> > >> + * Parameters in configuration attributes must not exceed
> > >> + * numbers of resources returned by the rte_flow_info_get API.
> > >> + *
> > >> + * @param port_id
> > >> + *   Port identifier of Ethernet device.
> > >> + * @param[in] port_attr
> > >> + *   Port configuration attributes.
> > >> + * @param[out] error
> > >> + *   Perform verbose error reporting if not NULL.
> > >> + *   PMDs initialize this structure in case of error only.
> > >> + *
> > >> + * @return
> > >> + *   0 on success, a negative errno value otherwise and rte_errno is
> > >> set.
> > >
> > > Same here.
> > >
> Same as above.
> 
> > > [snip]
> 
> Best,
> ORi

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations
  2022-02-20  3:44           ` [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-21 14:49             ` Andrew Rybchenko
  2022-02-21 15:35               ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-21 14:49 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/20/22 06:44, Alexander Kozyrev wrote:
> A new, faster, queue-based flow rules management mechanism is needed for
> applications offloading rules inside the datapath. This asynchronous
> and lockless mechanism frees the CPU for further packet processing and
> reduces the performance impact of the flow rules creation/destruction
> on the datapath. Note that queues are not thread-safe and the queue
> should be accessed from the same thread for all queue operations.
> It is the responsibility of the app to sync the queue functions in case
> of multi-threaded access to the same queue.
> 
> The rte_flow_async_create() function enqueues a flow creation to the
> requested queue. It benefits from already configured resources and sets
> unique values on top of item and action templates. A flow rule is enqueued
> on the specified flow queue and offloaded asynchronously to the hardware.
> The function returns immediately to spare CPU for further packet
> processing. The application must invoke the rte_flow_pull() function
> to complete the flow rule operation offloading, to clear the queue, and to
> receive the operation status. The rte_flow_async_destroy() function
> enqueues a flow destruction to the requested queue.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

[snip]

> @@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
>   				&actions_templates, nb_actions_templ,
>   				&error);
>   
> +Asynchronous operations
> +-----------------------
> +
> +Flow rules management can be done via special lockless flow management queues.
> +- Queue operations are asynchronous and not thread-safe.
> +
> +- Operations can thus be invoked by the app's datapath,
> +  packet processing can continue while queue operations are processed by NIC.
> +
> +- Number of flow queues is configured at initialization stage.
> +
> +- Available operation types: rule creation, rule destruction,
> +  indirect rule creation, indirect rule destruction, indirect rule update.
> +
> +- Operations may be reordered within a queue.
> +
> +- Operations can be postponed and pushed to NIC in batches.
> +
> +- Results pulling must be done on time to avoid queue overflows.

I guess the documentation is for applications, but IMHO it is a
driver responsibility. The application should not care about it.
Yes, the application should do the pulling, but it should not think
about overflow. A request should be rejected if there is no space
in the queue.

> +
> +- User data is returned as part of the result to identify an operation.

Also "User data should uniquely identify the request (except perhaps the
corner case when at most one request is enqueued)."

> +
> +- Flow handle is valid once the creation operation is enqueued and must be
> +  destroyed even if the operation is not successful and the rule is not inserted.
> +
> +- Application must wait for the creation operation result before enqueueing
> +  the deletion operation to make sure the creation is processed by NIC.
> +

[snip]

> +The asynchronous flow rule insertion logic can be broken into two phases.
> +
> +1. Initialization stage as shown here:
> +
> +.. _figure_rte_flow_async_init:
> +
> +.. figure:: img/rte_flow_async_init.*
> +
> +2. Main loop as presented on a datapath application example:
> +
> +.. _figure_rte_flow_async_usage:
> +
> +.. figure:: img/rte_flow_async_usage.*
> +
> +Enqueue creation operation
> +~~~~~~~~~~~~~~~~~~~~~~~~~~
> +
> +Enqueueing a flow rule creation operation is similar to simple creation.
> +
> +.. code-block:: c
> +
> +	struct rte_flow *
> +	rte_flow_async_create(uint16_t port_id,
> +			      uint32_t queue_id,
> +			      const struct rte_flow_q_ops_attr *q_ops_attr,

May be rte_flow_async_ops_attr *attr?

> +			      struct rte_flow_template_table *template_table,
> +			      const struct rte_flow_item pattern[],
> +			      uint8_t pattern_template_index,
> +			      const struct rte_flow_action actions[],
> +			      uint8_t actions_template_index,
> +			      void *user_data,
> +			      struct rte_flow_error *error);
> +
> +A valid handle in case of success is returned. It must be destroyed later
> +by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.

[snip]

> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index e9f684eedb..4e7b202522 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
>   int
>   rte_flow_info_get(uint16_t port_id,
>   		  struct rte_flow_port_info *port_info,
> +		  struct rte_flow_queue_info *queue_info,

It should be either optional (update description) or sanity
checked vs NULL below (similar to port_info).

>   		  struct rte_flow_error *error)
>   {
>   	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> @@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
>   		return -rte_errno;
>   	if (likely(!!ops->info_get)) {
>   		return flow_err(port_id,
> -				ops->info_get(dev, port_info, error),
> +				ops->info_get(dev, port_info, queue_info, error),
>   				error);
>   	}
>   	return rte_flow_error_set(error, ENOTSUP,
> @@ -1426,6 +1427,8 @@ rte_flow_info_get(uint16_t port_id,
>   int
>   rte_flow_configure(uint16_t port_id,
>   		   const struct rte_flow_port_attr *port_attr,
> +		   uint16_t nb_queue,
> +		   const struct rte_flow_queue_attr *queue_attr[],

Is it really an array of pointers? If yes, why?

>   		   struct rte_flow_error *error)
>   {
>   	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> @@ -1433,7 +1436,7 @@ rte_flow_configure(uint16_t port_id,
>   	int ret;
>   
>   	dev->data->flow_configured = 0;
> -	if (port_attr == NULL) {
> +	if (port_attr == NULL || queue_attr == NULL) {
>   		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);

Log message becomes misleading

[snip]

>   		return -EINVAL;
>   	}
> @@ -1452,7 +1455,7 @@ rte_flow_configure(uint16_t port_id,
>   	if (unlikely(!ops))
>   		return -rte_errno;
>   	if (likely(!!ops->configure)) {
> -		ret = ops->configure(dev, port_attr, error);
> +		ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
>   		if (ret == 0)
>   			dev->data->flow_configured = 1;
>   		return flow_err(port_id, ret, error);
> @@ -1713,3 +1716,104 @@ rte_flow_template_table_destroy(uint16_t port_id,
>   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>   				  NULL, rte_strerror(ENOTSUP));
>   }
> +
> +struct rte_flow *
> +rte_flow_async_create(uint16_t port_id,
> +		      uint32_t queue_id,
> +		      const struct rte_flow_q_ops_attr *q_ops_attr,
> +		      struct rte_flow_template_table *template_table,
> +		      const struct rte_flow_item pattern[],
> +		      uint8_t pattern_template_index,
> +		      const struct rte_flow_action actions[],
> +		      uint8_t actions_template_index,
> +		      void *user_data,
> +		      struct rte_flow_error *error)
> +{
> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	struct rte_flow *flow;
> +
> +	if (unlikely(!ops))
> +		return NULL;
> +	if (likely(!!ops->async_create)) {

Hm, we should make a consistent decision. If it is super-
critical fast path - we should have no sanity checks at all.
If no, we should have all simple sanity checks. Otherwise,
I don't understand why we do some checks and ignore another.

[snip]

> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> index 776e8ccc11..9e71a576f6 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
>    *
>    */
>   struct rte_flow_port_info {
> +	/**
> +	 * Maximum umber of queues for asynchronous operations.

umber -> number


[snip]

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v8 01/11] ethdev: introduce flow engine configuration
  2022-02-21 12:53                 ` Ori Kam
  2022-02-21 14:33                   ` Alexander Kozyrev
@ 2022-02-21 14:53                   ` Andrew Rybchenko
  2022-02-21 15:49                     ` Thomas Monjalon
  1 sibling, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-21 14:53 UTC (permalink / raw)
  To: Ori Kam, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/21/22 15:53, Ori Kam wrote:
> Hi Andrew and Alexander,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, February 21, 2022 11:53 AM
>> Subject: Re: [PATCH v8 01/11] ethdev: introduce flow engine configuration
>>
>> On 2/21/22 12:47, Andrew Rybchenko wrote:
>>> On 2/20/22 06:43, Alexander Kozyrev wrote:
>>>> The flow rules creation/destruction at a large scale incurs a performance
>>>> penalty and may negatively impact the packet processing when used
>>>> as part of the datapath logic. This is mainly because software/hardware
>>>> resources are allocated and prepared during the flow rule creation.
>>>>
>>>> In order to optimize the insertion rate, PMD may use some hints provided
>>>> by the application at the initialization phase. The rte_flow_configure()
>>>> function allows to pre-allocate all the needed resources beforehand.
>>>> These resources can be used at a later stage without costly allocations.
>>>> Every PMD may use only the subset of hints and ignore unused ones or
>>>> fail in case the requested configuration is not supported.
>>>>
>>>> The rte_flow_info_get() is available to retrieve the information about
>>>> supported pre-configurable resources. Both these functions must be called
>>>> before any other usage of the flow API engine.
>>>>
>>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>>> Acked-by: Ori Kam <orika@nvidia.com>
>>>
>>> [snip]
>>>
>>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>>>> index 6d697a879a..06f0896e1e 100644
>>>> --- a/lib/ethdev/ethdev_driver.h
>>>> +++ b/lib/ethdev/ethdev_driver.h
>>>> @@ -138,7 +138,12 @@ struct rte_eth_dev_data {
>>>>             * Indicates whether the device is configured:
>>>>             * CONFIGURED(1) / NOT CONFIGURED(0)
>>>>             */
>>>> -        dev_configured : 1;
>>>> +        dev_configured:1,
>>>
>>> Above is unrelated to the patch. Moreover, it breaks style used
>>> few lines above.
>>>
> +1
>>>> +        /**
>>>> +         * Indicates whether the flow engine is configured:
>>>> +         * CONFIGURED(1) / NOT CONFIGURED(0)
>>>> +         */
>>>> +        flow_configured:1;
>>>
>>> I'd like to understand why we need the information. It is
>>> unclear from the patch. Right now it is write-only. Nobody
>>> checks it. Has flow engine configuration become a mandatory
>>> step? Always? Just in some cases?
>>>
> 
> See my comments below.
> I can see two ways: either remove this member, or check in each control
> function that the configuration function was called.
> 
>>>>        /** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
>>>>        uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
>>>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
>>>> index 7f93900bc8..ffd48e40d5 100644
>>>> --- a/lib/ethdev/rte_flow.c
>>>> +++ b/lib/ethdev/rte_flow.c
>>>> @@ -1392,3 +1392,72 @@ rte_flow_flex_item_release(uint16_t port_id,
>>>>        ret = ops->flex_item_release(dev, handle, error);
>>>>        return flow_err(port_id, ret, error);
>>>>    }
>>>> +
>>>> +int
>>>> +rte_flow_info_get(uint16_t port_id,
>>>> +          struct rte_flow_port_info *port_info,
>>>> +          struct rte_flow_error *error)
>>>> +{
>>>> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>>> +
>>>> +    if (port_info == NULL) {
>>>> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
>>>> +        return -EINVAL;
>>>> +    }
>>>> +    if (dev->data->dev_configured == 0) {
>>>> +        RTE_FLOW_LOG(INFO,
>>>> +            "Device with port_id=%"PRIu16" is not configured.\n",
>>>> +            port_id);
>>>> +        return -EINVAL;
>>>> +    }
>>>> +    if (unlikely(!ops))
>>>> +        return -rte_errno;
>>>
>>> The order of checks is not always obvious, but it should follow at
>>> least some rules. When there is no good reason to do
>>> otherwise, I'd suggest checking the arguments in their order.
>>> I.e. check port_id and its direct
>>> derivatives first:
>>> 1. ops (since it is NULL if port_id is invalid)
>>> 2. dev_configured (since only port_id is required to check it)
>>> 3. port_info (since it goes after port_id)
>>>
> 
> Agree,
> 
>>>> +    if (likely(!!ops->info_get)) {
>>>> +        return flow_err(port_id,
>>>> +                ops->info_get(dev, port_info, error),
>>>> +                error);
>>>> +    }
>>>> +    return rte_flow_error_set(error, ENOTSUP,
>>>> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>> +                  NULL, rte_strerror(ENOTSUP));
>>>> +}
>>>> +
>>>> +int
>>>> +rte_flow_configure(uint16_t port_id,
>>>> +           const struct rte_flow_port_attr *port_attr,
>>>> +           struct rte_flow_error *error)
>>>> +{
>>>> +    struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>>> +    const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>>> +    int ret;
>>>> +
>>>> +    dev->data->flow_configured = 0;
> 
> I don't think there is meaning to add this set here.
> I would remove this field.
> Unless you want to check it for all control functions.
> 
>>>> +    if (port_attr == NULL) {
>>>> +        RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
>>>> +        return -EINVAL;
>>>> +    }
>>>> +    if (dev->data->dev_configured == 0) {
>>>> +        RTE_FLOW_LOG(INFO,
>>>> +            "Device with port_id=%"PRIu16" is not configured.\n",
>>>> +            port_id);
>>>> +        return -EINVAL;
>>>> +    }
>>
>> In fact there is one more interesting question related
>> to device states. Necessity to call flow info and flow
>> configure in configured state allows configure to rely
>> on device configuration. The question is: what should
>> happen with the device flow engine configuration if
>> the device is reconfigured?
>>
> 
> That depends on the PMD.
> A PMD may support full reconfiguration, partial reconfiguration (for example,
> only the number of objects but not the number of queues), or no
> reconfiguration at all.
> It may also depend on whether the port is started or not.
> Currently we don't plan to support reconfiguration, but in the future we may
> support changing the number of objects.

But we should define the behaviour and say what the application should
expect. The above sounds like: flow engine configuration persists
across device reconfiguration.

> 
>>>> +    if (dev->data->dev_started != 0) {
>>>> +        RTE_FLOW_LOG(INFO,
>>>> +            "Device with port_id=%"PRIu16" already started.\n",
>>>> +            port_id);
>>>> +        return -EINVAL;
>>>> +    }
>>>> +    if (unlikely(!ops))
>>>> +        return -rte_errno;
>>>
>>> Same logic here:
>>> 1. ops
>>> 2. dev_configured
>>> 3. dev_started
>>> 4. port_attr
>>> 5. ops->configure since we want to be sure that state and input
>>>      arguments are valid before calling it
>>>
>>>> +    if (likely(!!ops->configure)) {
>>>> +        ret = ops->configure(dev, port_attr, error);
>>>> +        if (ret == 0)
>>>> +            dev->data->flow_configured = 1;
>>>> +        return flow_err(port_id, ret, error);
>>>> +    }
>>>> +    return rte_flow_error_set(error, ENOTSUP,
>>>> +                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>> +                  NULL, rte_strerror(ENOTSUP));
>>>> +}
>>>
>>> [snip]
>>>
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Get information about flow engine resources.
>>>> + *
>>>> + * @param port_id
>>>> + *   Port identifier of Ethernet device.
>>>> + * @param[out] port_info
>>>> + *   A pointer to a structure of type *rte_flow_port_info*
>>>> + *   to be filled with the resources information of the port.
>>>> + * @param[out] error
>>>> + *   Perform verbose error reporting if not NULL.
>>>> + *   PMDs initialize this structure in case of error only.
>>>> + *
>>>> + * @return
>>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>>
>>> If I'm not mistaken, we should be explicit about the
>>> meaning of negative result values.
>>>
> I'm not sure; until now we didn't have any error values defined in rte_flow.
> I don't want to constrain PMDs to specific error types.
> If a PMD can give a better error code, or add a case that may result in an
> error, I don't want to have to change the API.
> So I think we'd better leave the error codes out of the documentation unless
> they are final and can only originate from the rte_flow level.

That is not helpful for the application. Without defined codes, the
application doesn't know how to interpret and handle the various error
values.

>>>> + */
>>>> +__rte_experimental
>>>> +int
>>>> +rte_flow_info_get(uint16_t port_id,
>>>> +          struct rte_flow_port_info *port_info,
>>>> +          struct rte_flow_error *error);
>>>
>>> [snip]
>>>
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Configure the port's flow API engine.
>>>> + *
>>>> + * This API can only be invoked before the application
>>>> + * starts using the rest of the flow library functions.
>>>> + *
>>>> + * The API can be invoked multiple times to change the
>>>> + * settings. The port, however, may reject the changes.
>>>> + *
>>>> + * Parameters in configuration attributes must not exceed
>>>> + * numbers of resources returned by the rte_flow_info_get API.
>>>> + *
>>>> + * @param port_id
>>>> + *   Port identifier of Ethernet device.
>>>> + * @param[in] port_attr
>>>> + *   Port configuration attributes.
>>>> + * @param[out] error
>>>> + *   Perform verbose error reporting if not NULL.
>>>> + *   PMDs initialize this structure in case of error only.
>>>> + *
>>>> + * @return
>>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>>
>>> Same here.
>>>
> Same as above.
> 
>>> [snip]
> 
> Best,
> ORi


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v8 02/11] ethdev: add flow item/action templates
  2022-02-21 13:12               ` Ori Kam
@ 2022-02-21 15:05                 ` Andrew Rybchenko
  2022-02-21 15:43                   ` Ori Kam
  2022-02-21 15:14                 ` Alexander Kozyrev
  1 sibling, 1 reply; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-21 15:05 UTC (permalink / raw)
  To: Ori Kam, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/21/22 16:12, Ori Kam wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, February 21, 2022 12:57 PM
>> Subject: Re: [PATCH v8 02/11] ethdev: add flow item/action templates
>>
>> On 2/20/22 06:44, Alexander Kozyrev wrote:
>>> Treating every single flow rule as a completely independent and separate
>>> entity negatively impacts the flow rules insertion rate. Oftentimes in an
>>> application, many flow rules share a common structure (the same item mask
>>> and/or action list) so they can be grouped and classified together.
>>> This knowledge may be used as a source of optimization by a PMD/HW.
>>>
>>> The pattern template defines common matching fields (the item mask) without
>>> values. The actions template holds a list of action types that will be used
>>> together in the same rule. The specific values for items and actions will
>>> be given only during the rule creation.
>>>
>>> A table combines pattern and actions templates along with shared flow rule
>>> attributes (group ID, priority and traffic direction). This way a PMD/HW
>>> can prepare all the resources needed for efficient flow rules creation in
>>> the datapath. To avoid any hiccups due to memory reallocation, the maximum
>>> number of flow rules is defined at the table creation time.
>>>
>>> The flow rule creation is done by selecting a table, a pattern template
>>> and an actions template (which are bound to the table), and setting unique
>>> values for the items and actions.
>>>
>>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>>> Acked-by: Ori Kam <orika@nvidia.com>
>>
>> [snip]
>>
>>> +For example, to create an actions template with the same Mark ID
>>> +but different Queue Index for every rule:
>>> +
>>> +.. code-block:: c
>>> +
>>> +	struct rte_flow_actions_template_attr attr = {.ingress = 1};
>>> +	struct rte_flow_action act[] = {
>>> +		/* Mark ID is 4 for every rule, Queue Index is unique */
>>> +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
>>> +		       .conf = &(struct rte_flow_action_mark){.id = 4}},
>>> +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
>>> +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
>>> +	};
>>> +	struct rte_flow_action msk[] = {
>>> +		/* Assign to MARK mask any non-zero value to make it constant */
>>> +		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
>>> +		       .conf = &(struct rte_flow_action_mark){.id = 1}},
>>
>> 1 looks very strange. I can understand it in the case of
>> integer and boolean fields, but what to do in the case of
>> arrays? IMHO, it would be better to use all 0xff's in value.
>> Anyway, it must be defined very carefully and non-ambiguous.
>>
> There are some issues with all 0xff's: for example, in the case of pointers
> or enums it will result in an invalid value.
> So I vote for keeping it as is.
> I fully agree that it should be defined very clearly.
> I think that for arrays with a predefined size (I don't think we have such in
> rte_flow) it should be stated that the first element must not be 0.

It is good that we agree that the aspect should be documented
very carefully. Let's do it.

> 
>>> +		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
>>> +		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
>>> +	};
>>> +	struct rte_flow_error err;
>>> +
>>> +	struct rte_flow_actions_template *actions_template =
>>> +		rte_flow_actions_template_create(port, &attr, &act, &msk, &err);
>>> +
>>> +The concrete value for Queue Index will be provided at the rule creation.
>>
>> [snip]
>>
>>> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
>>> index ffd48e40d5..e9f684eedb 100644
>>> --- a/lib/ethdev/rte_flow.c
>>> +++ b/lib/ethdev/rte_flow.c
>>> @@ -1461,3 +1461,255 @@ rte_flow_configure(uint16_t port_id,
>>>    				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>>    				  NULL, rte_strerror(ENOTSUP));
>>>    }
>>> +
>>> +struct rte_flow_pattern_template *
>>> +rte_flow_pattern_template_create(uint16_t port_id,
>>> +		const struct rte_flow_pattern_template_attr *template_attr,
>>> +		const struct rte_flow_item pattern[],
>>> +		struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +	struct rte_flow_pattern_template *template;
>>> +
>>> +	if (template_attr == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" template attr is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (pattern == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" pattern is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (dev->data->flow_configured == 0) {
>>> +		RTE_FLOW_LOG(INFO,
>>> +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
>>> +			port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				RTE_FLOW_ERROR_TYPE_STATE,
>>> +				NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (unlikely(!ops))
>>> +		return NULL;
>>
>> See notes about order of checks in previous patch review notes.
>>
>>> +	if (likely(!!ops->pattern_template_create)) {
>>> +		template = ops->pattern_template_create(dev, template_attr,
>>> +							pattern, error);
>>> +		if (template == NULL)
>>> +			flow_err(port_id, -rte_errno, error);
>>> +		return template;
>>> +	}
>>> +	rte_flow_error_set(error, ENOTSUP,
>>> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +			   NULL, rte_strerror(ENOTSUP));
>>> +	return NULL;
>>> +}
>>> +
>>> +int
>>> +rte_flow_pattern_template_destroy(uint16_t port_id,
>>> +		struct rte_flow_pattern_template *pattern_template,
>>> +		struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(pattern_template == NULL))
>>> +		return 0;
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>
>> Same here. I'm afraid it is really important here as well,
>> since the request should not return OK if port_id is invalid.
>>
>>
>>> +	if (likely(!!ops->pattern_template_destroy)) {
>>> +		return flow_err(port_id,
>>> +				ops->pattern_template_destroy(dev,
>>> +							      pattern_template,
>>> +							      error),
>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> +
>>> +struct rte_flow_actions_template *
>>> +rte_flow_actions_template_create(uint16_t port_id,
>>> +			const struct rte_flow_actions_template_attr *template_attr,
>>> +			const struct rte_flow_action actions[],
>>> +			const struct rte_flow_action masks[],
>>> +			struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +	struct rte_flow_actions_template *template;
>>> +
>>> +	if (template_attr == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" template attr is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (actions == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" actions is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (masks == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" masks is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +
>>> +	}
>>> +	if (dev->data->flow_configured == 0) {
>>> +		RTE_FLOW_LOG(INFO,
>>> +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
>>> +			port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_STATE,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (unlikely(!ops))
>>> +		return NULL;
>>
>> same here
>>
>>> +	if (likely(!!ops->actions_template_create)) {
>>> +		template = ops->actions_template_create(dev, template_attr,
>>> +							actions, masks, error);
>>> +		if (template == NULL)
>>> +			flow_err(port_id, -rte_errno, error);
>>> +		return template;
>>> +	}
>>> +	rte_flow_error_set(error, ENOTSUP,
>>> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +			   NULL, rte_strerror(ENOTSUP));
>>> +	return NULL;
>>> +}
>>> +
>>> +int
>>> +rte_flow_actions_template_destroy(uint16_t port_id,
>>> +			struct rte_flow_actions_template *actions_template,
>>> +			struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(actions_template == NULL))
>>> +		return 0;
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>
>> same here
>>
>>> +	if (likely(!!ops->actions_template_destroy)) {
>>> +		return flow_err(port_id,
>>> +				ops->actions_template_destroy(dev,
>>> +							      actions_template,
>>> +							      error),
>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> +
>>> +struct rte_flow_template_table *
>>> +rte_flow_template_table_create(uint16_t port_id,
>>> +			const struct rte_flow_template_table_attr *table_attr,
>>> +			struct rte_flow_pattern_template *pattern_templates[],
>>> +			uint8_t nb_pattern_templates,
>>> +			struct rte_flow_actions_template *actions_templates[],
>>> +			uint8_t nb_actions_templates,
>>> +			struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +	struct rte_flow_template_table *table;
>>> +
>>> +	if (table_attr == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" table attr is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (pattern_templates == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" pattern templates is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (actions_templates == NULL) {
>>> +		RTE_FLOW_LOG(ERR,
>>> +			     "Port %"PRIu16" actions templates is NULL.\n",
>>> +			     port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_ATTR,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (dev->data->flow_configured == 0) {
>>> +		RTE_FLOW_LOG(INFO,
>>> +			"Flow engine on port_id=%"PRIu16" is not configured.\n",
>>> +			port_id);
>>> +		rte_flow_error_set(error, EINVAL,
>>> +				   RTE_FLOW_ERROR_TYPE_STATE,
>>> +				   NULL, rte_strerror(EINVAL));
>>> +		return NULL;
>>> +	}
>>> +	if (unlikely(!ops))
>>> +		return NULL;
>>
>> Order of checks
>>
>>> +	if (likely(!!ops->template_table_create)) {
>>> +		table = ops->template_table_create(dev, table_attr,
>>> +					pattern_templates, nb_pattern_templates,
>>> +					actions_templates, nb_actions_templates,
>>> +					error);
>>> +		if (table == NULL)
>>> +			flow_err(port_id, -rte_errno, error);
>>> +		return table;
>>> +	}
>>> +	rte_flow_error_set(error, ENOTSUP,
>>> +			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +			   NULL, rte_strerror(ENOTSUP));
>>> +	return NULL;
>>> +}
>>> +
>>> +int
>>> +rte_flow_template_table_destroy(uint16_t port_id,
>>> +				struct rte_flow_template_table *template_table,
>>> +				struct rte_flow_error *error)
>>> +{
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>>> +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>>> +
>>> +	if (unlikely(template_table == NULL))
>>> +		return 0;
>>> +	if (unlikely(!ops))
>>> +		return -rte_errno;
>>> +	if (likely(!!ops->template_table_destroy)) {
>>> +		return flow_err(port_id,
>>> +				ops->template_table_destroy(dev,
>>> +							    template_table,
>>> +							    error),
>>> +				error);
>>> +	}
>>> +	return rte_flow_error_set(error, ENOTSUP,
>>> +				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>>> +				  NULL, rte_strerror(ENOTSUP));
>>> +}
>>> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
>>> index cdb7b2be68..776e8ccc11 100644
>>> --- a/lib/ethdev/rte_flow.h
>>> +++ b/lib/ethdev/rte_flow.h
>>> @@ -4983,6 +4983,280 @@ rte_flow_configure(uint16_t port_id,
>>>    		   const struct rte_flow_port_attr *port_attr,
>>>    		   struct rte_flow_error *error);
>>>
>>> +/**
>>> + * Opaque type returned after successful creation of pattern template.
>>> + * This handle can be used to manage the created pattern template.
>>> + */
>>> +struct rte_flow_pattern_template;
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Flow pattern template attributes.
>>
>> Would it be useful to mention that at least one direction
>> bit must be set? Otherwise the request does not make sense.
>>
> Agreed, one direction must be set.
> 
>>> + */
>>> +__extension__
>>> +struct rte_flow_pattern_template_attr {
>>> +	/**
>>> +	 * Relaxed matching policy.
>>> +	 * - PMD may match only on items with mask member set and skip
>>> +	 * matching on protocol layers specified without any masks.
>>> +	 * - If not set, PMD will match on protocol layers
>>> +	 * specified without any masks as well.
>>> +	 * - Packet data must be stacked in the same order as the
>>> +	 * protocol layers to match inside packets, starting from the lowest.
>>> +	 */
>>> +	uint32_t relaxed_matching:1;
>>
>> I should notice this earlier, but it looks like a new feature
>> which sounds unrelated to templates. If so, it makes asymmetry
>> in sync and async flow rules capabilities.
>> Am I missing something?
>>
>> Anyway, the feature looks hidden in the patch.
>>
> No, this is not a hidden feature.
> In the current API the application must specify all the preceding items.
> For example, say the application wants to match on the UDP source port.
> The rte_flow pattern will look something like eth / ipv4 / udp sport = xxx.
> When the PMD gets this pattern it must enforce that after the eth
> there will be IPv4 and then UDP, and only then add the match for the
> sport.
> This means that the PMD adds extra matching.
> If the application has already validated that there is UDP in the packet
> in group 0 and then jumped to group 1, it can save the HW that extra matching
> by enabling this bit, which means that the HW should only match on explicitly
> masked fields.

The old API allows inserting a rule into a non-0 table as well.
So similar logic could be applicable. Do we want to
have the same feature in the old API?

> 
>>> +	/** Pattern valid for rules applied to ingress traffic. */
>>> +	uint32_t ingress:1;
>>> +	/** Pattern valid for rules applied to egress traffic. */
>>> +	uint32_t egress:1;
>>> +	/** Pattern valid for rules applied to transfer traffic. */
>>> +	uint32_t transfer:1;
>>> +};
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Create flow pattern template.
>>> + *
>>> + * The pattern template defines common matching fields without values.
>>> + * For example, matching on 5 tuple TCP flow, the template will be
>>> + * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
>>> + * while values for each rule will be set during the flow rule creation.
>>> + * The number and order of items in the template must be the same
>>> + * at the rule creation.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] template_attr
>>> + *   Pattern template attributes.
>>> + * @param[in] pattern
>>> + *   Pattern specification (list terminated by the END pattern item).
>>> + *   The spec member of an item is not used unless the end member is used.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   Handle on success, NULL otherwise and rte_errno is set.
>>
>> Don't we want to be explicit about the negative error codes used?
>> The question is applicable to all functions.
>>
> Same answer as given for the other patch.
> Since a PMD may have different/extra error codes, I don't think we should
> list them here.
> 
>> [snip]
>>
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Flow actions template attributes.
>>
>> Same question about no directions specified.
>>
>>> + */
>>> +__extension__
>>> +struct rte_flow_actions_template_attr {
>>> +	/** Action valid for rules applied to ingress traffic. */
>>> +	uint32_t ingress:1;
>>> +	/** Action valid for rules applied to egress traffic. */
>>> +	uint32_t egress:1;
>>> +	/** Action valid for rules applied to transfer traffic. */
>>> +	uint32_t transfer:1;
>>> +};
>>> +
>>> +/**
>>> + * @warning
>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>> + *
>>> + * Create flow actions template.
>>> + *
>>> + * The actions template holds a list of action types without values.
>>> + * For example, the template to change TCP ports is TCP(s_port + d_port),
>>> + * while values for each rule will be set during the flow rule creation.
>>> + * The number and order of actions in the template must be the same
>>> + * at the rule creation.
>>> + *
>>> + * @param port_id
>>> + *   Port identifier of Ethernet device.
>>> + * @param[in] template_attr
>>> + *   Template attributes.
>>> + * @param[in] actions
>>> + *   Associated actions (list terminated by the END action).
>>> + *   The spec member is only used if @p masks spec is non-zero.
>>> + * @param[in] masks
>>> + *   List of actions that marks which of the action's member is constant.
>>> + *   A mask has the same format as the corresponding action.
>>> + *   If the action field in @p masks is not 0,
>>
>> Comparison with zero makes sense for integers only.
>>
> 
> Why? It can also be the case with pointers and enums.

It should be NULL for pointers, and an enum-specific member in the
case of enums.

> 
>>> + *   the corresponding value in an action from @p actions will be the part
>>> + *   of the template and used in all flow rules.
>>> + *   The order of actions in @p masks is the same as in @p actions.
>>> + *   In case of indirect actions present in @p actions,
>>> + *   the actual action type should be present in @p mask.
>>> + * @param[out] error
>>> + *   Perform verbose error reporting if not NULL.
>>> + *   PMDs initialize this structure in case of error only.
>>> + *
>>> + * @return
>>> + *   Handle on success, NULL otherwise and rte_errno is set.
>>> + */
>>> +__rte_experimental
>>> +struct rte_flow_actions_template *
>>> +rte_flow_actions_template_create(uint16_t port_id,
>>> +		const struct rte_flow_actions_template_attr *template_attr,
>>> +		const struct rte_flow_action actions[],
>>> +		const struct rte_flow_action masks[],
>>> +		struct rte_flow_error *error);
>>
>> [snip]
> 
> Best,
> Ori
> 


^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v8 02/11] ethdev: add flow item/action templates
  2022-02-21 13:12               ` Ori Kam
  2022-02-21 15:05                 ` Andrew Rybchenko
@ 2022-02-21 15:14                 ` Alexander Kozyrev
  1 sibling, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 15:14 UTC (permalink / raw)
  To: Ori Kam, Andrew Rybchenko, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On Monday, February 21, 2022 8:12 Ori Kam <orika@nvidia.com> wrote:

> > See notes about order of checks in previous patch review notes.

I'll fix the order of checks in all the patches, thank you for the suggestion.

> > Would it be useful to mention that at least one direction
> > bit must be set? Otherwise the request does not make sense.
> >
> Agreed, one direction must be set.

Will add comments about mandatory setting of the direction.

> > > + */
> > > +__extension__
> > > +struct rte_flow_pattern_template_attr {
> > > +	/**
> > > +	 * Relaxed matching policy.
> > > +	 * - PMD may match only on items with mask member set and skip
> > > +	 * matching on protocol layers specified without any masks.
> > > +	 * - If not set, PMD will match on protocol layers
> > > +	 * specified without any masks as well.
> > > +	 * - Packet data must be stacked in the same order as the
> > > +	 * protocol layers to match inside packets, starting from the lowest.
> > > +	 */
> > > +	uint32_t relaxed_matching:1;
> >
> > I should notice this earlier, but it looks like a new feature
> > which sounds unrelated to templates. If so, it makes asymmetry
> > in sync and async flow rules capabilities.
> > Am I missing something?
> >
> > Anyway, the feature looks hidden in the patch.
> >
> No, this is not a hidden feature.
> In the current API the application must specify all the preceding items.
> For example, say the application wants to match on the UDP source port.
> The rte_flow pattern will look something like eth / ipv4 / udp sport = xxx.
> When the PMD gets this pattern it must enforce that after the eth
> there will be IPv4 and then UDP, and only then add the match for the
> sport.
> This means that the PMD adds extra matching.
> If the application has already validated that there is UDP in the packet
> in group 0 and then jumped to group 1, it can save the HW that extra matching
> by enabling this bit, which means that the HW should only match on explicitly
> masked fields.

This is a new capability that only exists for templates.
We can think about adding it to the old rte_flow_create() API when
we are allowed to break ABI again.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations
  2022-02-21 14:49             ` Andrew Rybchenko
@ 2022-02-21 15:35               ` Alexander Kozyrev
  0 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 15:35 UTC (permalink / raw)
  To: Andrew Rybchenko, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On Monday, February 21, 2022 9:49 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>:
> [snip]
> 
> > @@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
> >   				&actions_templates, nb_actions_templ,
> >   				&error);
> >
> > +Asynchronous operations
> > +-----------------------
> > +
> > +Flow rules management can be done via special lockless flow
> > +management queues.
> > +- Queue operations are asynchronous and not thread-safe.
> > +
> > +- Operations can thus be invoked by the app's datapath,
> > +  packet processing can continue while queue operations are processed by NIC.
> > +
> > +- Number of flow queues is configured at initialization stage.
> > +
> > +- Available operation types: rule creation, rule destruction,
> > +  indirect rule creation, indirect rule destruction, indirect rule update.
> > +
> > +- Operations may be reordered within a queue.
> > +
> > +- Operations can be postponed and pushed to NIC in batches.
> > +
> > +- Results pulling must be done on time to avoid queue overflows.
> 
> I guess the documentation is for applications, but IMHO it is a
> driver responsibility. The application should not care about it.
> Yes, the application should do pulling, but it should not think
> about overflow. A request should be rejected if there is no space
> in the queue.

It is rejected in case of queue overflow and -EAGAIN is returned.
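As an illustration of that contract, the overflow behaviour can be modelled with a toy fixed-capacity queue (names and sizes here are made up for the sketch; the real API works on opaque flow queues):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Toy model: enqueueing into a full queue is rejected with -EAGAIN,
 * and pulling results frees the slots again. */
#define OPQ_CAPACITY 4

struct op_queue {
	int ops[OPQ_CAPACITY];
	size_t count;
};

static int opq_enqueue(struct op_queue *q, int op)
{
	if (q->count == OPQ_CAPACITY)
		return -EAGAIN;	/* queue overflow: caller must pull first */
	q->ops[q->count++] = op;
	return 0;
}

static size_t opq_pull(struct op_queue *q)
{
	size_t done = q->count;

	q->count = 0;	/* completed operations free their slots */
	return done;
}

/* Walks through the overflow scenario; returns 0 when the model
 * behaves as described above. */
static int opq_demo(void)
{
	struct op_queue q = { .count = 0 };
	int i;

	for (i = 0; i < OPQ_CAPACITY; i++)
		if (opq_enqueue(&q, i) != 0)
			return -1;
	if (opq_enqueue(&q, 99) != -EAGAIN)	/* full: rejected */
		return -2;
	if (opq_pull(&q) != OPQ_CAPACITY)
		return -3;
	return opq_enqueue(&q, 0);	/* space again: succeeds */
}
```

So the application does not need to track queue depth itself; it only has to handle -EAGAIN and pull results.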

> > +
> > +- User data is returned as part of the result to identify an operation.
> 
> Also "User data should uniquelly identify request (may be except corner
> case when only one request is enqueued at most)."

It is up to the application what to put into the user data and how it differentiates
between operations. I don't want to restrict this in any way.
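One common application-side scheme (purely the application's choice, not mandated by the API) is to store a pointer to the application's own per-request record in user_data, so a pulled result maps straight back to the request with no lookup. A reduced sketch, with invented struct names:

```c
#include <assert.h>
#include <stdint.h>

/* Application-owned per-request state; the API never looks inside. */
struct app_request {
	uint32_t rule_id;
	int status;	/* filled in when the result is pulled */
};

/* Reduced stand-in for a completion record returned by a pull. */
struct op_result {
	void *user_data;
	int status;
};

static void app_complete(const struct op_result *res)
{
	/* user_data is the opaque cookie given at enqueue time */
	struct app_request *req = res->user_data;

	req->status = res->status;
}

/* Returns the rule_id recovered purely through user_data. */
static uint32_t app_demo(void)
{
	struct app_request req = { .rule_id = 42, .status = -1 };
	struct op_result res = { .user_data = &req, .status = 0 };

	app_complete(&res);
	return req.status == 0 ? req.rule_id : 0;
}
```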

> > +
> > +- Flow handle is valid once the creation operation is enqueued and must
> be
> > +  destroyed even if the operation is not successful and the rule is not
> inserted.
> > +
> > +- Application must wait for the creation operation result before
> enqueueing
> > +  the deletion operation to make sure the creation is processed by NIC.
> > +
> 
> [snip]
> 
> > +The asynchronous flow rule insertion logic can be broken into two phases.
> > +
> > +1. Initialization stage as shown here:
> > +
> > +.. _figure_rte_flow_async_init:
> > +
> > +.. figure:: img/rte_flow_async_init.*
> > +
> > +2. Main loop as presented on a datapath application example:
> > +
> > +.. _figure_rte_flow_async_usage:
> > +
> > +.. figure:: img/rte_flow_async_usage.*
> > +
> > +Enqueue creation operation
> > +~~~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Enqueueing a flow rule creation operation is similar to simple creation.
> > +
> > +.. code-block:: c
> > +
> > +	struct rte_flow *
> > +	rte_flow_async_create(uint16_t port_id,
> > +			      uint32_t queue_id,
> > +			      const struct rte_flow_q_ops_attr *q_ops_attr,
> 
> May be rte_flow_async_ops_attr *attr?

It is still an operation on a queue, but I'll rename it to rte_flow_ops_attr to
simplify the description: these are operation attributes.

> > +			      struct rte_flow_template_table *template_table,
> > +			      const struct rte_flow_item pattern[],
> > +			      uint8_t pattern_template_index,
> > +			      const struct rte_flow_action actions[],
> > +			      uint8_t actions_template_index,
> > +			      void *user_data,
> > +			      struct rte_flow_error *error);
> > +
> > +A valid handle in case of success is returned. It must be destroyed later
> > +by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
> 
> [snip]
> 
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index e9f684eedb..4e7b202522 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
> >   int
> >   rte_flow_info_get(uint16_t port_id,
> >   		  struct rte_flow_port_info *port_info,
> > +		  struct rte_flow_queue_info *queue_info,
> 
> It should be either optional (update description) or sanity
> checked vs NULL below (similar to port_info).

Ok.

> >   		  struct rte_flow_error *error)
> >   {
> >   	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > @@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
> >   		return -rte_errno;
> >   	if (likely(!!ops->info_get)) {
> >   		return flow_err(port_id,
> > -				ops->info_get(dev, port_info, error),
> > +				ops->info_get(dev, port_info, queue_info,
> error),
> >   				error);
> >   	}
> >   	return rte_flow_error_set(error, ENOTSUP,
> > @@ -1426,6 +1427,8 @@ rte_flow_info_get(uint16_t port_id,
> >   int
> >   rte_flow_configure(uint16_t port_id,
> >   		   const struct rte_flow_port_attr *port_attr,
> > +		   uint16_t nb_queue,
> > +		   const struct rte_flow_queue_attr *queue_attr[],
> 
> Is it really an array of pointers? If yes, why?

Yes, it is. Different queues may have different attributes (sizes, etc.).
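To sketch why the array-of-pointers layout matters: each element can point at its own attribute set, so queues need not share one configuration. The struct below is a stripped-down stand-in, not the real rte_flow_queue_attr:

```c
#include <assert.h>
#include <stdint.h>

/* Reduced per-queue attributes: every queue carries its own settings. */
struct queue_attr {
	uint32_t size;
};

/* Sums the per-queue sizes, mirroring how a PMD could pre-allocate
 * storage from a queue_attr[] array passed to configuration. */
static uint32_t total_queue_slots(uint16_t nb_queue,
				  const struct queue_attr *attr[])
{
	uint32_t total = 0;
	uint16_t i;

	for (i = 0; i < nb_queue; i++)
		total += attr[i]->size;	/* independent per-queue value */
	return total;
}
```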

> >   		   struct rte_flow_error *error)
> >   {
> >   	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > @@ -1433,7 +1436,7 @@ rte_flow_configure(uint16_t port_id,
> >   	int ret;
> >
> >   	dev->data->flow_configured = 0;
> > -	if (port_attr == NULL) {
> > +	if (port_attr == NULL || queue_attr == NULL) {
> >   		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n",
> port_id);
> 
> Log message becomes misleading

Will fix this.

> [snip]
> 
> >   		return -EINVAL;
> >   	}
> > @@ -1452,7 +1455,7 @@ rte_flow_configure(uint16_t port_id,
> >   	if (unlikely(!ops))
> >   		return -rte_errno;
> >   	if (likely(!!ops->configure)) {
> > -		ret = ops->configure(dev, port_attr, error);
> > +		ret = ops->configure(dev, port_attr, nb_queue, queue_attr,
> error);
> >   		if (ret == 0)
> >   			dev->data->flow_configured = 1;
> >   		return flow_err(port_id, ret, error);
> > @@ -1713,3 +1716,104 @@ rte_flow_template_table_destroy(uint16_t
> port_id,
> >   				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> >   				  NULL, rte_strerror(ENOTSUP));
> >   }
> > +
> > +struct rte_flow *
> > +rte_flow_async_create(uint16_t port_id,
> > +		      uint32_t queue_id,
> > +		      const struct rte_flow_q_ops_attr *q_ops_attr,
> > +		      struct rte_flow_template_table *template_table,
> > +		      const struct rte_flow_item pattern[],
> > +		      uint8_t pattern_template_index,
> > +		      const struct rte_flow_action actions[],
> > +		      uint8_t actions_template_index,
> > +		      void *user_data,
> > +		      struct rte_flow_error *error)
> > +{
> > +	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > +	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +	struct rte_flow *flow;
> > +
> > +	if (unlikely(!ops))
> > +		return NULL;
> > +	if (likely(!!ops->async_create)) {
> 
> Hm, we should make a consistent decision. If it is super-
> critical fast path - we should have no sanity checks at all.
> If no, we should have all simple sanity checks. Otherwise,
> I don't understand why we do some checks and ignore another.

Agree. Will remove every check in this async API.

> [snip]
> 
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
> > index 776e8ccc11..9e71a576f6 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > @@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
> >    *
> >    */
> >   struct rte_flow_port_info {
> > +	/**
> > +	 * Maximum umber of queues for asynchronous operations.
> 
> umber -> number

Will fix typo, thanks.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v8 02/11] ethdev: add flow item/action templates
  2022-02-21 15:05                 ` Andrew Rybchenko
@ 2022-02-21 15:43                   ` Ori Kam
  0 siblings, 0 replies; 220+ messages in thread
From: Ori Kam @ 2022-02-21 15:43 UTC (permalink / raw)
  To: Andrew Rybchenko, Alexander Kozyrev, dev
  Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, February 21, 2022 5:06 PM
> Subject: Re: [PATCH v8 02/11] ethdev: add flow item/action templates
> 
> On 2/21/22 16:12, Ori Kam wrote:
> > Hi Andrew,
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Monday, February 21, 2022 12:57 PM
> >> Subject: Re: [PATCH v8 02/11] ethdev: add flow item/action templates
> >>
> >> On 2/20/22 06:44, Alexander Kozyrev wrote:
> >>> Treating every single flow rule as a completely independent and separate
> >>> entity negatively impacts the flow rules insertion rate. Oftentimes in an

[Snip]

> >> Anyway, the feature looks hidden in the patch.
> >>
> > No, this is not a hidden feature.
> > In the current API the application must specify all the preceding items.
> > For example, the application wants to match on the UDP source port.
> > The rte_flow pattern will look something like eth / ipv4 / udp sport = xxx ..
> > When the PMD gets this pattern it must enforce that after the eth
> > there will be IPv4 and then UDP, and then add the match for the
> > sport.
> > This means that the PMD adds extra matching.
> > If the application already validated that there is udp in the packet
> > in group 0 and then jump to group 1  it can save the HW those extra matching
> > by enabling this bit which means that the HW should only match on implicit
> > masked fields.
> 
> The old API allows inserting a rule into a non-0 table as well.
> So, similar logic could be applicable. Do we want to
> have the same feature in the old API?
> 
Maybe, but in any case this should be done when we can break the API.
In general I'm not sure that any new capabilities added to this
API will be implemented in the current one.

> >
> >>> +	/** Pattern valid for rules applied to ingress traffic. */
> >>> +	uint32_t ingress:1;
> >>> +	/** Pattern valid for rules applied to egress traffic. */
> >>> +	uint32_t egress:1;
> >>> +	/** Pattern valid for rules applied to transfer traffic. */
> >>> +	uint32_t transfer:1;
> >>> +};
> >>> +

[Snip]

> >>> +/**
> >>> + * @warning
> >>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>> + *
> >>> + * Create flow actions template.
> >>> + *
> >>> + * The actions template holds a list of action types without values.
> >>> + * For example, the template to change TCP ports is TCP(s_port + d_port),
> >>> + * while values for each rule will be set during the flow rule creation.
> >>> + * The number and order of actions in the template must be the same
> >>> + * at the rule creation.
> >>> + *
> >>> + * @param port_id
> >>> + *   Port identifier of Ethernet device.
> >>> + * @param[in] template_attr
> >>> + *   Template attributes.
> >>> + * @param[in] actions
> >>> + *   Associated actions (list terminated by the END action).
> >>> + *   The spec member is only used if @p masks spec is non-zero.
> >>> + * @param[in] masks
> >>> + *   List of actions that marks which of the action's member is constant.
> >>> + *   A mask has the same format as the corresponding action.
> >>> + *   If the action field in @p masks is not 0,
> >>
> >> Comparison with zero makes sense for integers only.
> >>
> >
> > Why? It can also be with pointers enums.
> 
> It should be NULL for pointers and enum-specific member of
> enum.
> 

Since NULL is zero, I think it is much better to have the same logic
and compare to 0 in all cases.
Adding a dedicated enum member would break the current API; in addition,
if we have more complex structures like arrays, it is much easier
to compare the first element with 0.
You can look at it as true/false for each field.

> >
> >>> + *   the corresponding value in an action from @p actions will be the part
> >>> + *   of the template and used in all flow rules.
> >>> + *   The order of actions in @p masks is the same as in @p actions.
> >>> + *   In case of indirect actions present in @p actions,
> >>> + *   the actual action type should be present in @p mask.
> >>> + * @param[out] error
> >>> + *   Perform verbose error reporting if not NULL.
> >>> + *   PMDs initialize this structure in case of error only.
> >>> + *
> >>> + * @return
> >>> + *   Handle on success, NULL otherwise and rte_errno is set.
> >>> + */
> >>> +__rte_experimental
> >>> +struct rte_flow_actions_template *
> >>> +rte_flow_actions_template_create(uint16_t port_id,
> >>> +		const struct rte_flow_actions_template_attr *template_attr,
> >>> +		const struct rte_flow_action actions[],
> >>> +		const struct rte_flow_action masks[],
> >>> +		struct rte_flow_error *error);
> >>
> >> [snip]
> >
> > Best,
> > Ori
> >

Best,
Ori

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v8 01/11] ethdev: introduce flow engine configuration
  2022-02-21 14:53                   ` Andrew Rybchenko
@ 2022-02-21 15:49                     ` Thomas Monjalon
  0 siblings, 0 replies; 220+ messages in thread
From: Thomas Monjalon @ 2022-02-21 15:49 UTC (permalink / raw)
  To: Ori Kam, Alexander Kozyrev, Andrew Rybchenko
  Cc: dev, ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

21/02/2022 15:53, Andrew Rybchenko:
> On 2/21/22 15:53, Ori Kam wrote:
> > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> +/**
> >>>> + * @warning
> >>>> + * @b EXPERIMENTAL: this API may change without prior notice.
> >>>> + *
> >>>> + * Get information about flow engine resources.
> >>>> + *
> >>>> + * @param port_id
> >>>> + *   Port identifier of Ethernet device.
> >>>> + * @param[out] port_info
> >>>> + *   A pointer to a structure of type *rte_flow_port_info*
> >>>> + *   to be filled with the resources information of the port.
> >>>> + * @param[out] error
> >>>> + *   Perform verbose error reporting if not NULL.
> >>>> + *   PMDs initialize this structure in case of error only.
> >>>> + *
> >>>> + * @return
> >>>> + *   0 on success, a negative errno value otherwise and rte_errno is
> >>>> set.
> >>>
> >>> If I'm not mistaken, we should be explicit about the
> >>> meaning of negative result values
> >>>
> > I'm not sure; until now we didn't have any error values defined in rte_flow.
> > I don't want to force error types on the PMD.
> > If a PMD can say that it can give a better error code, or add a case that may result in
> > an error, I don't want to change the API.
> > So I think we had better leave the error codes out of the documentation unless they are final
> > and can only result from the rte level.
> 
> It is not helpful for the application. If so, the application doesn't
> know how to interpret and handle various error codes.

Yes, rte_flow error codes are not listed
(except for rte_flow_validate and indirect action).
As a consequence, the error code is mainly for debug purposes.

I am OK with being consistent and not listing error codes
in these new functions for now.
For consistency, I suggest removing error codes
from rte_flow_async_action_handle_* in this patchset.

We should have a general discussion about error codes handling later.
It may be a design decision to allow flexibility to PMDs.
If we want to provide some detailed error handling to applications,
we could list main or all kind of errors.

Anyway, this inconsistency is not new, so it should not block the patches IMHO.



^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 00/11] ethdev: datapath-focused flow rules management
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (10 preceding siblings ...)
  2022-02-20  3:44           ` [PATCH v8 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
@ 2022-02-21 23:02           ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
                               ` (11 more replies)
  2022-02-22 16:41           ` [PATCH v8 00/10] " Ferruh Yigit
  12 siblings, 12 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Three major changes to the generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
The application may give hints on what types of resources are needed.
Introduce a configuration routine to prepare all the needed resources
inside a PMD/HW at the init stage, before any flow rules are created.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.
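The postpone-and-push batching behind item 3 can be sketched with a toy model (illustrative only; in the real API the operations go through the asynchronous rte_flow calls and the push/pull entry points):

```c
#include <assert.h>
#include <stddef.h>

/* Toy batching model: postponed operations accumulate on the queue and
 * reach the "NIC" only on an explicit push, while a non-postponed
 * operation flushes the whole pending batch. */
struct batch_queue {
	size_t pending;		/* enqueued, not yet handed to HW */
	size_t submitted;	/* handed over to HW */
};

static void bq_enqueue(struct batch_queue *q, int postpone)
{
	q->pending++;
	if (!postpone) {	/* doorbell: flush the batch */
		q->submitted += q->pending;
		q->pending = 0;
	}
}

static void bq_push(struct batch_queue *q)
{
	q->submitted += q->pending;
	q->pending = 0;
}

/* Returns 0 when batching behaves as described. */
static int bq_demo(void)
{
	struct batch_queue q = { 0, 0 };

	bq_enqueue(&q, 1);
	bq_enqueue(&q, 1);
	if (q.submitted != 0 || q.pending != 2)
		return -1;	/* still batched */
	bq_enqueue(&q, 0);
	if (q.submitted != 3 || q.pending != 0)
		return -2;	/* flushed in one doorbell */
	bq_push(&q);	/* push of an empty queue is a no-op */
	return q.submitted == 3 ? 0 : -3;
}
```

Batching like this amortizes the doorbell cost over several rules, which is where the insertion-rate gain comes from.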

testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

---
v9:
- changed sanity checks order
- added reconfiguration explanation
- added remarks on mandatory direction
- renamed operation attributes
- removed all checks in async API
- removed all errno descriptions

v8: fixed documentation indentation

v7:
- added sanity checks and device state validation
- added flow engine state validation
- added ingress/egress/transfer attributes to templates
- moved user_data to a parameter list
- renamed asynchronous functions from "_q_" to "_async_"
- created a separate commit for indirect actions

v6: addressed more review comments
- fixed typos
- rewrote code snippets
- add a way to get queue size
- renamed port/queue attributes parameters

v5: changed titles for testpmd commits

v4: 
- removed structures versioning
- introduced new rte_flow_port_info structure for rte_flow_info_get API
- renamed rte_flow_table_create to rte_flow_template_table_create

v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread

Alexander Kozyrev (11):
  ethdev: introduce flow engine configuration
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  ethdev: bring in async indirect actions operations
  app/testpmd: add flow engine configuration
  app/testpmd: add flow template management
  app/testpmd: add flow table management
  app/testpmd: add async flow create/destroy operations
  app/testpmd: add flow queue push operation
  app/testpmd: add flow queue pull operation
  app/testpmd: add async indirect actions operations

 app/test-pmd/cmdline_flow.c                   | 1726 ++++++++++++++++-
 app/test-pmd/config.c                         |  778 ++++++++
 app/test-pmd/testpmd.h                        |   67 +
 .../prog_guide/img/rte_flow_async_init.svg    |  205 ++
 .../prog_guide/img/rte_flow_async_usage.svg   |  354 ++++
 doc/guides/prog_guide/rte_flow.rst            |  345 ++++
 doc/guides/rel_notes/release_22_03.rst        |   26 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  383 +++-
 lib/ethdev/ethdev_driver.h                    |    7 +-
 lib/ethdev/rte_flow.c                         |  460 +++++
 lib/ethdev/rte_flow.h                         |  741 +++++++
 lib/ethdev/rte_flow_driver.h                  |  108 ++
 lib/ethdev/version.map                        |   15 +
 13 files changed, 5119 insertions(+), 96 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 01/11] ethdev: introduce flow engine configuration
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 02/11] ethdev: add flow item/action templates Alexander Kozyrev
                               ` (10 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, PMD may use some hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows pre-allocating all the needed resources beforehand.
These resources can be used at a later stage without costly allocations.
Every PMD may use only a subset of the hints and ignore unused ones, or
fail in case the requested configuration is not supported.

The rte_flow_info_get() is available to retrieve the information about
supported pre-configurable resources. Both these functions must be called
before any other usage of the flow API engine.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  36 ++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/ethdev_driver.h             |   7 +-
 lib/ethdev/rte_flow.c                  |  68 +++++++++++++++
 lib/ethdev/rte_flow.h                  | 111 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 7 files changed, 239 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 0e475019a6..c89161faef 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3606,6 +3606,42 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+rules engine configuration and/or expected flow rules characteristics.
+These parameters may be used by PMD to preallocate resources and configure NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API engine configuration and allocates
+requested resources beforehand to avoid costly allocations later.
+Expected number of resources in an application allows PMD to prepare
+and optimize NIC hardware configuration and memory layout in advance.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Information about the number of available resources can be retrieved via
+``rte_flow_info_get()`` API.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 41923f50e6..68b41f2062 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -99,6 +99,12 @@ New Features
   The information of these properties is important for debug.
   As the information is private, a dump function is introduced.
 
+* ** Added functions to configure Flow API engine
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve available resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 6d697a879a..42f0a3981e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -138,7 +138,12 @@ struct rte_eth_dev_data {
 		 * Indicates whether the device is configured:
 		 * CONFIGURED(1) / NOT CONFIGURED(0)
 		 */
-		dev_configured : 1;
+		dev_configured : 1,
+		/**
+		 * Indicates whether the flow engine is configured:
+		 * CONFIGURED(1) / NOT CONFIGURED(0)
+		 */
+		flow_configured : 1;
 
 	/** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
 	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7f93900bc8..7ec7a95a6b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1392,3 +1392,71 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (port_info == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_started != 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" already started.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (port_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (likely(!!ops->configure)) {
+		ret = ops->configure(dev, port_attr, error);
+		if (ret == 0)
+			dev->data->flow_configured = 1;
+		return flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 765beb3e52..7e6f5eba46 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -43,6 +43,9 @@
 extern "C" {
 #endif
 
+#define RTE_FLOW_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__)
+
 /**
  * Flow rule attributes.
  *
@@ -4872,6 +4875,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine resources.
+ * The zero value means a resource is not supported.
+ *
+ */
+struct rte_flow_port_info {
+	/**
+	 * Maximum number of counters.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t max_nb_counters;
+	/**
+	 * Maximum number of aging objects.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t max_nb_aging_objects;
+	/**
+	 * Maximum number traffic meters.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get information about flow engine resources.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the resources information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine resources settings.
+ * The zero value means on demand resource allocations only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging objects to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_objects;
+	/**
+	 * Number of traffic meters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the settings.
+ * The port, however, may reject changes and keep the old config.
+ *
+ * Parameters in configuration attributes must not exceed
+ * numbers of resources returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index d5cc56a560..0d849c153f 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -264,6 +264,8 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_capability_get;
 	rte_eth_ip_reassembly_conf_get;
 	rte_eth_ip_reassembly_conf_set;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 02/11] ethdev: add flow item/action templates
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                               ` (9 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 135 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 252 ++++++++++++++++++++++
 lib/ethdev/rte_flow.h                  | 280 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 718 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c89161faef..6cdfea09be 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3642,6 +3642,141 @@ Information about the number of available resources can be retrieved via
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value selects the fields to match on; the spec and last values are ignored.
+The pattern template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	const struct rte_flow_pattern_template_attr attr = {.ingress = 1};
+	struct rte_flow_item_eth eth_m = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	};
+	struct rte_flow_item pattern[] = {
+		[0] = {.type = RTE_FLOW_ITEM_TYPE_ETH,
+		       .mask = &eth_m},
+		[1] = {.type = RTE_FLOW_ITEM_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &attr, pattern, &err);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be destroyed
+until all these tables are destroyed first.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_actions_template_attr attr = {.ingress = 1};
+	struct rte_flow_action act[] = {
+		/* Mark ID is 4 for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action msk[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_actions_template *actions_template =
+		rte_flow_actions_template_create(port, &attr, act, msk, &err);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid any hiccups due to memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+In this case, the application may create another table to accommodate more rules.
+
+.. code-block:: c
+
+	struct rte_flow_template_table *
+	rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+A table can be created only after the flow engine has been configured
+and the pattern and actions templates have been created.
+
+.. code-block:: c
+
+	struct rte_flow_template_table_attr table_attr = {
+		.flow_attr.ingress = 1,
+		.nb_flows = 10000,
+	};
+	uint8_t nb_pattern_templ = 1;
+	struct rte_flow_pattern_template *pattern_templates[nb_pattern_templ];
+	pattern_templates[0] = pattern_template;
+	uint8_t nb_actions_templ = 1;
+	struct rte_flow_actions_template *actions_templates[nb_actions_templ];
+	actions_templates[0] = actions_template;
+	struct rte_flow_error error;
+
+	struct rte_flow_template_table *table =
+		rte_flow_template_table_create(port, &table_attr,
+				pattern_templates, nb_pattern_templ,
+				actions_templates, nb_actions_templ,
+				&error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 68b41f2062..8211f5c22c 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -105,6 +105,14 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve available resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7ec7a95a6b..1f634637aa 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1460,3 +1460,255 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_STATE,
+				NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+							pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(pattern_template == NULL))
+		return 0;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (masks == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" masks is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(actions_template == NULL))
+		return 0;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev,
+							      actions_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+			const struct rte_flow_template_table_attr *table_attr,
+			struct rte_flow_pattern_template *pattern_templates[],
+			uint8_t nb_pattern_templates,
+			struct rte_flow_actions_template *actions_templates[],
+			uint8_t nb_actions_templates,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_template_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (table_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" table attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (likely(!!ops->template_table_create)) {
+		table = ops->template_table_create(dev, table_attr,
+					pattern_templates, nb_pattern_templates,
+					actions_templates, nb_actions_templates,
+					error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+				struct rte_flow_template_table *template_table,
+				struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(template_table == NULL))
+		return 0;
+	if (likely(!!ops->template_table_destroy)) {
+		return flow_err(port_id,
+				ops->template_table_destroy(dev,
+							    template_table,
+							    error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7e6f5eba46..ffc38fcc3b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4983,6 +4983,286 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Relaxed matching policy.
+	 * - If 1, matching is performed only on items with the mask member set
+	 * and matching on protocol layers specified without any masks is skipped.
+	 * - If 0, matching on protocol layers specified without any masks is done
+	 * as well. This is the current standard behaviour of the Flow API.
+	 */
+	uint32_t relaxed_matching:1;
+	/**
+	 * Flow direction for the pattern template.
+	 * At least one direction must be specified.
+	 */
+	/** Pattern valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Pattern valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Pattern valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, to match on a 5-tuple TCP flow, the template would be
+ * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while the values for each rule are set during the flow rule creation.
+ * The number and order of items in the template must be the same
+ * as at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec and last members of an item are ignored; only the mask member is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+__extension__
+struct rte_flow_actions_template_attr {
+	/**
+	 * Flow direction for the actions template.
+	 * At least one direction must be specified.
+	 */
+	/** Action valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Action valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Action valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while the values for each rule are set during the flow rule creation.
+ * The number and order of actions in the template must be the same
+ * as at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if @p masks spec is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action's member is constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be the part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type must be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of a template table.
+ * This handle can be used to manage the created template table.
+ */
+struct rte_flow_template_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_template_table_attr {
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow template table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Template table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow template table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+		struct rte_flow_template_table *template_table,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 7c29930d0f..2d96db1dc7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_create() */
+	struct rte_flow_template_table *(*template_table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_destroy() */
+	int (*template_table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0d849c153f..62ff791261 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -266,6 +266,12 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_conf_set;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 03/11] ethdev: bring in async queue-based flow rules operations
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 02/11] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
                               ` (8 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe: all operations
on a given queue must be performed from the same thread, or the
application must provide its own synchronization when multiple threads
access the same queue.

The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 .../prog_guide/img/rte_flow_async_init.svg    | 205 ++++++++++
 .../prog_guide/img/rte_flow_async_usage.svg   | 354 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 124 ++++++
 doc/guides/rel_notes/release_22_03.rst        |   7 +
 lib/ethdev/rte_flow.c                         |  83 +++-
 lib/ethdev/rte_flow.h                         | 241 ++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  35 ++
 lib/ethdev/version.map                        |   4 +
 8 files changed, 1051 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_async_init.svg b/doc/guides/prog_guide/img/rte_flow_async_init.svg
new file mode 100644
index 0000000000..f66e9c73d7
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_init.svg
@@ -0,0 +1,205 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485"
+   height="535"
+   overflow="hidden"
+   version="1.1"
+   id="svg61"
+   sodipodi:docname="rte_flow_async_init.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview63"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.517757"
+     inkscape:cx="242.79249"
+     inkscape:cy="267.17057"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g59" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="485"
+         height="535"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g59">
+    <rect
+       x="0"
+       y="0"
+       width="485"
+       height="535"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="0.500053"
+       y="79.5001"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(121.6 116)"
+       id="text13">
+         rte_eth_dev_configure
+         <tspan
+   font-size="24"
+   x="224.007"
+   y="0"
+   id="tspan11">()</tspan></text>
+    <rect
+       x="0.500053"
+       y="158.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect15" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(140.273 195)"
+       id="text17">
+         rte_flow_configure()
+      </text>
+    <rect
+       x="0.500053"
+       y="236.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text21"
+       x="63.425903"
+       y="274">rte_flow_pattern_template_create()</text>
+    <rect
+       x="0.500053"
+       y="316.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect23" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text27"
+       x="69.379204"
+       y="353">rte_flow_actions_template_create()</text>
+    <rect
+       x="0.500053"
+       y="0.500053"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect29" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       transform="translate(177.233,37)"
+       id="text33">rte_eal_init()</text>
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 60)"
+       id="path35" />
+    <path
+       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
+       transform="matrix(-1 0 0 1 241 138)"
+       id="path37" />
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 217)"
+       id="path39" />
+    <rect
+       x="0.500053"
+       y="395.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect41" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text47"
+       x="76.988998"
+       y="432">rte_flow_template_table_create()</text>
+    <path
+       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
+       transform="matrix(-1 0 0 1 241 296)"
+       id="path49" />
+    <path
+       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
+       id="path51" />
+    <rect
+       x="0.500053"
+       y="473.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect53" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text55"
+       x="149.30299"
+       y="511">rte_eth_dev_start()</text>
+    <path
+       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
+       id="path57" />
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/img/rte_flow_async_usage.svg b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
new file mode 100644
index 0000000000..bb978bca1e
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
@@ -0,0 +1,354 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880"
+   height="610"
+   overflow="hidden"
+   version="1.1"
+   id="svg103"
+   sodipodi:docname="rte_flow_async_usage.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview105"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.3311475"
+     inkscape:cx="439.84607"
+     inkscape:cy="305.37563"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="-9"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g101" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="880"
+         height="610"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g101">
+    <rect
+       x="0"
+       y="0"
+       width="880"
+       height="610"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="333.5"
+       y="0.500053"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Consolas, Consolas_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       transform="translate(357.196,29)"
+       id="text11">rte_eth_rx_burst()</text>
+    <rect
+       x="333.5"
+       y="63.5001"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect13" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(394.666 91)"
+       id="text17">analyze <tspan
+   font-size="19"
+   x="60.9267"
+   y="0"
+   id="tspan15">packet </tspan></text>
+    <rect
+       x="587.84119"
+       y="279.47534"
+       width="200.65393"
+       height="46.049305"
+       stroke="#000000"
+       stroke-width="1.20888"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text21"
+       x="595.42902"
+       y="308">rte_flow_async_create()</text>
+    <path
+       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path23" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(430.069 378)"
+       id="text27">more <tspan
+   font-size="19"
+   x="-12.94"
+   y="23"
+   id="tspan25">packets?</tspan></text>
+    <path
+       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
+       id="path29" />
+    <path
+       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
+       id="path31" />
+    <path
+       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
+       id="path33" />
+    <path
+       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
+       id="path35" />
+    <path
+       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
+       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
+       id="path37" />
+    <path
+       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path39" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(417.576 155)"
+       id="text43">add new <tspan
+   font-size="19"
+   x="13.2867"
+   y="23"
+   id="tspan41">rule?</tspan></text>
+    <path
+       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
+       id="path45" />
+    <rect
+       x="602.5"
+       y="127.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect47" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(611.34 148)"
+       id="text49">yes</text>
+    <rect
+       x="254.5"
+       y="126.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect51" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(267.182 147)"
+       id="text53">no</text>
+    <path
+       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
+       transform="matrix(1 0 0 -1 567.5 383.495)"
+       id="path55" />
+    <path
+       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path57" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(159.155 208)"
+       id="text61">destroy the <tspan
+   font-size="19"
+   x="24.0333"
+   y="23"
+   id="tspan59">rule?</tspan></text>
+    <path
+       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
+       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
+       id="path63" />
+    <rect
+       x="92.600937"
+       y="280.48242"
+       width="210.14578"
+       height="45.035149"
+       stroke="#000000"
+       stroke-width="1.24464"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect65" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text67"
+       x="100.2282"
+       y="308">rte_flow_async_destroy()</text>
+    <path
+       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
+       transform="matrix(-1 0 0 1 319.915 213.5)"
+       id="path69" />
+    <path
+       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
+       id="path71" />
+    <path
+       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
+       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
+       id="path73" />
+    <rect
+       x="357.15436"
+       y="540.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect75" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text77"
+       x="393.08301"
+       y="569">rte_flow_pull()</text>
+    <rect
+       x="357.15436"
+       y="462.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect79" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text81"
+       x="389.19"
+       y="491">rte_flow_push()</text>
+    <path
+       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
+       id="path83" />
+    <rect
+       x="0.500053"
+       y="287.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect85" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(12.8617 308)"
+       id="text87">no</text>
+    <rect
+       x="357.5"
+       y="223.5"
+       width="47"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect89" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(367.001 244)"
+       id="text91">yes</text>
+    <rect
+       x="469.5"
+       y="421.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect93" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(481.872 442)"
+       id="text95">no</text>
+    <rect
+       x="832.5"
+       y="223.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect97" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(841.777 244)"
+       id="text99">yes</text>
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 6cdfea09be..c6f6f0afba 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3624,12 +3624,16 @@ Expected number of resources in an application allows PMD to prepare
 and optimize NIC hardware configuration and memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API; see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
 
 Information about the number of available resources can be retrieved via
@@ -3640,6 +3644,7 @@ Information about the number of available resources can be retrieved via
    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+                     struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);
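
One way an application might use this information is to clamp its requested
configuration to the reported capabilities before calling
``rte_flow_configure()``. A minimal stdlib-only sketch of that check, where
``port_info`` is an illustrative stand-in for ``struct rte_flow_port_info``
(the real definition lives in ``rte_flow.h``):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the structure filled by rte_flow_info_get();
 * the real definition lives in rte_flow.h. */
struct port_info {
	uint32_t max_nb_queues;
};

/* Clamp the application's requested queue count to what the port reports,
 * mirroring the intended info_get -> configure ordering. */
static uint16_t clamp_queues(uint16_t requested, const struct port_info *pi)
{
	return requested > pi->max_nb_queues ?
	       (uint16_t)pi->max_nb_queues : requested;
}
```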
 
 Flow templates
@@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
 				&actions_templates, nb_actions_templ,
 				&error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the application's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of flow queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to NIC in batches.
+
+- Results must be pulled in a timely manner to avoid queue overflow.
+
+- User data is returned as part of the result to identify an operation.
+
+- Flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+- Application must wait for the creation operation result before enqueueing
+  the deletion operation to make sure the creation is processed by NIC.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_async_init:
+
+.. figure:: img/rte_flow_async_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_async_usage:
+
+.. figure:: img/rte_flow_async_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation
+with ``rte_flow_create()``.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_async_create(uint16_t port_id,
+			      uint32_t queue_id,
+			      const struct rte_flow_op_attr *op_attr,
+			      struct rte_flow_template_table *template_table,
+			      const struct rte_flow_item pattern[],
+			      uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      uint8_t actions_template_index,
+			      void *user_data,
+			      struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
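
The handle lifecycle above can be sketched with a small stdlib-only mock.
All ``toy_*`` names here are illustrative, not part of the rte_flow API:
the handle becomes valid as soon as the create operation is enqueued,
before the HW has accepted or rejected the rule, and must be destroyed
either way.

```c
#include <assert.h>
#include <stdlib.h>

enum toy_status { TOY_PENDING, TOY_SUCCESS, TOY_ERROR };

struct toy_flow {
	enum toy_status status;
};

/* Enqueue-time creation: the handle is usable immediately, even though
 * the result of the operation is not known yet. */
static struct toy_flow *toy_async_create(void)
{
	struct toy_flow *f = malloc(sizeof(*f));

	if (f != NULL)
		f->status = TOY_PENDING; /* valid handle, result unknown */
	return f;
}

/* Later completion event from the "HW". */
static void toy_complete(struct toy_flow *f, int hw_accepted)
{
	f->status = hw_accepted ? TOY_SUCCESS : TOY_ERROR;
}

/* Destruction is unconditional: the handle owns a resource from the
 * moment the create operation is enqueued, even on rejection. */
static void toy_async_destroy(struct toy_flow *f)
{
	free(f);
}
```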
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction
+with ``rte_flow_destroy()``.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_destroy(uint16_t port_id,
+			       uint32_t queue_id,
+			       const struct rte_flow_op_attr *op_attr,
+			       struct rte_flow *flow,
+			       void *user_data,
+			       struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_push(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_error *error);
+
+The queue operation attributes include a postpone flag.
+When it is set, multiple operations can be bulked together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
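
The batching behaviour can be sketched with a small stdlib-only mock
(``toy_*`` names are illustrative, not the PMD implementation): postponed
operations are buffered in SW and reach the "HW" only on an explicit push,
while non-postponed ones are sent immediately.

```c
#include <assert.h>
#include <stdint.h>

struct toy_queue {
	uint32_t pending; /* postponed, not yet sent to HW */
	uint32_t in_hw;   /* operations the HW has received */
};

static void toy_enqueue(struct toy_queue *q, int postpone)
{
	if (postpone)
		q->pending++; /* batched for a later push */
	else
		q->in_hw++;   /* doorbell rung immediately */
}

static void toy_push(struct toy_queue *q)
{
	q->in_hw += q->pending; /* one SW/HW interaction for the batch */
	q->pending = 0;
}
```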
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling the results of asynchronous operations.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operation statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_pull(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_op_result res[],
+		      uint16_t n_res,
+		      struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
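
The user data correlation can be sketched with a stdlib-only mock
(``toy_*`` and ``app_rule`` names are illustrative): because completions
may arrive in any order, the application identifies each one by the
``user_data`` pointer it supplied at enqueue time.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum toy_status { TOY_SUCCESS, TOY_ERROR };

/* Illustrative analogue of struct rte_flow_op_result. */
struct toy_result {
	enum toy_status status;
	void *user_data;
};

/* Application-side context attached to each enqueued operation. */
struct app_rule {
	uint32_t id;
	int completed;
};

/* Dispatch pulled results back to the application contexts by
 * following the user_data pointer stored in each result. */
static void toy_dispatch(const struct toy_result res[], uint16_t n)
{
	for (uint16_t i = 0; i < n; i++) {
		struct app_rule *r = res[i].user_data;

		if (res[i].status == TOY_SUCCESS)
			r->completed = 1; /* matched via user_data */
	}
}
```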
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 8211f5c22c..2477f53ca6 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -113,6 +113,13 @@ New Features
     ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
 
+* **Added functions for asynchronous flow rules creation/destruction**
+
+  * ethdev: Added ``rte_flow_async_create`` and ``rte_flow_async_destroy`` API
+    to enqueue flow creation/destruction operations asynchronously, as well as
+    ``rte_flow_pull`` to poll and retrieve results of these operations and
+    ``rte_flow_push`` to push all the in-flight operations to the NIC.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 1f634637aa..c314129870 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
 	}
 	if (likely(!!ops->info_get)) {
 		return flow_err(port_id,
-				ops->info_get(dev, port_info, error),
+				ops->info_get(dev, port_info, queue_info, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1426,6 +1427,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1450,8 +1453,12 @@ rte_flow_configure(uint16_t port_id,
 		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
 		return -EINVAL;
 	}
+	if (queue_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.\n", port_id);
+		return -EINVAL;
+	}
 	if (likely(!!ops->configure)) {
-		ret = ops->configure(dev, port_attr, error);
+		ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
 		if (ret == 0)
 			dev->data->flow_configured = 1;
 		return flow_err(port_id, ret, error);
@@ -1712,3 +1719,75 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_op_attr *op_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	flow = ops->async_create(dev, queue_id,
+				 op_attr, template_table,
+				 pattern, pattern_template_index,
+				 actions, actions_template_index,
+				 user_data, error);
+	if (flow == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return flow;
+}
+
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_op_attr *op_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	return flow_err(port_id,
+			ops->async_destroy(dev, queue_id,
+					   op_attr, flow,
+					   user_data, error),
+			error);
+}
+
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	return flow_err(port_id,
+			ops->push(dev, queue_id, error),
+			error);
+}
+
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_op_result res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	ret = ops->pull(dev, queue_id, res, n_res, error);
+	return ret ? ret : flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index ffc38fcc3b..3fb7cb03ae 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
+	 * Maximum number of queues for asynchronous operations.
+	 */
+	uint32_t max_nb_queues;
 	/**
 	 * Maximum number of counters.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4901,6 +4905,21 @@ struct rte_flow_port_info {
 	uint32_t max_nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine asynchronous queues.
+ * The value is only valid if @p port_info.max_nb_queues is not zero.
+ *
+ */
+struct rte_flow_queue_info {
+	/**
+	 * Maximum number of operations a queue can hold.
+	 */
+	uint32_t max_size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4912,6 +4931,9 @@ struct rte_flow_port_info {
  * @param[out] port_info
  *   A pointer to a structure of type *rte_flow_port_info*
  *   to be filled with the resources information of the port.
+ * @param[out] queue_info
+ *   A pointer to a structure of type *rte_flow_queue_info*
+ *   to be filled with the asynchronous queues information.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4923,6 +4945,7 @@ __rte_experimental
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error);
 
 /**
@@ -4951,6 +4974,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine asynchronous queues settings.
+ * A zero value means the default value is picked by the PMD.
+ *
+ */
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4970,6 +5008,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements is set in @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4981,6 +5024,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5263,6 +5308,202 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Asynchronous operation attributes.
+ */
+__extension__
+struct rte_flow_op_attr {
+	 /**
+	  * When set, the requested operation will not be sent to the HW immediately.
+	  * The application must call rte_flow_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] op_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   A returned handle does not mean the rule has been inserted into the HW yet.
+ *   Only the completion result indicates whether the operation succeeded or failed.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_op_attr *op_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] op_attr
+ *   Rule destruction operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_op_attr *op_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were inserted with the postpone flag set.
+ * This can be used to notify the HW about a batch of rules prepared by the
+ * SW to reduce the number of communications between the SW and the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Asynchronous operation status.
+ */
+enum rte_flow_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Asynchronous operation result.
+ */
+__extension__
+struct rte_flow_op_result {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_op_status status;
+	/**
+	 * The user data that was supplied when the operation was enqueued.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull the results of enqueued flow operations.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_op_result res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..5907dd63c3 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,11 +156,14 @@ struct rte_flow_ops {
 	int (*info_get)
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_queue_info *queue_info,
 		 struct rte_flow_error *err);
 	/** See rte_flow_configure() */
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +202,38 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_async_create() */
+	struct rte_flow *(*async_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_destroy() */
+	int (*async_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow *flow,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_push() */
+	int (*push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_pull() */
+	int (*pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_op_result res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 62ff791261..13c1a22118 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -272,6 +272,10 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_async_create;
+	rte_flow_async_destroy;
+	rte_flow_push;
+	rte_flow_pull;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 04/11] ethdev: bring in async indirect actions operations
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (2 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
                               ` (7 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The queue-based flow rules management mechanism is suitable
not only for flow rule creation/destruction, but also
for speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions for all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.
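
As a rough illustration of the enqueue/push/pull lifecycle these async
calls share, here is a small self-contained toy model (plain C, not DPDK
code; the queue, op_result, and function names are stand-ins invented
for this sketch):

```c
#include <assert.h>
#include <string.h>

/*
 * Toy model of the async flow queue lifecycle (enqueue -> push -> pull).
 * Not DPDK code: all types and names here are stand-ins.
 */
#define QUEUE_CAP 8

struct op_result {                 /* stands in for struct rte_flow_op_result */
	int status;                /* 0 on success */
	void *user_data;           /* echoed back on completion */
};

struct flow_queue {
	void *pending[QUEUE_CAP];  /* ops accepted but not yet pushed to "HW" */
	int n_pending;
	struct op_result done[QUEUE_CAP]; /* completions awaiting pull */
	int n_done;
};

/* Enqueue is cheap and non-blocking; nothing reaches the "HW" yet. */
static int enqueue_op(struct flow_queue *q, void *user_data)
{
	if (q->n_pending + q->n_done >= QUEUE_CAP)
		return -1; /* queue full: caller must push and pull first */
	q->pending[q->n_pending++] = user_data;
	return 0;
}

/* Push rings one "doorbell" for the whole batch, completing every op. */
static void push_ops(struct flow_queue *q)
{
	int i;

	for (i = 0; i < q->n_pending; i++) {
		q->done[q->n_done].status = 0;
		q->done[q->n_done].user_data = q->pending[i];
		q->n_done++;
	}
	q->n_pending = 0;
}

/* Pull drains up to n_res completion results, like rte_flow_pull(). */
static int pull_ops(struct flow_queue *q, struct op_result res[], int n_res)
{
	int n = q->n_done < n_res ? q->n_done : n_res;

	memcpy(res, q->done, n * sizeof(res[0]));
	q->n_done -= n;
	memmove(q->done, &q->done[n], q->n_done * sizeof(res[0]));
	return n;
}
```

A real application would additionally check each result's status and use
the postpone attribute to decide when a batch is worth pushing.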

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  50 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   5 ++
 lib/ethdev/rte_flow.c                  |  61 ++++++++++++++
 lib/ethdev/rte_flow.h                  | 109 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  26 ++++++
 lib/ethdev/version.map                 |   3 +
 6 files changed, 254 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c6f6f0afba..8148531073 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3861,6 +3861,56 @@ Enqueueing a flow rule destruction operation is similar to simple destruction.
 			       void *user_data,
 			       struct rte_flow_error *error);
 
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later by
+``rte_flow_async_action_handle_destroy()`` even if the action was rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
+
 Push enqueued operations
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 2477f53ca6..da186315a5 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -120,6 +120,11 @@ New Features
     ``rte_flow_pull`` to poll and retrieve results of these operations and
     ``rte_flow_push`` to push all the in-flight operations to the NIC.
 
+  * ethdev: Added asynchronous API for indirect actions management:
+    ``rte_flow_async_action_handle_create``,
+    ``rte_flow_async_action_handle_destroy`` and
+    ``rte_flow_async_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index c314129870..9a902da660 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1791,3 +1791,64 @@ rte_flow_pull(uint16_t port_id,
 	ret = ops->pull(dev, queue_id, res, n_res, error);
 	return ret ? ret : flow_err(port_id, ret, error);
 }
+
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (unlikely(!ops->async_action_handle_create)) {
+		rte_flow_error_set(error, ENOSYS,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, rte_strerror(ENOSYS));
+		return NULL;
+	}
+	handle = ops->async_action_handle_create(dev, queue_id, op_attr,
+					     indir_action_conf, action, user_data, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->async_action_handle_destroy))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->async_action_handle_destroy(dev, queue_id, op_attr,
+					   action_handle, user_data, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(!ops->async_action_handle_update))
+		return rte_flow_error_set(error, ENOSYS,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, rte_strerror(ENOSYS));
+	ret = ops->async_action_handle_update(dev, queue_id, op_attr,
+					  action_handle, update, user_data, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 3fb7cb03ae..d8827dd184 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5504,6 +5504,115 @@ rte_flow_pull(uint16_t port_id,
 	      uint16_t n_res,
 	      struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the indirect action object.
+ * @param[in] op_attr
+ *   Indirect action creation operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the indirect action object.
+ * @param[in] op_attr
+ *   Indirect action destruction operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the indirect action object.
+ * @param[in] op_attr
+ *   Indirect action update operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   *handle*. *update* can either have the same type as the immediate action
+ *   used to create the *handle*, or be a wrapper structure that includes the
+ *   action configuration to be updated plus bit fields indicating which
+ *   members of the action to update.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 5907dd63c3..2bff732d6a 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -234,6 +234,32 @@ struct rte_flow_ops {
 		 struct rte_flow_op_result res[],
 		 uint16_t n_res,
 		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_create() */
+	struct rte_flow_action_handle *(*async_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_action_handle_destroy() */
+	int (*async_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 void *user_data,
+		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_update() */
+	int (*async_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 void *user_data,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 13c1a22118..20391ab29e 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -276,6 +276,9 @@ EXPERIMENTAL {
 	rte_flow_async_destroy;
 	rte_flow_push;
 	rte_flow_pull;
+	rte_flow_async_action_handle_create;
+	rte_flow_async_action_handle_destroy;
+	rte_flow_async_action_handle_update;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 05/11] app/testpmd: add flow engine configuration
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (3 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 06/11] app/testpmd: add flow template management Alexander Kozyrev
                               ` (6 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_configure API.
Provide the command line interface for flow engine configuration.
Usage example: flow configure 0 queues_number 8 queues_size 256

Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0
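
Internally, testpmd applies the same queue attributes to every queue by
filling an array of pointers that all reference one attribute structure
(see port_flow_configure() in config.c). A self-contained sketch of that
pattern, using hypothetical stand-in types rather than the real
rte_flow_queue_attr:

```c
#include <assert.h>

/* Stand-in for struct rte_flow_queue_attr; invented for this sketch. */
struct queue_attr {
	unsigned int size; /* number of elements per flow queue */
};

/*
 * The callee takes one pointer per queue, so attributes could differ
 * per queue; here we just sum the sizes to show what it received.
 */
static unsigned int configure(int nb_queue, const struct queue_attr *attrs[])
{
	unsigned int total = 0;
	int i;

	for (i = 0; i < nb_queue; i++)
		total += attrs[i]->size;
	return total;
}

/* Apply one attribute set uniformly, sharing it via a pointer array. */
static unsigned int configure_uniform(int nb_queue,
				      const struct queue_attr *attr)
{
	const struct queue_attr *list[nb_queue]; /* VLA, as in testpmd */
	int i;

	for (i = 0; i < nb_queue; i++)
		list[i] = attr; /* share the same struct, do not copy it */
	return configure(nb_queue, list);
}
```

Keeping the per-queue pointer array in the API leaves room for
heterogeneous queue attributes later, even though the CLI only exposes a
single uniform size today.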

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  61 ++++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  61 +++++++++-
 4 files changed, 252 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index c0644d678c..0533a33ca2 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -868,6 +877,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -949,6 +963,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2045,6 +2069,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2270,7 +2297,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2285,6 +2314,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_OBJECTS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging objects",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_objects)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7736,6 +7824,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8964,6 +9079,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index de1ec14bc7..33a85cd7ca 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,67 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_info port_info;
+	struct rte_flow_queue_info queue_info;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	memset(&port_info, 0, sizeof(port_info));
+	memset(&queue_info, 0, sizeof(queue_info));
+	if (rte_flow_info_get(port_id, &port_info, &queue_info, &error))
+		return port_flow_complain(&error);
+	printf("Flow engine resources on port %u:\n"
+	       "Number of queues: %d\n"
+	       "Size of queues: %d\n"
+	       "Number of counters: %d\n"
+	       "Number of aging objects: %d\n"
+	       "Number of meter actions: %d\n",
+	       port_id, port_info.max_nb_queues,
+	       queue_info.max_size,
+	       port_info.max_nb_counters,
+	       port_info.max_nb_aging_objects,
+	       port_info.max_nb_meters);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 9cc248084f..c8f048aeef 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3308,8 +3308,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3391,6 +3403,51 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves info on pre-configurable resources in the underlying
+device to give a hint of possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Flow engine resources on port #[...]:
+   Number of queues: #[...]
+   Size of queues: #[...]
+   Number of counters: #[...]
+   Number of aging objects: #[...]
+   Number of meters: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the needed resources in the underlying
+device to be used later at flow creation. Flow queues are allocated as well
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 06/11] app/testpmd: add flow template management
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (4 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 07/11] app/testpmd: add flow table management Alexander Kozyrev
                               ` (5 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide the command-line interface
for template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 456 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++
 app/test-pmd/testpmd.h                      |  24 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 101 +++++
 4 files changed, 782 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0533a33ca2..1aa32ea217 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,28 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -882,6 +908,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -890,10 +920,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -973,6 +1006,49 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2072,6 +2148,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2141,6 +2223,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2291,6 +2377,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2299,6 +2399,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2373,6 +2475,148 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute pattern to ingress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute pattern to egress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute pattern to transfer",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute actions to ingress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute actions to egress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute actions to transfer",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2695,7 +2939,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5975,7 +6219,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7851,6 +8097,132 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case PATTERN_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case PATTERN_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case PATTERN_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8820,6 +9192,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9088,6 +9508,38 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port,
+				in->args.vc.pat_templ_id,
+				&((const struct rte_flow_pattern_template_attr) {
+					.relaxed_matching = in->args.vc.attr.reserved,
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port,
+				in->args.vc.act_templ_id,
+				&((const struct rte_flow_actions_template_attr) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions,
+				in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 33a85cd7ca..ecaf4ca03c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2086,6 +2129,166 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_pattern_template_attr *attr,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_actions_template_attr *attr,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						attr, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..ce46d754a1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,17 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_pattern_template_attr *attr,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_actions_template_attr *attr,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c8f048aeef..2e6a23b12a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3344,6 +3344,26 @@ following sections.
        [aging_counters_number {number}]
        [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3448,6 +3468,87 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+by their template ID (as returned by ``flow pattern_template create``).
+This command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+by their template ID (as returned by ``flow actions_template create``).
+This command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v9 07/11] app/testpmd: add flow table management
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (5 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 06/11] app/testpmd: add flow template management Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
                               ` (4 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_template_table API.
Provide the command-line interface for flow
table creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 171 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 556 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1aa32ea217..5715899c95 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -118,6 +120,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -912,6 +928,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_template_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1049,6 +1077,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2154,6 +2208,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2227,6 +2286,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2391,6 +2452,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2401,6 +2469,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2617,6 +2686,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "template_table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage template tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create template table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy template table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8223,6 +8390,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9240,6 +9520,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9540,6 +9844,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_template_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_template_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ecaf4ca03c..cefbc64c0c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1653,6 +1653,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2289,6 +2332,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] =
+					temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_template_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_template_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Template table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_template_table_destroy(portid_t port_id,
+				 uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_template_table_destroy(port_id,
+							    pt->table,
+							    &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Template table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ce46d754a1..fd02498faf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -916,6 +927,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_template_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2e6a23b12a..f63eb76a3a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3364,6 +3364,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a template table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a template table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3549,6 +3562,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating template table
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying template table
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+from their table ID (as returned by ``flow template_table create``),
+this command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v9 08/11] app/testpmd: add async flow create/destroy operations
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (6 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 07/11] app/testpmd: add flow table management Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
                               ` (3 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_create/rte_flow_async_destroy API.
Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0
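
Both queue parsers pack their variable-length arrays (rule IDs for destroy,
pattern/action lists for create) into the tail of the single parser output
buffer, aligning each array start to sizeof(double) via RTE_ALIGN_CEIL.
A self-contained sketch of that packing idiom (`align_ceil`, `struct cmd_buf`
and `append_rule_id` are illustrative stand-ins, not testpmd symbols, and the
bounds check here is slightly stricter than the parser's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Mimics RTE_ALIGN_CEIL(v, align) for power-of-two alignments. */
static uintptr_t
align_ceil(uintptr_t v, uintptr_t align)
{
	return (v + align - 1) & ~(align - 1);
}

/* Simplified stand-in for the parser's struct buffer. */
struct cmd_buf {
	int command;
	uint32_t *rule; /* points into the tail of the same allocation */
	uint32_t rule_n;
};

/*
 * Place the rule-ID array right after the struct, aligned the way the
 * testpmd parser does it, then append one ID with a bounds check against
 * the total buffer size.
 */
static int
append_rule_id(struct cmd_buf *out, unsigned int size, uint32_t id)
{
	uint32_t *slot;

	if (!out->rule)
		out->rule = (uint32_t *)align_ceil((uintptr_t)(out + 1),
						   sizeof(double));
	slot = out->rule + out->rule_n;
	if ((uint8_t *)(slot + 1) > (uint8_t *)out + size)
		return -1; /* would overflow the buffer */
	*slot = id;
	out->rule_n++;
	return 0;
}
```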

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5715899c95..d359127df9 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -120,6 +122,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -918,6 +936,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /** Async queue ID. */
+	bool postpone; /** Postpone async operation */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -948,6 +968,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1103,6 +1124,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2213,6 +2246,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2288,6 +2327,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2459,6 +2500,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2481,7 +2529,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2784,6 +2833,84 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TEMPLATE_TABLE] = {
+		.name = "template_table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8503,6 +8630,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TEMPLATE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9544,6 +9776,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9855,6 +10109,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cefbc64c0c..d7ab57b124 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2460,6 +2460,172 @@ port_flow_template_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue create flow rule operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_op_attr op_attr = { .postpone = postpone };
+	struct rte_flow_op_result comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_async_create(port_id, queue_id, &op_attr, pt->table,
+		pattern, pattern_idx, actions, actions_idx, NULL, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue number of destroy flow rules operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_op_attr op_attr = { .postpone = postpone };
+	struct rte_flow_op_result comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMD
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_async_destroy(port_id, queue_id, &op_attr,
+						   pf->flow, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMD
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_pull(port_id, queue_id,
+						    &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index fd02498faf..62e874eaaf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -933,6 +933,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index f63eb76a3a..194b350932 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3384,6 +3384,20 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3708,6 +3722,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_async_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``,
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4430,6 +4468,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` adds destruction operations to a queue to destroy
+one or more rules by their rule ID (as returned by ``flow queue create``).
+This command calls ``rte_flow_async_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v9 09/11] app/testpmd: add flow queue push operation
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (7 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
                               ` (2 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d359127df9..af36975cdf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -138,6 +139,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2252,6 +2256,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2530,7 +2537,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2911,6 +2919,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8735,6 +8758,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10120,6 +10171,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index d7ab57b124..9ffb7d88dc 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2626,6 +2626,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all the queue operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 62e874eaaf..24a43fd82c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 194b350932..4f1f908d4a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3398,6 +3398,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3616,6 +3620,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v9 10/11] app/testpmd: add flow queue pull operation
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (8 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-21 23:02             ` [PATCH v9 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index af36975cdf..d4b72724e6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -142,6 +143,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2259,6 +2263,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2538,7 +2545,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2934,6 +2942,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8786,6 +8809,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10174,6 +10225,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9ffb7d88dc..158d1b38a8 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2469,14 +2469,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
-	struct rte_flow_op_result comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2539,16 +2537,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2563,7 +2551,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
-	struct rte_flow_op_result comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2599,21 +2586,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_pull(port_id, queue_id,
-						    &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2654,6 +2626,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_op_result *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_op_result));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 24a43fd82c..5ea2408a0b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -941,6 +941,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 4f1f908d4a..5080ddb256 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3402,6 +3402,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3637,6 +3641,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device for the results of flow queue
+operations and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3767,6 +3788,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4508,6 +4531,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2



* [PATCH v9 11/11] app/testpmd: add async indirect actions operations
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (9 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
@ 2022-02-21 23:02             ` Alexander Kozyrev
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-21 23:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for enqueueing these operations.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d4b72724e6..b5f1191e55 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -127,6 +127,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -140,6 +141,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1135,6 +1156,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1144,6 +1166,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2260,6 +2312,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2873,6 +2931,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2926,6 +2991,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6501,6 +6650,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -10228,6 +10481,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 158d1b38a8..cc8e7aa138 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2598,6 +2598,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_async_action_handle_create(port_id, queue_id,
+					&attr, conf, action, NULL, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_async_action_handle_destroy(port_id,
+				queue_id, &attr, pia->handle, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	if (rte_flow_async_action_handle_update(port_id, queue_id, &attr,
+				    action_handle, action, NULL, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5ea2408a0b..31f766c965 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 5080ddb256..1083c6d538 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4792,6 +4792,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds a creation operation for an
+indirect action to a queue. It is bound to
+``rte_flow_async_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+      [postpone {boolean}] action_id {indirect_action_id}
+      ingress/egress/transfer action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4821,6 +4846,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds an update operation for an
+indirect action to a queue. It is bound to
+``rte_flow_async_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4844,6 +4888,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds a destruction operation for
+one or more indirect actions (identified by the IDs returned by
+``flow queue {port_id} indirect_action {queue_id} create``) to a queue.
+It is bound to ``rte_flow_async_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
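The corresponding API-level call looks roughly as follows (a sketch; it assumes ``handle`` was returned by a previously completed create operation, and uses ``postpone`` to batch the destruction with later operations):

```c
/* Sketch: enqueue destruction of a previously created indirect action
 * handle on queue 0; with postpone set, nothing reaches the HW until
 * rte_flow_push() submits the batch. */
struct rte_flow_op_attr op_attr = { .postpone = 1 };
struct rte_flow_error error;
int ret;

ret = rte_flow_async_action_handle_destroy(port_id, 0, &op_attr,
					   handle, NULL /* user_data */,
					   &error);
if (ret != 0)
	printf("enqueue failed: %s\n",
	       error.message ? error.message : "(no message)");
rte_flow_push(port_id, 0, &error); /* actually submit the batch */
```

As with creation, completion must be confirmed with ``rte_flow_pull()`` before the handle's resources can be considered released.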
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v8 00/10] ethdev: datapath-focused flow rules management
  2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
                             ` (11 preceding siblings ...)
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-02-22 16:41           ` Ferruh Yigit
  2022-02-22 16:49             ` Ferruh Yigit
  12 siblings, 1 reply; 220+ messages in thread
From: Ferruh Yigit @ 2022-02-22 16:41 UTC (permalink / raw)
  To: Alexander Kozyrev, dev, andrew.rybchenko
  Cc: orika, thomas, ivan.malov, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/20/2022 3:43 AM, Alexander Kozyrev wrote:
> Three major changes to a generic RTE Flow API were implemented in order
> to speed up flow rule insertion/destruction and adapt the API to the
> needs of a datapath-focused flow rules management applications:
> 
> 1. Pre-configuration hints.
> Application may give us some hints on what type of resources are needed.
> Introduce the configuration routine to prepare all the needed resources
> inside a PMD/HW before any flow rules are created at the init stage.
> 
> 2. Flow grouping using templates.
> Use the knowledge about which flow rules are to be used in an application
> and prepare item and action templates for them in advance. Group flow rules
> with common patterns and actions together for better resource management.
> 
> 3. Queue-based flow management.
> Perform flow rule insertion/destruction asynchronously to spare the datapath
> from blocking on RTE Flow API and allow it to continue with packet processing.
> Enqueue flow rules operations and poll for the results later.
> 
> testpmd examples are part of the patch series. PMD changes will follow.
> 
> RFC:https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
> 
> Signed-off-by: Alexander Kozyrev<akozyrev@nvidia.com>
> Acked-by: Ori Kam<orika@nvidia.com>
> Acked-by: Ajit Khaparde<ajit.khaparde@broadcom.com>

Since these are new APIs and won't impact existing code, I think
can be OK to get with -rc2, only concern may be testing.

@Andrew, can you please review this version too, if it is good
for you, we can proceed.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v8 00/10] ethdev: datapath-focused flow rules management
  2022-02-22 16:41           ` [PATCH v8 00/10] " Ferruh Yigit
@ 2022-02-22 16:49             ` Ferruh Yigit
  0 siblings, 0 replies; 220+ messages in thread
From: Ferruh Yigit @ 2022-02-22 16:49 UTC (permalink / raw)
  To: Alexander Kozyrev, dev, andrew.rybchenko
  Cc: orika, thomas, ivan.malov, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson

On 2/22/2022 4:41 PM, Ferruh Yigit wrote:
> On 2/20/2022 3:43 AM, Alexander Kozyrev wrote:
>> Three major changes to a generic RTE Flow API were implemented in order
>> to speed up flow rule insertion/destruction and adapt the API to the
>> needs of a datapath-focused flow rules management applications:
>>
>> 1. Pre-configuration hints.
>> Application may give us some hints on what type of resources are needed.
>> Introduce the configuration routine to prepare all the needed resources
>> inside a PMD/HW before any flow rules are created at the init stage.
>>
>> 2. Flow grouping using templates.
>> Use the knowledge about which flow rules are to be used in an application
>> and prepare item and action templates for them in advance. Group flow rules
>> with common patterns and actions together for better resource management.
>>
>> 3. Queue-based flow management.
>> Perform flow rule insertion/destruction asynchronously to spare the datapath
>> from blocking on RTE Flow API and allow it to continue with packet processing.
>> Enqueue flow rules operations and poll for the results later.
>>
>> testpmd examples are part of the patch series. PMD changes will follow.
>>
>> RFC:https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
>>
>> Signed-off-by: Alexander Kozyrev<akozyrev@nvidia.com>
>> Acked-by: Ori Kam<orika@nvidia.com>
>> Acked-by: Ajit Khaparde<ajit.khaparde@broadcom.com>
> 
> Since these are new APIs and won't impact existing code, I think
> can be OK to get with -rc2, only concern may be testing.
> 
> @Andrew, can you please review this version too, if it is good
> for you, we can proceed.

And this was supposed to be a reply to v9; please review it:
https://patches.dpdk.org/user/todo/dpdk/?series=21774

^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 00/11] ethdev: datapath-focused flow rules management
  2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                               ` (10 preceding siblings ...)
  2022-02-21 23:02             ` [PATCH v9 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
@ 2022-02-23  3:02             ` Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
                                 ` (11 more replies)
  11 siblings, 12 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Three major changes to a generic RTE Flow API were implemented in order
to speed up flow rule insertion/destruction and adapt the API to the
needs of datapath-focused flow rules management applications:

1. Pre-configuration hints.
Application may give us some hints on what type of resources are needed.
Introduce the configuration routine to prepare all the needed resources
inside a PMD/HW before any flow rules are created at the init stage.

2. Flow grouping using templates.
Use the knowledge about which flow rules are to be used in an application
and prepare item and action templates for them in advance. Group flow rules
with common patterns and actions together for better resource management.

3. Queue-based flow management.
Perform flow rule insertion/destruction asynchronously to spare the datapath
from blocking on RTE Flow API and allow it to continue with packet processing.
Enqueue flow rules operations and poll for the results later.

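In code, the queue-based model boils down to an enqueue-then-poll pattern. The sketch below is written against the APIs added by this series; template and table setup is omitted, and `table`, `pattern` and `actions` are assumed to have been prepared beforehand:

```c
/* Sketch: enqueue a flow rule creation and poll for its completion.
 * Assumes `table` was built from pre-created pattern/action templates. */
struct rte_flow_op_attr op_attr = { .postpone = 0 };
struct rte_flow_op_result result;
struct rte_flow_error error;
struct rte_flow *flow;

flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
			     pattern, 0 /* pattern template index */,
			     actions, 0 /* actions template index */,
			     NULL /* user_data */, &error);
if (flow == NULL)
	return; /* queue may be full: push/pull and retry */
rte_flow_push(port_id, queue_id, &error);
while (rte_flow_pull(port_id, queue_id, &result, 1, &error) == 0)
	; /* the datapath can keep processing packets meanwhile */
```

No lock is taken on the enqueue path; each queue is expected to be driven by a single thread.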
testpmd examples are part of the patch series. PMD changes will follow.

RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

---
v10: removed missed check in async API

v9:
- changed sanity checks order
- added reconfiguration explanation
- added remarks on mandatory direction
- renamed operation attributes
- removed all checks in async API
- removed all errno descriptions

v8: fixed documentation indentation

v7:
- added sanity checks and device state validation
- added flow engine state validation
- added ingress/egress/transfer attributes to templates
- moved user_data to a parameter list
- renamed asynchronous functions from "_q_" to "_async_"
- created a separate commit for indirect actions

v6: addressed more review comments
- fixed typos
- rewrote code snippets
- add a way to get queue size
- renamed port/queue attributes parameters

v5: changed titles for testpmd commits

v4: 
- removed structures versioning
- introduced new rte_flow_port_info structure for rte_flow_info_get API
- renamed rte_flow_table_create to rte_flow_template_table_create

v3: addressed review comments and updated documentation
- added API to get info about pre-configurable resources
- renamed rte_flow_item_template to rte_flow_pattern_template
- renamed drain operation attribute to postpone
- renamed rte_flow_q_drain to rte_flow_q_push
- renamed rte_flow_q_dequeue to rte_flow_q_pull

v2: fixed patch series thread

Alexander Kozyrev (11):
  ethdev: introduce flow engine configuration
  ethdev: add flow item/action templates
  ethdev: bring in async queue-based flow rules operations
  ethdev: bring in async indirect actions operations
  app/testpmd: add flow engine configuration
  app/testpmd: add flow template management
  app/testpmd: add flow table management
  app/testpmd: add async flow create/destroy operations
  app/testpmd: add flow queue push operation
  app/testpmd: add flow queue pull operation
  app/testpmd: add async indirect actions operations

 app/test-pmd/cmdline_flow.c                   | 1726 ++++++++++++++++-
 app/test-pmd/config.c                         |  778 ++++++++
 app/test-pmd/testpmd.h                        |   67 +
 .../prog_guide/img/rte_flow_async_init.svg    |  205 ++
 .../prog_guide/img/rte_flow_async_usage.svg   |  354 ++++
 doc/guides/prog_guide/rte_flow.rst            |  345 ++++
 doc/guides/rel_notes/release_22_03.rst        |   26 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst   |  383 +++-
 lib/ethdev/ethdev_driver.h                    |    7 +-
 lib/ethdev/rte_flow.c                         |  454 +++++
 lib/ethdev/rte_flow.h                         |  741 +++++++
 lib/ethdev/rte_flow_driver.h                  |  108 ++
 lib/ethdev/version.map                        |   15 +
 13 files changed, 5113 insertions(+), 96 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 01/11] ethdev: introduce flow engine configuration
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-24  8:22                 ` Andrew Rybchenko
  2022-02-23  3:02               ` [PATCH v10 02/11] ethdev: add flow item/action templates Alexander Kozyrev
                                 ` (10 subsequent siblings)
  11 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The flow rules creation/destruction at a large scale incurs a performance
penalty and may negatively impact the packet processing when used
as part of the datapath logic. This is mainly because software/hardware
resources are allocated and prepared during the flow rule creation.

In order to optimize the insertion rate, PMD may use some hints provided
by the application at the initialization phase. The rte_flow_configure()
function allows to pre-allocate all the needed resources beforehand.
These resources can be used at a later stage without costly allocations.
Every PMD may use only a subset of the hints and ignore unused ones,
or fail in case the requested configuration is not supported.

The rte_flow_info_get() function is available to retrieve information
about the supported pre-configurable resources. Both of these functions
must be called before any other usage of the flow API engine.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  36 ++++++++
 doc/guides/rel_notes/release_22_03.rst |   6 ++
 lib/ethdev/ethdev_driver.h             |   7 +-
 lib/ethdev/rte_flow.c                  |  68 +++++++++++++++
 lib/ethdev/rte_flow.h                  | 111 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  10 +++
 lib/ethdev/version.map                 |   2 +
 7 files changed, 239 insertions(+), 1 deletion(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 0e475019a6..c89161faef 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3606,6 +3606,42 @@ Return values:
 
 - 0 on success, a negative errno value otherwise and ``rte_errno`` is set.
 
+Flow engine configuration
+-------------------------
+
+Configure flow API management.
+
+An application may provide some parameters at the initialization phase about
+the flow engine configuration and/or the expected flow rules characteristics.
+These parameters may be used by the PMD to preallocate resources and
+configure the NIC.
+
+Configuration
+~~~~~~~~~~~~~
+
+This function performs the flow API engine configuration and allocates
+requested resources beforehand to avoid costly allocations later.
+Supplying the expected number of resources in advance allows the PMD
+to prepare and optimize the NIC hardware configuration and memory layout.
+``rte_flow_configure()`` must be called before any flow rule is created,
+but after an Ethernet device is configured.
+
+.. code-block:: c
+
+   int
+   rte_flow_configure(uint16_t port_id,
+                      const struct rte_flow_port_attr *port_attr,
+                      struct rte_flow_error *error);
+
+Information about the number of available resources can be retrieved via
+``rte_flow_info_get()`` API.
+
+.. code-block:: c
+
+   int
+   rte_flow_info_get(uint16_t port_id,
+                     struct rte_flow_port_info *port_info,
+                     struct rte_flow_error *error);
+
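A typical initialization sequence, sketched against the two calls above with this patch's three-argument ``rte_flow_configure()`` signature (the resource counts are illustrative values, not recommendations):

```c
/* Sketch: query the supported limits, then pre-allocate a subset.
 * Called after rte_eth_dev_configure() and before any flow creation. */
struct rte_flow_port_info info;
struct rte_flow_port_attr attr = { 0 };
struct rte_flow_error error;

if (rte_flow_info_get(port_id, &info, &error) != 0)
	rte_exit(EXIT_FAILURE, "info_get failed: %s\n", error.message);
/* Never ask for more than the PMD reports as available. */
attr.nb_counters = RTE_MIN(info.max_nb_counters, 1024);
attr.nb_aging_objects = RTE_MIN(info.max_nb_aging_objects, 64);
if (rte_flow_configure(port_id, &attr, &error) != 0)
	rte_exit(EXIT_FAILURE, "configure failed: %s\n", error.message);
```

Leaving a field at zero requests on-demand allocation only for that resource, per the ``rte_flow_port_attr`` definition below.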
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 41923f50e6..68b41f2062 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -99,6 +99,12 @@ New Features
   The information of these properties is important for debug.
   As the information is private, a dump function is introduced.
 
+* **Added functions to configure Flow API engine**
+
+  * ethdev: Added ``rte_flow_configure`` API to configure Flow Management
+    engine, allowing to pre-allocate some resources for better performance.
+    Added ``rte_flow_info_get`` API to retrieve available resources.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 6d697a879a..42f0a3981e 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -138,7 +138,12 @@ struct rte_eth_dev_data {
 		 * Indicates whether the device is configured:
 		 * CONFIGURED(1) / NOT CONFIGURED(0)
 		 */
-		dev_configured : 1;
+		dev_configured : 1,
+		/**
+		 * Indicates whether the flow engine is configured:
+		 * CONFIGURED(1) / NOT CONFIGURED(0)
+		 */
+		flow_configured : 1;
 
 	/** Queues state: HAIRPIN(2) / STARTED(1) / STOPPED(0) */
 	uint8_t rx_queue_state[RTE_MAX_QUEUES_PER_PORT];
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7f93900bc8..7ec7a95a6b 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1392,3 +1392,71 @@ rte_flow_flex_item_release(uint16_t port_id,
 	ret = ops->flex_item_release(dev, handle, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (port_info == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
+		return -EINVAL;
+	}
+	if (likely(!!ops->info_get)) {
+		return flow_err(port_id,
+				ops->info_get(dev, port_info, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (dev->data->dev_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (dev->data->dev_started != 0) {
+		RTE_FLOW_LOG(INFO,
+			"Device with port_id=%"PRIu16" already started.\n",
+			port_id);
+		return -EINVAL;
+	}
+	if (port_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" attributes are NULL.\n",
+			     port_id);
+		return -EINVAL;
+	}
+	if (likely(!!ops->configure)) {
+		ret = ops->configure(dev, port_attr, error);
+		if (ret == 0)
+			dev->data->flow_configured = 1;
+		return flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 765beb3e52..7e6f5eba46 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -43,6 +43,9 @@
 extern "C" {
 #endif
 
+#define RTE_FLOW_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__)
+
 /**
  * Flow rule attributes.
  *
@@ -4872,6 +4875,114 @@ rte_flow_flex_item_release(uint16_t port_id,
 			   const struct rte_flow_item_flex_handle *handle,
 			   struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine resources.
+ * The zero value means a resource is not supported.
+ *
+ */
+struct rte_flow_port_info {
+	/**
+	 * Maximum number of counters.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t max_nb_counters;
+	/**
+	 * Maximum number of aging objects.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t max_nb_aging_objects;
+	/**
+	 * Maximum number of traffic meters.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t max_nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get information about flow engine resources.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[out] port_info
+ *   A pointer to a structure of type *rte_flow_port_info*
+ *   to be filled with the resources information of the port.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_info_get(uint16_t port_id,
+		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine resources settings.
+ * A zero value means resources are allocated on demand only.
+ *
+ */
+struct rte_flow_port_attr {
+	/**
+	 * Number of counters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_COUNT
+	 */
+	uint32_t nb_counters;
+	/**
+	 * Number of aging objects to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_AGE
+	 */
+	uint32_t nb_aging_objects;
+	/**
+	 * Number of traffic meters to configure.
+	 * @see RTE_FLOW_ACTION_TYPE_METER
+	 */
+	uint32_t nb_meters;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Configure the port's flow API engine.
+ *
+ * This API can only be invoked before the application
+ * starts using the rest of the flow library functions.
+ *
+ * The API can be invoked multiple times to change the settings.
+ * The port, however, may reject changes and keep the old config.
+ *
+ * Parameters in configuration attributes must not exceed
+ * the resource limits returned by the rte_flow_info_get API.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] port_attr
+ *   Port configuration attributes.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_configure(uint16_t port_id,
+		   const struct rte_flow_port_attr *port_attr,
+		   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index f691b04af4..7c29930d0f 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -152,6 +152,16 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_item_flex_handle *handle,
 		 struct rte_flow_error *error);
+	/** See rte_flow_info_get() */
+	int (*info_get)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_configure() */
+	int (*configure)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_port_attr *port_attr,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index d5cc56a560..0d849c153f 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -264,6 +264,8 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_capability_get;
 	rte_eth_ip_reassembly_conf_get;
 	rte_eth_ip_reassembly_conf_set;
+	rte_flow_info_get;
+	rte_flow_configure;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 02/11] ethdev: add flow item/action templates
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-24  8:34                 ` Andrew Rybchenko
  2022-02-23  3:02               ` [PATCH v10 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
                                 ` (9 subsequent siblings)
  11 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Treating every single flow rule as a completely independent and separate
entity negatively impacts the flow rules insertion rate. Oftentimes in an
application, many flow rules share a common structure (the same item mask
and/or action list) so they can be grouped and classified together.
This knowledge may be used as a source of optimization by a PMD/HW.

The pattern template defines common matching fields (the item mask) without
values. The actions template holds a list of action types that will be used
together in the same rule. The specific values for items and actions will
be given only during the rule creation.

A table combines pattern and actions templates along with shared flow rule
attributes (group ID, priority and traffic direction). This way a PMD/HW
can prepare all the resources needed for efficient flow rules creation in
the datapath. To avoid any hiccups due to memory reallocation, the maximum
number of flow rules is defined at the table creation time.

The flow rule creation is done by selecting a table, a pattern template
and an actions template (which are bound to the table), and setting unique
values for the items and actions.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     | 135 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   8 +
 lib/ethdev/rte_flow.c                  | 252 ++++++++++++++++++++++
 lib/ethdev/rte_flow.h                  | 280 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  37 ++++
 lib/ethdev/version.map                 |   6 +
 6 files changed, 718 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c89161faef..6cdfea09be 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3642,6 +3642,141 @@ Information about the number of available resources can be retrieved via
                      struct rte_flow_port_info *port_info,
                      struct rte_flow_error *error);
 
+Flow templates
+~~~~~~~~~~~~~~
+
+Oftentimes in an application, many flow rules share a common structure
+(the same pattern and/or action list) so they can be grouped and classified
+together. This knowledge may be used as a source of optimization by a PMD/HW.
+The flow rule creation is done by selecting a table, a pattern template
+and an actions template (which are bound to the table), and setting unique
+values for the items and actions. This API is not thread-safe.
+
+Pattern templates
+^^^^^^^^^^^^^^^^^
+
+The pattern template defines a common pattern (the item mask) without values.
+The mask value is used to select a field to match on, spec/last are ignored.
+The pattern template may be used by multiple tables and must not be
+destroyed until all tables that use it have been destroyed.
+
+.. code-block:: c
+
+	struct rte_flow_pattern_template *
+	rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+For example, to create a pattern template to match on the destination MAC:
+
+.. code-block:: c
+
+	const struct rte_flow_pattern_template_attr attr = {.ingress = 1};
+	struct rte_flow_item_eth eth_m = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+	};
+	struct rte_flow_item pattern[] = {
+		[0] = {.type = RTE_FLOW_ITEM_TYPE_ETH,
+		       .mask = &eth_m},
+		[1] = {.type = RTE_FLOW_ITEM_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_pattern_template *pattern_template =
+		rte_flow_pattern_template_create(port, &attr, pattern, &err);
+
+The concrete value to match on will be provided at the rule creation.
+
+Actions templates
+^^^^^^^^^^^^^^^^^
+
+The actions template holds a list of action types to be used in flow rules.
+The mask parameter allows specifying a shared constant value for every rule.
+The actions template may be used by multiple tables and must not be
+destroyed until all tables that use it have been destroyed.
+
+.. code-block:: c
+
+	struct rte_flow_actions_template *
+	rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+For example, to create an actions template with the same Mark ID
+but different Queue Index for every rule:
+
+.. code-block:: c
+
+	struct rte_flow_actions_template_attr attr = {.ingress = 1};
+	struct rte_flow_action act[] = {
+		/* Mark ID is 4 for every rule, Queue Index is unique */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 4}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_action msk[] = {
+		/* Assign to MARK mask any non-zero value to make it constant */
+		[0] = {.type = RTE_FLOW_ACTION_TYPE_MARK,
+		       .conf = &(struct rte_flow_action_mark){.id = 1}},
+		[1] = {.type = RTE_FLOW_ACTION_TYPE_QUEUE},
+		[2] = {.type = RTE_FLOW_ACTION_TYPE_END,},
+	};
+	struct rte_flow_error err;
+
+	struct rte_flow_actions_template *actions_template =
+		rte_flow_actions_template_create(port, &attr, act, msk, &err);
+
+The concrete value for Queue Index will be provided at the rule creation.
+
+Template table
+^^^^^^^^^^^^^^
+
+A template table combines a number of pattern and actions templates along with
+shared flow rule attributes (group ID, priority and traffic direction).
+This way a PMD/HW can prepare all the resources needed for efficient flow rules
+creation in the datapath. To avoid any hiccups due to memory reallocation,
+the maximum number of flow rules is defined at table creation time.
+Any flow rule creation beyond the maximum table size is rejected.
+The application may create another table to accommodate more rules in that case.
+
+.. code-block:: c
+
+	struct rte_flow_template_table *
+	rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+A table can be created only after the flow engine has been configured
+and the pattern and actions templates have been created.
+
+.. code-block:: c
+
+	struct rte_flow_template_table_attr table_attr = {
+		.flow_attr.ingress = 1,
+		.nb_flows = 10000,
+	};
+	uint8_t nb_pattern_templ = 1;
+	struct rte_flow_pattern_template *pattern_templates[nb_pattern_templ];
+	pattern_templates[0] = pattern_template;
+	uint8_t nb_actions_templ = 1;
+	struct rte_flow_actions_template *actions_templates[nb_actions_templ];
+	actions_templates[0] = actions_template;
+	struct rte_flow_error error;
+
+	struct rte_flow_template_table *table =
+		rte_flow_template_table_create(port, &table_attr,
+				pattern_templates, nb_pattern_templ,
+				actions_templates, nb_actions_templ,
+				&error);
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 68b41f2062..8211f5c22c 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -105,6 +105,14 @@ New Features
     engine, allowing to pre-allocate some resources for better performance.
     Added ``rte_flow_info_get`` API to retrieve available resources.
 
+  * ethdev: Added ``rte_flow_template_table_create`` API to group flow rules
+    with the same flow attributes and common matching patterns and actions
+    defined by ``rte_flow_pattern_template_create`` and
+    ``rte_flow_actions_template_create`` respectively.
+    Corresponding functions to destroy these entities are:
+    ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
+    and ``rte_flow_actions_template_destroy``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7ec7a95a6b..1f634637aa 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1460,3 +1460,255 @@ rte_flow_configure(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_pattern_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_STATE,
+				NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (likely(!!ops->pattern_template_create)) {
+		template = ops->pattern_template_create(dev, template_attr,
+							pattern, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(pattern_template == NULL))
+		return 0;
+	if (likely(!!ops->pattern_template_destroy)) {
+		return flow_err(port_id,
+				ops->pattern_template_destroy(dev,
+							      pattern_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+			const struct rte_flow_actions_template_attr *template_attr,
+			const struct rte_flow_action actions[],
+			const struct rte_flow_action masks[],
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_actions_template *template;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (template_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" template attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (masks == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" masks is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (likely(!!ops->actions_template_create)) {
+		template = ops->actions_template_create(dev, template_attr,
+							actions, masks, error);
+		if (template == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return template;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+			struct rte_flow_actions_template *actions_template,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(actions_template == NULL))
+		return 0;
+	if (likely(!!ops->actions_template_destroy)) {
+		return flow_err(port_id,
+				ops->actions_template_destroy(dev,
+							      actions_template,
+							      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+			const struct rte_flow_template_table_attr *table_attr,
+			struct rte_flow_pattern_template *pattern_templates[],
+			uint8_t nb_pattern_templates,
+			struct rte_flow_actions_template *actions_templates[],
+			uint8_t nb_actions_templates,
+			struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_template_table *table;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (dev->data->flow_configured == 0) {
+		RTE_FLOW_LOG(INFO,
+			"Flow engine on port_id=%"PRIu16" is not configured.\n",
+			port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_STATE,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (table_attr == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" table attr is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (pattern_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" pattern templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (actions_templates == NULL) {
+		RTE_FLOW_LOG(ERR,
+			     "Port %"PRIu16" actions templates is NULL.\n",
+			     port_id);
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL, rte_strerror(EINVAL));
+		return NULL;
+	}
+	if (likely(!!ops->template_table_create)) {
+		table = ops->template_table_create(dev, table_attr,
+					pattern_templates, nb_pattern_templates,
+					actions_templates, nb_actions_templates,
+					error);
+		if (table == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return table;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+				struct rte_flow_template_table *template_table,
+				struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (unlikely(template_table == NULL))
+		return 0;
+	if (likely(!!ops->template_table_destroy)) {
+		return flow_err(port_id,
+				ops->template_table_destroy(dev,
+							    template_table,
+							    error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 7e6f5eba46..ffc38fcc3b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4983,6 +4983,286 @@ rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
 		   struct rte_flow_error *error);
 
+/**
+ * Opaque type returned after successful creation of pattern template.
+ * This handle can be used to manage the created pattern template.
+ */
+struct rte_flow_pattern_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow pattern template attributes.
+ */
+__extension__
+struct rte_flow_pattern_template_attr {
+	/**
+	 * Relaxed matching policy.
+	 * - If 1, matching is performed only on items with the mask member set
+	 * and matching on protocol layers specified without any masks is skipped.
+	 * - If 0, matching on protocol layers specified without any masks is done
+	 * as well. This is the default behaviour of the Flow API.
+	 */
+	uint32_t relaxed_matching:1;
+	/**
+	 * Flow direction for the pattern template.
+	 * At least one direction must be specified.
+	 */
+	/** Pattern valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Pattern valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Pattern valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow pattern template.
+ *
+ * The pattern template defines common matching fields without values.
+ * For example, matching on a 5-tuple TCP flow, the template will be
+ * eth(null) + IPv4(source + dest) + TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ * The number and order of items in the template must be the same
+ * at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Pattern template attributes.
+ * @param[in] pattern
+ *   Pattern specification (list terminated by the END pattern item).
+ *   The spec and last members of an item are ignored; only the mask is used.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_pattern_template *
+rte_flow_pattern_template_create(uint16_t port_id,
+		const struct rte_flow_pattern_template_attr *template_attr,
+		const struct rte_flow_item pattern[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow pattern template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] pattern_template
+ *   Handle of the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pattern_template_destroy(uint16_t port_id,
+		struct rte_flow_pattern_template *pattern_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of actions template.
+ * This handle can be used to manage the created actions template.
+ */
+struct rte_flow_actions_template;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow actions template attributes.
+ */
+__extension__
+struct rte_flow_actions_template_attr {
+	/**
+	 * Flow direction for the actions template.
+	 * At least one direction must be specified.
+	 */
+	/** Action valid for rules applied to ingress traffic. */
+	uint32_t ingress:1;
+	/** Action valid for rules applied to egress traffic. */
+	uint32_t egress:1;
+	/** Action valid for rules applied to transfer traffic. */
+	uint32_t transfer:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow actions template.
+ *
+ * The actions template holds a list of action types without values.
+ * For example, the template to change TCP ports is TCP(s_port + d_port),
+ * while values for each rule will be set during the flow rule creation.
+ * The number and order of actions in the template must be the same
+ * at the rule creation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_attr
+ *   Template attributes.
+ * @param[in] actions
+ *   Associated actions (list terminated by the END action).
+ *   The spec member is only used if the corresponding mask member is non-zero.
+ * @param[in] masks
+ *   List of actions that marks which of the action's member is constant.
+ *   A mask has the same format as the corresponding action.
+ *   If the action field in @p masks is not 0,
+ *   the corresponding value in an action from @p actions will be the part
+ *   of the template and used in all flow rules.
+ *   The order of actions in @p masks is the same as in @p actions.
+ *   In case of indirect actions present in @p actions,
+ *   the actual action type must be present in @p masks.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_actions_template *
+rte_flow_actions_template_create(uint16_t port_id,
+		const struct rte_flow_actions_template_attr *template_attr,
+		const struct rte_flow_action actions[],
+		const struct rte_flow_action masks[],
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow actions template.
+ *
+ * This function may be called only when
+ * there are no more tables referencing this template.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions_template
+ *   Handle to the template to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_actions_template_destroy(uint16_t port_id,
+		struct rte_flow_actions_template *actions_template,
+		struct rte_flow_error *error);
+
+/**
+ * Opaque type returned after successful creation of a template table.
+ * This handle can be used to manage the created template table.
+ */
+struct rte_flow_template_table;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Table attributes.
+ */
+struct rte_flow_template_table_attr {
+	/**
+	 * Flow attributes to be used in each rule generated from this table.
+	 */
+	struct rte_flow_attr flow_attr;
+	/**
+	 * Maximum number of flow rules that this table holds.
+	 */
+	uint32_t nb_flows;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Create flow template table.
+ *
+ * A template table consists of multiple pattern templates and actions
+ * templates associated with a single set of rule attributes (group ID,
+ * priority and traffic direction).
+ *
+ * Each rule is free to use any combination of pattern and actions templates
+ * and specify particular values for items and actions it would like to change.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] table_attr
+ *   Template table attributes.
+ * @param[in] pattern_templates
+ *   Array of pattern templates to be used in this table.
+ * @param[in] nb_pattern_templates
+ *   The number of pattern templates in the pattern_templates array.
+ * @param[in] actions_templates
+ *   Array of actions templates to be used in this table.
+ * @param[in] nb_actions_templates
+ *   The number of actions templates in the actions_templates array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_template_table *
+rte_flow_template_table_create(uint16_t port_id,
+		const struct rte_flow_template_table_attr *table_attr,
+		struct rte_flow_pattern_template *pattern_templates[],
+		uint8_t nb_pattern_templates,
+		struct rte_flow_actions_template *actions_templates[],
+		uint8_t nb_actions_templates,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Destroy flow template table.
+ *
+ * This function may be called only when
+ * there are no more flow rules referencing this table.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] template_table
+ *   Handle to the table to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_template_table_destroy(uint16_t port_id,
+		struct rte_flow_template_table *template_table,
+		struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 7c29930d0f..2d96db1dc7 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -162,6 +162,43 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
 		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_create() */
+	struct rte_flow_pattern_template *(*pattern_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_pattern_template_attr *template_attr,
+		 const struct rte_flow_item pattern[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_pattern_template_destroy() */
+	int (*pattern_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_pattern_template *pattern_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_create() */
+	struct rte_flow_actions_template *(*actions_template_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_actions_template_attr *template_attr,
+		 const struct rte_flow_action actions[],
+		 const struct rte_flow_action masks[],
+		 struct rte_flow_error *err);
+	/** See rte_flow_actions_template_destroy() */
+	int (*actions_template_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_actions_template *actions_template,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_create() */
+	struct rte_flow_template_table *(*template_table_create)
+		(struct rte_eth_dev *dev,
+		 const struct rte_flow_template_table_attr *table_attr,
+		 struct rte_flow_pattern_template *pattern_templates[],
+		 uint8_t nb_pattern_templates,
+		 struct rte_flow_actions_template *actions_templates[],
+		 uint8_t nb_actions_templates,
+		 struct rte_flow_error *err);
+	/** See rte_flow_template_table_destroy() */
+	int (*template_table_destroy)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_template_table *template_table,
+		 struct rte_flow_error *err);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 0d849c153f..62ff791261 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -266,6 +266,12 @@ EXPERIMENTAL {
 	rte_eth_ip_reassembly_conf_set;
 	rte_flow_info_get;
 	rte_flow_configure;
+	rte_flow_pattern_template_create;
+	rte_flow_pattern_template_destroy;
+	rte_flow_actions_template_create;
+	rte_flow_actions_template_destroy;
+	rte_flow_template_table_create;
+	rte_flow_template_table_destroy;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 03/11] ethdev: bring in async queue-based flow rules operations
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 02/11] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-24  8:35                 ` Andrew Rybchenko
  2022-02-23  3:02               ` [PATCH v10 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
                                 ` (8 subsequent siblings)
  11 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of the flow rules creation/destruction
on the datapath. Note that queues are not thread-safe and the queue
should be accessed from the same thread for all queue operations.
It is the responsibility of the app to sync the queue functions in case
of multi-threaded access to the same queue.

The rte_flow_async_create() function enqueues a flow creation to the
requested queue. It benefits from already configured resources and sets
unique values on top of item and action templates. A flow rule is enqueued
on the specified flow queue and offloaded asynchronously to the hardware.
The function returns immediately to spare CPU for further packet
processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction to the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 .../prog_guide/img/rte_flow_async_init.svg    | 205 ++++++++++
 .../prog_guide/img/rte_flow_async_usage.svg   | 354 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst            | 124 ++++++
 doc/guides/rel_notes/release_22_03.rst        |   7 +
 lib/ethdev/rte_flow.c                         |  83 +++-
 lib/ethdev/rte_flow.h                         | 241 ++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  35 ++
 lib/ethdev/version.map                        |   4 +
 8 files changed, 1051 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_async_init.svg b/doc/guides/prog_guide/img/rte_flow_async_init.svg
new file mode 100644
index 0000000000..f66e9c73d7
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_init.svg
@@ -0,0 +1,205 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="485"
+   height="535"
+   overflow="hidden"
+   version="1.1"
+   id="svg61"
+   sodipodi:docname="rte_flow_async_init.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview63"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.517757"
+     inkscape:cx="242.79249"
+     inkscape:cy="267.17057"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="2391"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g59" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="485"
+         height="535"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g59">
+    <rect
+       x="0"
+       y="0"
+       width="485"
+       height="535"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="0.500053"
+       y="79.5001"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(121.6 116)"
+       id="text13">
+         rte_eth_dev_configure
+         <tspan
+   font-size="24"
+   x="224.007"
+   y="0"
+   id="tspan11">()</tspan></text>
+    <rect
+       x="0.500053"
+       y="158.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect15" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="24"
+       transform="translate(140.273 195)"
+       id="text17">
+         rte_flow_configure()
+      </text>
+    <rect
+       x="0.500053"
+       y="236.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text21"
+       x="63.425903"
+       y="274">rte_flow_pattern_template_create()</text>
+    <rect
+       x="0.500053"
+       y="316.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect23" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text27"
+       x="69.379204"
+       y="353">rte_flow_actions_template_create()</text>
+    <rect
+       x="0.500053"
+       y="0.500053"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect29" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       transform="translate(177.233,37)"
+       id="text33">rte_eal_init()</text>
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 60)"
+       id="path35" />
+    <path
+       d="M2-1.08133e-05 2.00005 9.41805-1.99995 9.41807-2 1.08133e-05ZM6.00004 7.41802 0.000104987 19.4181-5.99996 7.41809Z"
+       transform="matrix(-1 0 0 1 241 138)"
+       id="path37" />
+    <path
+       d="M2-1.09108e-05 2.00005 9.2445-1.99995 9.24452-2 1.09108e-05ZM6.00004 7.24448 0.000104987 19.2445-5.99996 7.24455Z"
+       transform="matrix(-1 0 0 1 241 217)"
+       id="path39" />
+    <rect
+       x="0.500053"
+       y="395.5"
+       width="482"
+       height="59"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#FFFFFF"
+       id="rect41" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text47"
+       x="76.988998"
+       y="432">rte_flow_template_table_create()</text>
+    <path
+       d="M2-1.05859e-05 2.00005 9.83526-1.99995 9.83529-2 1.05859e-05ZM6.00004 7.83524 0.000104987 19.8353-5.99996 7.83531Z"
+       transform="matrix(-1 0 0 1 241 296)"
+       id="path49" />
+    <path
+       d="M243 375 243 384.191 239 384.191 239 375ZM247 382.191 241 394.191 235 382.191Z"
+       id="path51" />
+    <rect
+       x="0.500053"
+       y="473.5"
+       width="482"
+       height="60"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect53" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="24px"
+       id="text55"
+       x="149.30299"
+       y="511">rte_eth_dev_start()</text>
+    <path
+       d="M245 454 245 463.191 241 463.191 241 454ZM249 461.191 243 473.191 237 461.191Z"
+       id="path57" />
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/img/rte_flow_async_usage.svg b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
new file mode 100644
index 0000000000..bb978bca1e
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
@@ -0,0 +1,354 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- SPDX-License-Identifier: BSD-3-Clause -->
+
+<!-- Copyright(c) 2022 NVIDIA Corporation & Affiliates -->
+
+<svg
+   width="880"
+   height="610"
+   overflow="hidden"
+   version="1.1"
+   id="svg103"
+   sodipodi:docname="rte_flow_async_usage.svg"
+   inkscape:version="1.1.1 (3bf5ae0d25, 2021-09-20)"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:svg="http://www.w3.org/2000/svg">
+  <sodipodi:namedview
+     id="namedview105"
+     pagecolor="#ffffff"
+     bordercolor="#666666"
+     borderopacity="1.0"
+     inkscape:pageshadow="2"
+     inkscape:pageopacity="0.0"
+     inkscape:pagecheckerboard="0"
+     showgrid="false"
+     inkscape:zoom="1.3311475"
+     inkscape:cx="439.84607"
+     inkscape:cy="305.37563"
+     inkscape:window-width="2400"
+     inkscape:window-height="1271"
+     inkscape:window-x="-9"
+     inkscape:window-y="-9"
+     inkscape:window-maximized="1"
+     inkscape:current-layer="g101" />
+  <defs
+     id="defs5">
+    <clipPath
+       id="clip0">
+      <rect
+         x="0"
+         y="0"
+         width="880"
+         height="610"
+         id="rect2" />
+    </clipPath>
+  </defs>
+  <g
+     clip-path="url(#clip0)"
+     id="g101">
+    <rect
+       x="0"
+       y="0"
+       width="880"
+       height="610"
+       fill="#FFFFFF"
+       id="rect7" />
+    <rect
+       x="333.5"
+       y="0.500053"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#A6A6A6"
+       id="rect9" />
+    <text
+       font-family="Consolas, Consolas_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       transform="translate(357.196,29)"
+       id="text11">rte_eth_rx_burst()</text>
+    <rect
+       x="333.5"
+       y="63.5001"
+       width="234"
+       height="45"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect13" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(394.666 91)"
+       id="text17">analyze <tspan
+   font-size="19"
+   x="60.9267"
+   y="0"
+   id="tspan15">packet </tspan></text>
+    <rect
+       x="587.84119"
+       y="279.47534"
+       width="200.65393"
+       height="46.049305"
+       stroke="#000000"
+       stroke-width="1.20888"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect19" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text21"
+       x="595.42902"
+       y="308">rte_flow_async_create()</text>
+    <path
+       d="M333.5 384 450.5 350.5 567.5 384 450.5 417.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path23" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(430.069 378)"
+       id="text27">more <tspan
+   font-size="19"
+   x="-12.94"
+   y="23"
+   id="tspan25">packets?</tspan></text>
+    <path
+       d="M689.249 325.5 689.249 338.402 450.5 338.402 450.833 338.069 450.833 343.971 450.167 343.971 450.167 337.735 688.916 337.735 688.582 338.069 688.582 325.5ZM454.5 342.638 450.5 350.638 446.5 342.638Z"
+       id="path29" />
+    <path
+       d="M450.833 45.5 450.833 56.8197 450.167 56.8197 450.167 45.5001ZM454.5 55.4864 450.5 63.4864 446.5 55.4864Z"
+       id="path31" />
+    <path
+       d="M450.833 108.5 450.833 120.375 450.167 120.375 450.167 108.5ZM454.5 119.041 450.5 127.041 446.5 119.041Z"
+       id="path33" />
+    <path
+       d="M451.833 507.5 451.833 533.61 451.167 533.61 451.167 507.5ZM455.5 532.277 451.5 540.277 447.5 532.277Z"
+       id="path35" />
+    <path
+       d="M0 0.333333-23.9993 0.333333-23.666 0-23.666 141.649-23.9993 141.316 562.966 141.316 562.633 141.649 562.633 124.315 563.299 124.315 563.299 141.983-24.3327 141.983-24.3327-0.333333 0-0.333333ZM558.966 125.649 562.966 117.649 566.966 125.649Z"
+       transform="matrix(-6.12323e-17 -1 -1 6.12323e-17 451.149 585.466)"
+       id="path37" />
+    <path
+       d="M333.5 160.5 450.5 126.5 567.5 160.5 450.5 194.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path39" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(417.576 155)"
+       id="text43">add new <tspan
+   font-size="19"
+   x="13.2867"
+   y="23"
+   id="tspan41">rule?</tspan></text>
+    <path
+       d="M567.5 160.167 689.267 160.167 689.267 273.228 688.6 273.228 688.6 160.5 688.933 160.833 567.5 160.833ZM692.933 271.894 688.933 279.894 684.933 271.894Z"
+       id="path45" />
+    <rect
+       x="602.5"
+       y="127.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect47" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(611.34 148)"
+       id="text49">yes</text>
+    <rect
+       x="254.5"
+       y="126.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect51" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(267.182 147)"
+       id="text53">no</text>
+    <path
+       d="M0-0.333333 251.563-0.333333 251.563 298.328 8.00002 298.328 8.00002 297.662 251.229 297.662 250.896 297.995 250.896 0 251.229 0.333333 0 0.333333ZM9.33333 301.995 1.33333 297.995 9.33333 293.995Z"
+       transform="matrix(1 0 0 -1 567.5 383.495)"
+       id="path55" />
+    <path
+       d="M86.5001 213.5 203.5 180.5 320.5 213.5 203.5 246.5Z"
+       stroke="#000000"
+       stroke-width="1.33333"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       fill-rule="evenodd"
+       id="path57" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(159.155 208)"
+       id="text61">destroy the <tspan
+   font-size="19"
+   x="24.0333"
+   y="23"
+   id="tspan59">rule?</tspan></text>
+    <path
+       d="M0-0.333333 131.029-0.333333 131.029 12.9778 130.363 12.9778 130.363 0 130.696 0.333333 0 0.333333ZM134.696 11.6445 130.696 19.6445 126.696 11.6445Z"
+       transform="matrix(-1 1.22465e-16 1.22465e-16 1 334.196 160.5)"
+       id="path63" />
+    <rect
+       x="92.600937"
+       y="280.48242"
+       width="210.14578"
+       height="45.035149"
+       stroke="#000000"
+       stroke-width="1.24464"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect65" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text67"
+       x="100.2282"
+       y="308">rte_flow_async_destroy()</text>
+    <path
+       d="M0 0.333333-24.0001 0.333333-23.6667 0-23.6667 49.9498-24.0001 49.6165 121.748 49.6165 121.748 59.958 121.082 59.958 121.082 49.9498 121.415 50.2832-24.3334 50.2832-24.3334-0.333333 0-0.333333ZM125.415 58.6247 121.415 66.6247 117.415 58.6247Z"
+       transform="matrix(-1 0 0 1 319.915 213.5)"
+       id="path69" />
+    <path
+       d="M86.5001 213.833 62.5002 213.833 62.8335 213.5 62.8335 383.95 62.5002 383.617 327.511 383.617 327.511 384.283 62.1668 384.283 62.1668 213.167 86.5001 213.167ZM326.178 379.95 334.178 383.95 326.178 387.95Z"
+       id="path71" />
+    <path
+       d="M0-0.333333 12.8273-0.333333 12.8273 252.111 12.494 251.778 18.321 251.778 18.321 252.445 12.1607 252.445 12.1607 0 12.494 0.333333 0 0.333333ZM16.9877 248.111 24.9877 252.111 16.9877 256.111Z"
+       transform="matrix(1.83697e-16 1 1 -1.83697e-16 198.5 325.5)"
+       id="path73" />
+    <rect
+       x="357.15436"
+       y="540.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect75" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text77"
+       x="393.08301"
+       y="569">rte_flow_pull()</text>
+    <rect
+       x="357.15436"
+       y="462.45984"
+       width="183.59026"
+       height="45.08033"
+       stroke="#000000"
+       stroke-width="1.25785"
+       stroke-miterlimit="8"
+       fill="#ffffff"
+       id="rect79" />
+    <text
+       font-family="Calibri, Calibri_MSFontService, sans-serif"
+       font-weight="400"
+       font-size="19px"
+       id="text81"
+       x="389.19"
+       y="491">rte_flow_push()</text>
+    <path
+       d="M450.833 417.495 451.402 455.999 450.735 456.008 450.167 417.505ZM455.048 454.611 451.167 462.669 447.049 454.729Z"
+       id="path83" />
+    <rect
+       x="0.500053"
+       y="287.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect85" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(12.8617 308)"
+       id="text87">no</text>
+    <rect
+       x="357.5"
+       y="223.5"
+       width="47"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect89" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(367.001 244)"
+       id="text91">yes</text>
+    <rect
+       x="469.5"
+       y="421.5"
+       width="46"
+       height="30"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect93" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(481.872 442)"
+       id="text95">no</text>
+    <rect
+       x="832.5"
+       y="223.5"
+       width="46"
+       height="31"
+       stroke="#000000"
+       stroke-width="0.666667"
+       stroke-miterlimit="8"
+       fill="#D9D9D9"
+       id="rect97" />
+    <text
+       font-family="Calibri,Calibri_MSFontService,sans-serif"
+       font-weight="400"
+       font-size="19"
+       transform="translate(841.777 244)"
+       id="text99">yes</text>
+  </g>
+</svg>
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 6cdfea09be..c6f6f0afba 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3624,12 +3624,16 @@ Expected number of resources in an application allows PMD to prepare
 and optimize NIC hardware configuration and memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API; see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
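+
+A minimal initialization sketch. The resource count and the queue size below
+are illustrative assumptions, not recommended values:
+
+.. code-block:: c
+
+   struct rte_flow_port_attr port_attr = { .nb_counters = 1024 };
+   struct rte_flow_queue_attr queue_attr = { .size = 64 };
+   const struct rte_flow_queue_attr *attr_list[] = { &queue_attr };
+   struct rte_flow_error error;
+
+   /* Configure one flow queue able to hold 64 in-flight operations. */
+   if (rte_flow_configure(port_id, &port_attr, 1, attr_list, &error) != 0)
+      rte_panic("Failed to configure the flow engine\n");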
 
 Information about the number of available resources can be retrieved via
@@ -3640,6 +3644,7 @@ Information about the number of available resources can be retrieved via
    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+                     struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);
 
 Flow templates
@@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
 				&actions_templates, nb_actions_templ,
 				&error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the app's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of flow queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to the NIC in batches.
+
+- Results must be pulled in a timely manner to avoid queue overflows.
+
+- User data is returned as part of the result to identify an operation.
+
+- A flow handle is valid once the creation operation is enqueued; it must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+- The application must wait for the creation operation result before enqueueing
+  the deletion operation to make sure the creation is processed by the NIC.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_async_init:
+
+.. figure:: img/rte_flow_async_init.*
+
+2. Main loop as presented in a datapath application example:
+
+.. _figure_rte_flow_async_usage:
+
+.. figure:: img/rte_flow_async_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+	struct rte_flow *
+	rte_flow_async_create(uint16_t port_id,
+			      uint32_t queue_id,
+			      const struct rte_flow_op_attr *op_attr,
+			      struct rte_flow_template_table *template_table,
+			      const struct rte_flow_item pattern[],
+			      uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      uint8_t actions_template_index,
+			      void *user_data,
+			      struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
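+
+A creation sketch on top of the templates prepared at the initialization
+stage. ``table``, ``queue_id``, the ``items``/``actions`` arrays and
+``handle_enqueue_failure()`` are application-defined placeholders here:
+
+.. code-block:: c
+
+   struct rte_flow_op_attr op_attr = { .postpone = 0 };
+   struct rte_flow *flow;
+
+   /* Set unique values on top of pattern template 0 and actions template 0. */
+   flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
+                                items, 0, actions, 0,
+                                NULL /* user_data */, &error);
+   if (flow == NULL)
+      /* The queue may be full; pull completed results and retry. */
+      handle_enqueue_failure(port_id, queue_id);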
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_destroy(uint16_t port_id,
+			       uint32_t queue_id,
+			       const struct rte_flow_op_attr *op_attr,
+			       struct rte_flow *flow,
+			       void *user_data,
+			       struct rte_flow_error *error);
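+
+A destruction sketch; the handle must be enqueued for destruction even if
+the rule was rejected by HW:
+
+.. code-block:: c
+
+   struct rte_flow_op_attr op_attr = { .postpone = 0 };
+
+   /* user_data is returned with the operation result by rte_flow_pull(). */
+   rte_flow_async_destroy(port_id, queue_id, &op_attr,
+                          flow, NULL /* user_data */, &error);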
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+	int
+	rte_flow_push(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_error *error);
+
+There is a ``postpone`` attribute in the queue operation attributes.
+When it is set, multiple operations can be bulked together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
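+
+A batching sketch: enqueue several postponed operations, then push them to
+the NIC with a single doorbell. ``nb_rules``, ``table``, ``items`` and
+``actions`` are application-defined placeholders:
+
+.. code-block:: c
+
+   struct rte_flow_op_attr op_attr = { .postpone = 1 };
+   uint32_t i;
+
+   /* Enqueue a batch of rules without notifying the HW for each one. */
+   for (i = 0; i < nb_rules; i++)
+      rte_flow_async_create(port_id, queue_id, &op_attr, table,
+                            items[i], 0, actions[i], 0, NULL, &error);
+   /* Push the whole batch to the NIC in one SW/HW interaction. */
+   rte_flow_push(port_id, queue_id, &error);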
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling asynchronous operation results.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operation statuses.
+
+.. code-block:: c
+
+	int
+	rte_flow_pull(uint16_t port_id,
+		      uint32_t queue_id,
+		      struct rte_flow_op_result res[],
+		      uint16_t n_res,
+		      struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
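+
+A completion-polling sketch. ``MAX_RESULTS`` and ``handle_failed_op()`` are
+application-defined placeholders, not part of the API:
+
+.. code-block:: c
+
+   struct rte_flow_op_result results[MAX_RESULTS];
+   int n, i;
+
+   /* Retrieve the statuses of up to MAX_RESULTS completed operations. */
+   n = rte_flow_pull(port_id, queue_id, results, MAX_RESULTS, &error);
+   for (i = 0; i < n; i++)
+      if (results[i].status != RTE_FLOW_OP_SUCCESS)
+         /* user_data identifies which operation failed. */
+         handle_failed_op(results[i].user_data);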
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 8211f5c22c..2477f53ca6 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -113,6 +113,13 @@ New Features
     ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
 
+* **Added functions for asynchronous flow rules creation/destruction.**
+
+  * ethdev: Added ``rte_flow_async_create`` and ``rte_flow_async_destroy`` API
+    to enqueue flow creation/destruction operations asynchronously as well as
+    ``rte_flow_pull`` to poll and retrieve results of these operations and
+    ``rte_flow_push`` to push all the in-flight operations to the NIC.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 1f634637aa..c314129870 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
 	}
 	if (likely(!!ops->info_get)) {
 		return flow_err(port_id,
-				ops->info_get(dev, port_info, error),
+				ops->info_get(dev, port_info, queue_info, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1426,6 +1427,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1450,8 +1453,12 @@ rte_flow_configure(uint16_t port_id,
 		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
 		return -EINVAL;
 	}
+	if (queue_attr == NULL) {
+		RTE_FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.\n", port_id);
+		return -EINVAL;
+	}
 	if (likely(!!ops->configure)) {
-		ret = ops->configure(dev, port_attr, error);
+		ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
 		if (ret == 0)
 			dev->data->flow_configured = 1;
 		return flow_err(port_id, ret, error);
@@ -1712,3 +1719,75 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_op_attr *op_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	flow = ops->async_create(dev, queue_id,
+				 op_attr, template_table,
+				 pattern, pattern_template_index,
+				 actions, actions_template_index,
+				 user_data, error);
+	if (flow == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return flow;
+}
+
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_op_attr *op_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	return flow_err(port_id,
+			ops->async_destroy(dev, queue_id,
+					   op_attr, flow,
+					   user_data, error),
+			error);
+}
+
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	return flow_err(port_id,
+			ops->push(dev, queue_id, error),
+			error);
+}
+
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_op_result res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	ret = ops->pull(dev, queue_id, res, n_res, error);
+	return ret ? ret : flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index ffc38fcc3b..3fb7cb03ae 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
+	 * Maximum number of queues for asynchronous operations.
+	 */
+	uint32_t max_nb_queues;
 	/**
 	 * Maximum number of counters.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4901,6 +4905,21 @@ struct rte_flow_port_info {
 	uint32_t max_nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine asynchronous queues.
+ * The value is only valid if @p port_info->max_nb_queues is not zero.
+ *
+ */
+struct rte_flow_queue_info {
+	/**
+	 * Maximum number of operations a queue can hold.
+	 */
+	uint32_t max_size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4912,6 +4931,9 @@ struct rte_flow_port_info {
  * @param[out] port_info
  *   A pointer to a structure of type *rte_flow_port_info*
  *   to be filled with the resources information of the port.
+ * @param[out] queue_info
+ *   A pointer to a structure of type *rte_flow_queue_info*
+ *   to be filled with the asynchronous queues information.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4923,6 +4945,7 @@ __rte_experimental
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error);
 
 /**
@@ -4951,6 +4974,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine asynchronous queues settings.
+ * A zero value means the default value is picked by the PMD.
+ *
+ */
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4970,6 +5008,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   The number of elements is given in @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4981,6 +5024,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5263,6 +5308,202 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Asynchronous operation attributes.
+ */
+__extension__
+struct rte_flow_op_attr {
+	 /**
+	  * When set, the requested action will not be sent to the HW immediately.
+	  * The application must call rte_flow_push() to actually send it.
+	  */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] op_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   A returned handle does not mean that the rule has been populated in the HW.
+ *   Only the completion result indicates whether the operation succeeded or failed.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_op_attr *op_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] op_attr
+ *   Rule destruction operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_op_attr *op_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were enqueued with the postpone flag set.
+ * This can be used to notify the HW about a batch of rules prepared by the SW
+ * to reduce the number of communications between the HW and the SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Asynchronous operation status.
+ */
+enum rte_flow_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Asynchronous operation result.
+ */
+__extension__
+struct rte_flow_op_result {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull completed operations from a flow queue.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_op_result res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..5907dd63c3 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,11 +156,14 @@ struct rte_flow_ops {
 	int (*info_get)
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_queue_info *queue_info,
 		 struct rte_flow_error *err);
 	/** See rte_flow_configure() */
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +202,38 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_async_create() */
+	struct rte_flow *(*async_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_destroy() */
+	int (*async_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow *flow,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_push() */
+	int (*push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_pull() */
+	int (*pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_op_result res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 62ff791261..13c1a22118 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -272,6 +272,10 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_async_create;
+	rte_flow_async_destroy;
+	rte_flow_push;
+	rte_flow_pull;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 04/11] ethdev: bring in async indirect actions operations
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (2 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-24  8:37                 ` Andrew Rybchenko
  2022-02-23  3:02               ` [PATCH v10 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
                                 ` (7 subsequent siblings)
  11 siblings, 1 reply; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

The queue-based flow rules management mechanism is suitable
not only for flow rule creation/destruction, but also
for speeding up other types of Flow API management.
Indirect action object operations may be executed
asynchronously as well. Provide async versions for all
indirect action operations, namely:
rte_flow_async_action_handle_create,
rte_flow_async_action_handle_destroy and
rte_flow_async_action_handle_update.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/prog_guide/rte_flow.rst     |  50 ++++++++++++
 doc/guides/rel_notes/release_22_03.rst |   5 ++
 lib/ethdev/rte_flow.c                  |  55 +++++++++++++
 lib/ethdev/rte_flow.h                  | 109 +++++++++++++++++++++++++
 lib/ethdev/rte_flow_driver.h           |  26 ++++++
 lib/ethdev/version.map                 |   3 +
 6 files changed, 248 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index c6f6f0afba..8148531073 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3861,6 +3861,56 @@ Enqueueing a flow rule destruction operation is similar to simple destruction.
 			       void *user_data,
 			       struct rte_flow_error *error);
 
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action creation API.
+
+.. code-block:: c
+
+	struct rte_flow_action_handle *
+	rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later by
+``rte_flow_async_action_handle_destroy()`` even if the action was rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action destruction API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of indirect action update API.
+
+.. code-block:: c
+
+	int
+	rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
+
 Push enqueued operations
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 2477f53ca6..da186315a5 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -120,6 +120,11 @@ New Features
     ``rte_flow_pull`` to poll and retrieve results of these operations and
     ``rte_flow_push`` to push all the in-flight	operations to the NIC.
 
+  * ethdev: Added asynchronous API for indirect actions management:
+    ``rte_flow_async_action_handle_create``,
+    ``rte_flow_async_action_handle_destroy`` and
+    ``rte_flow_async_action_handle_update``.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index c314129870..2c35a2f13e 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1791,3 +1791,58 @@ rte_flow_pull(uint16_t port_id,
 	ret = ops->pull(dev, queue_id, res, n_res, error);
 	return ret ? ret : flow_err(port_id, ret, error);
 }
+
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow_action_handle *handle;
+
+	handle = ops->async_action_handle_create(dev, queue_id, op_attr,
+					     indir_action_conf, action, user_data, error);
+	if (handle == NULL)
+		flow_err(port_id, -rte_errno, error);
+	return handle;
+}
+
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	ret = ops->async_action_handle_destroy(dev, queue_id, op_attr,
+					   action_handle, user_data, error);
+	return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	ret = ops->async_action_handle_update(dev, queue_id, op_attr,
+					  action_handle, update, user_data, error);
+	return flow_err(port_id, ret, error);
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 3fb7cb03ae..d8827dd184 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -5504,6 +5504,115 @@ rte_flow_pull(uint16_t port_id,
 	      uint16_t n_res,
 	      struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the action.
+ * @param[in] op_attr
+ *   Indirect action creation operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_async_action_handle_create(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the action.
+ * @param[in] op_attr
+ *   Indirect action destruction operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_destroy(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the action.
+ * @param[in] op_attr
+ *   Indirect action update operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed to by
+ *   @p action_handle. *update* is either of the same type as the immediate
+ *   action used when the *handle* was created, or a wrapper structure that
+ *   includes the action configuration to be updated and bit fields indicating
+ *   which members of the action to update.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_action_handle_update(uint16_t port_id,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 5907dd63c3..2bff732d6a 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -234,6 +234,32 @@ struct rte_flow_ops {
 		 struct rte_flow_op_result res[],
 		 uint16_t n_res,
 		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_create() */
+	struct rte_flow_action_handle *(*async_action_handle_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 const struct rte_flow_indir_action_conf *indir_action_conf,
+		 const struct rte_flow_action *action,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_action_handle_destroy() */
+	int (*async_action_handle_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 void *user_data,
+		 struct rte_flow_error *error);
+	/** See rte_flow_async_action_handle_update() */
+	int (*async_action_handle_update)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 struct rte_flow_action_handle *action_handle,
+		 const void *update,
+		 void *user_data,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 13c1a22118..20391ab29e 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -276,6 +276,9 @@ EXPERIMENTAL {
 	rte_flow_async_destroy;
 	rte_flow_push;
 	rte_flow_pull;
+	rte_flow_async_action_handle_create;
+	rte_flow_async_action_handle_destroy;
+	rte_flow_async_action_handle_update;
 };
 
 INTERNAL {
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 05/11] app/testpmd: add flow engine configuration
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (3 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 06/11] app/testpmd: add flow template management Alexander Kozyrev
                                 ` (6 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_configure API.
Provide a command line interface for flow management.
Usage example: flow configure 0 queues_number 8 queues_size 256

Implement rte_flow_info_get API to get available resources:
Usage example: flow info 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 126 +++++++++++++++++++-
 app/test-pmd/config.c                       |  61 ++++++++++
 app/test-pmd/testpmd.h                      |   7 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  61 +++++++++-
 4 files changed, 252 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index c0644d678c..0533a33ca2 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -72,6 +72,8 @@ enum index {
 	/* Top-level command. */
 	FLOW,
 	/* Sub-level commands. */
+	INFO,
+	CONFIGURE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -122,6 +124,13 @@ enum index {
 	DUMP_ALL,
 	DUMP_ONE,
 
+	/* Configure arguments */
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+
 	/* Indirect action arguments */
 	INDIRECT_ACTION_CREATE,
 	INDIRECT_ACTION_UPDATE,
@@ -868,6 +877,11 @@ struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
 	union {
+		struct {
+			struct rte_flow_port_attr port_attr;
+			uint32_t nb_queue;
+			struct rte_flow_queue_attr queue_attr;
+		} configure; /**< Configuration arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -949,6 +963,16 @@ static const enum index next_flex_item[] = {
 	ZERO,
 };
 
+static const enum index next_config_attr[] = {
+	CONFIG_QUEUES_NUMBER,
+	CONFIG_QUEUES_SIZE,
+	CONFIG_COUNTERS_NUMBER,
+	CONFIG_AGING_OBJECTS_NUMBER,
+	CONFIG_METERS_NUMBER,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2045,6 +2069,9 @@ static int parse_aged(struct context *, const struct token *,
 static int parse_isolate(struct context *, const struct token *,
 			 const char *, unsigned int,
 			 void *, unsigned int);
+static int parse_configure(struct context *, const struct token *,
+			   const char *, unsigned int,
+			   void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2270,7 +2297,9 @@ static const struct token token_list[] = {
 		.type = "{command} {port_id} [{arg} [...]]",
 		.help = "manage ingress/egress flow rules",
 		.next = NEXT(NEXT_ENTRY
-			     (INDIRECT_ACTION,
+			     (INFO,
+			      CONFIGURE,
+			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
 			      DESTROY,
@@ -2285,6 +2314,65 @@ static const struct token token_list[] = {
 		.call = parse_init,
 	},
 	/* Top-level command. */
+	[INFO] = {
+		.name = "info",
+		.help = "get information about flow engine",
+		.next = NEXT(NEXT_ENTRY(END),
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Top-level command. */
+	[CONFIGURE] = {
+		.name = "configure",
+		.help = "configure flow engine",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_configure,
+	},
+	/* Configure arguments. */
+	[CONFIG_QUEUES_NUMBER] = {
+		.name = "queues_number",
+		.help = "number of queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.nb_queue)),
+	},
+	[CONFIG_QUEUES_SIZE] = {
+		.name = "queues_size",
+		.help = "number of elements in queues",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.queue_attr.size)),
+	},
+	[CONFIG_COUNTERS_NUMBER] = {
+		.name = "counters_number",
+		.help = "number of counters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_counters)),
+	},
+	[CONFIG_AGING_OBJECTS_NUMBER] = {
+		.name = "aging_counters_number",
+		.help = "number of aging objects",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_aging_objects)),
+	},
+	[CONFIG_METERS_NUMBER] = {
+		.name = "meters_number",
+		.help = "number of meters",
+		.next = NEXT(next_config_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.configure.port_attr.nb_meters)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -7736,6 +7824,33 @@ parse_isolate(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for info/configure command. */
+static int
+parse_configure(struct context *ctx, const struct token *token,
+		const char *str, unsigned int len,
+		void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != INFO && ctx->curr != CONFIGURE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8964,6 +9079,15 @@ static void
 cmd_flow_parsed(const struct buffer *in)
 {
 	switch (in->command) {
+	case INFO:
+		port_flow_get_info(in->port);
+		break;
+	case CONFIGURE:
+		port_flow_configure(in->port,
+				    &in->args.configure.port_attr,
+				    in->args.configure.nb_queue,
+				    &in->args.configure.queue_attr);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index de1ec14bc7..33a85cd7ca 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,67 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+/** Get info about flow management resources. */
+int
+port_flow_get_info(portid_t port_id)
+{
+	struct rte_flow_port_info port_info;
+	struct rte_flow_queue_info queue_info;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x99, sizeof(error));
+	memset(&port_info, 0, sizeof(port_info));
+	memset(&queue_info, 0, sizeof(queue_info));
+	if (rte_flow_info_get(port_id, &port_info, &queue_info, &error))
+		return port_flow_complain(&error);
+	printf("Flow engine resources on port %u:\n"
+	       "Number of queues: %d\n"
+	       "Size of queues: %d\n"
+	       "Number of counters: %d\n"
+	       "Number of aging objects: %d\n"
+	       "Number of meter actions: %d\n",
+	       port_id, port_info.max_nb_queues,
+	       queue_info.max_size,
+	       port_info.max_nb_counters,
+	       port_info.max_nb_aging_objects,
+	       port_info.max_nb_meters);
+	return 0;
+}
+
+/** Configure flow management resources. */
+int
+port_flow_configure(portid_t port_id,
+	const struct rte_flow_port_attr *port_attr,
+	uint16_t nb_queue,
+	const struct rte_flow_queue_attr *queue_attr)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	const struct rte_flow_queue_attr *attr_list[nb_queue];
+	int std_queue;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	port->queue_nb = nb_queue;
+	port->queue_sz = queue_attr->size;
+	for (std_queue = 0; std_queue < nb_queue; std_queue++)
+		attr_list[std_queue] = queue_attr;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x66, sizeof(error));
+	if (rte_flow_configure(port_id, port_attr, nb_queue, attr_list, &error))
+		return port_flow_complain(&error);
+	printf("Configure flows on port %u: "
+	       "number of queues %d with %d elements\n",
+	       port_id, nb_queue, queue_attr->size);
+	return 0;
+}
+
 /** Create indirect action */
 int
 port_action_handle_create(portid_t port_id, uint32_t id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 9967825044..096b6825eb 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -243,6 +243,8 @@ struct rte_port {
 	struct rte_eth_txconf   tx_conf[RTE_MAX_QUEUES_PER_PORT+1]; /**< per queue tx configuration */
 	struct rte_ether_addr   *mc_addr_pool; /**< pool of multicast addrs */
 	uint32_t                mc_addr_nb; /**< nb. of addr. in mc_addr_pool */
+	queueid_t               queue_nb; /**< nb. of queues for flow rules */
+	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
@@ -885,6 +887,11 @@ struct rte_flow_action_handle *port_action_handle_get_by_id(portid_t port_id,
 							    uint32_t id);
 int port_action_handle_update(portid_t port_id, uint32_t id,
 			      const struct rte_flow_action *action);
+int port_flow_get_info(portid_t port_id);
+int port_flow_configure(portid_t port_id,
+			const struct rte_flow_port_attr *port_attr,
+			uint16_t nb_queue,
+			const struct rte_flow_queue_attr *queue_attr);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 9cc248084f..c8f048aeef 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3308,8 +3308,8 @@ Flow rules management
 ---------------------
 
 Control of the generic flow API (*rte_flow*) is fully exposed through the
-``flow`` command (validation, creation, destruction, queries and operation
-modes).
+``flow`` command (configuration, validation, creation, destruction, queries
+and operation modes).
 
 Considering *rte_flow* overlaps with all `Filter Functions`_, using both
 features simultaneously may cause undefined side-effects and is therefore
@@ -3332,6 +3332,18 @@ The first parameter stands for the operation mode. Possible operations and
 their general syntax are described below. They are covered in detail in the
 following sections.
 
+- Get info about flow engine::
+
+   flow info {port_id}
+
+- Configure flow engine::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3391,6 +3403,51 @@ following sections.
 
    flow tunnel list {port_id}
 
+Retrieving info about flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow info`` retrieves info on pre-configurable resources in the underlying
+device to give a hint of possible values for flow engine configuration.
+
+``rte_flow_info_get()``::
+
+   flow info {port_id}
+
+If successful, it will show::
+
+   Flow engine resources on port #[...]:
+   Number of queues: #[...]
+   Size of queues: #[...]
+   Number of counters: #[...]
+   Number of aging objects: #[...]
+   Number of meter actions: #[...]
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Configuring flow management engine
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow configure`` pre-allocates all the resources needed in the underlying
+device for later use at flow creation time. Flow queues are also allocated
+for asynchronous flow creation/destruction operations. It is bound to
+``rte_flow_configure()``::
+
+   flow configure {port_id}
+       [queues_number {number}] [queues_size {size}]
+       [counters_number {number}]
+       [aging_counters_number {number}]
+       [meters_number {number}]
+
+If successful, it will show::
+
+   Configure flows on port #[...]: number of queues #[...] with #[...] elements
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 06/11] app/testpmd: add flow template management
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (4 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 07/11] app/testpmd: add flow table management Alexander Kozyrev
                                 ` (5 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pattern_template and
rte_flow_actions_template APIs. Provide a command-line interface
for template creation/destruction. Usage example:
  testpmd> flow pattern_template 0 create pattern_template_id 2
           template eth dst is 00:16:3e:31:15:c3 / end
  testpmd> flow actions_template 0 create actions_template_id 4
           template drop / end mask drop / end
  testpmd> flow actions_template 0 destroy actions_template 4
  testpmd> flow pattern_template 0 destroy pattern_template 2

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 456 +++++++++++++++++++-
 app/test-pmd/config.c                       | 203 +++++++++
 app/test-pmd/testpmd.h                      |  24 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 101 +++++
 4 files changed, 782 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0533a33ca2..1aa32ea217 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -56,6 +56,8 @@ enum index {
 	COMMON_POLICY_ID,
 	COMMON_FLEX_HANDLE,
 	COMMON_FLEX_TOKEN,
+	COMMON_PATTERN_TEMPLATE_ID,
+	COMMON_ACTIONS_TEMPLATE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -74,6 +76,8 @@ enum index {
 	/* Sub-level commands. */
 	INFO,
 	CONFIGURE,
+	PATTERN_TEMPLATE,
+	ACTIONS_TEMPLATE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -92,6 +96,28 @@ enum index {
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
+	/* Pattern template arguments. */
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_DESTROY_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+
+	/* Actions template arguments. */
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ACTIONS_TEMPLATE_MASK,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -882,6 +908,10 @@ struct buffer {
 			uint32_t nb_queue;
 			struct rte_flow_queue_attr queue_attr;
 		} configure; /**< Configuration arguments. */
+		struct {
+			uint32_t *template_id;
+			uint32_t template_id_n;
+		} templ_destroy; /**< Template destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -890,10 +920,13 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t pat_templ_id;
+			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
 			struct tunnel_ops tunnel_ops;
 			struct rte_flow_item *pattern;
 			struct rte_flow_action *actions;
+			struct rte_flow_action *masks;
 			uint32_t pattern_n;
 			uint32_t actions_n;
 			uint8_t *data;
@@ -973,6 +1006,49 @@ static const enum index next_config_attr[] = {
 	ZERO,
 };
 
+static const enum index next_pt_subcmd[] = {
+	PATTERN_TEMPLATE_CREATE,
+	PATTERN_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_pt_attr[] = {
+	PATTERN_TEMPLATE_CREATE_ID,
+	PATTERN_TEMPLATE_RELAXED_MATCHING,
+	PATTERN_TEMPLATE_INGRESS,
+	PATTERN_TEMPLATE_EGRESS,
+	PATTERN_TEMPLATE_TRANSFER,
+	PATTERN_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_pt_destroy_attr[] = {
+	PATTERN_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
+static const enum index next_at_subcmd[] = {
+	ACTIONS_TEMPLATE_CREATE,
+	ACTIONS_TEMPLATE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_at_attr[] = {
+	ACTIONS_TEMPLATE_CREATE_ID,
+	ACTIONS_TEMPLATE_INGRESS,
+	ACTIONS_TEMPLATE_EGRESS,
+	ACTIONS_TEMPLATE_TRANSFER,
+	ACTIONS_TEMPLATE_SPEC,
+	ZERO,
+};
+
+static const enum index next_at_destroy_attr[] = {
+	ACTIONS_TEMPLATE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2072,6 +2148,12 @@ static int parse_isolate(struct context *, const struct token *,
 static int parse_configure(struct context *, const struct token *,
 			   const char *, unsigned int,
 			   void *, unsigned int);
+static int parse_template(struct context *, const struct token *,
+			  const char *, unsigned int,
+			  void *, unsigned int);
+static int parse_template_destroy(struct context *, const struct token *,
+				  const char *, unsigned int,
+				  void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2141,6 +2223,10 @@ static int comp_set_modify_field_op(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
 static int comp_set_modify_field_id(struct context *, const struct token *,
 			      unsigned int, char *, unsigned int);
+static int comp_pattern_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
+static int comp_actions_template_id(struct context *, const struct token *,
+				    unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2291,6 +2377,20 @@ static const struct token token_list[] = {
 		.call = parse_flex_handle,
 		.comp = comp_none,
 	},
+	[COMMON_PATTERN_TEMPLATE_ID] = {
+		.name = "{pattern_template_id}",
+		.type = "PATTERN_TEMPLATE_ID",
+		.help = "pattern template id",
+		.call = parse_int,
+		.comp = comp_pattern_template_id,
+	},
+	[COMMON_ACTIONS_TEMPLATE_ID] = {
+		.name = "{actions_template_id}",
+		.type = "ACTIONS_TEMPLATE_ID",
+		.help = "actions template id",
+		.call = parse_int,
+		.comp = comp_actions_template_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2299,6 +2399,8 @@ static const struct token token_list[] = {
 		.next = NEXT(NEXT_ENTRY
 			     (INFO,
 			      CONFIGURE,
+			      PATTERN_TEMPLATE,
+			      ACTIONS_TEMPLATE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2373,6 +2475,148 @@ static const struct token token_list[] = {
 					args.configure.port_attr.nb_meters)),
 	},
 	/* Top-level command. */
+	[PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage pattern templates",
+		.next = NEXT(next_pt_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[PATTERN_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create pattern template",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy pattern template",
+		.next = NEXT(NEXT_ENTRY(PATTERN_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Pattern template arguments. */
+	[PATTERN_TEMPLATE_CREATE_ID] = {
+		.name = "pattern_template_id",
+		.help = "specify a pattern template id to create",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.pat_templ_id)),
+	},
+	[PATTERN_TEMPLATE_DESTROY_ID] = {
+		.name = "pattern_template",
+		.help = "specify a pattern template id to destroy",
+		.next = NEXT(next_pt_destroy_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[PATTERN_TEMPLATE_RELAXED_MATCHING] = {
+		.name = "relaxed",
+		.help = "is matching relaxed",
+		.next = NEXT(next_pt_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY_BF(struct buffer,
+			     args.vc.attr.reserved, 1)),
+	},
+	[PATTERN_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute pattern to ingress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute pattern to egress",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute pattern to transfer",
+		.next = NEXT(next_pt_attr),
+		.call = parse_template,
+	},
+	[PATTERN_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify item to create pattern template",
+		.next = NEXT(next_item),
+	},
+	/* Top-level command. */
+	[ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage actions templates",
+		.next = NEXT(next_at_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template,
+	},
+	/* Sub-level commands. */
+	[ACTIONS_TEMPLATE_CREATE] = {
+		.name = "create",
+		.help = "create actions template",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy actions template",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_template_destroy,
+	},
+	/* Actions template arguments. */
+	[ACTIONS_TEMPLATE_CREATE_ID] = {
+		.name = "actions_template_id",
+		.help = "specify an actions template id to create",
+		.next = NEXT(NEXT_ENTRY(ACTIONS_TEMPLATE_MASK),
+			     NEXT_ENTRY(ACTIONS_TEMPLATE_SPEC),
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.act_templ_id)),
+	},
+	[ACTIONS_TEMPLATE_DESTROY_ID] = {
+		.name = "actions_template",
+		.help = "specify an actions template id to destroy",
+		.next = NEXT(next_at_destroy_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.templ_destroy.template_id)),
+		.call = parse_template_destroy,
+	},
+	[ACTIONS_TEMPLATE_INGRESS] = {
+		.name = "ingress",
+		.help = "attribute actions to ingress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_EGRESS] = {
+		.name = "egress",
+		.help = "attribute actions to egress",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_TRANSFER] = {
+		.name = "transfer",
+		.help = "attribute actions to transfer",
+		.next = NEXT(next_at_attr),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_SPEC] = {
+		.name = "template",
+		.help = "specify action to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	[ACTIONS_TEMPLATE_MASK] = {
+		.name = "mask",
+		.help = "specify action mask to create actions template",
+		.next = NEXT(next_action),
+		.call = parse_template,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -2695,7 +2939,7 @@ static const struct token token_list[] = {
 		.name = "end",
 		.help = "end list of pattern items",
 		.priv = PRIV_ITEM(END, 0),
-		.next = NEXT(NEXT_ENTRY(ACTIONS)),
+		.next = NEXT(NEXT_ENTRY(ACTIONS, END)),
 		.call = parse_vc,
 	},
 	[ITEM_VOID] = {
@@ -5975,7 +6219,9 @@ parse_vc(struct context *ctx, const struct token *token,
 	if (!out)
 		return len;
 	if (!out->command) {
-		if (ctx->curr != VALIDATE && ctx->curr != CREATE)
+		if (ctx->curr != VALIDATE && ctx->curr != CREATE &&
+		    ctx->curr != PATTERN_TEMPLATE_CREATE &&
+		    ctx->curr != ACTIONS_TEMPLATE_CREATE)
 			return -1;
 		if (sizeof(*out) > size)
 			return -1;
@@ -7851,6 +8097,132 @@ parse_configure(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for template create command. */
+static int
+parse_template(struct context *ctx, const struct token *token,
+	       const char *str, unsigned int len,
+	       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PATTERN_TEMPLATE &&
+		    ctx->curr != ACTIONS_TEMPLATE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case PATTERN_TEMPLATE_CREATE:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.pat_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case PATTERN_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case PATTERN_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case PATTERN_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case ACTIONS_TEMPLATE_CREATE:
+		out->args.vc.act_templ_id = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_SPEC:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_MASK:
+		out->args.vc.masks =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.actions +
+						out->args.vc.actions_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.masks;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS_TEMPLATE_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case ACTIONS_TEMPLATE_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for template destroy command. */
+static int
+parse_template_destroy(struct context *ctx, const struct token *token,
+		       const char *str, unsigned int len,
+		       void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command ||
+		out->command == PATTERN_TEMPLATE ||
+		out->command == ACTIONS_TEMPLATE) {
+		if (ctx->curr != PATTERN_TEMPLATE_DESTROY &&
+			ctx->curr != ACTIONS_TEMPLATE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.templ_destroy.template_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	template_id = out->args.templ_destroy.template_id
+		    + out->args.templ_destroy.template_id_n++;
+	if ((uint8_t *)template_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = template_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -8820,6 +9192,54 @@ comp_set_modify_field_id(struct context *ctx, const struct token *token,
 	return -1;
 }
 
+/** Complete available pattern template IDs. */
+static int
+comp_pattern_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->pattern_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
+/** Complete available actions template IDs. */
+static int
+comp_actions_template_id(struct context *ctx, const struct token *token,
+			 unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_template *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->actions_templ_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9088,6 +9508,38 @@ cmd_flow_parsed(const struct buffer *in)
 				    in->args.configure.nb_queue,
 				    &in->args.configure.queue_attr);
 		break;
+	case PATTERN_TEMPLATE_CREATE:
+		port_flow_pattern_template_create(in->port,
+				in->args.vc.pat_templ_id,
+				&((const struct rte_flow_pattern_template_attr) {
+					.relaxed_matching = in->args.vc.attr.reserved,
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.pattern);
+		break;
+	case PATTERN_TEMPLATE_DESTROY:
+		port_flow_pattern_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
+	case ACTIONS_TEMPLATE_CREATE:
+		port_flow_actions_template_create(in->port,
+				in->args.vc.act_templ_id,
+				&((const struct rte_flow_actions_template_attr) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions,
+				in->args.vc.masks);
+		break;
+	case ACTIONS_TEMPLATE_DESTROY:
+		port_flow_actions_template_destroy(in->port,
+				in->args.templ_destroy.template_id_n,
+				in->args.templ_destroy.template_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 33a85cd7ca..ecaf4ca03c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1610,6 +1610,49 @@ action_alloc(portid_t port_id, uint32_t id,
 	return 0;
 }
 
+static int
+template_alloc(uint32_t id, struct port_template **template,
+	       struct port_template **list)
+{
+	struct port_template *lst = *list;
+	struct port_template **ppt;
+	struct port_template *pt = NULL;
+
+	*template = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest template ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of port template failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Template #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*template = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2086,6 +2129,166 @@ age_action_get(const struct rte_flow_action *actions)
 	return NULL;
 }
 
+/** Create pattern template */
+int
+port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_pattern_template_attr *attr,
+				  const struct rte_flow_item *pattern)
+{
+	struct rte_port *port;
+	struct port_template *pit;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pit, &port->pattern_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pit->template.pattern_template = rte_flow_pattern_template_create(port_id,
+						attr, pattern, &error);
+	if (!pit->template.pattern_template) {
+		uint32_t destroy_id = pit->id;
+		port_flow_pattern_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Pattern template #%u created\n", pit->id);
+	return 0;
+}
+
+/** Destroy pattern template */
+int
+port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->pattern_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pit = *tmp;
+
+			if (template[i] != pit->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pit->template.pattern_template &&
+			    rte_flow_pattern_template_destroy(port_id,
+							   pit->template.pattern_template,
+							   &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pit->next;
+			printf("Pattern template #%u destroyed\n", pit->id);
+			free(pit);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Create actions template */
+int
+port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				  const struct rte_flow_actions_template_attr *attr,
+				  const struct rte_flow_action *actions,
+				  const struct rte_flow_action *masks)
+{
+	struct rte_port *port;
+	struct port_template *pat;
+	int ret;
+	struct rte_flow_error error;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	ret = template_alloc(id, &pat, &port->actions_templ_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pat->template.actions_template = rte_flow_actions_template_create(port_id,
+						attr, actions, masks, &error);
+	if (!pat->template.actions_template) {
+		uint32_t destroy_id = pat->id;
+		port_flow_actions_template_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	printf("Actions template #%u created\n", pat->id);
+	return 0;
+}
+
+/** Destroy actions template */
+int
+port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				   const uint32_t *template)
+{
+	struct rte_port *port;
+	struct port_template **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->actions_templ_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_template *pat = *tmp;
+
+			if (template[i] != pat->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pat->template.actions_template &&
+			    rte_flow_actions_template_destroy(port_id,
+					pat->template.actions_template, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pat->next;
+			printf("Actions template #%u destroyed\n", pat->id);
+			free(pat);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 096b6825eb..ce46d754a1 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -166,6 +166,17 @@ enum age_action_context_type {
 	ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION,
 };
 
+/** Descriptor for a template. */
+struct port_template {
+	struct port_template *next; /**< Next template in list. */
+	struct port_template *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Template ID. */
+	union {
+		struct rte_flow_pattern_template *pattern_template;
+		struct rte_flow_actions_template *actions_template;
+	} template; /**< PMD opaque template object */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -246,6 +257,8 @@ struct rte_port {
 	queueid_t               queue_nb; /**< nb. of queues for flow rules */
 	uint32_t                queue_sz; /**< size of a queue for flow rules */
 	uint8_t                 slave_flag; /**< bonding slave port */
+	struct port_template    *pattern_templ_list; /**< Pattern templates. */
+	struct port_template    *actions_templ_list; /**< Actions templates. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -892,6 +905,17 @@ int port_flow_configure(portid_t port_id,
 			const struct rte_flow_port_attr *port_attr,
 			uint16_t nb_queue,
 			const struct rte_flow_queue_attr *queue_attr);
+int port_flow_pattern_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_pattern_template_attr *attr,
+				      const struct rte_flow_item *pattern);
+int port_flow_pattern_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
+int port_flow_actions_template_create(portid_t port_id, uint32_t id,
+				      const struct rte_flow_actions_template_attr *attr,
+				      const struct rte_flow_action *actions,
+				      const struct rte_flow_action *masks);
+int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
+				       const uint32_t *template);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index c8f048aeef..2e6a23b12a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3344,6 +3344,26 @@ following sections.
        [aging_counters_number {number}]
        [meters_number {number}]
 
+- Create a pattern template::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+- Destroy a pattern template::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+- Create an actions template::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+- Destroy an actions template::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3448,6 +3468,87 @@ Otherwise it will show an error message of the form::
 
    Caught error type [...] ([...]): [...]
 
+Creating pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template create`` creates the specified pattern template.
+It is bound to ``rte_flow_pattern_template_create()``::
+
+   flow pattern_template {port_id} create [pattern_template_id {id}]
+       [relaxed {boolean}] [ingress] [egress] [transfer]
+       template {item} [/ {item} [...]] / end
+
+If successful, it will show::
+
+   Pattern template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items as ``flow create``;
+their format is described in `Creating flow rules`_.
+
+Destroying pattern templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pattern_template destroy`` destroys one or more pattern templates
+by their template ID (as returned by ``flow pattern_template create``);
+this command calls ``rte_flow_pattern_template_destroy()`` as many
+times as necessary::
+
+   flow pattern_template {port_id} destroy pattern_template {id} [...]
+
+If successful, it will show::
+
+   Pattern template #[...] destroyed
+
+It does not report anything for pattern template IDs that do not exist.
+The usual error message is shown when a pattern template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
+Creating actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template create`` creates the specified actions template.
+It is bound to ``rte_flow_actions_template_create()``::
+
+   flow actions_template {port_id} create [actions_template_id {id}]
+       [ingress] [egress] [transfer]
+       template {action} [/ {action} [...]] / end
+       mask {action} [/ {action} [...]] / end
+
+If successful, it will show::
+
+   Actions template #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
+Destroying actions templates
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow actions_template destroy`` destroys one or more actions templates
+by their template ID (as returned by ``flow actions_template create``);
+this command calls ``rte_flow_actions_template_destroy()`` as many
+times as necessary::
+
+   flow actions_template {port_id} destroy actions_template {id} [...]
+
+If successful, it will show::
+
+   Actions template #[...] destroyed
+
+It does not report anything for actions template IDs that do not exist.
+The usual error message is shown when an actions template cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 07/11] app/testpmd: add flow table management
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (5 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 06/11] app/testpmd: add flow template management Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
                                 ` (4 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_template_table API.
Provide the command line interface for template table
creation/destruction. Usage example:
  testpmd> flow template_table 0 create table_id 6
    group 9 priority 4 ingress mode 1
    rules_number 64 pattern_template 2 actions_template 4
  testpmd> flow template_table 0 destroy table 6
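
For reference, the ethdev calls behind the create command can be sketched
as follows. This is a hedged illustration, not code from the patch: the
setup_table() helper name is invented here and error handling is trimmed.

```c
/* Sketch only: mirrors the testpmd example above using the template
 * table API introduced by this series. */
#include <rte_flow.h>

static struct rte_flow_template_table *
setup_table(uint16_t port_id,
	    struct rte_flow_pattern_template *pattern_templ,
	    struct rte_flow_actions_template *actions_templ)
{
	struct rte_flow_error error;
	const struct rte_flow_template_table_attr attr = {
		.flow_attr = {
			.group = 9,    /* "group 9" */
			.priority = 4, /* "priority 4" */
			.ingress = 1,  /* "ingress" */
		},
		.nb_flows = 64,        /* "rules_number 64" */
	};

	/* One pattern template and one actions template in this table;
	 * the testpmd example passes template IDs 2 and 4 instead. */
	return rte_flow_template_table_create(port_id, &attr,
					      &pattern_templ, 1,
					      &actions_templ, 1, &error);
}
```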

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 315 ++++++++++++++++++++
 app/test-pmd/config.c                       | 171 +++++++++++
 app/test-pmd/testpmd.h                      |  17 ++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  53 ++++
 4 files changed, 556 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1aa32ea217..5715899c95 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -58,6 +58,7 @@ enum index {
 	COMMON_FLEX_TOKEN,
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
+	COMMON_TABLE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -78,6 +79,7 @@ enum index {
 	CONFIGURE,
 	PATTERN_TEMPLATE,
 	ACTIONS_TEMPLATE,
+	TABLE,
 	INDIRECT_ACTION,
 	VALIDATE,
 	CREATE,
@@ -118,6 +120,20 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Table arguments. */
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	TABLE_CREATE_ID,
+	TABLE_DESTROY_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+
 	/* Tunnel arguments. */
 	TUNNEL_CREATE,
 	TUNNEL_CREATE_TYPE,
@@ -912,6 +928,18 @@ struct buffer {
 			uint32_t *template_id;
 			uint32_t template_id_n;
 		} templ_destroy; /**< Template destroy arguments. */
+		struct {
+			uint32_t id;
+			struct rte_flow_template_table_attr attr;
+			uint32_t *pat_templ_id;
+			uint32_t pat_templ_id_n;
+			uint32_t *act_templ_id;
+			uint32_t act_templ_id_n;
+		} table; /**< Table arguments. */
+		struct {
+			uint32_t *table_id;
+			uint32_t table_id_n;
+		} table_destroy; /**< Table destroy arguments. */
 		struct {
 			uint32_t *action_id;
 			uint32_t action_id_n;
@@ -1049,6 +1077,32 @@ static const enum index next_at_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_table_subcmd[] = {
+	TABLE_CREATE,
+	TABLE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_table_attr[] = {
+	TABLE_CREATE_ID,
+	TABLE_GROUP,
+	TABLE_PRIORITY,
+	TABLE_INGRESS,
+	TABLE_EGRESS,
+	TABLE_TRANSFER,
+	TABLE_RULES_NUMBER,
+	TABLE_PATTERN_TEMPLATE,
+	TABLE_ACTIONS_TEMPLATE,
+	END,
+	ZERO,
+};
+
+static const enum index next_table_destroy_attr[] = {
+	TABLE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2154,6 +2208,11 @@ static int parse_template(struct context *, const struct token *,
 static int parse_template_destroy(struct context *, const struct token *,
 				  const char *, unsigned int,
 				  void *, unsigned int);
+static int parse_table(struct context *, const struct token *,
+		       const char *, unsigned int, void *, unsigned int);
+static int parse_table_destroy(struct context *, const struct token *,
+			       const char *, unsigned int,
+			       void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2227,6 +2286,8 @@ static int comp_pattern_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
+static int comp_table_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2391,6 +2452,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_actions_template_id,
 	},
+	[COMMON_TABLE_ID] = {
+		.name = "{table_id}",
+		.type = "TABLE_ID",
+		.help = "table id",
+		.call = parse_int,
+		.comp = comp_table_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2401,6 +2469,7 @@ static const struct token token_list[] = {
 			      CONFIGURE,
 			      PATTERN_TEMPLATE,
 			      ACTIONS_TEMPLATE,
+			      TABLE,
 			      INDIRECT_ACTION,
 			      VALIDATE,
 			      CREATE,
@@ -2617,6 +2686,104 @@ static const struct token token_list[] = {
 		.call = parse_template,
 	},
 	/* Top-level command. */
+	[TABLE] = {
+		.name = "template_table",
+		.type = "{command} {port_id} [{arg} [...]]",
+		.help = "manage template tables",
+		.next = NEXT(next_table_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table,
+	},
+	/* Sub-level commands. */
+	[TABLE_CREATE] = {
+		.name = "create",
+		.help = "create template table",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy template table",
+		.next = NEXT(NEXT_ENTRY(TABLE_DESTROY_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_table_destroy,
+	},
+	/* Table arguments. */
+	[TABLE_CREATE_ID] = {
+		.name = "table_id",
+		.help = "specify table id to create",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.table.id)),
+	},
+	[TABLE_DESTROY_ID] = {
+		.name = "table",
+		.help = "specify table id to destroy",
+		.next = NEXT(next_table_destroy_attr,
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table_destroy.table_id)),
+		.call = parse_table_destroy,
+	},
+	[TABLE_GROUP] = {
+		.name = "group",
+		.help = "specify a group",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_GROUP_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.group)),
+	},
+	[TABLE_PRIORITY] = {
+		.name = "priority",
+		.help = "specify a priority level",
+		.next = NEXT(next_table_attr, NEXT_ENTRY(COMMON_PRIORITY_LEVEL)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.flow_attr.priority)),
+	},
+	[TABLE_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_table_attr),
+		.call = parse_table,
+	},
+	[TABLE_RULES_NUMBER] = {
+		.name = "rules_number",
+		.help = "number of rules in table",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.table.attr.nb_flows)),
+	},
+	[TABLE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_PATTERN_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.pat_templ_id)),
+		.call = parse_table,
+	},
+	[TABLE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template id",
+		.next = NEXT(next_table_attr,
+			     NEXT_ENTRY(COMMON_ACTIONS_TEMPLATE_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.table.act_templ_id)),
+		.call = parse_table,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8223,6 +8390,119 @@ parse_template_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for table create command. */
+static int
+parse_table(struct context *ctx, const struct token *token,
+	    const char *str, unsigned int len,
+	    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *template_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != TABLE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	}
+	switch (ctx->curr) {
+	case TABLE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table.id = UINT32_MAX;
+		return len;
+	case TABLE_PATTERN_TEMPLATE:
+		out->args.table.pat_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		template_id = out->args.table.pat_templ_id
+				+ out->args.table.pat_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_ACTIONS_TEMPLATE:
+		out->args.table.act_templ_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.table.pat_templ_id +
+						out->args.table.pat_templ_id_n),
+					       sizeof(double));
+		template_id = out->args.table.act_templ_id
+				+ out->args.table.act_templ_id_n++;
+		if ((uint8_t *)template_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = template_id;
+		ctx->objmask = NULL;
+		return len;
+	case TABLE_INGRESS:
+		out->args.table.attr.flow_attr.ingress = 1;
+		return len;
+	case TABLE_EGRESS:
+		out->args.table.attr.flow_attr.egress = 1;
+		return len;
+	case TABLE_TRANSFER:
+		out->args.table.attr.flow_attr.transfer = 1;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for table destroy command. */
+static int
+parse_table_destroy(struct context *ctx, const struct token *token,
+		    const char *str, unsigned int len,
+		    void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *table_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == TABLE) {
+		if (ctx->curr != TABLE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.table_destroy.table_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	table_id = out->args.table_destroy.table_id
+		    + out->args.table_destroy.table_id_n++;
+	if ((uint8_t *)table_id > (uint8_t *)out + size)
+		return -1;
+	ctx->objdata = 0;
+	ctx->object = table_id;
+	ctx->objmask = NULL;
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9240,6 +9520,30 @@ comp_actions_template_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available table IDs. */
+static int
+comp_table_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+	struct port_table *pt;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (pt = port->table_list; pt != NULL; pt = pt->next) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", pt->id);
+		++i;
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9540,6 +9844,17 @@ cmd_flow_parsed(const struct buffer *in)
 				in->args.templ_destroy.template_id_n,
 				in->args.templ_destroy.template_id);
 		break;
+	case TABLE_CREATE:
+		port_flow_template_table_create(in->port, in->args.table.id,
+			&in->args.table.attr, in->args.table.pat_templ_id_n,
+			in->args.table.pat_templ_id, in->args.table.act_templ_id_n,
+			in->args.table.act_templ_id);
+		break;
+	case TABLE_DESTROY:
+		port_flow_template_table_destroy(in->port,
+					in->args.table_destroy.table_id_n,
+					in->args.table_destroy.table_id);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ecaf4ca03c..cefbc64c0c 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1653,6 +1653,49 @@ template_alloc(uint32_t id, struct port_template **template,
 	return 0;
 }
 
+static int
+table_alloc(uint32_t id, struct port_table **table,
+	    struct port_table **list)
+{
+	struct port_table *lst = *list;
+	struct port_table **ppt;
+	struct port_table *pt = NULL;
+
+	*table = NULL;
+	if (id == UINT32_MAX) {
+		/* taking first available ID */
+		if (lst) {
+			if (lst->id == UINT32_MAX - 1) {
+				printf("Highest table ID is already"
+				" assigned, delete it first\n");
+				return -ENOMEM;
+			}
+			id = lst->id + 1;
+		} else {
+			id = 0;
+		}
+	}
+	pt = calloc(1, sizeof(*pt));
+	if (!pt) {
+		printf("Allocation of table failed\n");
+		return -ENOMEM;
+	}
+	ppt = list;
+	while (*ppt && (*ppt)->id > id)
+		ppt = &(*ppt)->next;
+	if (*ppt && (*ppt)->id == id) {
+		printf("Table #%u is already assigned,"
+			" delete it first\n", id);
+		free(pt);
+		return -EINVAL;
+	}
+	pt->next = *ppt;
+	pt->id = id;
+	*ppt = pt;
+	*table = pt;
+	return 0;
+}
+
 /** Get info about flow management resources. */
 int
 port_flow_get_info(portid_t port_id)
@@ -2289,6 +2332,134 @@ port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 	return ret;
 }
 
+/** Create table */
+int
+port_flow_template_table_create(portid_t port_id, uint32_t id,
+		const struct rte_flow_template_table_attr *table_attr,
+		uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		uint32_t nb_actions_templates, uint32_t *actions_templates)
+{
+	struct rte_port *port;
+	struct port_table *pt;
+	struct port_template *temp = NULL;
+	int ret;
+	uint32_t i;
+	struct rte_flow_error error;
+	struct rte_flow_pattern_template
+			*flow_pattern_templates[nb_pattern_templates];
+	struct rte_flow_actions_template
+			*flow_actions_templates[nb_actions_templates];
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	for (i = 0; i < nb_pattern_templates; ++i) {
+		bool found = false;
+		temp = port->pattern_templ_list;
+		while (temp) {
+			if (pattern_templates[i] == temp->id) {
+				flow_pattern_templates[i] =
+					temp->template.pattern_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Pattern template #%u is invalid\n",
+			       pattern_templates[i]);
+			return -EINVAL;
+		}
+	}
+	for (i = 0; i < nb_actions_templates; ++i) {
+		bool found = false;
+		temp = port->actions_templ_list;
+		while (temp) {
+			if (actions_templates[i] == temp->id) {
+				flow_actions_templates[i] =
+					temp->template.actions_template;
+				found = true;
+				break;
+			}
+			temp = temp->next;
+		}
+		if (!found) {
+			printf("Actions template #%u is invalid\n",
+			       actions_templates[i]);
+			return -EINVAL;
+		}
+	}
+	ret = table_alloc(id, &pt, &port->table_list);
+	if (ret)
+		return ret;
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x22, sizeof(error));
+	pt->table = rte_flow_template_table_create(port_id, table_attr,
+		      flow_pattern_templates, nb_pattern_templates,
+		      flow_actions_templates, nb_actions_templates,
+		      &error);
+
+	if (!pt->table) {
+		uint32_t destroy_id = pt->id;
+		port_flow_template_table_destroy(port_id, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pt->nb_pattern_templates = nb_pattern_templates;
+	pt->nb_actions_templates = nb_actions_templates;
+	printf("Template table #%u created\n", pt->id);
+	return 0;
+}
+
+/** Destroy table */
+int
+port_flow_template_table_destroy(portid_t port_id,
+				 uint32_t n, const uint32_t *table)
+{
+	struct rte_port *port;
+	struct port_table **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+	tmp = &port->table_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_table *pt = *tmp;
+
+			if (table[i] != pt->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+
+			if (pt->table &&
+			    rte_flow_template_table_destroy(port_id,
+							    pt->table,
+							    &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pt->next;
+			printf("Template table #%u destroyed\n", pt->id);
+			free(pt);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index ce46d754a1..fd02498faf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -177,6 +177,16 @@ struct port_template {
 	} template; /**< PMD opaque template object */
 };
 
+/** Descriptor for a flow table. */
+struct port_table {
+	struct port_table *next; /**< Next table in list. */
+	struct port_table *tmp; /**< Temporary linking. */
+	uint32_t id; /**< Table ID. */
+	uint32_t nb_pattern_templates; /**< Number of pattern templates. */
+	uint32_t nb_actions_templates; /**< Number of actions templates. */
+	struct rte_flow_template_table *table; /**< PMD opaque table object. */
+};
+
 /** Descriptor for a single flow. */
 struct port_flow {
 	struct port_flow *next; /**< Next flow in list. */
@@ -259,6 +269,7 @@ struct rte_port {
 	uint8_t                 slave_flag; /**< bonding slave port */
 	struct port_template    *pattern_templ_list; /**< Pattern templates. */
 	struct port_template    *actions_templ_list; /**< Actions templates. */
+	struct port_table       *table_list; /**< Flow tables. */
 	struct port_flow        *flow_list; /**< Associated flows. */
 	struct port_indirect_action *actions_list;
 	/**< Associated indirect actions. */
@@ -916,6 +927,12 @@ int port_flow_actions_template_create(portid_t port_id, uint32_t id,
 				      const struct rte_flow_action *masks);
 int port_flow_actions_template_destroy(portid_t port_id, uint32_t n,
 				       const uint32_t *template);
+int port_flow_template_table_create(portid_t port_id, uint32_t id,
+		   const struct rte_flow_template_table_attr *table_attr,
+		   uint32_t nb_pattern_templates, uint32_t *pattern_templates,
+		   uint32_t nb_actions_templates, uint32_t *actions_templates);
+int port_flow_template_table_destroy(portid_t port_id,
+			    uint32_t n, const uint32_t *table);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 2e6a23b12a..f63eb76a3a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3364,6 +3364,19 @@ following sections.
 
    flow actions_template {port_id} destroy actions_template {id} [...]
 
+- Create a template table::
+
+   flow template_table {port_id} create
+       [table_id {id}]
+       [group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+- Destroy a template table::
+
+   flow template_table {port_id} destroy table {id} [...]
+
 - Check whether a flow rule can be created::
 
    flow validate {port_id}
@@ -3549,6 +3562,46 @@ The usual error message is shown when an actions template cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Creating template tables
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table create`` creates the specified template table.
+It is bound to ``rte_flow_template_table_create()``::
+
+   flow template_table {port_id} create
+       [table_id {id}] [group {group_id}]
+       [priority {level}] [ingress] [egress] [transfer]
+       rules_number {number}
+       pattern_template {pattern_template_id}
+       actions_template {actions_template_id}
+
+If successful, it will show::
+
+   Template table #[...] created
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+Destroying template tables
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow template_table destroy`` destroys one or more template tables
+by their table ID (as returned by ``flow template_table create``);
+this command calls ``rte_flow_template_table_destroy()`` as many
+times as necessary::
+
+   flow template_table {port_id} destroy table {id} [...]
+
+If successful, it will show::
+
+   Template table #[...] destroyed
+
+It does not report anything for table IDs that do not exist.
+The usual error message is shown when a table cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 08/11] app/testpmd: add async flow create/destroy operations
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (6 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 07/11] app/testpmd: add flow table management Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
                                 ` (3 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_create/rte_flow_async_destroy
API. Provide the command line interface for enqueueing flow
creation/destruction operations. Usage example:
  testpmd> flow queue 0 create 0 postpone no
           template_table 6 pattern_template 0 actions_template 0
           pattern eth dst is 00:16:3e:31:15:c3 / end actions drop / end
  testpmd> flow queue 0 destroy 0 postpone yes rule 0
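
The two commands above enqueue operations through the asynchronous flow
API. The sketch below is a hedged illustration, not code from the patch:
it uses the final (merged) function names rte_flow_async_create() and
rte_flow_async_destroy(), the helper names are invented, and error
handling is trimmed.

```c
/* Sketch only: assumes a queue configured via rte_flow_configure() and
 * a template table set up as in patch 07/11. */
#include <stddef.h>
#include <rte_flow.h>

static struct rte_flow *
enqueue_create(uint16_t port_id, uint32_t queue_id,
	       struct rte_flow_template_table *table,
	       const struct rte_flow_item pattern[],
	       const struct rte_flow_action actions[])
{
	struct rte_flow_error error;
	const struct rte_flow_op_attr op_attr = {
		.postpone = 0, /* "postpone no": do not defer the doorbell */
	};

	/* Pattern/actions template index 0, as in the example above. */
	return rte_flow_async_create(port_id, queue_id, &op_attr, table,
				     pattern, 0, actions, 0,
				     NULL /* user_data */, &error);
}

static int
enqueue_destroy(uint16_t port_id, uint32_t queue_id, struct rte_flow *flow)
{
	struct rte_flow_error error;
	const struct rte_flow_op_attr op_attr = {
		.postpone = 1, /* "postpone yes": batch with later ops */
	};

	return rte_flow_async_destroy(port_id, queue_id, &op_attr, flow,
				      NULL /* user_data */, &error);
}
```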

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 267 +++++++++++++++++++-
 app/test-pmd/config.c                       | 166 ++++++++++++
 app/test-pmd/testpmd.h                      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  57 +++++
 4 files changed, 496 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 5715899c95..d359127df9 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -59,6 +59,7 @@ enum index {
 	COMMON_PATTERN_TEMPLATE_ID,
 	COMMON_ACTIONS_TEMPLATE_ID,
 	COMMON_TABLE_ID,
+	COMMON_QUEUE_ID,
 
 	/* TOP-level command. */
 	ADD,
@@ -92,6 +93,7 @@ enum index {
 	ISOLATE,
 	TUNNEL,
 	FLEX,
+	QUEUE,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -120,6 +122,22 @@ enum index {
 	ACTIONS_TEMPLATE_SPEC,
 	ACTIONS_TEMPLATE_MASK,
 
+	/* Queue arguments. */
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+
+	/* Queue create arguments. */
+	QUEUE_CREATE_ID,
+	QUEUE_CREATE_POSTPONE,
+	QUEUE_TEMPLATE_TABLE,
+	QUEUE_PATTERN_TEMPLATE,
+	QUEUE_ACTIONS_TEMPLATE,
+	QUEUE_SPEC,
+
+	/* Queue destroy arguments. */
+	QUEUE_DESTROY_ID,
+	QUEUE_DESTROY_POSTPONE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -918,6 +936,8 @@ struct token {
 struct buffer {
 	enum index command; /**< Flow command. */
 	portid_t port; /**< Affected port ID. */
+	queueid_t queue; /**< Async queue ID. */
+	bool postpone; /**< Postpone async operation. */
 	union {
 		struct {
 			struct rte_flow_port_attr port_attr;
@@ -948,6 +968,7 @@ struct buffer {
 			uint32_t action_id;
 		} ia; /* Indirect action query arguments */
 		struct {
+			uint32_t table_id;
 			uint32_t pat_templ_id;
 			uint32_t act_templ_id;
 			struct rte_flow_attr attr;
@@ -1103,6 +1124,18 @@ static const enum index next_table_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_queue_subcmd[] = {
+	QUEUE_CREATE,
+	QUEUE_DESTROY,
+	ZERO,
+};
+
+static const enum index next_queue_destroy_attr[] = {
+	QUEUE_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2213,6 +2246,12 @@ static int parse_table(struct context *, const struct token *,
 static int parse_table_destroy(struct context *, const struct token *,
 			       const char *, unsigned int,
 			       void *, unsigned int);
+static int parse_qo(struct context *, const struct token *,
+		    const char *, unsigned int,
+		    void *, unsigned int);
+static int parse_qo_destroy(struct context *, const struct token *,
+			    const char *, unsigned int,
+			    void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2288,6 +2327,8 @@ static int comp_actions_template_id(struct context *, const struct token *,
 				    unsigned int, char *, unsigned int);
 static int comp_table_id(struct context *, const struct token *,
 			 unsigned int, char *, unsigned int);
+static int comp_queue_id(struct context *, const struct token *,
+			 unsigned int, char *, unsigned int);
 
 /** Token definitions. */
 static const struct token token_list[] = {
@@ -2459,6 +2500,13 @@ static const struct token token_list[] = {
 		.call = parse_int,
 		.comp = comp_table_id,
 	},
+	[COMMON_QUEUE_ID] = {
+		.name = "{queue_id}",
+		.type = "QUEUE_ID",
+		.help = "queue id",
+		.call = parse_int,
+		.comp = comp_queue_id,
+	},
 	/* Top-level command. */
 	[FLOW] = {
 		.name = "flow",
@@ -2481,7 +2529,8 @@ static const struct token token_list[] = {
 			      QUERY,
 			      ISOLATE,
 			      TUNNEL,
-			      FLEX)),
+			      FLEX,
+			      QUEUE)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2784,6 +2833,84 @@ static const struct token token_list[] = {
 		.call = parse_table,
 	},
 	/* Top-level command. */
+	[QUEUE] = {
+		.name = "queue",
+		.help = "queue a flow rule operation",
+		.next = NEXT(next_queue_subcmd, NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_qo,
+	},
+	/* Sub-level commands. */
+	[QUEUE_CREATE] = {
+		.name = "create",
+		.help = "create a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_TEMPLATE_TABLE),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy a flow rule",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qo_destroy,
+	},
+	/* Queue arguments. */
+	[QUEUE_TEMPLATE_TABLE] = {
+		.name = "template_table",
+		.help = "specify table id",
+		.next = NEXT(NEXT_ENTRY(QUEUE_PATTERN_TEMPLATE),
+			     NEXT_ENTRY(COMMON_TABLE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.table_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_PATTERN_TEMPLATE] = {
+		.name = "pattern_template",
+		.help = "specify pattern template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_ACTIONS_TEMPLATE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.pat_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_ACTIONS_TEMPLATE] = {
+		.name = "actions_template",
+		.help = "specify actions template index",
+		.next = NEXT(NEXT_ENTRY(QUEUE_CREATE_POSTPONE),
+			     NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY(struct buffer,
+					args.vc.act_templ_id)),
+		.call = parse_qo,
+	},
+	[QUEUE_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(NEXT_ENTRY(ITEM_PATTERN),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo,
+	},
+	[QUEUE_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(NEXT_ENTRY(QUEUE_DESTROY_ID),
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+		.call = parse_qo_destroy,
+	},
+	[QUEUE_DESTROY_ID] = {
+		.name = "rule",
+		.help = "specify rule id to destroy",
+		.next = NEXT(next_queue_destroy_attr,
+			NEXT_ENTRY(COMMON_UNSIGNED)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.destroy.rule)),
+		.call = parse_qo_destroy,
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8503,6 +8630,111 @@ parse_table_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for queue create commands. */
+static int
+parse_qo(struct context *ctx, const struct token *token,
+	 const char *str, unsigned int len,
+	 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_CREATE:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_TEMPLATE_TABLE:
+	case QUEUE_PATTERN_TEMPLATE:
+	case QUEUE_ACTIONS_TEMPLATE:
+	case QUEUE_CREATE_POSTPONE:
+		return len;
+	case ITEM_PATTERN:
+		out->args.vc.pattern =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		ctx->object = out->args.vc.pattern;
+		ctx->objmask = NULL;
+		return len;
+	case ACTIONS:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)
+					       (out->args.vc.pattern +
+						out->args.vc.pattern_n),
+					       sizeof(double));
+		ctx->object = out->args.vc.actions;
+		ctx->objmask = NULL;
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for queue destroy command. */
+static int
+parse_qo_destroy(struct context *ctx, const struct token *token,
+		 const char *str, unsigned int len,
+		 void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *flow_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.destroy.rule =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_DESTROY_ID:
+		flow_id = out->args.destroy.rule
+				+ out->args.destroy.rule_n++;
+		if ((uint8_t *)flow_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = flow_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -9544,6 +9776,28 @@ comp_table_id(struct context *ctx, const struct token *token,
 	return i;
 }
 
+/** Complete available queue IDs. */
+static int
+comp_queue_id(struct context *ctx, const struct token *token,
+	      unsigned int ent, char *buf, unsigned int size)
+{
+	unsigned int i = 0;
+	struct rte_port *port;
+
+	(void)token;
+	if (port_id_is_invalid(ctx->port, DISABLED_WARN) ||
+	    ctx->port == (portid_t)RTE_PORT_ALL)
+		return -1;
+	port = &ports[ctx->port];
+	for (i = 0; i < port->queue_nb; i++) {
+		if (buf && i == ent)
+			return snprintf(buf, size, "%u", i);
+	}
+	if (buf)
+		return -1;
+	return i;
+}
+
 /** Internal context. */
 static struct context cmd_flow_context;
 
@@ -9855,6 +10109,17 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.table_destroy.table_id_n,
 					in->args.table_destroy.table_id);
 		break;
+	case QUEUE_CREATE:
+		port_queue_flow_create(in->port, in->queue, in->postpone,
+				       in->args.vc.table_id, in->args.vc.pat_templ_id,
+				       in->args.vc.act_templ_id, in->args.vc.pattern,
+				       in->args.vc.actions);
+		break;
+	case QUEUE_DESTROY:
+		port_queue_flow_destroy(in->port, in->queue, in->postpone,
+					in->args.destroy.rule_n,
+					in->args.destroy.rule);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cefbc64c0c..d7ab57b124 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2460,6 +2460,172 @@ port_flow_template_table_destroy(portid_t port_id,
 	return ret;
 }
 
+/** Enqueue create flow rule operation. */
+int
+port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+		       bool postpone, uint32_t table_id,
+		       uint32_t pattern_idx, uint32_t actions_idx,
+		       const struct rte_flow_item *pattern,
+		       const struct rte_flow_action *actions)
+{
+	struct rte_flow_op_attr op_attr = { .postpone = postpone };
+	struct rte_flow_op_result comp = { 0 };
+	struct rte_flow *flow;
+	struct rte_port *port;
+	struct port_flow *pf;
+	struct port_table *pt;
+	uint32_t id = 0;
+	bool found;
+	int ret = 0;
+	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
+	struct rte_flow_action_age *age = age_action_get(actions);
+
+	port = &ports[port_id];
+	if (port->flow_list) {
+		if (port->flow_list->id == UINT32_MAX) {
+			printf("Highest rule ID is already assigned,"
+			       " delete it first\n");
+			return -ENOMEM;
+		}
+		id = port->flow_list->id + 1;
+	}
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	found = false;
+	pt = port->table_list;
+	while (pt) {
+		if (table_id == pt->id) {
+			found = true;
+			break;
+		}
+		pt = pt->next;
+	}
+	if (!found) {
+		printf("Table #%u is invalid\n", table_id);
+		return -EINVAL;
+	}
+
+	if (pattern_idx >= pt->nb_pattern_templates) {
+		printf("Pattern template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       pattern_idx, pt->nb_pattern_templates);
+		return -EINVAL;
+	}
+	if (actions_idx >= pt->nb_actions_templates) {
+		printf("Actions template index #%u is invalid,"
+		       " %u templates present in the table\n",
+		       actions_idx, pt->nb_actions_templates);
+		return -EINVAL;
+	}
+
+	pf = port_flow_new(NULL, pattern, actions, &error);
+	if (!pf)
+		return port_flow_complain(&error);
+	if (age) {
+		pf->age_type = ACTION_AGE_CONTEXT_TYPE_FLOW;
+		age->context = &pf->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x11, sizeof(error));
+	flow = rte_flow_async_create(port_id, queue_id, &op_attr, pt->table,
+		pattern, pattern_idx, actions, actions_idx, NULL, &error);
+	if (!flow) {
+		uint32_t flow_id = pf->id;
+		port_queue_flow_destroy(port_id, queue_id, true, 1, &flow_id);
+		return port_flow_complain(&error);
+	}
+
+	while (ret == 0) {
+		/* Poisoning to make sure PMDs update it in case of error. */
+		memset(&error, 0x22, sizeof(error));
+		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
+		if (ret < 0) {
+			printf("Failed to pull queue\n");
+			return -EINVAL;
+		}
+	}
+
+	pf->next = port->flow_list;
+	pf->id = id;
+	pf->flow = flow;
+	port->flow_list = pf;
+	printf("Flow rule #%u creation enqueued\n", pf->id);
+	return 0;
+}
+
+/** Enqueue number of destroy flow rules operations. */
+int
+port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			bool postpone, uint32_t n, const uint32_t *rule)
+{
+	struct rte_flow_op_attr op_attr = { .postpone = postpone };
+	struct rte_flow_op_result comp = { 0 };
+	struct rte_port *port;
+	struct port_flow **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->flow_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_flow *pf = *tmp;
+
+			if (rule[i] != pf->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMD
+			 * update it in case of error.
+			 */
+			memset(&error, 0x33, sizeof(error));
+			if (rte_flow_async_destroy(port_id, queue_id, &op_attr,
+						   pf->flow, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+
+			while (ret == 0) {
+				/*
+				 * Poisoning to make sure PMD
+				 * update it in case of error.
+				 */
+				memset(&error, 0x44, sizeof(error));
+				ret = rte_flow_pull(port_id, queue_id,
+						    &comp, 1, &error);
+				if (ret < 0) {
+					printf("Failed to pull queue\n");
+					return -EINVAL;
+				}
+			}
+
+			printf("Flow rule #%u destruction enqueued\n", pf->id);
+			*tmp = pf->next;
+			free(pf);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index fd02498faf..62e874eaaf 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -933,6 +933,13 @@ int port_flow_template_table_create(portid_t port_id, uint32_t id,
 		   uint32_t nb_actions_templates, uint32_t *actions_templates);
 int port_flow_template_table_destroy(portid_t port_id,
 			    uint32_t n, const uint32_t *table);
+int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
+			   bool postpone, uint32_t table_id,
+			   uint32_t pattern_idx, uint32_t actions_idx,
+			   const struct rte_flow_item *pattern,
+			   const struct rte_flow_action *actions);
+int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
+			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index f63eb76a3a..194b350932 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3384,6 +3384,20 @@ following sections.
        pattern {item} [/ {item} [...]] / end
        actions {action} [/ {action} [...]] / end
 
+- Enqueue creation of a flow rule::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+- Enqueue destruction of specific flow rules::
+
+   flow queue {port_id} destroy {queue_id}
+       [postpone {boolean}] rule {rule_id} [...]
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3708,6 +3722,30 @@ one.
 
 **All unspecified object values are automatically initialized to 0.**
 
+Enqueueing creation of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue create`` adds a flow rule creation operation to a queue.
+It is bound to ``rte_flow_async_create()``::
+
+   flow queue {port_id} create {queue_id}
+       [postpone {boolean}] template_table {table_id}
+       pattern_template {pattern_template_index}
+       actions_template {actions_template_index}
+       pattern {item} [/ {item} [...]] / end
+       actions {action} [/ {action} [...]] / end
+
+If successful, it will return a flow rule ID usable with other commands::
+
+   Flow rule #[...] creation enqueued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same pattern items and actions as ``flow create``;
+their format is described in `Creating flow rules`_.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4430,6 +4468,25 @@ Non-existent rule IDs are ignored::
    Flow rule #0 destroyed
    testpmd>
 
+Enqueueing destruction of flow rules
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue destroy`` adds to a queue operations destroying one or more rules
+from their rule IDs (as returned by ``flow queue create``).
+This command calls ``rte_flow_async_destroy()`` as many times as necessary::
+
+   flow queue {port_id} destroy {queue_id}
+        [postpone {boolean}] rule {rule_id} [...]
+
+If successful, it will show::
+
+   Flow rule #[...] destruction enqueued
+
+It does not report anything for rule IDs that do not exist. The usual error
+message is shown when a rule cannot be destroyed::
+
+   Caught error type [...] ([...]): [...]
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 09/11] app/testpmd: add flow queue push operation
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (7 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
                                 ` (2 subsequent siblings)
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_push API.
Provide the command line interface for pushing operations.
Usage example: flow queue 0 push 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 ++++++++++++++++++++-
 app/test-pmd/config.c                       | 28 +++++++++++
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 21 ++++++++
 4 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d359127df9..af36975cdf 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -94,6 +94,7 @@ enum index {
 	TUNNEL,
 	FLEX,
 	QUEUE,
+	PUSH,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -138,6 +139,9 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Push arguments. */
+	PUSH_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2252,6 +2256,9 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_push(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2530,7 +2537,8 @@ static const struct token token_list[] = {
 			      ISOLATE,
 			      TUNNEL,
 			      FLEX,
-			      QUEUE)),
+			      QUEUE,
+			      PUSH)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2911,6 +2919,21 @@ static const struct token token_list[] = {
 		.call = parse_qo_destroy,
 	},
 	/* Top-level command. */
+	[PUSH] = {
+		.name = "push",
+		.help = "push enqueued operations",
+		.next = NEXT(NEXT_ENTRY(PUSH_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_push,
+	},
+	/* Sub-level commands. */
+	[PUSH_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8735,6 +8758,34 @@ parse_qo_destroy(struct context *ctx, const struct token *token,
 	}
 }
 
+/** Parse tokens for push queue command. */
+static int
+parse_push(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PUSH)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10120,6 +10171,9 @@ cmd_flow_parsed(const struct buffer *in)
 					in->args.destroy.rule_n,
 					in->args.destroy.rule);
 		break;
+	case PUSH:
+		port_queue_flow_push(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index d7ab57b124..9ffb7d88dc 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2626,6 +2626,34 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Push all the queue operations in the queue to the NIC. */
+int
+port_queue_flow_push(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_error error;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	memset(&error, 0x55, sizeof(error));
+	ret = rte_flow_push(port_id, queue_id, &error);
+	if (ret < 0) {
+		printf("Failed to push operations in the queue\n");
+		return -EINVAL;
+	}
+	printf("Queue #%u operations pushed\n", queue_id);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 62e874eaaf..24a43fd82c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 194b350932..4f1f908d4a 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3398,6 +3398,10 @@ following sections.
    flow queue {port_id} destroy {queue_id}
        [postpone {boolean}] rule {rule_id} [...]
 
+- Push enqueued operations::
+
+   flow push {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3616,6 +3620,23 @@ The usual error message is shown when a table cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+Pushing enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow push`` pushes all the outstanding enqueued operations
+to the underlying device immediately.
+It is bound to ``rte_flow_push()``::
+
+   flow push {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] operations pushed
+
+The usual error message is shown when operations cannot be pushed::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 10/11] app/testpmd: add flow queue pull operation
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (8 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-23  3:02               ` [PATCH v10 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
  2022-02-24 13:07               ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Ferruh Yigit
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_pull API.
Provide the command line interface for pulling operations results.
Usage example: flow pull 0 queue 0

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 56 +++++++++++++++-
 app/test-pmd/config.c                       | 74 +++++++++++++--------
 app/test-pmd/testpmd.h                      |  1 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 25 +++++++
 4 files changed, 127 insertions(+), 29 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index af36975cdf..d4b72724e6 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -95,6 +95,7 @@ enum index {
 	FLEX,
 	QUEUE,
 	PUSH,
+	PULL,
 
 	/* Flex arguments */
 	FLEX_ITEM_INIT,
@@ -142,6 +143,9 @@ enum index {
 	/* Push arguments. */
 	PUSH_QUEUE,
 
+	/* Pull arguments. */
+	PULL_QUEUE,
+
 	/* Table arguments. */
 	TABLE_CREATE,
 	TABLE_DESTROY,
@@ -2259,6 +2263,9 @@ static int parse_qo_destroy(struct context *, const struct token *,
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
+static int parse_pull(struct context *, const struct token *,
+		      const char *, unsigned int,
+		      void *, unsigned int);
 static int parse_tunnel(struct context *, const struct token *,
 			const char *, unsigned int,
 			void *, unsigned int);
@@ -2538,7 +2545,8 @@ static const struct token token_list[] = {
 			      TUNNEL,
 			      FLEX,
 			      QUEUE,
-			      PUSH)),
+			      PUSH,
+			      PULL)),
 		.call = parse_init,
 	},
 	/* Top-level command. */
@@ -2934,6 +2942,21 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 	},
 	/* Top-level command. */
+	[PULL] = {
+		.name = "pull",
+		.help = "pull flow operations results",
+		.next = NEXT(NEXT_ENTRY(PULL_QUEUE), NEXT_ENTRY(COMMON_PORT_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, port)),
+		.call = parse_pull,
+	},
+	/* Sub-level commands. */
+	[PULL_QUEUE] = {
+		.name = "queue",
+		.help = "specify queue id",
+		.next = NEXT(NEXT_ENTRY(END), NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+	},
+	/* Top-level command. */
 	[INDIRECT_ACTION] = {
 		.name = "indirect_action",
 		.type = "{command} {port_id} [{arg} [...]]",
@@ -8786,6 +8809,34 @@ parse_push(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for pull command. */
+static int
+parse_pull(struct context *ctx, const struct token *token,
+	   const char *str, unsigned int len,
+	   void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != PULL)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.vc.data = (uint8_t *)out + size;
+	}
+	return len;
+}
+
 static int
 parse_flex(struct context *ctx, const struct token *token,
 	     const char *str, unsigned int len,
@@ -10174,6 +10225,9 @@ cmd_flow_parsed(const struct buffer *in)
 	case PUSH:
 		port_queue_flow_push(in->port, in->queue);
 		break;
+	case PULL:
+		port_queue_flow_pull(in->port, in->queue);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 9ffb7d88dc..158d1b38a8 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2469,14 +2469,12 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		       const struct rte_flow_action *actions)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
-	struct rte_flow_op_result comp = { 0 };
 	struct rte_flow *flow;
 	struct rte_port *port;
 	struct port_flow *pf;
 	struct port_table *pt;
 	uint32_t id = 0;
 	bool found;
-	int ret = 0;
 	struct rte_flow_error error = { RTE_FLOW_ERROR_TYPE_NONE, NULL, NULL };
 	struct rte_flow_action_age *age = age_action_get(actions);
 
@@ -2539,16 +2537,6 @@ port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 		return port_flow_complain(&error);
 	}
 
-	while (ret == 0) {
-		/* Poisoning to make sure PMDs update it in case of error. */
-		memset(&error, 0x22, sizeof(error));
-		ret = rte_flow_pull(port_id, queue_id, &comp, 1, &error);
-		if (ret < 0) {
-			printf("Failed to pull queue\n");
-			return -EINVAL;
-		}
-	}
-
 	pf->next = port->flow_list;
 	pf->id = id;
 	pf->flow = flow;
@@ -2563,7 +2551,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			bool postpone, uint32_t n, const uint32_t *rule)
 {
 	struct rte_flow_op_attr op_attr = { .postpone = postpone };
-	struct rte_flow_op_result comp = { 0 };
 	struct rte_port *port;
 	struct port_flow **tmp;
 	uint32_t c = 0;
@@ -2599,21 +2586,6 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 				ret = port_flow_complain(&error);
 				continue;
 			}
-
-			while (ret == 0) {
-				/*
-				 * Poisoning to make sure PMD
-				 * update it in case of error.
-				 */
-				memset(&error, 0x44, sizeof(error));
-				ret = rte_flow_pull(port_id, queue_id,
-						    &comp, 1, &error);
-				if (ret < 0) {
-					printf("Failed to pull queue\n");
-					return -EINVAL;
-				}
-			}
-
 			printf("Flow rule #%u destruction enqueued\n", pf->id);
 			*tmp = pf->next;
 			free(pf);
@@ -2654,6 +2626,52 @@ port_queue_flow_push(portid_t port_id, queueid_t queue_id)
 	return ret;
 }
 
+/** Pull queue operation results from the queue. */
+int
+port_queue_flow_pull(portid_t port_id, queueid_t queue_id)
+{
+	struct rte_port *port;
+	struct rte_flow_op_result *res;
+	struct rte_flow_error error;
+	int ret = 0;
+	int success = 0;
+	int i;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	res = calloc(port->queue_sz, sizeof(struct rte_flow_op_result));
+	if (!res) {
+		printf("Failed to allocate memory for pulled results\n");
+		return -ENOMEM;
+	}
+
+	memset(&error, 0x66, sizeof(error));
+	ret = rte_flow_pull(port_id, queue_id, res,
+				 port->queue_sz, &error);
+	if (ret < 0) {
+		printf("Failed to pull operation results\n");
+		free(res);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < ret; i++) {
+		if (res[i].status == RTE_FLOW_OP_SUCCESS)
+			success++;
+	}
+	printf("Queue #%u pulled %u operations (%u failed, %u succeeded)\n",
+	       queue_id, ret, ret - success, success);
+	free(res);
+	return ret;
+}
+
 /** Create flow rule. */
 int
 port_flow_create(portid_t port_id,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 24a43fd82c..5ea2408a0b 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -941,6 +941,7 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
+int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
 		       const struct rte_flow_attr *attr,
 		       const struct rte_flow_item *pattern,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 4f1f908d4a..5080ddb256 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3402,6 +3402,10 @@ following sections.
 
    flow push {port_id} queue {queue_id}
 
+- Pull all operations results from a queue::
+
+   flow pull {port_id} queue {queue_id}
+
 - Create a flow rule::
 
    flow create {port_id}
@@ -3637,6 +3641,23 @@ The usual error message is shown when operations cannot be pushed::
 
    Caught error type [...] ([...]): [...]
 
+Pulling flow operations results
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow pull`` asks the underlying device about flow queue operations
+results and returns all the processed (successfully or not) operations.
+It is bound to ``rte_flow_pull()``::
+
+   flow pull {port_id} queue {queue_id}
+
+If successful, it will show::
+
+   Queue #[...] pulled [...] operations ([...] failed, [...] succeeded)
+
+The usual error message is shown when operations results cannot be pulled::
+
+   Caught error type [...] ([...]): [...]
+
 Creating a tunnel stub for offload
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -3767,6 +3788,8 @@ Otherwise it will show an error message of the form::
 This command uses the same pattern items and actions as ``flow create``,
 their format is described in `Creating flow rules`_.
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Attributes
 ^^^^^^^^^^
 
@@ -4508,6 +4531,8 @@ message is shown when a rule cannot be destroyed::
 
    Caught error type [...] ([...]): [...]
 
+``flow queue pull`` must be called to retrieve the operation status.
+
 Querying flow rules
 ~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* [PATCH v10 11/11] app/testpmd: add async indirect actions operations
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (9 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
@ 2022-02-23  3:02               ` Alexander Kozyrev
  2022-02-24 13:07               ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Ferruh Yigit
  11 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-02-23  3:02 UTC (permalink / raw)
  To: dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, ferruh.yigit,
	mohammad.abdul.awal, qi.z.zhang, jerinj, ajit.khaparde,
	bruce.richardson

Add testpmd support for the rte_flow_async_action_handle API.
Provide the command line interface for operations dequeue.
Usage example:
  flow queue 0 indirect_action 0 create action_id 9
    ingress postpone yes action rss / end
  flow queue 0 indirect_action 0 update action_id 9
    action queue index 0 / end
  flow queue 0 indirect_action 0 destroy action_id 9

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 276 ++++++++++++++++++++
 app/test-pmd/config.c                       | 131 ++++++++++
 app/test-pmd/testpmd.h                      |  10 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  65 +++++
 4 files changed, 482 insertions(+)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d4b72724e6..b5f1191e55 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -127,6 +127,7 @@ enum index {
 	/* Queue arguments. */
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 
 	/* Queue create arguments. */
 	QUEUE_CREATE_ID,
@@ -140,6 +141,26 @@ enum index {
 	QUEUE_DESTROY_ID,
 	QUEUE_DESTROY_POSTPONE,
 
+	/* Queue indirect action arguments */
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+
+	/* Queue indirect action create arguments */
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+
+	/* Queue indirect action update arguments */
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+
+	/* Queue indirect action destroy arguments */
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+
 	/* Push arguments. */
 	PUSH_QUEUE,
 
@@ -1135,6 +1156,7 @@ static const enum index next_table_destroy_attr[] = {
 static const enum index next_queue_subcmd[] = {
 	QUEUE_CREATE,
 	QUEUE_DESTROY,
+	QUEUE_INDIRECT_ACTION,
 	ZERO,
 };
 
@@ -1144,6 +1166,36 @@ static const enum index next_queue_destroy_attr[] = {
 	ZERO,
 };
 
+static const enum index next_qia_subcmd[] = {
+	QUEUE_INDIRECT_ACTION_CREATE,
+	QUEUE_INDIRECT_ACTION_UPDATE,
+	QUEUE_INDIRECT_ACTION_DESTROY,
+	ZERO,
+};
+
+static const enum index next_qia_create_attr[] = {
+	QUEUE_INDIRECT_ACTION_CREATE_ID,
+	QUEUE_INDIRECT_ACTION_INGRESS,
+	QUEUE_INDIRECT_ACTION_EGRESS,
+	QUEUE_INDIRECT_ACTION_TRANSFER,
+	QUEUE_INDIRECT_ACTION_CREATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_update_attr[] = {
+	QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE,
+	QUEUE_INDIRECT_ACTION_SPEC,
+	ZERO,
+};
+
+static const enum index next_qia_destroy_attr[] = {
+	QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE,
+	QUEUE_INDIRECT_ACTION_DESTROY_ID,
+	END,
+	ZERO,
+};
+
 static const enum index next_ia_create_attr[] = {
 	INDIRECT_ACTION_CREATE_ID,
 	INDIRECT_ACTION_INGRESS,
@@ -2260,6 +2312,12 @@ static int parse_qo(struct context *, const struct token *,
 static int parse_qo_destroy(struct context *, const struct token *,
 			    const char *, unsigned int,
 			    void *, unsigned int);
+static int parse_qia(struct context *, const struct token *,
+		     const char *, unsigned int,
+		     void *, unsigned int);
+static int parse_qia_destroy(struct context *, const struct token *,
+			     const char *, unsigned int,
+			     void *, unsigned int);
 static int parse_push(struct context *, const struct token *,
 		      const char *, unsigned int,
 		      void *, unsigned int);
@@ -2873,6 +2931,13 @@ static const struct token token_list[] = {
 		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
 		.call = parse_qo_destroy,
 	},
+	[QUEUE_INDIRECT_ACTION] = {
+		.name = "indirect_action",
+		.help = "queue indirect actions",
+		.next = NEXT(next_qia_subcmd, NEXT_ENTRY(COMMON_QUEUE_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, queue)),
+		.call = parse_qia,
+	},
 	/* Queue  arguments. */
 	[QUEUE_TEMPLATE_TABLE] = {
 		.name = "template table",
@@ -2926,6 +2991,90 @@ static const struct token token_list[] = {
 					    args.destroy.rule)),
 		.call = parse_qo_destroy,
 	},
+	/* Queue indirect action arguments */
+	[QUEUE_INDIRECT_ACTION_CREATE] = {
+		.name = "create",
+		.help = "create indirect action",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_UPDATE] = {
+		.name = "update",
+		.help = "update indirect action",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY] = {
+		.name = "destroy",
+		.help = "destroy indirect action",
+		.next = NEXT(next_qia_destroy_attr),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action destroy arguments. */
+	[QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone destroy operation",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_DESTROY_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to destroy",
+		.next = NEXT(next_qia_destroy_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY_PTR(struct buffer,
+					    args.ia_destroy.action_id)),
+		.call = parse_qia_destroy,
+	},
+	/* Indirect action update arguments. */
+	[QUEUE_INDIRECT_ACTION_UPDATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone update operation",
+		.next = NEXT(next_qia_update_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	/* Indirect action create arguments. */
+	[QUEUE_INDIRECT_ACTION_CREATE_ID] = {
+		.name = "action_id",
+		.help = "specify an indirect action id to create",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_INDIRECT_ACTION_ID)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, args.vc.attr.group)),
+	},
+	[QUEUE_INDIRECT_ACTION_INGRESS] = {
+		.name = "ingress",
+		.help = "affect rule to ingress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_EGRESS] = {
+		.name = "egress",
+		.help = "affect rule to egress",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_TRANSFER] = {
+		.name = "transfer",
+		.help = "affect rule to transfer",
+		.next = NEXT(next_qia_create_attr),
+		.call = parse_qia,
+	},
+	[QUEUE_INDIRECT_ACTION_CREATE_POSTPONE] = {
+		.name = "postpone",
+		.help = "postpone create operation",
+		.next = NEXT(next_qia_create_attr,
+			     NEXT_ENTRY(COMMON_BOOLEAN)),
+		.args = ARGS(ARGS_ENTRY(struct buffer, postpone)),
+	},
+	[QUEUE_INDIRECT_ACTION_SPEC] = {
+		.name = "action",
+		.help = "specify action to create indirect handle",
+		.next = NEXT(next_action),
+	},
 	/* Top-level command. */
 	[PUSH] = {
 		.name = "push",
@@ -6501,6 +6650,110 @@ parse_ia_destroy(struct context *ctx, const struct token *token,
 	return len;
 }
 
+/** Parse tokens for indirect action commands. */
+static int
+parse_qia(struct context *ctx, const struct token *token,
+	  const char *str, unsigned int len,
+	  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command) {
+		if (ctx->curr != QUEUE)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->args.vc.data = (uint8_t *)out + size;
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		out->args.vc.actions =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		out->args.vc.attr.group = UINT32_MAX;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_EGRESS:
+		out->args.vc.attr.egress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_INGRESS:
+		out->args.vc.attr.ingress = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_TRANSFER:
+		out->args.vc.attr.transfer = 1;
+		return len;
+	case QUEUE_INDIRECT_ACTION_CREATE_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
+/** Parse tokens for indirect action destroy command. */
+static int
+parse_qia_destroy(struct context *ctx, const struct token *token,
+		  const char *str, unsigned int len,
+		  void *buf, unsigned int size)
+{
+	struct buffer *out = buf;
+	uint32_t *action_id;
+
+	/* Token name must match. */
+	if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+		return -1;
+	/* Nothing else to do if there is no buffer. */
+	if (!out)
+		return len;
+	if (!out->command || out->command == QUEUE) {
+		if (ctx->curr != QUEUE_INDIRECT_ACTION_DESTROY)
+			return -1;
+		if (sizeof(*out) > size)
+			return -1;
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		out->args.ia_destroy.action_id =
+			(void *)RTE_ALIGN_CEIL((uintptr_t)(out + 1),
+					       sizeof(double));
+		return len;
+	}
+	switch (ctx->curr) {
+	case QUEUE_INDIRECT_ACTION:
+		out->command = ctx->curr;
+		ctx->objdata = 0;
+		ctx->object = out;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_ID:
+		action_id = out->args.ia_destroy.action_id
+				+ out->args.ia_destroy.action_id_n++;
+		if ((uint8_t *)action_id > (uint8_t *)out + size)
+			return -1;
+		ctx->objdata = 0;
+		ctx->object = action_id;
+		ctx->objmask = NULL;
+		return len;
+	case QUEUE_INDIRECT_ACTION_DESTROY_POSTPONE:
+		return len;
+	default:
+		return -1;
+	}
+}
+
 /** Parse tokens for meter policy action commands. */
 static int
 parse_mp(struct context *ctx, const struct token *token,
@@ -10228,6 +10481,29 @@ cmd_flow_parsed(const struct buffer *in)
 	case PULL:
 		port_queue_flow_pull(in->port, in->queue);
 		break;
+	case QUEUE_INDIRECT_ACTION_CREATE:
+		port_queue_action_handle_create(
+				in->port, in->queue, in->postpone,
+				in->args.vc.attr.group,
+				&((const struct rte_flow_indir_action_conf) {
+					.ingress = in->args.vc.attr.ingress,
+					.egress = in->args.vc.attr.egress,
+					.transfer = in->args.vc.attr.transfer,
+				}),
+				in->args.vc.actions);
+		break;
+	case QUEUE_INDIRECT_ACTION_DESTROY:
+		port_queue_action_handle_destroy(in->port,
+					   in->queue, in->postpone,
+					   in->args.ia_destroy.action_id_n,
+					   in->args.ia_destroy.action_id);
+		break;
+	case QUEUE_INDIRECT_ACTION_UPDATE:
+		port_queue_action_handle_update(in->port,
+						in->queue, in->postpone,
+						in->args.vc.attr.group,
+						in->args.vc.actions);
+		break;
 	case INDIRECT_ACTION_CREATE:
 		port_action_handle_create(
 				in->port, in->args.vc.attr.group,
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 158d1b38a8..cc8e7aa138 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2598,6 +2598,137 @@ port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 	return ret;
 }
 
+/** Enqueue indirect action create operation. */
+int
+port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+				bool postpone, uint32_t id,
+				const struct rte_flow_indir_action_conf *conf,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action *pia;
+	int ret;
+	struct rte_flow_error error;
+
+	ret = action_alloc(port_id, id, &pia);
+	if (ret)
+		return ret;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (action->type == RTE_FLOW_ACTION_TYPE_AGE) {
+		struct rte_flow_action_age *age =
+			(struct rte_flow_action_age *)(uintptr_t)(action->conf);
+
+		pia->age_type = ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION;
+		age->context = &pia->age_type;
+	}
+	/* Poisoning to make sure PMDs update it in case of error. */
+	memset(&error, 0x88, sizeof(error));
+	pia->handle = rte_flow_async_action_handle_create(port_id, queue_id,
+					&attr, conf, action, NULL, &error);
+	if (!pia->handle) {
+		uint32_t destroy_id = pia->id;
+		port_queue_action_handle_destroy(port_id, queue_id,
+						 postpone, 1, &destroy_id);
+		return port_flow_complain(&error);
+	}
+	pia->type = action->type;
+	printf("Indirect action #%u creation queued\n", pia->id);
+	return 0;
+}
+
+/** Enqueue indirect action destroy operation. */
+int
+port_queue_action_handle_destroy(portid_t port_id,
+				 uint32_t queue_id, bool postpone,
+				 uint32_t n, const uint32_t *actions)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct port_indirect_action **tmp;
+	uint32_t c = 0;
+	int ret = 0;
+
+	if (port_id_is_invalid(port_id, ENABLED_WARN) ||
+	    port_id == (portid_t)RTE_PORT_ALL)
+		return -EINVAL;
+	port = &ports[port_id];
+
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	tmp = &port->actions_list;
+	while (*tmp) {
+		uint32_t i;
+
+		for (i = 0; i != n; ++i) {
+			struct rte_flow_error error;
+			struct port_indirect_action *pia = *tmp;
+
+			if (actions[i] != pia->id)
+				continue;
+			/*
+			 * Poisoning to make sure PMDs update it in case
+			 * of error.
+			 */
+			memset(&error, 0x99, sizeof(error));
+
+			if (pia->handle &&
+			    rte_flow_async_action_handle_destroy(port_id,
+				queue_id, &attr, pia->handle, NULL, &error)) {
+				ret = port_flow_complain(&error);
+				continue;
+			}
+			*tmp = pia->next;
+			printf("Indirect action #%u destruction queued\n",
+			       pia->id);
+			free(pia);
+			break;
+		}
+		if (i == n)
+			tmp = &(*tmp)->next;
+		++c;
+	}
+	return ret;
+}
+
+/** Enqueue indirect action update operation. */
+int
+port_queue_action_handle_update(portid_t port_id,
+				uint32_t queue_id, bool postpone, uint32_t id,
+				const struct rte_flow_action *action)
+{
+	const struct rte_flow_op_attr attr = { .postpone = postpone};
+	struct rte_port *port;
+	struct rte_flow_error error;
+	struct rte_flow_action_handle *action_handle;
+
+	action_handle = port_action_handle_get_by_id(port_id, id);
+	if (!action_handle)
+		return -EINVAL;
+
+	port = &ports[port_id];
+	if (queue_id >= port->queue_nb) {
+		printf("Queue #%u is invalid\n", queue_id);
+		return -EINVAL;
+	}
+
+	if (rte_flow_async_action_handle_update(port_id, queue_id, &attr,
+				    action_handle, action, NULL, &error)) {
+		return port_flow_complain(&error);
+	}
+	printf("Indirect action #%u update queued\n", id);
+	return 0;
+}
+
 /** Push all the queue operations in the queue to the NIC. */
 int
 port_queue_flow_push(portid_t port_id, queueid_t queue_id)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 5ea2408a0b..31f766c965 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -940,6 +940,16 @@ int port_queue_flow_create(portid_t port_id, queueid_t queue_id,
 			   const struct rte_flow_action *actions);
 int port_queue_flow_destroy(portid_t port_id, queueid_t queue_id,
 			    bool postpone, uint32_t n, const uint32_t *rule);
+int port_queue_action_handle_create(portid_t port_id, uint32_t queue_id,
+			bool postpone, uint32_t id,
+			const struct rte_flow_indir_action_conf *conf,
+			const struct rte_flow_action *action);
+int port_queue_action_handle_destroy(portid_t port_id,
+				     uint32_t queue_id, bool postpone,
+				     uint32_t n, const uint32_t *action);
+int port_queue_action_handle_update(portid_t port_id, uint32_t queue_id,
+				    bool postpone, uint32_t id,
+				    const struct rte_flow_action *action);
 int port_queue_flow_push(portid_t port_id, queueid_t queue_id);
 int port_queue_flow_pull(portid_t port_id, queueid_t queue_id);
 int port_flow_validate(portid_t port_id,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 5080ddb256..1083c6d538 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -4792,6 +4792,31 @@ port 0::
 	testpmd> flow indirect_action 0 create action_id \
 		ingress action rss queues 0 1 end / end
 
+Enqueueing creation of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action create`` adds the creation operation of an
+indirect action to a queue. It is bound to
+``rte_flow_async_action_handle_create()``::
+
+   flow queue {port_id} indirect_action {queue_id} create
+      [postpone {boolean}] action_id {indirect_action_id}
+      [ingress] [egress] [transfer] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] creation queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+This command uses the same parameters as ``flow indirect_action create``,
+described in `Creating indirect actions`_.
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Updating indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4821,6 +4846,25 @@ Update indirect rss action having id 100 on port 0 with rss to queues 0 and 3
 
    testpmd> flow indirect_action 0 update 100 action rss queues 0 3 end / end
 
+Enqueueing update of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action update`` adds the update operation for an
+indirect action to a queue. It is bound to
+``rte_flow_async_action_handle_update()``::
+
+   flow queue {port_id} indirect_action {queue_id} update
+      {indirect_action_id} [postpone {boolean}] action {action} / end
+
+If successful, it will show::
+
+   Indirect action #[...] update queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Destroying indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -4844,6 +4888,27 @@ Destroy indirect actions having id 100 & 101::
 
    testpmd> flow indirect_action 0 destroy action_id 100 action_id 101
 
+Enqueueing destruction of indirect actions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow queue indirect_action destroy`` adds to a queue a destruction
+operation for one or more indirect actions, identified by the indirect
+action IDs returned by ``flow queue {port_id} indirect_action
+{queue_id} create``. It is bound to
+``rte_flow_async_action_handle_destroy()``::
+
+   flow queue {port_id} indirect_action {queue_id} destroy
+      [postpone {boolean}] action_id {indirect_action_id} [...]
+
+If successful, it will show::
+
+   Indirect action #[...] destruction queued
+
+Otherwise it will show an error message of the form::
+
+   Caught error type [...] ([...]): [...]
+
+``flow queue pull`` must be called to retrieve the operation status.
+
 Query indirect actions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-- 
2.18.2


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v10 01/11] ethdev: introduce flow engine configuration
  2022-02-23  3:02               ` [PATCH v10 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
@ 2022-02-24  8:22                 ` Andrew Rybchenko
  0 siblings, 0 replies; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-24  8:22 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/23/22 06:02, Alexander Kozyrev wrote:
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
> 
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> function allows to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones or
> fail in case the requested configuration is not supported.
> 
> The rte_flow_info_get() is available to retrieve the information about
> supported pre-configurable resources. Both these functions must be called
> before any other usage of the flow API engine.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
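
A minimal sketch of the two-step initialization described above
(illustrative only; the helper name and the hint values are hypothetical,
and a PMD may ignore unused hints or fail on unsupported ones):

```c
#include <rte_flow.h>

/* Query pre-configurable resources, then pre-allocate them before
 * any other use of the flow API engine. */
static int
setup_flow_engine(uint16_t port_id, uint16_t nb_queues)
{
	struct rte_flow_port_info port_info;
	struct rte_flow_queue_info queue_info;
	struct rte_flow_error error;
	/* Hints about the resources the application expects to use. */
	const struct rte_flow_port_attr port_attr = {
		.nb_counters = 1024,	/* example hint */
	};
	const struct rte_flow_queue_attr queue_attr = { .size = 64 };
	const struct rte_flow_queue_attr *attr_list[nb_queues];
	int ret;

	/* Retrieve what the PMD can pre-configure. */
	ret = rte_flow_info_get(port_id, &port_info, &queue_info, &error);
	if (ret < 0)
		return ret;
	for (uint16_t i = 0; i < nb_queues; i++)
		attr_list[i] = &queue_attr;
	/* Pre-allocate resources; must precede any other flow API call. */
	return rte_flow_configure(port_id, &port_attr, nb_queues,
				  attr_list, &error);
}
```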

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v10 02/11] ethdev: add flow item/action templates
  2022-02-23  3:02               ` [PATCH v10 02/11] ethdev: add flow item/action templates Alexander Kozyrev
@ 2022-02-24  8:34                 ` Andrew Rybchenko
  0 siblings, 0 replies; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-24  8:34 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/23/22 06:02, Alexander Kozyrev wrote:
> Treating every single flow rule as a completely independent and separate
> entity negatively impacts the flow rules insertion rate. Oftentimes in an
> application, many flow rules share a common structure (the same item mask
> and/or action list) so they can be grouped and classified together.
> This knowledge may be used as a source of optimization by a PMD/HW.
> 
> The pattern template defines common matching fields (the item mask) without
> values. The actions template holds a list of action types that will be used
> together in the same rule. The specific values for items and actions will
> be given only during the rule creation.
> 
> A table combines pattern and actions templates along with shared flow rule
> attributes (group ID, priority and traffic direction). This way a PMD/HW
> can prepare all the resources needed for efficient flow rules creation in
> the datapath. To avoid any hiccups due to memory reallocation, the maximum
> number of flow rules is defined at the table creation time.
> 
> The flow rule creation is done by selecting a table, a pattern template
> and an actions template (which are bound to the table), and setting unique
> values for the items and actions.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
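
A hedged sketch of the template/table setup described above (the helper
name, attribute values and table size are hypothetical; the pattern holds
item masks only, while the actions/masks pair fixes the action types):

```c
#include <stddef.h>
#include <rte_flow.h>

/* Sketch: one pattern template (mask only), one actions template,
 * combined into a table sized for the expected number of rules. */
static struct rte_flow_template_table *
create_table(uint16_t port_id, const struct rte_flow_item pattern[],
	     const struct rte_flow_action actions[],
	     const struct rte_flow_action masks[])
{
	struct rte_flow_error error;
	const struct rte_flow_pattern_template_attr pt_attr = {
		.ingress = 1,
	};
	const struct rte_flow_actions_template_attr at_attr = {
		.ingress = 1,
	};
	const struct rte_flow_template_table_attr tbl_attr = {
		.flow_attr = { .group = 1, .ingress = 1 },
		.nb_flows = 1 << 16,	/* maximum rules, fixed at creation */
	};
	struct rte_flow_pattern_template *pt;
	struct rte_flow_actions_template *at;

	pt = rte_flow_pattern_template_create(port_id, &pt_attr,
					      pattern, &error);
	at = rte_flow_actions_template_create(port_id, &at_attr,
					      actions, masks, &error);
	if (pt == NULL || at == NULL)
		return NULL;
	/* Bind both templates to the table so the PMD/HW can prepare
	 * all resources for fast rule insertion. */
	return rte_flow_template_table_create(port_id, &tbl_attr,
					      &pt, 1, &at, 1, &error);
}
```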


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v10 03/11] ethdev: bring in async queue-based flow rules operations
  2022-02-23  3:02               ` [PATCH v10 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
@ 2022-02-24  8:35                 ` Andrew Rybchenko
  0 siblings, 0 replies; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-24  8:35 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/23/22 06:02, Alexander Kozyrev wrote:
> A new, faster, queue-based flow rules management mechanism is needed for
> applications offloading rules inside the datapath. This asynchronous
> and lockless mechanism frees the CPU for further packet processing and
> reduces the performance impact of the flow rules creation/destruction
> on the datapath. Note that queues are not thread-safe and the queue
> should be accessed from the same thread for all queue operations.
> It is the responsibility of the app to sync the queue functions in case
> of multi-threaded access to the same queue.
> 
> The rte_flow_async_create() function enqueues a flow creation to the
> requested queue. It benefits from already configured resources and sets
> unique values on top of item and action templates. A flow rule is enqueued
> on the specified flow queue and offloaded asynchronously to the hardware.
> The function returns immediately to spare CPU for further packet
> processing. The application must invoke the rte_flow_pull() function
> to complete the flow rule operation offloading, to clear the queue, and to
> receive the operation status. The rte_flow_async_destroy() function
> enqueues a flow destruction to the requested queue.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
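
The create/push/pull flow described above can be sketched as follows
(illustrative only — a real datapath would not busy-poll per rule, but
batch enqueued operations and pull results periodically):

```c
#include <rte_flow.h>

/* Enqueue one rule on a flow queue, push it to the HW and poll for
 * the operation result. */
static int
insert_rule(uint16_t port_id, uint32_t queue_id,
	    struct rte_flow_template_table *table,
	    const struct rte_flow_item pattern[],
	    const struct rte_flow_action actions[])
{
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow_op_result result;
	struct rte_flow_error error;
	struct rte_flow *flow;
	int n;

	/* Returns immediately; sets unique values on top of the
	 * pattern/actions templates bound to the table. */
	flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
				     pattern, 0 /* pattern tmpl index */,
				     actions, 0 /* actions tmpl index */,
				     NULL /* user_data */, &error);
	if (flow == NULL)
		return -1;
	/* Push pending operations to the NIC, then pull the result. */
	rte_flow_push(port_id, queue_id, &error);
	do {
		n = rte_flow_pull(port_id, queue_id, &result, 1, &error);
	} while (n == 0);
	return (n == 1 && result.status == RTE_FLOW_OP_SUCCESS) ? 0 : -1;
}
```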


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v10 04/11] ethdev: bring in async indirect actions operations
  2022-02-23  3:02               ` [PATCH v10 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
@ 2022-02-24  8:37                 ` Andrew Rybchenko
  0 siblings, 0 replies; 220+ messages in thread
From: Andrew Rybchenko @ 2022-02-24  8:37 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, ferruh.yigit, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/23/22 06:02, Alexander Kozyrev wrote:
> Queue-based flow rules management mechanism is suitable
> not only for flow rules creation/destruction, but also
> for speeding up other types of Flow API management.
> Indirect action object operations may be executed
> asynchronously as well. Provide async versions for all
> indirect action operations, namely:
> rte_flow_async_action_handle_create,
> rte_flow_async_action_handle_destroy and
> rte_flow_async_action_handle_update.
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>


^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v10 00/11] ethdev: datapath-focused flow rules management
  2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
                                 ` (10 preceding siblings ...)
  2022-02-23  3:02               ` [PATCH v10 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
@ 2022-02-24 13:07               ` Ferruh Yigit
  2022-02-24 13:13                 ` Ferruh Yigit
  11 siblings, 1 reply; 220+ messages in thread
From: Ferruh Yigit @ 2022-02-24 13:07 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/23/2022 3:02 AM, Alexander Kozyrev wrote:
> Three major changes to a generic RTE Flow API were implemented in order
> to speed up flow rule insertion/destruction and adapt the API to the
> needs of datapath-focused flow rules management applications:
> 
> 1. Pre-configuration hints.
> Application may give us some hints on what type of resources are needed.
> Introduce the configuration routine to prepare all the needed resources
> inside a PMD/HW before any flow rules are created at the init stage.
> 
> 2. Flow grouping using templates.
> Use the knowledge about which flow rules are to be used in an application
> and prepare item and action templates for them in advance. Group flow rules
> with common patterns and actions together for better resource management.
> 
> 3. Queue-based flow management.
> Perform flow rule insertion/destruction asynchronously to spare the datapath
> from blocking on RTE Flow API and allow it to continue with packet processing.
> Enqueue flow rules operations and poll for the results later.
> 
> testpmd examples are part of the patch series. PMD changes will follow.
> 
> RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> 
> ---
> v10: removed missed check in async API
> 
> v9:
> - changed sanity checks order
> - added reconfiguration explanation
> - added remarks on mandatory direction
> - renamed operation attributes
> - removed all checks in async API
> - removed all errno descriptions
> 
> v8: fixed documentation indentation
> 
> v7:
> - added sanity checks and device state validation
> - added flow engine state validation
> - added ingress/egress/transfer attributes to templates
> - moved user_data to a parameter list
> - renamed asynchronous functions from "_q_" to "_async_"
> - created a separate commit for indirect actions
> 
> v6: addressed more review comments
> - fixed typos
> - rewrote code snippets
> - add a way to get queue size
> - renamed port/queue attributes parameters
> 
> v5: changed titles for testpmd commits
> 
> v4:
> - removed structures versioning
> - introduced new rte_flow_port_info structure for rte_flow_info_get API
> - renamed rte_flow_table_create to rte_flow_template_table_create
> 
> v3: addressed review comments and updated documentation
> - added API to get info about pre-configurable resources
> - renamed rte_flow_item_template to rte_flow_pattern_template
> - renamed drain operation attribute to postpone
> - renamed rte_flow_q_drain to rte_flow_q_push
> - renamed rte_flow_q_dequeue to rte_flow_q_pull
> 
> v2: fixed patch series thread
> 
> Alexander Kozyrev (11):
>    ethdev: introduce flow engine configuration
>    ethdev: add flow item/action templates
>    ethdev: bring in async queue-based flow rules operations
>    ethdev: bring in async indirect actions operations
>    app/testpmd: add flow engine configuration
>    app/testpmd: add flow template management
>    app/testpmd: add flow table management
>    app/testpmd: add async flow create/destroy operations
>    app/testpmd: add flow queue push operation
>    app/testpmd: add flow queue pull operation
>    app/testpmd: add async indirect actions operations

Series applied to dpdk-next-net/main, thanks.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v10 00/11] ethdev: datapath-focused flow rules management
  2022-02-24 13:07               ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Ferruh Yigit
@ 2022-02-24 13:13                 ` Ferruh Yigit
  2022-02-24 13:14                   ` Raslan Darawsheh
  0 siblings, 1 reply; 220+ messages in thread
From: Ferruh Yigit @ 2022-02-24 13:13 UTC (permalink / raw)
  To: Alexander Kozyrev, dev, Raslan Darawsheh
  Cc: orika, thomas, ivan.malov, andrew.rybchenko, mohammad.abdul.awal,
	qi.z.zhang, jerinj, ajit.khaparde, bruce.richardson

On 2/24/2022 1:07 PM, Ferruh Yigit wrote:
> On 2/23/2022 3:02 AM, Alexander Kozyrev wrote:
>> Three major changes to a generic RTE Flow API were implemented in order
>> to speed up flow rule insertion/destruction and adapt the API to the
>> needs of datapath-focused flow rules management applications:
>>
>> 1. Pre-configuration hints.
>> Application may give us some hints on what type of resources are needed.
>> Introduce the configuration routine to prepare all the needed resources
>> inside a PMD/HW before any flow rules are created at the init stage.
>>
>> 2. Flow grouping using templates.
>> Use the knowledge about which flow rules are to be used in an application
>> and prepare item and action templates for them in advance. Group flow rules
>> with common patterns and actions together for better resource management.
>>
>> 3. Queue-based flow management.
>> Perform flow rule insertion/destruction asynchronously to spare the datapath
>> from blocking on RTE Flow API and allow it to continue with packet processing.
>> Enqueue flow rules operations and poll for the results later.
>>
>> testpmd examples are part of the patch series. PMD changes will follow.
>>
>> RFC: https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
>>
>> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
>> Acked-by: Ori Kam <orika@nvidia.com>
>> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>>
>> ---
>> v10: removed missed check in async API
>>
>> v9:
>> - changed sanity checks order
>> - added reconfiguration explanation
>> - added remarks on mandatory direction
>> - renamed operation attributes
>> - removed all checks in async API
>> - removed all errno descriptions
>>
>> v8: fixed documentation indentation
>>
>> v7:
>> - added sanity checks and device state validation
>> - added flow engine state validation
>> - added ingress/egress/transfer attributes to templates
>> - moved user_data to a parameter list
>> - renamed asynchronous functions from "_q_" to "_async_"
>> - created a separate commit for indirect actions
>>
>> v6: addressed more review comments
>> - fixed typos
>> - rewrote code snippets
>> - added a way to get queue size
>> - renamed port/queue attributes parameters
>>
>> v5: changed titles for testpmd commits
>>
>> v4:
>> - removed structures versioning
>> - introduced new rte_flow_port_info structure for rte_flow_info_get API
>> - renamed rte_flow_table_create to rte_flow_template_table_create
>>
>> v3: addressed review comments and updated documentation
>> - added API to get info about pre-configurable resources
>> - renamed rte_flow_item_template to rte_flow_pattern_template
>> - renamed drain operation attribute to postpone
>> - renamed rte_flow_q_drain to rte_flow_q_push
>> - renamed rte_flow_q_dequeue to rte_flow_q_pull
>>
>> v2: fixed patch series thread
>>
>> Alexander Kozyrev (11):
>>    ethdev: introduce flow engine configuration
>>    ethdev: add flow item/action templates
>>    ethdev: bring in async queue-based flow rules operations
>>    ethdev: bring in async indirect actions operations
>>    app/testpmd: add flow engine configuration
>>    app/testpmd: add flow template management
>>    app/testpmd: add flow table management
>>    app/testpmd: add async flow create/destroy operations
>>    app/testpmd: add flow queue push operation
>>    app/testpmd: add flow queue pull operation
>>    app/testpmd: add async indirect actions operations
> 
> Series applied to dpdk-next-net/main, thanks.

+Raslan,

As the ethdev patches are merged, we can proceed with the driver ones.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v10 00/11] ethdev: datapath-focused flow rules management
  2022-02-24 13:13                 ` Ferruh Yigit
@ 2022-02-24 13:14                   ` Raslan Darawsheh
  0 siblings, 0 replies; 220+ messages in thread
From: Raslan Darawsheh @ 2022-02-24 13:14 UTC (permalink / raw)
  To: Ferruh Yigit, Alexander Kozyrev, dev
  Cc: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	ivan.malov, andrew.rybchenko, mohammad.abdul.awal, qi.z.zhang,
	jerinj, ajit.khaparde, bruce.richardson


> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Thursday, February 24, 2022 3:13 PM
> To: Alexander Kozyrev <akozyrev@nvidia.com>; dev@dpdk.org; Raslan
> Darawsheh <rasland@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>; ivan.malov@oktetlabs.ru;
> andrew.rybchenko@oktetlabs.ru; mohammad.abdul.awal@intel.com;
> qi.z.zhang@intel.com; jerinj@marvell.com; ajit.khaparde@broadcom.com;
> bruce.richardson@intel.com
> Subject: Re: [PATCH v10 00/11] ethdev: datapath-focused flow rules
> management
> 
> On 2/24/2022 1:07 PM, Ferruh Yigit wrote:
> > On 2/23/2022 3:02 AM, Alexander Kozyrev wrote:
> >> Three major changes to a generic RTE Flow API were implemented in
> >> order to speed up flow rule insertion/destruction and adapt the API
> >> to the needs of datapath-focused flow rules management applications:
> >>
> >> 1. Pre-configuration hints.
> >> Application may give us some hints on what type of resources are needed.
> >> Introduce the configuration routine to prepare all the needed
> >> resources inside a PMD/HW before any flow rules are created at the init stage.
> >>
> >> 2. Flow grouping using templates.
> >> Use the knowledge about which flow rules are to be used in an
> >> application and prepare item and action templates for them in
> >> advance. Group flow rules with common patterns and actions together for better resource management.
> >>
> >> 3. Queue-based flow management.
> >> Perform flow rule insertion/destruction asynchronously to spare the
> >> datapath from blocking on RTE Flow API and allow it to continue with packet processing.
> >> Enqueue flow rules operations and poll for the results later.
> >>
> >> testpmd examples are part of the patch series. PMD changes will follow.
> >>
> >>
> >> RFC:https://patchwork.dpdk.org/project/dpdk/cover/20211006044835.3936226-1-akozyrev@nvidia.com/
> >>
> >> Signed-off-by: Alexander Kozyrev<akozyrev@nvidia.com>
> >> Acked-by: Ori Kam<orika@nvidia.com>
> >> Acked-by: Ajit Khaparde<ajit.khaparde@broadcom.com>
> >>
> >> ---
> >> v10: removed missed check in async API
> >>
> >> v9:
> >> - changed sanity checks order
> >> - added reconfiguration explanation
> >> - added remarks on mandatory direction
> >> - renamed operation attributes
> >> - removed all checks in async API
> >> - removed all errno descriptions
> >>
> >> v8: fixed documentation indentation
> >>
> >> v7:
> >> - added sanity checks and device state validation
> >> - added flow engine state validation
> >> - added ingress/egress/transfer attributes to templates
> >> - moved user_data to a parameter list
> >> - renamed asynchronous functions from "_q_" to "_async_"
> >> - created a separate commit for indirect actions
> >>
> >> v6: addressed more review comments
> >> - fixed typos
> >> - rewrote code snippets
> >> - added a way to get queue size
> >> - renamed port/queue attributes parameters
> >>
> >> v5: changed titles for testpmd commits
> >>
> >> v4:
> >> - removed structures versioning
> >> - introduced new rte_flow_port_info structure for rte_flow_info_get
> >> API
> >> - renamed rte_flow_table_create to rte_flow_template_table_create
> >>
> >> v3: addressed review comments and updated documentation
> >> - added API to get info about pre-configurable resources
> >> - renamed rte_flow_item_template to rte_flow_pattern_template
> >> - renamed drain operation attribute to postpone
> >> - renamed rte_flow_q_drain to rte_flow_q_push
> >> - renamed rte_flow_q_dequeue to rte_flow_q_pull
> >>
> >> v2: fixed patch series thread
> >>
> >> Alexander Kozyrev (11):
> >>    ethdev: introduce flow engine configuration
> >>    ethdev: add flow item/action templates
> >>    ethdev: bring in async queue-based flow rules operations
> >>    ethdev: bring in async indirect actions operations
> >>    app/testpmd: add flow engine configuration
> >>    app/testpmd: add flow template management
> >>    app/testpmd: add flow table management
> >>    app/testpmd: add async flow create/destroy operations
> >>    app/testpmd: add flow queue push operation
> >>    app/testpmd: add flow queue pull operation
> >>    app/testpmd: add async indirect actions operations
> >
> > Series applied to dpdk-next-net/main, thanks.
> 
> +Raslan,
> 
> As ethdev patches merged, can proceed with driver ones.

Thanks for the update, I'll move forward with merging the driver patches once testing for them is complete. 

Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 220+ messages in thread

* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
  2022-01-19 13:07 [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Ivan Malov
@ 2022-01-25  1:09 ` Alexander Kozyrev
  0 siblings, 0 replies; 220+ messages in thread
From: Alexander Kozyrev @ 2022-01-25  1:09 UTC (permalink / raw)
  To: Ivan Malov, dev
  Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
	Qi Zhang, Jerin Jacob, Ajit Khaparde

Sorry, Ivan, I missed your email last week since I wasn't in the To list. Adding all the people back.

On Wednesday, January 19, 2022 8:07 Ivan Malov <ivan.malov@oktetlabs.ru> wrote:


> > +Rules management configuration
> > +------------------------------
> > +
> > +Configure flow rules management.
> 
> It is either "management OF ruleS" or "rule management".
> Perhaps fix similar occurrences across the series.
Yes, thanks for catching this; "rule management", of course.

> > +	/**
> > +	 * Number of counter actions pre-configured.
> > +	 * If set to 0, PMD will allocate counters dynamically.
> > +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> > +	 */
> > +	uint32_t nb_counters;
> > +	/**
> > +	 * Number of aging actions pre-configured.
> > +	 * If set to 0, PMD will allocate aging dynamically.
> > +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> > +	 */
> > +	uint32_t nb_aging;
> > +	/**
> > +	 * Number of traffic metering actions pre-configured.
> > +	 * If set to 0, PMD will allocate meters dynamically.
> > +	 * @see RTE_FLOW_ACTION_TYPE_METER
> > +	 */
> > +	uint32_t nb_meters;
> 
> If duplication of the same description is undesirable,
> consider adding a common description for these fields:
> 
> /**
>   * Resource preallocation settings. Use zero to
>   * request that allocations be done on demand.
>   */
While this is true today and all these resources behave the same way if 0 is specified,
there is no guarantee that the same behavior will be preserved for any additional field in the future.
That is why I prefer to keep the descriptions separate for every single member here.


> Instead of "nb_aging", perhaps consider something like "nb_age_timers".
It is not technically correct: aging may be implemented as either a timer or a counter.
nb_aging_flows, maybe?

> > + * Configure flow rules module.
> > + * To pre-allocate resources as per the flow port attributes
> > + * this configuration function must be called before any flow rule is
> created.
> > + * Must be called only after Ethernet device is configured, but may be
> called
> > + * before or after the device is started as long as there are no flow rules.
> > + * No other rte_flow function should be called while this function is
> invoked.
> > + * This function can be called again to change the configuration.
> > + * Some PMDs may not support re-configuration at all,
> > + * or may only allow increasing the number of resources allocated.
> 
> Consider:
> 
> * Pre-configure the port's flow API engine.
> *
> * This API can only be invoked before the application
> * starts using the rest of the flow library functions.
> *
> * The API can be invoked multiple times to change the
> * settings. The port, however, may reject the changes.
Let me let that sink in; the shorter the description, the better, I think.

^ permalink raw reply	[flat|nested] 220+ messages in thread

* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
@ 2022-01-19 13:07 Ivan Malov
  2022-01-25  1:09 ` Alexander Kozyrev
  0 siblings, 1 reply; 220+ messages in thread
From: Ivan Malov @ 2022-01-19 13:07 UTC (permalink / raw)
  To: dev

Hi,

> +Rules management configuration
> +------------------------------
> +
> +Configure flow rules management.

It is either "management OF ruleS" or "rule management".
Perhaps fix similar occurrences across the series.

> +	/**
> +	 * Number of counter actions pre-configured.
> +	 * If set to 0, PMD will allocate counters dynamically.
> +	 * @see RTE_FLOW_ACTION_TYPE_COUNT
> +	 */
> +	uint32_t nb_counters;
> +	/**
> +	 * Number of aging actions pre-configured.
> +	 * If set to 0, PMD will allocate aging dynamically.
> +	 * @see RTE_FLOW_ACTION_TYPE_AGE
> +	 */
> +	uint32_t nb_aging;
> +	/**
> +	 * Number of traffic metering actions pre-configured.
> +	 * If set to 0, PMD will allocate meters dynamically.
> +	 * @see RTE_FLOW_ACTION_TYPE_METER
> +	 */
> +	uint32_t nb_meters;

If duplication of the same description is undesirable,
consider adding a common description for these fields:

/**
  * Resource preallocation settings. Use zero to
  * request that allocations be done on demand.
  */

Instead of "nb_aging", perhaps consider something like "nb_age_timers".

> + * Configure flow rules module.
> + * To pre-allocate resources as per the flow port attributes
> + * this configuration function must be called before any flow rule is created.
> + * Must be called only after Ethernet device is configured, but may be called
> + * before or after the device is started as long as there are no flow rules.
> + * No other rte_flow function should be called while this function is invoked.
> + * This function can be called again to change the configuration.
> + * Some PMDs may not support re-configuration at all,
> + * or may only allow increasing the number of resources allocated.

Consider:

* Pre-configure the port's flow API engine.
*
* This API can only be invoked before the application
* starts using the rest of the flow library functions.
*
* The API can be invoked multiple times to change the
* settings. The port, however, may reject the changes.

--
Ivan M.

^ permalink raw reply	[flat|nested] 220+ messages in thread

end of thread, other threads:[~2022-02-24 13:14 UTC | newest]

Thread overview: 220+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-10-06  4:48 [dpdk-dev] [RFC 0/3] ethdev: datapath-focused flow rules management Alexander Kozyrev
2021-10-06  4:48 ` [dpdk-dev] [PATCH 1/3] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
2021-10-13  4:11   ` Ajit Khaparde
2021-10-13 13:15     ` Ori Kam
2021-10-31 17:27       ` Ajit Khaparde
2021-11-01 10:40         ` Ori Kam
2021-10-06  4:48 ` [dpdk-dev] [PATCH 2/3] ethdev: add flow item/action templates Alexander Kozyrev
2021-10-06 17:24   ` Ivan Malov
2021-10-13  1:25     ` Alexander Kozyrev
2021-10-13  2:26       ` Ajit Khaparde
2021-10-13  2:38         ` Alexander Kozyrev
2021-10-13 11:25       ` Ivan Malov
2021-10-06  4:48 ` [dpdk-dev] [PATCH 3/3] ethdev: add async queue-based flow rules operations Alexander Kozyrev
2021-10-06 16:24   ` Ivan Malov
2021-10-13  1:10     ` Alexander Kozyrev
2021-10-13  4:57   ` Ajit Khaparde
2021-10-13 13:17     ` Ori Kam
2022-01-18 15:30 ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-01-18 15:30   ` [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
2022-01-24 14:36     ` Jerin Jacob
2022-01-24 17:35       ` Thomas Monjalon
2022-01-24 17:46         ` Jerin Jacob
2022-01-24 18:08           ` Bruce Richardson
2022-01-25  1:14             ` Alexander Kozyrev
2022-01-25 15:58             ` Ori Kam
2022-01-25 18:09               ` Bruce Richardson
2022-01-25 18:14                 ` Bruce Richardson
2022-01-26  9:45                   ` Ori Kam
2022-01-26 10:52                     ` Bruce Richardson
2022-01-26 11:21                       ` Thomas Monjalon
2022-01-26 12:19                         ` Ori Kam
2022-01-26 13:41                           ` Bruce Richardson
2022-01-26 15:12                             ` Ori Kam
2022-01-24 17:40       ` Ajit Khaparde
2022-01-25  1:28         ` Alexander Kozyrev
2022-01-25 18:44           ` Jerin Jacob
2022-01-26 22:02             ` Alexander Kozyrev
2022-01-27  9:34               ` Jerin Jacob
2022-01-18 15:30   ` [PATCH v2 02/10] ethdev: add flow item/action templates Alexander Kozyrev
2022-01-18 15:30   ` [PATCH v2 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-01-18 15:30   ` [PATCH v2 04/10] app/testpmd: implement rte flow configure Alexander Kozyrev
2022-01-18 15:33   ` [v2,05/10] app/testpmd: implement rte flow item/action template Alexander Kozyrev
2022-01-18 15:34   ` [v2,06/10] app/testpmd: implement rte flow table Alexander Kozyrev
2022-01-18 15:35   ` [v2,07/10] app/testpmd: implement rte flow queue create flow Alexander Kozyrev
2022-01-18 15:35   ` [v2,08/10] app/testpmd: implement rte flow queue drain Alexander Kozyrev
2022-01-18 15:36   ` [v2,09/10] app/testpmd: implement rte flow queue dequeue Alexander Kozyrev
2022-01-18 15:37   ` [v2,10/10] app/testpmd: implement rte flow queue indirect action Alexander Kozyrev
2022-01-19  7:16   ` [PATCH v2 00/10] ethdev: datapath-focused flow rules management Suanming Mou
2022-01-24 15:10     ` Ori Kam
2022-02-06  3:25   ` [PATCH v3 " Alexander Kozyrev
2022-02-06  3:25     ` [PATCH v3 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
2022-02-07 13:15       ` Ori Kam
2022-02-07 14:52       ` Jerin Jacob
2022-02-07 17:59         ` Alexander Kozyrev
2022-02-07 18:24           ` Jerin Jacob
2022-02-06  3:25     ` [PATCH v3 02/10] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-07 13:16       ` Ori Kam
2022-02-06  3:25     ` [PATCH v3 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-07 13:18       ` Ori Kam
2022-02-08 10:56       ` Jerin Jacob
2022-02-08 14:11         ` Alexander Kozyrev
2022-02-08 15:23           ` Ivan Malov
2022-02-09  5:40             ` Alexander Kozyrev
2022-02-08 17:36           ` Jerin Jacob
2022-02-09  5:50           ` Jerin Jacob
2022-02-06  3:25     ` [PATCH v3 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
2022-02-07 13:19       ` Ori Kam
2022-02-06  3:25     ` [PATCH v3 05/10] app/testpmd: implement rte flow template management Alexander Kozyrev
2022-02-07 13:20       ` Ori Kam
2022-02-06  3:25     ` [PATCH v3 06/10] app/testpmd: implement rte flow table management Alexander Kozyrev
2022-02-07 13:22       ` Ori Kam
2022-02-06  3:25     ` [PATCH v3 07/10] app/testpmd: implement rte flow queue flow operations Alexander Kozyrev
2022-02-07 13:21       ` Ori Kam
2022-02-06  3:25     ` [PATCH v3 08/10] app/testpmd: implement rte flow push operations Alexander Kozyrev
2022-02-07 13:22       ` Ori Kam
2022-02-06  3:25     ` [PATCH v3 09/10] app/testpmd: implement rte flow pull operations Alexander Kozyrev
2022-02-07 13:23       ` Ori Kam
2022-02-06  3:25     ` [PATCH v3 10/10] app/testpmd: implement rte flow queue indirect actions Alexander Kozyrev
2022-02-07 13:23       ` Ori Kam
2022-01-19 13:07 [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Ivan Malov
2022-01-25  1:09 ` Alexander Kozyrev
     [not found] <20220206032526.816079-1-akozyrev@nvidia.com >
2022-02-09 21:37 ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 02/10] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 04/10] app/testpmd: implement rte flow configuration Alexander Kozyrev
2022-02-10  9:32     ` Thomas Monjalon
2022-02-09 21:38   ` [PATCH v4 05/10] app/testpmd: implement rte flow template management Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 06/10] app/testpmd: implement rte flow table management Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 07/10] app/testpmd: implement rte flow queue flow operations Alexander Kozyrev
2022-02-09 21:53     ` Ori Kam
2022-02-09 21:38   ` [PATCH v4 08/10] app/testpmd: implement rte flow push operations Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 09/10] app/testpmd: implement rte flow pull operations Alexander Kozyrev
2022-02-09 21:38   ` [PATCH v4 10/10] app/testpmd: implement rte flow queue indirect actions Alexander Kozyrev
2022-02-10 16:00   ` [PATCH v4 00/10] ethdev: datapath-focused flow rules management Ferruh Yigit
2022-02-10 16:12     ` Asaf Penso
2022-02-10 16:33       ` Suanming Mou
2022-02-10 18:04     ` Ajit Khaparde
2022-02-11 10:22     ` Ivan Malov
2022-02-11 10:48     ` Jerin Jacob
2022-02-11  2:26   ` [PATCH v5 " Alexander Kozyrev
2022-02-11  2:26     ` [PATCH v5 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
2022-02-11 10:16       ` Andrew Rybchenko
2022-02-11 18:47         ` Alexander Kozyrev
2022-02-16 13:03           ` Andrew Rybchenko
2022-02-16 22:17             ` Alexander Kozyrev
2022-02-17 10:35               ` Andrew Rybchenko
2022-02-17 10:57                 ` Ori Kam
2022-02-17 11:04                   ` Andrew Rybchenko
2022-02-11  2:26     ` [PATCH v5 02/10] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-11 11:27       ` Andrew Rybchenko
2022-02-11 22:25         ` Alexander Kozyrev
2022-02-16 13:14           ` Andrew Rybchenko
2022-02-16 14:18             ` Ori Kam
2022-02-17 10:44               ` Andrew Rybchenko
2022-02-17 11:11                 ` Ori Kam
2022-02-11  2:26     ` [PATCH v5 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-11 12:42       ` Andrew Rybchenko
2022-02-12  2:19         ` Alexander Kozyrev
2022-02-12  9:25           ` Thomas Monjalon
2022-02-16 22:49             ` Alexander Kozyrev
2022-02-17  8:18               ` Thomas Monjalon
2022-02-17 11:02                 ` Andrew Rybchenko
2022-02-16 13:34           ` Andrew Rybchenko
2022-02-16 14:53             ` Ori Kam
2022-02-17 10:52               ` Andrew Rybchenko
2022-02-17 11:08                 ` Ori Kam
2022-02-17 14:16                   ` Ori Kam
2022-02-17 14:34                     ` Thomas Monjalon
2022-02-16 15:15             ` Ori Kam
2022-02-17 11:10               ` Andrew Rybchenko
2022-02-17 11:19                 ` Ori Kam
2022-02-11  2:26     ` [PATCH v5 04/10] app/testpmd: add flow engine configuration Alexander Kozyrev
2022-02-11  2:26     ` [PATCH v5 05/10] app/testpmd: add flow template management Alexander Kozyrev
2022-02-11  2:26     ` [PATCH v5 06/10] app/testpmd: add flow table management Alexander Kozyrev
2022-02-11  2:26     ` [PATCH v5 07/10] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
2022-02-11  2:26     ` [PATCH v5 08/10] app/testpmd: add flow queue push operation Alexander Kozyrev
2022-02-11  2:26     ` [PATCH v5 09/10] app/testpmd: add flow queue pull operation Alexander Kozyrev
2022-02-11  2:26     ` [PATCH v5 10/10] app/testpmd: add async indirect actions creation/destruction Alexander Kozyrev
2022-02-12  4:19     ` [PATCH v6 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 02/10] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 03/10] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 04/10] app/testpmd: add flow engine configuration Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 05/10] app/testpmd: add flow template management Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 06/10] app/testpmd: add flow table management Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 07/10] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 08/10] app/testpmd: add flow queue push operation Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 09/10] app/testpmd: add flow queue pull operation Alexander Kozyrev
2022-02-12  4:19       ` [PATCH v6 10/10] app/testpmd: add async indirect actions creation/destruction Alexander Kozyrev
2022-02-19  4:11       ` [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 02/11] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 06/11] app/testpmd: add flow template management Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 07/11] app/testpmd: add flow table management Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
2022-02-19  4:11         ` [PATCH v7 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
2022-02-20  3:43         ` [PATCH v8 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-02-20  3:43           ` [PATCH v8 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
2022-02-21  9:47             ` Andrew Rybchenko
2022-02-21  9:52               ` Andrew Rybchenko
2022-02-21 12:53                 ` Ori Kam
2022-02-21 14:33                   ` Alexander Kozyrev
2022-02-21 14:53                   ` Andrew Rybchenko
2022-02-21 15:49                     ` Thomas Monjalon
2022-02-20  3:44           ` [PATCH v8 02/11] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-21 10:57             ` Andrew Rybchenko
2022-02-21 13:12               ` Ori Kam
2022-02-21 15:05                 ` Andrew Rybchenko
2022-02-21 15:43                   ` Ori Kam
2022-02-21 15:14                 ` Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-21 14:49             ` Andrew Rybchenko
2022-02-21 15:35               ` Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 06/11] app/testpmd: add flow template management Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 07/11] app/testpmd: add flow table management Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
2022-02-20  3:44           ` [PATCH v8 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
2022-02-21 23:02           ` [PATCH v9 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 02/11] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 06/11] app/testpmd: add flow template management Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 07/11] app/testpmd: add flow table management Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
2022-02-21 23:02             ` [PATCH v9 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
2022-02-23  3:02             ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-02-23  3:02               ` [PATCH v10 01/11] ethdev: introduce flow engine configuration Alexander Kozyrev
2022-02-24  8:22                 ` Andrew Rybchenko
2022-02-23  3:02               ` [PATCH v10 02/11] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-24  8:34                 ` Andrew Rybchenko
2022-02-23  3:02               ` [PATCH v10 03/11] ethdev: bring in async queue-based flow rules operations Alexander Kozyrev
2022-02-24  8:35                 ` Andrew Rybchenko
2022-02-23  3:02               ` [PATCH v10 04/11] ethdev: bring in async indirect actions operations Alexander Kozyrev
2022-02-24  8:37                 ` Andrew Rybchenko
2022-02-23  3:02               ` [PATCH v10 05/11] app/testpmd: add flow engine configuration Alexander Kozyrev
2022-02-23  3:02               ` [PATCH v10 06/11] app/testpmd: add flow template management Alexander Kozyrev
2022-02-23  3:02               ` [PATCH v10 07/11] app/testpmd: add flow table management Alexander Kozyrev
2022-02-23  3:02               ` [PATCH v10 08/11] app/testpmd: add async flow create/destroy operations Alexander Kozyrev
2022-02-23  3:02               ` [PATCH v10 09/11] app/testpmd: add flow queue push operation Alexander Kozyrev
2022-02-23  3:02               ` [PATCH v10 10/11] app/testpmd: add flow queue pull operation Alexander Kozyrev
2022-02-23  3:02               ` [PATCH v10 11/11] app/testpmd: add async indirect actions operations Alexander Kozyrev
2022-02-24 13:07               ` [PATCH v10 00/11] ethdev: datapath-focused flow rules management Ferruh Yigit
2022-02-24 13:13                 ` Ferruh Yigit
2022-02-24 13:14                   ` Raslan Darawsheh
2022-02-22 16:41           ` [PATCH v8 00/10] " Ferruh Yigit
2022-02-22 16:49             ` Ferruh Yigit
