Hi,

For some reason this mail stopped being plain text.

Please find my comments marked with [Ori].

 

Best,

Ori

 

From: Jack Min <jackmin@nvidia.com>
Sent: Thursday, June 2, 2022 1:24 PM
To: Ori Kam <orika@nvidia.com>; Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@xilinx.com>
Cc: dev@dpdk.org
Subject: Re: [RFC v2 2/2] ethdev: queue-based flow aged report

 

On 6/2/22 14:10, Ori Kam wrote:

Hi,

Hello,

 
 
-----Original Message-----
From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Sent: Wednesday, June 1, 2022 9:21 PM
Subject: Re: [RFC v2 2/2] ethdev: queue-based flow aged report
 
Again, the summary must not be a statement.
 
On 6/1/22 10:39, Xiaoyu Min wrote:
When the application uses queue-based flow rule management and operates on
a given flow rule via the same queue, e.g. create/destroy/query, the API
for querying aged flow rules should also take a queue id parameter, just
like the other queue-based flow APIs.
 
This way the PMD can work in a more optimized fashion, since resources are
isolated per queue and need no synchronization.
 
If the application does use queue-based flow management but configures the
port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, meaning it may operate on a
given flow rule via different queues, the queue id parameter will
be ignored.
 
In addition to the above change, another new API is added which helps the
application find out which queues have aged-out flows after the
RTE_ETH_EVENT_FLOW_AGED event is received. The returned queue ids can be
used in the above queue-based aged flow query API.
 
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
---
  lib/ethdev/rte_flow.h        | 82 ++++++++++++++++++++++++++++++++++++
  lib/ethdev/rte_flow_driver.h | 13 ++++++
  2 files changed, 95 insertions(+)
 
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 38439fcd1d..a12becfe3b 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2810,6 +2810,7 @@ enum rte_flow_action_type {
     * See function rte_flow_get_aged_flows
     * see enum RTE_ETH_EVENT_FLOW_AGED
     * See struct rte_flow_query_age
+    * See function rte_flow_get_q_aged_flows
     */
   RTE_FLOW_ACTION_TYPE_AGE,
 
@@ -5624,6 +5625,87 @@ rte_flow_async_action_handle_update(uint16_t port_id,
            const void *update,
            void *user_data,
            struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get flow queues which have aged out flows on a given port.
+ *
+ * The application can use this function to query which queues have aged-out flows after
+ * a RTE_ETH_EVENT_FLOW_AGED event is received, so the returned queue ids can be used to
+ * get the aged-out flows on a given queue by calling rte_flow_get_q_aged_flows.
+ *
+ * This function can be called from the event callback or synchronously regardless of the event.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in, out] queue_id
+ *   Array to be filled with the ids of queues that have aged-out flows.
+ * @param[in] nb_queue_id
+ *   Maximum number of queue ids that can be returned.
+ *   This value should be equal to the size of the queue_id array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   If nb_queue_id is 0, the number of all queues which have aged-out flows.
+ *   If nb_queue_id is not 0, the number of queues with aged-out flows reported
+ *   in the queue_id array. Otherwise a negative errno value.
 
I'm sorry, but it is unclear to me what happens if the provided array is
insufficient to return all queues. IMHO, we should still provide as
much as we can. The question is how to report that we have more queues.
It looks like the only sensible way is to return a value greater than
nb_queue_id.
 
I think that, just like any other function, this function should return at most the requested number.
Returning a bigger number may result in out-of-buffer issues, or require an extra validation step from the application.
In addition, as far as I can see, the common practice in DPDK is to return at most the requested number.

Yes, it works just like the other functions.

 
I have another concern with this function: from my understanding it will be called on the service thread
that handles the aging event, and after calling this function the application still needs to propagate the event to the
correct threads.
I think it would be better if the event itself held which queue triggered the aging. Or even better, to get the

As discussed in v1, there seems to be no good place in the current callback function to pass this kind of information from the driver to the application.

Or you have a better idea?

[Ori] Maybe use the new queues; for example, maybe the application can get the notification as part of the polling function.
Maybe it can even get the aged rules.

 
notification on the correct thread. (I know it is much more complicated but maybe it is worth the effort,
since this can be used for other cases)

Yes, this would be better, but I don't really know how to inform the correct thread.

Or should we just let the application register a callback per flow queue?

Something like rte_flow_queue_callback_register(), with the PMD calling rte_flow_queue_callback_process() to invoke

only the callback functions registered for that queue?

[Ori] That is a good suggestion, as is the one I suggested above. We can also improve the current event; the question is
what the advantages and downsides of each approach are.

 

 
 
+ *
+ * @see rte_flow_action_age
+ * @see RTE_ETH_EVENT_FLOW_AGED
+ */
+
+__rte_experimental
+int
+rte_flow_get_aged_queues(uint16_t port_id, uint32_t queue_id[], uint32_t nb_queue_id,
+                   struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Get aged-out flows of a given port on the given flow queue.
+ *
+ * The RTE_ETH_EVENT_FLOW_AGED event will be triggered when at least one new aged-out
+ * flow is detected on any flow queue after the last call to rte_flow_get_q_aged_flows.
+ *
+ * The application can use rte_flow_get_aged_queues to query which queues have aged-out
+ * flows after the RTE_ETH_EVENT_FLOW_AGED event.
+ *
+ * If the application configures the port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE,
+ * the @p queue_id will be ignored.
+ * This function can be called to get the aged flows asynchronously from the
+ * event callback or synchronously regardless of the event.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to query. Ignored when RTE_FLOW_PORT_FLAG_STRICT_QUEUE not set.
+ * @param[in, out] contexts
+ *   The address of an array of pointers to the aged-out flows contexts.
+ * @param[in] nb_contexts
+ *   The number of pointers in the contexts array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. Initialized in case of
+ *   error only.
+ *
+ * @return
+ *   If nb_contexts is 0, the number of all aged-out contexts.
+ *   If nb_contexts is not 0, the number of aged-out flows reported
+ *   in the contexts array. Otherwise a negative errno value.
+ *
+ * @see rte_flow_action_age
+ * @see RTE_ETH_EVENT_FLOW_AGED
+ * @see rte_flow_port_flag
+ */
+
+__rte_experimental
+int
+rte_flow_get_q_aged_flows(uint16_t port_id, uint32_t queue_id, void **contexts,
+                    uint32_t nb_contexts, struct rte_flow_error *error);
  #ifdef __cplusplus
  }
  #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2bff732d6a..b665170bf4 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -260,6 +260,19 @@ struct rte_flow_ops {
             const void *update,
             void *user_data,
             struct rte_flow_error *error);
+   /** See rte_flow_get_aged_queues() */
+   int (*get_aged_queues)
+       (uint16_t port_id,
+            uint32_t queue_id[],
+            uint32_t nb_queue_id,
+            struct rte_flow_error *error);
+   /** See rte_flow_get_q_aged_flows() */
+   int (*get_q_aged_flows)
+       (uint16_t port_id,
+            uint32_t queue_id,
+            void **contexts,
+            uint32_t nb_contexts,
+            struct rte_flow_error *error);
  };
 
  /**