DPDK patches and discussions
From: Suanming Mou <suanmingm@nvidia.com>
To: Ori Kam <orika@nvidia.com>, Thomas Monjalon <thomas@monjalon.net>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Cc: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v5 2/2] ethdev: make rte_flow API thread safe
Date: Thu, 15 Oct 2020 09:07:47 +0800
Message-ID: <1602724067-390536-3-git-send-email-suanmingm@nvidia.com> (raw)
In-Reply-To: <1602724067-390536-1-git-send-email-suanmingm@nvidia.com>

Currently, the rte_flow functions are not defined as thread-safe.
DPDK applications either call the functions from a single thread or
protect concurrent calls to the rte_flow operations with a lock of
their own.

For PMDs that support thread-safe flow operations natively, this
redundant protection in the application hurts the performance of the
rte_flow operation functions.

The fact that thread safety is not guaranteed for the rte_flow
functions also limits what applications can expect from them.

This patch changes the rte_flow functions to be thread-safe. As
different PMDs have different flow operations, some may already be
thread-safe while others are not. For PMDs that do not support
thread-safe flow operations, a new lock is defined in ethdev to
protect them at the rte_flow level.

A new RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE device flag is added to
indicate whether the PMD supports thread-safe flow operations. PMDs
that support thread-safe flow operations set this flag, and the
rte_flow level functions then skip the thread-safety helper lock for
them. In other words, the rte_flow level lock is only taken when the
PMD operation functions are not thread-safe.
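
As an illustration only, an application could query whether the
rte_flow layer will take the per-port mutex for a given port. This is
a minimal sketch, assuming dev_flags is reachable through
rte_eth_dev_info_get() (field availability may differ between DPDK
releases); the helper name is hypothetical:

#include <stdbool.h>
#include <rte_ethdev.h>

/* Illustrative helper: returns true when the rte_flow layer will take
 * the per-port mutex for this port, i.e. the PMD did not advertise
 * thread-safe flow operations. */
static bool
flow_ops_use_ethdev_mutex(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return true; /* be conservative on query failure */

	/* Flag set -> the PMD flow ops are already thread safe. */
	return !(*dev_info.dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE);
}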

PMDs that do not want the default mutex can simply set the flag and
use their preferred type of lock internally. The default mutex is then
easily replaced by the PMD-level lock.
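
For example, a PMD that already serializes its own flow operations
could advertise the flag at probe time. The sketch below is
illustrative only and not taken from any existing driver;
my_pmd_probe(), struct my_pmd_priv and priv->flow_lock are made-up
names:

#include <rte_ethdev.h>
#include <rte_spinlock.h>

struct my_pmd_priv {
	rte_spinlock_t flow_lock; /* the PMD's preferred lock */
};

static int
my_pmd_probe(struct rte_eth_dev *eth_dev)
{
	struct my_pmd_priv *priv = eth_dev->data->dev_private;

	rte_spinlock_init(&priv->flow_lock);
	/* Tell ethdev the flow ops are already protected, so the
	 * rte_flow layer skips the default per-port mutex. */
	eth_dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE;
	return 0;
}

The PMD's flow create/destroy callbacks would then take
priv->flow_lock internally instead of relying on the ethdev mutex.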

The change has no effect on current DPDK applications, and no
application change is required. With the standard POSIX
pthread_mutex, if there is no contention on the added rte_flow level
mutex, pthread_mutex_lock() only performs an atomic increment and
pthread_mutex_unlock() an atomic decrement; no futex() syscall is
involved.
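
For example (a hedged sketch, with the flow attribute/pattern/action
setup elided and the helper name made up), an application thread can
now insert rules without any external lock:

#include <rte_errno.h>
#include <rte_flow.h>

/* Illustrative helper: may be called concurrently from several
 * threads on the same port without an application-level lock. */
static int
insert_rule(uint16_t port_id, const struct rte_flow_attr *attr,
	    const struct rte_flow_item pattern[],
	    const struct rte_flow_action actions[])
{
	struct rte_flow_error error;
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, attr, pattern, actions, &error);
	if (flow == NULL)
		return -rte_errno; /* error.message holds the details */
	return 0;
}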

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---

v5:
 - Update ethdev patch comments, doc and release notes.
 - Rename fts_mutex -> flow_ops_mutex.

v4:
 - Change RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE to 0x0001, since
   RTE_ETH_DEV_CLOSE_REMOVE has been removed.
 - Update small comments.
 - Update release notes.

v3:
 - Rename flow_lock/unlock -> fts_enter/exit.

v2:
 - Update commit info and description doc.
 - Add inline for the flow lock and unlock functions.
 - Remove the PMD sample part flag configuration.

---

 doc/guides/prog_guide/rte_flow.rst     | 12 +++--
 doc/guides/rel_notes/release_20_11.rst |  6 +++
 lib/librte_ethdev/rte_ethdev.c         |  2 +
 lib/librte_ethdev/rte_ethdev.h         |  2 +
 lib/librte_ethdev/rte_ethdev_core.h    |  4 ++
 lib/librte_ethdev/rte_flow.c           | 84 ++++++++++++++++++++++++++--------
 6 files changed, 87 insertions(+), 23 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index f26a6c2..a446b51 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3088,10 +3088,6 @@ Caveats
 - API operations are synchronous and blocking (``EAGAIN`` cannot be
   returned).
 
-- There is no provision for re-entrancy/multi-thread safety, although nothing
-  should prevent different devices from being configured at the same
-  time. PMDs may protect their control path functions accordingly.
-
 - Stopping the data path (TX/RX) should not be necessary when managing flow
   rules. If this cannot be achieved naturally or with workarounds (such as
   temporarily replacing the burst function pointers), an appropriate error
@@ -3143,6 +3139,14 @@ This interface additionally defines the following helper function:
 - ``rte_flow_ops_get()``: get generic flow operations structure from a
   port.
 
+If PMD interfaces don't support re-entrancy/multi-thread safety,
+the rte_flow API functions will protect threads by mutex per port.
+The application can check whether ``RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE``
+is set in ``dev_flags``, meaning the PMD is thread-safe regarding rte_flow,
+so the API level protection is disabled.
+Please note that this API-level mutex protects only rte_flow functions,
+other control path functions are not in scope.
+
 More will be added over time.
 
 Device compatibility
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 30db8f2..ae3381f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -67,6 +67,12 @@ New Features
   Added the FEC API which provides functions for query FEC capabilities and
   current FEC mode from device. Also, API for configuring FEC mode is also provided.
 
+* **Added thread safe support to rte_flow functions.**
+
+  Added ``RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE`` device flag to indicate
+  if PMD support thread safe operations. If PMD doesn't set the flag,
+  rte_flow API level functions will protect the flow operations with mutex.
+
 * **Updated Broadcom bnxt driver.**
 
   Updated the Broadcom bnxt driver with new features and improvements, including:
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 5b7979a..7817224 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -500,6 +500,7 @@ struct rte_eth_dev *
 	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
 	eth_dev->data->port_id = port_id;
 	eth_dev->data->mtu = RTE_ETHER_MTU;
+	pthread_mutex_init(&eth_dev->data->flow_ops_mutex, NULL);
 
 unlock:
 	rte_spinlock_unlock(&rte_eth_dev_shared_data->ownership_lock);
@@ -564,6 +565,7 @@ struct rte_eth_dev *
 		rte_free(eth_dev->data->mac_addrs);
 		rte_free(eth_dev->data->hash_mac_addrs);
 		rte_free(eth_dev->data->dev_private);
+		pthread_mutex_destroy(&eth_dev->data->flow_ops_mutex);
 		memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
 	}
 
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index f4cc591..f07dbc4 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1684,6 +1684,8 @@ struct rte_eth_dev_owner {
 	char name[RTE_ETH_MAX_OWNER_NAME_LEN]; /**< The owner name. */
 };
 
+/** PMD supports thread-safe flow operations */
+#define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE  0x0001
 /** Device supports link state interrupt */
 #define RTE_ETH_DEV_INTR_LSC     0x0002
 /** Device is a bonded slave */
diff --git a/lib/librte_ethdev/rte_ethdev_core.h b/lib/librte_ethdev/rte_ethdev_core.h
index fd3bf92..918a34e 100644
--- a/lib/librte_ethdev/rte_ethdev_core.h
+++ b/lib/librte_ethdev/rte_ethdev_core.h
@@ -5,6 +5,9 @@
 #ifndef _RTE_ETHDEV_CORE_H_
 #define _RTE_ETHDEV_CORE_H_
 
+#include <pthread.h>
+#include <sys/types.h>
+
 /**
  * @file
  *
@@ -180,6 +183,7 @@ struct rte_eth_dev_data {
 			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
 			 */
 
+	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
 	void *reserved_ptrs[4];   /**< Reserved for future fields */
 } __rte_cache_aligned;
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index 0a8a7a2..1b0370f 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -208,6 +208,20 @@ struct rte_flow_desc_data {
 	return -rte_errno;
 }
 
+static inline void
+fts_enter(struct rte_eth_dev *dev)
+{
+	if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
+		pthread_mutex_lock(&dev->data->flow_ops_mutex);
+}
+
+static inline void
+fts_exit(struct rte_eth_dev *dev)
+{
+	if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
+		pthread_mutex_unlock(&dev->data->flow_ops_mutex);
+}
+
 static int
 flow_err(uint16_t port_id, int ret, struct rte_flow_error *error)
 {
@@ -254,12 +268,16 @@ struct rte_flow_desc_data {
 {
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->validate))
-		return flow_err(port_id, ops->validate(dev, attr, pattern,
-						       actions, error), error);
+	if (likely(!!ops->validate)) {
+		fts_enter(dev);
+		ret = ops->validate(dev, attr, pattern, actions, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -280,7 +298,9 @@ struct rte_flow *
 	if (unlikely(!ops))
 		return NULL;
 	if (likely(!!ops->create)) {
+		fts_enter(dev);
 		flow = ops->create(dev, attr, pattern, actions, error);
+		fts_exit(dev);
 		if (flow == NULL)
 			flow_err(port_id, -rte_errno, error);
 		return flow;
@@ -298,12 +318,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->destroy))
-		return flow_err(port_id, ops->destroy(dev, flow, error),
-				error);
+	if (likely(!!ops->destroy)) {
+		fts_enter(dev);
+		ret = ops->destroy(dev, flow, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -316,11 +340,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->flush))
-		return flow_err(port_id, ops->flush(dev, error), error);
+	if (likely(!!ops->flush)) {
+		fts_enter(dev);
+		ret = ops->flush(dev, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -336,12 +365,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (!ops)
 		return -rte_errno;
-	if (likely(!!ops->query))
-		return flow_err(port_id, ops->query(dev, flow, action, data,
-						    error), error);
+	if (likely(!!ops->query)) {
+		fts_enter(dev);
+		ret = ops->query(dev, flow, action, data, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -355,11 +388,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (!ops)
 		return -rte_errno;
-	if (likely(!!ops->isolate))
-		return flow_err(port_id, ops->isolate(dev, set, error), error);
+	if (likely(!!ops->isolate)) {
+		fts_enter(dev);
+		ret = ops->isolate(dev, set, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -963,12 +1001,16 @@ enum rte_flow_conv_item_spec_type {
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->dev_dump))
-		return flow_err(port_id, ops->dev_dump(dev, file, error),
-				error);
+	if (likely(!!ops->dev_dump)) {
+		fts_enter(dev);
+		ret = ops->dev_dump(dev, file, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -980,12 +1022,16 @@ enum rte_flow_conv_item_spec_type {
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->get_aged_flows))
-		return flow_err(port_id, ops->get_aged_flows(dev, contexts,
-				nb_contexts, error), error);
+	if (likely(!!ops->get_aged_flows)) {
+		fts_enter(dev);
+		ret = ops->get_aged_flows(dev, contexts, nb_contexts, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOTSUP,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
-- 
1.8.3.1


Thread overview: 40+ messages
2020-09-27  8:20 [dpdk-dev] [PATCH 0/2] ethdev: make rte flow " Suanming Mou
2020-09-27  8:20 ` [dpdk-dev] [PATCH 1/2] eal/windows: add pthread mutex lock Suanming Mou
2020-09-27 15:56   ` Dmitry Kozlyuk
2020-09-28  2:30     ` Suanming Mou
2020-09-27  8:20 ` [dpdk-dev] [PATCH 2/2] ethdev: make rte flow API thread safe Suanming Mou
2020-09-30 10:56   ` Ori Kam
2020-10-04 23:44     ` Suanming Mou
2020-10-04 23:48 ` [dpdk-dev] [PATCH v2 0/2] ethdev: make rte_flow " Suanming Mou
2020-10-04 23:48   ` [dpdk-dev] [PATCH v2 1/2] eal/windows: add pthread mutex lock Suanming Mou
2020-10-04 23:48   ` [dpdk-dev] [PATCH v2 2/2] ethdev: make rte_flow API thread safe Suanming Mou
2020-10-05 11:28     ` Ori Kam
2020-10-06 23:18       ` Ajit Khaparde
2020-10-07  0:50         ` Suanming Mou
2020-10-07  6:33           ` Ori Kam
2020-10-07 14:17 ` [dpdk-dev] [PATCH v3 0/2] " Suanming Mou
2020-10-07 14:17   ` [dpdk-dev] [PATCH v3 1/2] eal/windows: add pthread mutex lock Suanming Mou
2020-10-07 16:53     ` Dmitry Kozlyuk
2020-10-08  2:46       ` Suanming Mou
2020-10-14 10:02         ` Tal Shnaiderman
2020-10-07 14:17   ` [dpdk-dev] [PATCH v3 2/2] ethdev: make rte_flow API thread safe Suanming Mou
2020-10-07 14:42     ` Ajit Khaparde
2020-10-07 16:37       ` Ori Kam
2020-10-07 20:10     ` Matan Azrad
2020-10-08  2:56       ` Suanming Mou
2020-10-09  1:17 ` [dpdk-dev] [PATCH v4 0/2] " Suanming Mou
2020-10-09  1:17   ` [dpdk-dev] [PATCH v4 1/2] eal/windows: add pthread mutex lock Suanming Mou
2020-10-09  9:19     ` Tal Shnaiderman
2020-10-14 16:45     ` Ranjit Menon
2020-10-15  2:15     ` Narcisa Ana Maria Vasile
2020-10-15  2:18       ` Suanming Mou
2020-10-09  1:17   ` [dpdk-dev] [PATCH v4 2/2] ethdev: make rte_flow API thread safe Suanming Mou
2020-10-14 10:19     ` Thomas Monjalon
2020-10-14 10:41       ` Suanming Mou
2020-10-15  1:07 ` [dpdk-dev] [PATCH v5 0/2] " Suanming Mou
2020-10-15  1:07   ` [dpdk-dev] [PATCH v5 1/2] eal/windows: add pthread mutex lock Suanming Mou
2020-10-15  2:22     ` Narcisa Ana Maria Vasile
2020-10-15  1:07   ` Suanming Mou [this message]
2020-10-15  8:28     ` [dpdk-dev] [PATCH v5 2/2] ethdev: make rte_flow API thread safe Thomas Monjalon
2020-10-15  8:52       ` Andrew Rybchenko
2020-10-15 22:43   ` [dpdk-dev] [PATCH v5 0/2] " Thomas Monjalon
