From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 328E2A04BA;
	Wed, 7 Oct 2020 16:43:22 +0200 (CEST)
MIME-Version: 1.0
References:
 <1601194817-208834-1-git-send-email-suanmingm@nvidia.com>
 <1602080249-36533-1-git-send-email-suanmingm@nvidia.com>
 <1602080249-36533-3-git-send-email-suanmingm@nvidia.com>
In-Reply-To: <1602080249-36533-3-git-send-email-suanmingm@nvidia.com>
From: Ajit Khaparde
Date: Wed, 7 Oct 2020 07:42:54 -0700
To: Suanming Mou
Cc: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, dpdk-dev
Content-Type: text/plain; charset="UTF-8"
Subject: Re: [dpdk-dev] [PATCH v3 2/2] ethdev: make rte_flow API thread safe
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

On Wed, Oct 7, 2020 at 7:18 AM Suanming Mou wrote:
>
> Currently, the rte_flow functions are not defined as thread safe.
> DPDK applications must either call them from a single thread or add
> their own locks around the critical sections.
>
> For PMDs whose flow operations are natively thread safe, this
> redundant application-level protection hurts the performance of the
> rte_flow functions.
>
> The absence of a thread-safety guarantee for the rte_flow functions
> also limits what applications can expect from the API.
>
> This patch makes the rte_flow functions thread safe. As different
> PMDs implement their flow operations differently, some may already be
> thread safe while others are not. For PMDs whose flow operations are
> not thread safe, a new lock is defined in ethdev to protect them at
> the rte_flow level.
>
> A new RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE device flag is added to
> indicate whether a PMD supports thread-safe flow operations. PMDs
> that do should set this flag; the rte_flow level functions will then
> skip the thread-safety helper lock for these PMDs.
> Note that the rte_flow level lock is only taken when the PMD
> operation functions are not thread safe.
>
> PMDs that do not want the default mutex can simply set the flag and
> use their preferred type of lock internally; the default mutex is
> then effectively replaced by the PMD-level lock.
>
> The change has no effect on current DPDK applications, and no change
> is required from them. With the standard POSIX pthread_mutex, when
> there is no contention on the added rte_flow level mutex,
> pthread_mutex_lock() only performs an atomic increment and
> pthread_mutex_unlock() an atomic decrement; no futex() syscall is
> involved.
>
> Signed-off-by: Suanming Mou

Acked-by: Ajit Khaparde

> ---
>
> v3:
>  - update flow_lock/unlock -> fts_enter/exit
>
> v2:
>  - Update commit info and description doc.
>  - Add inline for the flow lock and unlock functions.
>  - Remove the PMD sample part flag configuration.
>
> ---
>
>  doc/guides/prog_guide/rte_flow.rst  |  9 ++--
>  lib/librte_ethdev/rte_ethdev.c      |  2 +
>  lib/librte_ethdev/rte_ethdev.h      |  2 +
>  lib/librte_ethdev/rte_ethdev_core.h |  4 ++
>  lib/librte_ethdev/rte_flow.c        | 84 ++++++++++++++++++++++++++++---------
>  5 files changed, 78 insertions(+), 23 deletions(-)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index 119b128..ae2ddb3 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -3046,10 +3046,6 @@ Caveats
>  - API operations are synchronous and blocking (``EAGAIN`` cannot be
>    returned).
>
> -- There is no provision for re-entrancy/multi-thread safety, although nothing
> -  should prevent different devices from being configured at the same
> -  time. PMDs may protect their control path functions accordingly.
> -
>  - Stopping the data path (TX/RX) should not be necessary when managing flow
>    rules.
>   If this cannot be achieved naturally or with workarounds (such as
>    temporarily replacing the burst function pointers), an appropriate error
> @@ -3101,6 +3097,11 @@ This interface additionally defines the following helper function:
>
>  - ``rte_flow_ops_get()``: get generic flow operations structure from a
>    port.
>
> +If PMD interfaces do not support re-entrancy/multi-thread safety, rte_flow
> +level functions will do it by mutex. The application can test the dev_flags
> +with RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE in struct rte_eth_dev_data to know
> +if the rte_flow thread-safe works under rte_flow level or PMD level.
> +
>  More will be added over time.
>
>  Device compatibility
> diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
> index 0f56541..60677fe 100644
> --- a/lib/librte_ethdev/rte_ethdev.c
> +++ b/lib/librte_ethdev/rte_ethdev.c
> @@ -500,6 +500,7 @@ struct rte_eth_dev *
>  	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
>  	eth_dev->data->port_id = port_id;
>  	eth_dev->data->mtu = RTE_ETHER_MTU;
> +	pthread_mutex_init(&eth_dev->data->fts_mutex, NULL);
>
>  unlock:
>  	rte_spinlock_unlock(&rte_eth_dev_shared_data->ownership_lock);
> @@ -564,6 +565,7 @@ struct rte_eth_dev *
>  	rte_free(eth_dev->data->mac_addrs);
>  	rte_free(eth_dev->data->hash_mac_addrs);
>  	rte_free(eth_dev->data->dev_private);
> +	pthread_mutex_destroy(&eth_dev->data->fts_mutex);
>  	memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
>  }
>
> diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
> index d2bf74f..03612fd 100644
> --- a/lib/librte_ethdev/rte_ethdev.h
> +++ b/lib/librte_ethdev/rte_ethdev.h
> @@ -1664,6 +1664,8 @@ struct rte_eth_dev_owner {
>  #define RTE_ETH_DEV_REPRESENTOR     0x0010
>  /** Device does not support MAC change after started */
>  #define RTE_ETH_DEV_NOLIVE_MAC_ADDR 0x0020
> +/** Device PMD supports thread safety flow operation */
> +#define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE 0x0040
>
>  /**
>   * Iterates over valid
>   * ethdev ports owned by a specific owner.
> diff --git a/lib/librte_ethdev/rte_ethdev_core.h b/lib/librte_ethdev/rte_ethdev_core.h
> index fd3bf92..89df65a 100644
> --- a/lib/librte_ethdev/rte_ethdev_core.h
> +++ b/lib/librte_ethdev/rte_ethdev_core.h
> @@ -5,6 +5,9 @@
>  #ifndef _RTE_ETHDEV_CORE_H_
>  #define _RTE_ETHDEV_CORE_H_
>
> +#include <pthread.h>
> +#include
> +
>  /**
>   * @file
>   *
> @@ -180,6 +183,7 @@ struct rte_eth_dev_data {
>  		 * Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
>  		 */
>
> +	pthread_mutex_t fts_mutex; /**< rte flow ops thread safety mutex. */
>  	uint64_t reserved_64s[4]; /**< Reserved for future fields */
>  	void *reserved_ptrs[4];   /**< Reserved for future fields */
>  } __rte_cache_aligned;
> diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
> index f8fdd68..6823458 100644
> --- a/lib/librte_ethdev/rte_flow.c
> +++ b/lib/librte_ethdev/rte_flow.c
> @@ -207,6 +207,20 @@ struct rte_flow_desc_data {
>  	return -rte_errno;
>  }
>
> +static inline void
> +fts_enter(struct rte_eth_dev *dev)
> +{
> +	if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
> +		pthread_mutex_lock(&dev->data->fts_mutex);
> +}
> +
> +static inline void
> +fts_exit(struct rte_eth_dev *dev)
> +{
> +	if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
> +		pthread_mutex_unlock(&dev->data->fts_mutex);
> +}
> +
>  static int
>  flow_err(uint16_t port_id, int ret, struct rte_flow_error *error)
>  {
> @@ -346,12 +360,16 @@ struct rte_flow_desc_data {
>  {
>  	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> +	int ret;
>
>  	if (unlikely(!ops))
>  		return -rte_errno;
> -	if (likely(!!ops->validate))
> -		return flow_err(port_id, ops->validate(dev, attr, pattern,
> -				actions, error), error);
> +	if (likely(!!ops->validate)) {
> +		fts_enter(dev);
> +		ret = ops->validate(dev, attr, pattern, actions, error);
> +		fts_exit(dev);
> +		return flow_err(port_id, ret, error);
> +	}
>  	return
> 		rte_flow_error_set(error, ENOSYS,
> 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> 				  NULL, rte_strerror(ENOSYS));
> @@ -372,7 +390,9 @@ struct rte_flow *
>  	if (unlikely(!ops))
>  		return NULL;
>  	if (likely(!!ops->create)) {
> +		fts_enter(dev);
>  		flow = ops->create(dev, attr, pattern, actions, error);
> +		fts_exit(dev);
>  		if (flow == NULL)
>  			flow_err(port_id, -rte_errno, error);
>  		return flow;
> @@ -390,12 +410,16 @@ struct rte_flow *
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
>
>  	if (unlikely(!ops))
>  		return -rte_errno;
> -	if (likely(!!ops->destroy))
> -		return flow_err(port_id, ops->destroy(dev, flow, error),
> -				error);
> +	if (likely(!!ops->destroy)) {
> +		fts_enter(dev);
> +		ret = ops->destroy(dev, flow, error);
> +		fts_exit(dev);
> +		return flow_err(port_id, ret, error);
> +	}
>  	return rte_flow_error_set(error, ENOSYS,
>  				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>  				  NULL, rte_strerror(ENOSYS));
> @@ -408,11 +432,16 @@ struct rte_flow *
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
>
>  	if (unlikely(!ops))
>  		return -rte_errno;
> -	if (likely(!!ops->flush))
> -		return flow_err(port_id, ops->flush(dev, error), error);
> +	if (likely(!!ops->flush)) {
> +		fts_enter(dev);
> +		ret = ops->flush(dev, error);
> +		fts_exit(dev);
> +		return flow_err(port_id, ret, error);
> +	}
>  	return rte_flow_error_set(error, ENOSYS,
>  				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>  				  NULL, rte_strerror(ENOSYS));
> @@ -428,12 +457,16 @@ struct rte_flow *
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
>
>  	if (!ops)
>  		return -rte_errno;
> -	if (likely(!!ops->query))
> -		return flow_err(port_id, ops->query(dev, flow, action, data,
> -				error), error);
> +	if (likely(!!ops->query)) {
> +		fts_enter(dev);
> +		ret = ops->query(dev, flow, action, data,
> 				 error);
> +		fts_exit(dev);
> +		return flow_err(port_id, ret, error);
> +	}
>  	return rte_flow_error_set(error, ENOSYS,
>  				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>  				  NULL, rte_strerror(ENOSYS));
> @@ -447,11 +480,16 @@ struct rte_flow *
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
>
>  	if (!ops)
>  		return -rte_errno;
> -	if (likely(!!ops->isolate))
> -		return flow_err(port_id, ops->isolate(dev, set, error), error);
> +	if (likely(!!ops->isolate)) {
> +		fts_enter(dev);
> +		ret = ops->isolate(dev, set, error);
> +		fts_exit(dev);
> +		return flow_err(port_id, ret, error);
> +	}
>  	return rte_flow_error_set(error, ENOSYS,
>  				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>  				  NULL, rte_strerror(ENOSYS));
> @@ -1224,12 +1262,16 @@ enum rte_flow_conv_item_spec_type {
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
>
>  	if (unlikely(!ops))
>  		return -rte_errno;
> -	if (likely(!!ops->dev_dump))
> -		return flow_err(port_id, ops->dev_dump(dev, file, error),
> -				error);
> +	if (likely(!!ops->dev_dump)) {
> +		fts_enter(dev);
> +		ret = ops->dev_dump(dev, file, error);
> +		fts_exit(dev);
> +		return flow_err(port_id, ret, error);
> +	}
>  	return rte_flow_error_set(error, ENOSYS,
>  				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>  				  NULL, rte_strerror(ENOSYS));
> @@ -1241,12 +1283,16 @@ enum rte_flow_conv_item_spec_type {
>  {
>  	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
>  	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> +	int ret;
>
>  	if (unlikely(!ops))
>  		return -rte_errno;
> -	if (likely(!!ops->get_aged_flows))
> -		return flow_err(port_id, ops->get_aged_flows(dev, contexts,
> -				nb_contexts, error), error);
> +	if (likely(!!ops->get_aged_flows)) {
> +		fts_enter(dev);
> +		ret = ops->get_aged_flows(dev, contexts, nb_contexts, error);
> +		fts_exit(dev);
> +		return flow_err(port_id, ret, error);
> +	}
>  	return
> 		rte_flow_error_set(error, ENOTSUP,
> 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> 				  NULL, rte_strerror(ENOTSUP));
> --
> 1.8.3.1
>