DPDK patches and discussions
* [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid
@ 2020-10-08  9:17 Ciara Loftus
  2020-10-08 11:55 ` Ferruh Yigit
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Ciara Loftus @ 2020-10-08  9:17 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

Supporting this would require locks, which would impact the performance of
the more typical cases - xsks with different qids and netdevs.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 44 +++++++++++++++++++++++------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index eaf2c9c873..9e0e5c254a 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -634,16 +634,35 @@ find_internal_resource(struct pmd_internals *port_int)
 	return list;
 }
 
+/* Check if the netdev,qid context already exists */
+static inline bool
+ctx_exists(struct pkt_rx_queue *rxq, const char *ifname,
+		struct pkt_rx_queue *list_rxq, const char *list_ifname)
+{
+	bool exists = false;
+
+	if (rxq->xsk_queue_idx == list_rxq->xsk_queue_idx &&
+			!strncmp(ifname, list_ifname, IFNAMSIZ)) {
+		AF_XDP_LOG(ERR, "ctx %s,%i already exists, cannot share umem\n",
+					ifname, rxq->xsk_queue_idx);
+		exists = true;
+	}
+
+	return exists;
+}
+
 /* Get a pointer to an existing UMEM which overlays the rxq's mb_pool */
-static inline struct xsk_umem_info *
-get_shared_umem(struct pkt_rx_queue *rxq) {
+static inline int
+get_shared_umem(struct pkt_rx_queue *rxq, const char *ifname,
+			struct xsk_umem_info **umem)
+{
 	struct internal_list *list;
 	struct pmd_internals *internals;
-	int i = 0;
+	int i = 0, ret = 0;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 
 	if (mb_pool == NULL)
-		return NULL;
+		return ret;
 
 	pthread_mutex_lock(&internal_list_lock);
 
@@ -655,20 +674,25 @@ get_shared_umem(struct pkt_rx_queue *rxq) {
 			if (rxq == list_rxq)
 				continue;
 			if (mb_pool == internals->rx_queues[i].mb_pool) {
+				if (ctx_exists(rxq, ifname, list_rxq,
+						internals->if_name)) {
+					ret = -1;
+					goto out;
+				}
 				if (__atomic_load_n(
 					&internals->rx_queues[i].umem->refcnt,
 							__ATOMIC_ACQUIRE)) {
-					pthread_mutex_unlock(
-							&internal_list_lock);
-					return internals->rx_queues[i].umem;
+					*umem = internals->rx_queues[i].umem;
+					goto out;
 				}
 			}
 		}
 	}
 
+out:
 	pthread_mutex_unlock(&internal_list_lock);
 
-	return NULL;
+	return ret;
 }
 
 static int
@@ -913,7 +937,9 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	uint64_t umem_size, align = 0;
 
 	if (internals->shared_umem) {
-		umem = get_shared_umem(rxq);
+		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
+			return NULL;
+
 		if (umem != NULL &&
 			__atomic_load_n(&umem->refcnt, __ATOMIC_ACQUIRE) <
 					umem->max_xsks) {
-- 
2.17.1



* Re: [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid
  2020-10-08  9:17 [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid Ciara Loftus
@ 2020-10-08 11:55 ` Ferruh Yigit
  2020-10-13  8:52 ` Ferruh Yigit
  2020-10-13 13:10 ` [dpdk-dev] [PATCH v2] net/af_xdp: don't allow umem sharing for xsks with same ctx Ciara Loftus
  2 siblings, 0 replies; 6+ messages in thread
From: Ferruh Yigit @ 2020-10-08 11:55 UTC (permalink / raw)
  To: Ciara Loftus, dev; +Cc: techboard

On 10/8/2020 10:17 AM, Ciara Loftus wrote:
> Supporting this would require locks, which would impact the performance of
> the more typical cases - xsks with different qids and netdevs.
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")

Off topic:

This is patch 80000 in patchwork! That was fast.
Thanks to everyone who contributed!

https://patches.dpdk.org/patch/80000/

The historical numbers from DPDK patchwork:
80000 - Oct.   8, 2020 (153 days) [ 5 months / 21 weeks and 6 days ]
70000 - May    8, 2020 (224 days)
60000 - Sept. 27, 2019 (248 days)
50000 - Jan.  22, 2019 (253 days)
40000 - May   14, 2018 (217 days)
30000 - Oct.   9, 2017 (258 days)
20000 - Jan.  25, 2017 (372 days)
10000 - Jan.  20, 2016 (645 days)
00001 - April 16, 2014


This is the fastest 10K patches yet: the historical average of ~250 days per
10K has been beaten by about 100 days.
v20.11 being an ABI-break release must have helped, but such a big difference
may also mean the project is still growing.

To put it in another perspective, this means a continuous average of 65
patches per day for the last few months. Thanks again to all contributors.


* Re: [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid
  2020-10-08  9:17 [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid Ciara Loftus
  2020-10-08 11:55 ` Ferruh Yigit
@ 2020-10-13  8:52 ` Ferruh Yigit
  2020-10-13 13:37   ` Loftus, Ciara
  2020-10-13 13:10 ` [dpdk-dev] [PATCH v2] net/af_xdp: don't allow umem sharing for xsks with same ctx Ciara Loftus
  2 siblings, 1 reply; 6+ messages in thread
From: Ferruh Yigit @ 2020-10-13  8:52 UTC (permalink / raw)
  To: Ciara Loftus, dev

On 10/8/2020 10:17 AM, Ciara Loftus wrote:
> Supporting this would require locks, which would impact the performance of
> the more typical cases - xsks with different qids and netdevs.
> 
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")

Hi Ciara,

The 'check-git-log.sh' script is giving some errors; can you please fix them?
You can run the script as: "./devtools/check-git-log.sh -n1"

And can you please give a sample in the description of what is not supported now?

Also, does this require any documentation update, as a kind of limitation or
known issue?


* [dpdk-dev] [PATCH v2] net/af_xdp: don't allow umem sharing for xsks with same ctx
  2020-10-08  9:17 [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid Ciara Loftus
  2020-10-08 11:55 ` Ferruh Yigit
  2020-10-13  8:52 ` Ferruh Yigit
@ 2020-10-13 13:10 ` Ciara Loftus
  2020-10-13 15:57   ` Ferruh Yigit
  2 siblings, 1 reply; 6+ messages in thread
From: Ciara Loftus @ 2020-10-13 13:10 UTC (permalink / raw)
  To: dev; +Cc: Ciara Loftus

AF_XDP PMDs that wish to share a UMEM must each have a unique context
(ctx), i.e. the netdev,qid tuple. For instance, the following will not
work since both PMDs' contexts are identical.

  --vdev net_af_xdp0,iface=ens786f1,start_queue=0,shared_umem=1
  --vdev net_af_xdp1,iface=ens786f1,start_queue=0,shared_umem=1

Supporting this scenario would require locks, which would impact
the performance of the more typical cases - xsks with different
netdev,qid tuples.

Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
---
v2:
* Add doc update
* Fix commit message style issues
* Update commit message with more information

 doc/guides/nics/af_xdp.rst          | 25 ++++++++++++++++
 drivers/net/af_xdp/rte_eth_af_xdp.c | 44 +++++++++++++++++++++++------
 2 files changed, 60 insertions(+), 9 deletions(-)

diff --git a/doc/guides/nics/af_xdp.rst b/doc/guides/nics/af_xdp.rst
index be268fe7ff..052e59a3ae 100644
--- a/doc/guides/nics/af_xdp.rst
+++ b/doc/guides/nics/af_xdp.rst
@@ -82,3 +82,28 @@ Limitations
 
   Note: The AF_XDP PMD will fail to initialise if an MTU which violates the driver's
   conditions as above is set prior to launching the application.
+
+- **Shared UMEM**
+
+  The sharing of UMEM is only supported for AF_XDP sockets with unique contexts.
+  The context refers to the netdev,qid tuple.
+
+  The following combination will fail:
+
+  .. code-block:: console
+
+    --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
+    --vdev net_af_xdp1,iface=ens786f1,shared_umem=1 \
+
+  Either of the following however is permitted since either the netdev or qid differs
+  between the two vdevs:
+
+  .. code-block:: console
+
+    --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
+    --vdev net_af_xdp1,iface=ens786f1,start_queue=1,shared_umem=1 \
+
+  .. code-block:: console
+
+    --vdev net_af_xdp0,iface=ens786f1,shared_umem=1 \
+    --vdev net_af_xdp1,iface=ens786f2,shared_umem=1 \
\ No newline at end of file
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index eaf2c9c873..9e0e5c254a 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -634,16 +634,35 @@ find_internal_resource(struct pmd_internals *port_int)
 	return list;
 }
 
+/* Check if the netdev,qid context already exists */
+static inline bool
+ctx_exists(struct pkt_rx_queue *rxq, const char *ifname,
+		struct pkt_rx_queue *list_rxq, const char *list_ifname)
+{
+	bool exists = false;
+
+	if (rxq->xsk_queue_idx == list_rxq->xsk_queue_idx &&
+			!strncmp(ifname, list_ifname, IFNAMSIZ)) {
+		AF_XDP_LOG(ERR, "ctx %s,%i already exists, cannot share umem\n",
+					ifname, rxq->xsk_queue_idx);
+		exists = true;
+	}
+
+	return exists;
+}
+
 /* Get a pointer to an existing UMEM which overlays the rxq's mb_pool */
-static inline struct xsk_umem_info *
-get_shared_umem(struct pkt_rx_queue *rxq) {
+static inline int
+get_shared_umem(struct pkt_rx_queue *rxq, const char *ifname,
+			struct xsk_umem_info **umem)
+{
 	struct internal_list *list;
 	struct pmd_internals *internals;
-	int i = 0;
+	int i = 0, ret = 0;
 	struct rte_mempool *mb_pool = rxq->mb_pool;
 
 	if (mb_pool == NULL)
-		return NULL;
+		return ret;
 
 	pthread_mutex_lock(&internal_list_lock);
 
@@ -655,20 +674,25 @@ get_shared_umem(struct pkt_rx_queue *rxq) {
 			if (rxq == list_rxq)
 				continue;
 			if (mb_pool == internals->rx_queues[i].mb_pool) {
+				if (ctx_exists(rxq, ifname, list_rxq,
+						internals->if_name)) {
+					ret = -1;
+					goto out;
+				}
 				if (__atomic_load_n(
 					&internals->rx_queues[i].umem->refcnt,
 							__ATOMIC_ACQUIRE)) {
-					pthread_mutex_unlock(
-							&internal_list_lock);
-					return internals->rx_queues[i].umem;
+					*umem = internals->rx_queues[i].umem;
+					goto out;
 				}
 			}
 		}
 	}
 
+out:
 	pthread_mutex_unlock(&internal_list_lock);
 
-	return NULL;
+	return ret;
 }
 
 static int
@@ -913,7 +937,9 @@ xsk_umem_info *xdp_umem_configure(struct pmd_internals *internals,
 	uint64_t umem_size, align = 0;
 
 	if (internals->shared_umem) {
-		umem = get_shared_umem(rxq);
+		if (get_shared_umem(rxq, internals->if_name, &umem) < 0)
+			return NULL;
+
 		if (umem != NULL &&
 			__atomic_load_n(&umem->refcnt, __ATOMIC_ACQUIRE) <
 					umem->max_xsks) {
-- 
2.17.1



* Re: [dpdk-dev] [PATCH] net/af_xdp: Don't allow umem sharing for xsks with same netdev, qid
  2020-10-13  8:52 ` Ferruh Yigit
@ 2020-10-13 13:37   ` Loftus, Ciara
  0 siblings, 0 replies; 6+ messages in thread
From: Loftus, Ciara @ 2020-10-13 13:37 UTC (permalink / raw)
  To: Yigit, Ferruh; +Cc: dev

> On 10/8/2020 10:17 AM, Ciara Loftus wrote:
> > Supporting this would require locks, which would impact the performance
> of
> > the more typical cases - xsks with different qids and netdevs.
> >
> > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> > Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
> 
> Hi Ciara,
> 
> The 'check-git-log.sh' script is giving some errors; can you please fix
> them? You can run the script as: "./devtools/check-git-log.sh -n1"
> 
> And can you please give a sample in the description of what is not
> supported now?
> 
> Also, does this require any documentation update, as a kind of limitation
> or known issue?

Thanks for the suggestions Ferruh, all valid. I've implemented them in the v2.

Ciara


* Re: [dpdk-dev] [PATCH v2] net/af_xdp: don't allow umem sharing for xsks with same ctx
  2020-10-13 13:10 ` [dpdk-dev] [PATCH v2] net/af_xdp: don't allow umem sharing for xsks with same ctx Ciara Loftus
@ 2020-10-13 15:57   ` Ferruh Yigit
  0 siblings, 0 replies; 6+ messages in thread
From: Ferruh Yigit @ 2020-10-13 15:57 UTC (permalink / raw)
  To: Ciara Loftus, dev

On 10/13/2020 2:10 PM, Ciara Loftus wrote:
> AF_XDP PMDs that wish to share a UMEM must each have a unique context
> (ctx), i.e. the netdev,qid tuple. For instance, the following will not
> work since both PMDs' contexts are identical.
> 
>    --vdev net_af_xdp0,iface=ens786f1,start_queue=0,shared_umem=1
>    --vdev net_af_xdp1,iface=ens786f1,start_queue=0,shared_umem=1
> 
> Supporting this scenario would require locks, which would impact
> the performance of the more typical cases - xsks with different
> netdev,qid tuples.
> 
 > Fixes: 74b46340e2d4 ("net/af_xdp: support shared UMEM")
 >
> Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
 >

Applied to dpdk-next-net/main, thanks.

