DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Loftus, Ciara" <ciara.loftus@intel.com>,
	"Du, Frank" <frank.du@intel.com>,
	"Andrew Rybchenko" <andrew.rybchenko@oktetlabs.ru>,
	"Paul Szczepanek" <paul.szczepanek@arm.com>
Cc: "Ferruh Yigit" <ferruh.yigit@amd.com>, <dev@dpdk.org>,
	"Burakov, Anatoly" <anatoly.burakov@intel.com>
Subject: RE: [PATCH v2] net/af_xdp: fix umem map size for zero copy
Date: Wed, 29 May 2024 16:16:32 +0200	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35E9F4CC@smartserver.smartshare.dk> (raw)
In-Reply-To: <MW4PR11MB58724AC82A34A3EEFEF78E898EF22@MW4PR11MB5872.namprd11.prod.outlook.com>

> From: Loftus, Ciara [mailto:ciara.loftus@intel.com]
> Sent: Wednesday, 29 May 2024 14.58
> 
> > From: Du, Frank <frank.du@intel.com>
> > Sent: Thursday, May 23, 2024 8:56 AM
> >
> > > From: Morten Brørup <mb@smartsharesystems.com>
> > > Sent: Thursday, May 23, 2024 3:41 PM
> > >
> > > > From: Du, Frank [mailto:frank.du@intel.com]
> > > > Sent: Thursday, 23 May 2024 08.56
> > > >
> > > > > From: Morten Brørup <mb@smartsharesystems.com>
> > > > > Sent: Wednesday, May 22, 2024 3:27 PM
> > > > >
> > > > > > From: Du, Frank [mailto:frank.du@intel.com]
> > > > > > Sent: Wednesday, 22 May 2024 03.25
> > > > > >
> > > > > > > From: Ferruh Yigit <ferruh.yigit@amd.com>
> > > > > > > Sent: Wednesday, May 22, 2024 1:58 AM

[...]

> > > > > > >
> > > > > > > Isn't there a mempool flag that can help us figure out mempool
> > > > > > > is not IOVA contiguous? Isn't it sufficient on its own?
> > > > > >
> > > > > > Indeed, what we need to ascertain is whether it's contiguous in
> > > > > > CPU virtual space, not IOVA. I haven't come across a flag
> > > > > > specifically for CPU virtual contiguity. The major limitation in
> > > > > > XDP is that an XSK UMEM only supports registering a single
> > > > > > contiguous virtual memory area.
> > > > >
> > > > > I would assume that the EAL memory manager merges free memory into
> > > > > contiguous chunks whenever possible.
> > > > > @Anatoly, please confirm?
> > > > >
> > > > > If my assumption is correct, it means that if mp->nb_mem_chunks !=
> > > > > 1, then the mempool is not virtually contiguous. And if
> > > > > mp->nb_mem_chunks == 1, then it is; there is no need to iterate
> > > > > through the memhdr list.
> > > >
> > > > Even if that is true today, the assumption may not hold after future
> > > > code changes. Iterating through the list is the safer way, as it
> > > > explicitly checks the virtual addresses without relying on any other
> > > > condition.
> > >
> > > If there is exactly one memory chunk, it is virtually contiguous. It has
> > > one address and one length, so it must be.
> > >
> > > If there is more than one memory chunk, I consider it unlikely that they
> > > are contiguous.
> > > Have you ever observed the opposite, i.e. a mempool with multiple memory
> > > chunks being virtually contiguous?
> > >
> > > Iterating through the list does not seem safer to me, quite the opposite.
> > > Which future change are you trying to prepare for?
> > >
> > > Keeping it simple is more likely to not break with future changes.
> >
> > No, I haven't actually encountered a mempool with multiple memory chunks,
> > and I don't know how to construct one. The initial approach was to return
> > an error if multiple chunks were detected; the iteration method was
> > introduced later. I can revert to the original, simpler way.
> 
> The mempool created in my (virtualized) test environment always has multiple
> memory chunks and the iterative check for virtual contiguity in v2 of this
> patch succeeds for me.
> However in v4, since mp->nb_mem_chunks != 1, it will fail for me.
> So it appears that virtual contiguity is possible even if mp->nb_mem_chunks
> != 1, so I don't think we can rely on that value for determining virtual
> contiguity.

Excellent feedback, Ciara!
Once again, reality beats assumptions. Memory management is not easy. :-)

In another thread [1], I have asked Paul Szczepanek to pick up on this, and coordinate directly with Frank Du.
Paul is working on a closely related function in the mempool library, and it makes sense to merge this feature into the function he is providing.
Alternatively, a separate function could be provided in the mempool library.

[1]: https://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35E9F4C4@smartserver.smartshare.dk/T/#m8bf62c45d12f34659becf965cf101f8456723c94



Thread overview: 29+ messages
2024-04-26  0:51 [PATCH] " Frank Du
2024-04-26 10:43 ` Loftus, Ciara
2024-04-28  0:46   ` Du, Frank
2024-04-30  9:22     ` Loftus, Ciara
2024-05-11  5:26 ` [PATCH v2] " Frank Du
2024-05-17 13:19   ` Loftus, Ciara
2024-05-20  1:28     ` Du, Frank
2024-05-21 15:43   ` Ferruh Yigit
2024-05-21 17:57   ` Ferruh Yigit
2024-05-22  1:25     ` Du, Frank
2024-05-22  7:26       ` Morten Brørup
2024-05-22 10:20         ` Ferruh Yigit
2024-05-23  6:56         ` Du, Frank
2024-05-23  7:40           ` Morten Brørup
2024-05-23  7:56             ` Du, Frank
2024-05-29 12:57               ` Loftus, Ciara
2024-05-29 14:16                 ` Morten Brørup [this message]
2024-05-22 10:00       ` Ferruh Yigit
2024-05-22 11:03         ` Morten Brørup
2024-05-22 14:05           ` Ferruh Yigit
2024-05-23  6:53 ` [PATCH v3] " Frank Du
2024-05-23  8:07 ` [PATCH v4] " Frank Du
2024-05-23  9:22   ` Morten Brørup
2024-05-23 13:31     ` Ferruh Yigit
2024-05-24  1:05       ` Du, Frank
2024-05-24  5:30         ` Morten Brørup
2024-06-20  3:25 ` [PATCH v5] net/af_xdp: parse umem map info from mempool range api Frank Du
2024-06-20  7:10   ` Morten Brørup
2024-07-06  3:40     ` Ferruh Yigit
