DPDK patches and discussions
From: "Loftus, Ciara" <ciara.loftus@intel.com>
To: "Tahhan, Maryam" <maryam.tahhan@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 1/1] net/af_xdp: shared UMEM support
Date: Thu, 17 Sep 2020 08:54:45 +0000	[thread overview]
Message-ID: <759f102000c249b6b93fa2503ea41861@intel.com> (raw)
In-Reply-To: <MN2PR11MB374358F26827444BA5F64A5EF0270@MN2PR11MB3743.namprd11.prod.outlook.com>

> >
> > Kernel v5.10 will introduce the ability to efficiently share a UMEM between
> > AF_XDP sockets bound to different queue ids on the same or different
> > devices. This patch integrates that functionality into the AF_XDP PMD.
> >
> > A PMD will attempt to share a UMEM with others if the shared_umem=1
> > vdev arg is set. UMEMs can only be shared across PMDs with the same
> > mempool, up to a limited number of PMDs governed by the size of the
> > given mempool.
> > Sharing UMEMs is not supported for non-zero-copy (aligned) mode.
> >
> > The benefit of sharing UMEM across PMDs is a saving in memory due to not
> > having to register the UMEM multiple times. Throughput was measured to
> > remain within 2% of the default mode (not sharing UMEM).
> >
> > A version of libbpf >= v0.2.0 is required and the appropriate pkg-config file
> > for libbpf must be installed such that meson can determine the version.
> >
> > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> 
> <snip>
> 
> >
> > +/* List which tracks PMDs to facilitate sharing UMEMs across them. */
> > +struct internal_list {
> > +	TAILQ_ENTRY(internal_list) next;
> > +	struct rte_eth_dev *eth_dev;
> > +};
> > +
> > +TAILQ_HEAD(internal_list_head, internal_list);
> > +
> > +static struct internal_list_head internal_list =
> > +	TAILQ_HEAD_INITIALIZER(internal_list);
> > +
> > +static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
> 
> [Tahhan, Maryam] do multiple threads typically initialize an ethdev /
> invoke the underlying driver?
> Most apps I've seen initialize the ports one after the other in the
> starting thread - so if there aren't multiple threads doing
> initialization, we may want to consider removing this mutex...
> Or do you see a scenario where one port could be removed while another
> port is being added?

Hi Maryam,

Yes. Although unlikely, I'm not aware of any guarantee that port A cannot be removed while port B is being added, and since both operations can touch the tailq I'm inclined to keep the mutex. But I'm open to correction.
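To illustrate the point (a minimal standalone sketch, not the PMD's actual code - struct eth_dev and the helper names are stand-ins): both the add and the remove path modify the same tailq, so each must take the lock, otherwise a concurrent hotplug add and remove could leave the list half-updated.

```c
#include <pthread.h>
#include <sys/queue.h>

/* Stand-in for struct rte_eth_dev, for illustration only. */
struct eth_dev { int port_id; };

struct internal_list {
	TAILQ_ENTRY(internal_list) next;
	struct eth_dev *eth_dev;
};

TAILQ_HEAD(internal_list_head, internal_list);

static struct internal_list_head internal_list =
	TAILQ_HEAD_INITIALIZER(internal_list);

static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Port-add path: append under the lock. */
static void list_add(struct internal_list *e)
{
	pthread_mutex_lock(&internal_list_lock);
	TAILQ_INSERT_TAIL(&internal_list, e, next);
	pthread_mutex_unlock(&internal_list_lock);
}

/* Port-remove path: unlink under the same lock, so a concurrent
 * add can never observe the tailq mid-update. */
static void list_del(struct internal_list *e)
{
	pthread_mutex_lock(&internal_list_lock);
	TAILQ_REMOVE(&internal_list, e, next);
	pthread_mutex_unlock(&internal_list_lock);
}
```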

Thanks,
Ciara

> 
> <snip>
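For context, opting in is a per-vdev devarg as described in the commit message. A hypothetical invocation (interface name, queue ids, and core list are illustrative; both vdevs must use the same mempool for sharing to kick in) might look like:

```shell
# Illustrative only: two AF_XDP PMDs bound to different queues of the
# same interface, both requesting UMEM sharing via shared_umem=1.
dpdk-testpmd -l 0-1 --no-pci \
    --vdev net_af_xdp0,iface=eth0,start_queue=0,shared_umem=1 \
    --vdev net_af_xdp1,iface=eth0,start_queue=1,shared_umem=1 \
    -- -i
```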



Thread overview: 5+ messages
2020-09-07 16:16 [dpdk-dev] [PATCH 0/1] " Ciara Loftus
2020-09-07 16:16 ` [dpdk-dev] [PATCH 1/1] " Ciara Loftus
2020-09-10 11:55   ` Tahhan, Maryam
2020-09-17  8:54     ` Loftus, Ciara [this message]
2020-09-17  9:49       ` Tahhan, Maryam
