DPDK patches and discussions
From: Olivier Matz <olivier.matz@6wind.com>
To: "Eads, Gage" <gage.eads@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"arybchenko@solarflare.com" <arybchenko@solarflare.com>
Subject: Re: [dpdk-dev] Mempool handler ops index allocation issue
Date: Mon, 13 May 2019 14:14:24 +0200
Message-ID: <20190513121424.7u7nwp5ziq4nhcjs@platinum>
In-Reply-To: <9184057F7FC11744A2107296B6B8EB1E68CB4370@fmsmsx101.amr.corp.intel.com>

Hi Gage,

On Thu, May 09, 2019 at 10:19:55PM +0000, Eads, Gage wrote:
> Hi all,
> 
> I ran into a problem with a multi-process application in which two processes assigned the same mempool handler ops index to *different* handlers. This happened because the two processes supplied the -d EAL arguments in a different order, e.g.:
> 
> sudo ./appA -dlibrte_mempool_bucket.so -dlibrte_mempool_ring.so --proc-type primary &
> sudo ./appB -dlibrte_mempool_ring.so -dlibrte_mempool_bucket.so --proc-type secondary &
> 
> The dynamic load order matters because the ops indexes are assigned in the order the library constructors run. This can result in the two processes trying to use different handlers for the same mempool.
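
For context, each handler library calls rte_mempool_register_ops() from a
constructor (via MEMPOOL_REGISTER_OPS), and that function simply hands out
the next free slot in a per-process table, so the index depends only on the
order the constructors run. A simplified sketch of that logic (not the
exact upstream code):

#include <errno.h>
#include <stdio.h>
#include <rte_mempool.h>

/*
 * Simplified sketch of the logic in rte_mempool_register_ops(): each
 * handler takes the next free slot in a per-process table, so its index
 * depends only on constructor run order.
 */
static struct {
        unsigned int num_ops;
        struct rte_mempool_ops ops[RTE_MEMPOOL_MAX_OPS_IDX];
} ops_table;

static int
register_ops_sketch(const struct rte_mempool_ops *h)
{
        struct rte_mempool_ops *ops;
        int ops_index;

        if (ops_table.num_ops >= RTE_MEMPOOL_MAX_OPS_IDX)
                return -ENOSPC;

        ops_index = ops_table.num_ops++;        /* next free slot */
        ops = &ops_table.ops[ops_index];
        snprintf(ops->name, sizeof(ops->name), "%s", h->name);
        ops->alloc = h->alloc;
        /* ... copy the remaining callbacks ... */

        return ops_index;
}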
> 
> I'm not aware of any requirement that the EAL argument order must match across processes, so I don't think this is a user error. The same thing could happen with statically linked applications if they link the libraries in a different order, but that shouldn't occur if both applications follow the rules for building an external application (https://doc.dpdk.org/guides/prog_guide/dev_kit_build_system.html#building-external-applications).
> 
> If you agree that this is an issue, I see a couple of possible resolutions:
> 
> 1. Add a note/warning to the mempool handlers section of the user guide (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html#mempool-handlers).
> 
> 2. Modify rte_mempool_register_ops() so that built-in handlers (ring, stack, etc.) have fixed IDs, e.g. ring is always 0, stack is always 1, and so on. These handlers could be identified by their name string. User-registered handlers would still be susceptible to this problem, though.
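
For illustration, that could be as small as a fixed name-to-index table
consulted before falling back to the current dynamic allocation (a sketch
only; the exact set of names and their ordering below is made up):

#include <string.h>
#include <rte_common.h>

/*
 * Illustration only, not existing DPDK code: pin the built-in handler
 * names to fixed slots; anything else keeps the dynamic allocation.
 */
static const char * const builtin_ops_names[] = {
        "ring_mp_mc",   /* always index 0 */
        "ring_sp_sc",   /* always index 1 */
        "stack",        /* always index 2 */
        "bucket",       /* always index 3 */
};

static int
builtin_ops_index(const char *name)
{
        unsigned int i;

        for (i = 0; i < RTE_DIM(builtin_ops_names); i++)
                if (strcmp(name, builtin_ops_names[i]) == 0)
                        return i;
        return -1;      /* not built-in: fall back to the next free slot */
}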
> 
> Thoughts? Alternatives?

What about this:

- add a new table, in a named memory zone, that stores the primary
  process's association between mempool_ops id and name (but not the
  ops pointers).

- change rte_mempool_register_ops() to behave differently in a
  secondary process: look up that table to get the id associated with
  the name, and fail if it is not found (a rough sketch is below).

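Something along these lines (a rough, untested sketch; the memzone name
and table layout below are invented for illustration):

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <rte_memzone.h>
#include <rte_mempool.h>

/*
 * Rough, untested sketch. The primary publishes its id<->name mapping
 * in a memzone; a secondary resolves a handler name against it instead
 * of taking the next free slot.
 */
#define MEMPOOL_OPS_MZ "mempool_ops_index_table"        /* invented name */

struct mempool_ops_shared_table {
        uint32_t num_ops;
        char name[RTE_MEMPOOL_MAX_OPS_IDX][RTE_MEMPOOL_OPS_NAMESIZE];
};

static int
shared_ops_index(const char *name)
{
        const struct rte_memzone *mz = rte_memzone_lookup(MEMPOOL_OPS_MZ);
        struct mempool_ops_shared_table *t;
        uint32_t i;

        if (mz == NULL)
                return -ENOENT;
        t = mz->addr;
        for (i = 0; i < t->num_ops; i++)
                if (strcmp(t->name[i], name) == 0)
                        return i;       /* reuse the primary's index */
        return -ENOENT; /* the primary never registered this handler */
}

In the primary, the registration code would reserve that memzone with
rte_memzone_reserve() and append each name at the index it just
allocated; a secondary would use the index returned by a lookup like the
one above. One wrinkle: the registration constructors run before
rte_eal_init(), so in practice the memzone work would presumably have to
be deferred until EAL initialization.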

On the other hand, using secondary processes has always looked a bit
dangerous to me (for several reasons), so adding a note to the
programmer's guide (your proposal 1) is also fine with me.

Thanks,
Olivier

Thread overview: 7+ messages
2019-05-09 22:19 Eads, Gage
2019-05-13 12:14 ` Olivier Matz [this message]
2019-05-13 12:22   ` Andrew Rybchenko
2019-06-18 18:14     ` Eads, Gage
