From: "Eads, Gage" <gage.eads@intel.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Cc: "olivier.matz@6wind.com" <olivier.matz@6wind.com>,
	"arybchenko@solarflare.com" <arybchenko@solarflare.com>
Subject: [dpdk-dev] Mempool handler ops index allocation issue
Date: Thu, 9 May 2019 22:19:55 +0000
Message-ID: <9184057F7FC11744A2107296B6B8EB1E68CB4370@fmsmsx101.amr.corp.intel.com>

Hi all,

I ran into a problem with a multi-process application in which two processes assigned the same mempool handler ops index to *different* handlers. This happened because the two processes supplied their -d EAL arguments in a different order, e.g.:

sudo ./appA -dlibrte_mempool_bucket.so -dlibrte_mempool_ring.so --proc-type primary &
sudo ./appB -dlibrte_mempool_ring.so -dlibrte_mempool_bucket.so --proc-type secondary &

The dynamic load order matters because the ops indexes are assigned in the order the library ctors run. As a result, the two processes can end up using different handlers for the same mempool.
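For illustration, here is a minimal, self-contained model of that registration path (my own simplification for this mail - not the actual rte_mempool_ops.c code, and the handler names are just examples):

#include <stdio.h>
#include <string.h>

#define MAX_OPS 16

/* Simplified stand-in for rte_mempool_ops_table: a global array plus
 * a running count, where each registration takes the next free slot. */
static char ops_names[MAX_OPS][32];
static int num_ops;

static int
register_ops(const char *name)
{
	int idx = num_ops++;

	strcpy(ops_names[idx], name);
	return idx; /* this index is what a mempool later stores */
}

/* Each handler library registers itself from a constructor, so the
 * index a handler receives depends purely on ctor execution order. */
__attribute__((constructor)) static void reg_a(void)
{
	printf("ring -> ops index %d\n", register_ops("ring_mp_mc"));
}

__attribute__((constructor)) static void reg_b(void)
{
	printf("bucket -> ops index %d\n", register_ops("bucket"));
}

int
main(void)
{
	/* With -d, these ctors live in separate .so files and run in
	 * dlopen() order; swap the load order between two processes
	 * and the same index names a different handler. */
	return 0;
}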

I'm not aware of any requirement that the EAL argument order match across processes, so I don't think this is a user error. The same problem could occur with static linking if the two applications linked the handler libraries in a different order - but that shouldn't happen if both applications follow the rules for building an external application (https://doc.dpdk.org/guides/prog_guide/dev_kit_build_system.html#building-external-applications).

If you agree that this is an issue, I see a couple of possible resolutions:

1. Add a note/warning to the mempool handlers section of the user guide (https://doc.dpdk.org/guides/prog_guide/mempool_lib.html#mempool-handlers).

2. Modify rte_mempool_register_ops() so that the built-in handlers (ring, stack, etc.) have fixed IDs, e.g. ring is always 0, stack is always 1, and so on. These handlers could be identified by their name string. User-registered handlers would still be susceptible to this problem, though - see the sketch after this list.
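To make option 2 concrete, something along these lines inside rte_mempool_register_ops() could pin the built-in IDs (a rough sketch under my own assumptions - the name table and helper below are hypothetical, not existing DPDK code):

#include <stddef.h>
#include <string.h>

/* Hypothetical: reserve fixed indexes for the in-tree handlers,
 * matched by name string, so every process assigns them identically
 * regardless of ctor order. */
static const char * const builtin_ops[] = {
	"ring_mp_mc", /* always index 0 */
	"stack",      /* always index 1 */
	"bucket",     /* always index 2 */
};
#define NUM_BUILTIN (sizeof(builtin_ops) / sizeof(builtin_ops[0]))

static int
assign_ops_index(const char *name, unsigned int *next_dynamic)
{
	size_t i;

	for (i = 0; i < NUM_BUILTIN; i++)
		if (strcmp(name, builtin_ops[i]) == 0)
			return (int)i; /* fixed, order-independent ID */

	/* User-registered handlers still take first-come-first-served
	 * indexes above the reserved range, so they remain exposed to
	 * the ordering problem. */
	return (int)(NUM_BUILTIN + (*next_dynamic)++);
}

This wouldn't help out-of-tree handlers, which is why option 1 may be worth doing in any case.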

Thoughts? Alternatives?

Thanks,
Gage

Thread overview:
2019-05-09 22:19 Eads, Gage [this message]
2019-05-13 12:14 ` Olivier Matz
2019-05-13 12:22   ` Andrew Rybchenko
2019-06-18 18:14     ` Eads, Gage
