DPDK patches and discussions
From: Jay Rolette <rolette@infiniteio.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] Avoid possible memory copy when sorting hugepages
Date: Wed, 10 Dec 2014 15:36:37 -0600	[thread overview]
Message-ID: <CADNuJVpsNNOymFmmCPGA9hHB84bzT-N90mci6j5YZqVKtCfi3Q@mail.gmail.com> (raw)
In-Reply-To: <2601191342CEEE43887BDE71AB977258213BED20@IRSMSX105.ger.corp.intel.com>

On Wed, Dec 10, 2014 at 12:39 PM, Ananyev, Konstantin
<konstantin.ananyev@intel.com> wrote:

> > I just got through replacing that entire function in my repo with a call
> > to qsort() from the standard library last night myself. Faster
> > (although probably not material to most deployments) and less code.
>
> If you feel like it is worth it, why not submit a patch? :)


On Haswell and Ivy Bridge Xeons, with 128 1G huge pages, it doesn't make a
user-noticeable difference in the time required for
rte_eal_hugepage_init(). I went ahead and checked it into my repo anyway
because:

a) it eats at my soul to see an O(n^2) sort for something where qsort() is
trivial to use (rough sketch below)
b) we will increase that to ~232 1G huge pages soon. It likely doesn't
matter at that point either, but since it was already written...
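
For illustration, here's roughly what the replacement looks like. The struct
and field names are from memory of the 1.6-era eal_memory.c, so treat this as
a sketch rather than the exact diff:

#include <stdlib.h>
#include <stdint.h>

/* Subset of the EAL hugepage descriptor; only physaddr matters for the sort.
 * Field names are assumptions based on the 1.6-era eal_memory.c. */
struct hugepage_file {
	void *orig_va;        /* VA from the first mmap() pass */
	void *final_va;       /* VA after remapping to the final location */
	uint64_t physaddr;    /* physical address backing the page */
	size_t size;          /* huge page size */
	/* ... remaining fields elided ... */
};

/* qsort() comparator: ascending physical address. */
static int
cmp_physaddr(const void *a, const void *b)
{
	const struct hugepage_file *p1 = a;
	const struct hugepage_file *p2 = b;

	if (p1->physaddr < p2->physaddr)
		return -1;
	if (p1->physaddr > p2->physaddr)
		return 1;
	return 0;
}

/* The hand-rolled O(n^2) sort collapses to a single call. */
static void
sort_by_physaddr(struct hugepage_file *hugepg_tbl, unsigned int n_pages)
{
	qsort(hugepg_tbl, n_pages, sizeof(hugepg_tbl[0]), cmp_physaddr);
}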

What *does* chew up a lot of time in init is where the huge pages are being
explicitly zeroed in map_all_hugepages().

Removing that memset() makes find_numasocket() blow up, but I was able to
do a quick test where I only memset 1 byte on each page. That cut init time
by 30% (~20 seconds in my test).  Significant, but since I'm not entirely
sure it is safe, I'm not making that change right now.
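
For reference, the experiment was essentially a one-line change inside the
per-page mapping loop of map_all_hugepages(); the variable names below
approximate that loop rather than quoting it:

/* virtaddr is the freshly mmap()'d huge page, hugepage_sz its size.
 * The original code zeroes the whole page:
 *
 *     memset(virtaddr, 0, hugepage_sz);
 *
 * Experiment: touch a single byte so the page is still faulted in
 * (which find_numasocket() seems to rely on) without paying for a
 * full clear of the page. */
memset(virtaddr, 0, 1);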

On Linux, shared memory that isn't file-backed is automatically zeroed
before the app gets it. However, I haven't had a chance to chase down
whether that applies to huge pages or not, much less how hugetlbfs factors
into the equation.
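
Something like this quick standalone check should answer the first half of
that question for a given kernel (it assumes 2 MB huge pages are reserved and
the kernel has MAP_HUGETLB, i.e. >= 2.6.32; the hugetlbfs-file-backed case
would need a separate test against a file under the mount point):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_SZ (2UL * 1024 * 1024)  /* assume 2 MB huge pages */

int main(void)
{
	/* Anonymous, shared, huge-page-backed mapping -- the not-file-backed case. */
	unsigned char *p = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Reading faults the page in; every byte should come back zero. */
	size_t dirty = 0;
	for (size_t off = 0; off < HUGE_SZ; off++)
		if (p[off] != 0)
			dirty++;

	printf("%zu non-zero bytes out of %lu\n", dirty, HUGE_SZ);
	munmap(p, HUGE_SZ);
	return 0;
}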

Back to the question about the patch, if you guys are interested in it,
I'll have to figure out your patch submission process. Shouldn't be a huge
deal other than the fact that we are on DPDK 1.6 (r2).

Cheers,
Jay


Thread overview: 8+ messages
2014-12-10  2:25 Michael Qiu
2014-12-10 10:41 ` Bruce Richardson
2014-12-10 10:59   ` Qiu, Michael
2014-12-10 11:08     ` Ananyev, Konstantin
2014-12-10 17:58       ` Jay Rolette
     [not found]         ` <2601191342CEEE43887BDE71AB977258213BED0C@IRSMSX105.ger.corp.intel.com>
2014-12-10 18:39           ` Ananyev, Konstantin
2014-12-10 21:36             ` Jay Rolette [this message]
2014-12-11  1:44               ` Qiu, Michael
