DPDK patches and discussions
From: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
To: Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] app/test: fix memory needs after RTE_MAX_LCORE was increased to 128
Date: Thu, 11 Dec 2014 09:56:16 +0000	[thread overview]
Message-ID: <E115CCD9D858EF4F90C690B0DCB4D897268770E5@IRSMSX108.ger.corp.intel.com> (raw)
In-Reply-To: <6865201.6y9pmue6ZI@xps13>

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Thursday, December 11, 2014 1:11 AM
> To: De Lara Guarch, Pablo
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2] app/test: fix memory needs after
> RTE_MAX_LCORE was increased to 128
> 2014-12-10 15:40, Thomas Monjalon:
> > 2014-12-09 10:11, Pablo de Lara:
> > > Since commit b91c67e5a693211862aa7dc3b78630b4e856c2af,
> > > the maximum number of cores is 128, which has increased
> > > the total memory necessary for a rte_mempool structure,
> > > as the per-lcore local cache has doubled in size.
> > > Therefore, the eal_flags unit test was broken, since it
> > > needed to use more hugepages.
> > >
> > > Changes in v2: Increased memory to 18MB, as that is the
> > > actual minimum memory necessary (depending on the physical
> > > memory segments, DPDK may need less memory)
> > >
> > > Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
> >
> > Independently of RTE_MAX_LCORE being increased to 128, I still
> > have some problems with test_invalid_vdev_flag().
> > First, DEFAULT_MEM_SIZE is not used in this function.
> > Also, even with your patch, I get this error in my VM:
> > 	Cannot get hugepage information
> >
> > But if this patch solves an issue for you, I guess it should be applied.
> > That said, I would appreciate a complete description of the commands
> > you use to launch this test in a VM (starting with the Qemu command).
> Applied
> I think we should try to improve this test to be able to run it everywhere.

Sorry Thomas for not coming back to you earlier.
This is the qemu command I used:
qemu-kvm -cpu host -smp 8 -m 4G -monitor stdio -nographic -device e1000,vlan=1 -net user,vlan=1,hostfwd=::2222-:22 fedora-img.cow

And then I run the test using:

./x86_64-native-linuxapp-gcc/app/test -c ff -n 4 -m 20

I suspect you are missing the -m flag; without it, the test will fail because it launches
other primary processes (the test that is failing for you, for instance), which require some free hugepages.
So, as you said, this has nothing to do with RTE_MAX_LCORE.
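As a rough sketch of the hugepage arithmetic involved (the 2 MB hugepage size is an assumption, the common default on x86_64; check your VM's configuration), this is how the -m 20 value maps to the number of free hugepages the spawned primary processes would need:

```shell
# Sketch (assumption): each primary process started by the eal_flags
# test reserves its own hugepage memory, so the VM must have enough
# free hugepages to cover the -m value passed to the test app.
HUGEPAGE_KB=2048                       # assumed hugepage size (kB)
NEEDED_MB=20                           # value passed to the test via -m
# Round up to whole pages: (20 MB * 1024) / 2048 kB = 10 pages.
needed_pages=$(( (NEEDED_MB * 1024 + HUGEPAGE_KB - 1) / HUGEPAGE_KB ))
echo "hugepages needed: $needed_pages"
# Compare against the live count on the target VM:
#   grep HugePages_Free /proc/meminfo
```

If HugePages_Free in /proc/meminfo reports fewer pages than this, the secondary launches inside the test will fail regardless of RTE_MAX_LCORE.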

By the way, I also tested the fix on VMware, and it works fine there as well. Let me know if your problem is still different from what I think.

> Thanks
> --
> Thomas

Thread overview: 7+ messages
2014-12-04 12:40 [dpdk-dev] [PATCH] " Pablo de Lara
2014-12-04 13:20 ` Thomas Monjalon
2014-12-05 15:57   ` De Lara Guarch, Pablo
2014-12-09 10:11 ` [dpdk-dev] [PATCH v2] " Pablo de Lara
2014-12-10 14:40   ` Thomas Monjalon
2014-12-11  1:11     ` Thomas Monjalon
2014-12-11  9:56       ` De Lara Guarch, Pablo [this message]
