DPDK patches and discussions
From: "Sun, Xutao" <xutao.sun@intel.com>
To: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2] examples/vmdq: Fix the core dump issue when mem_pool is more than 34
Date: Tue, 27 Oct 2015 08:22:29 +0000	[thread overview]
Message-ID: <9AC567D38896294095E6F3228F697FC8021C49FD@shsmsx102.ccr.corp.intel.com> (raw)
In-Reply-To: <E115CCD9D858EF4F90C690B0DCB4D8973C83C7C4@IRSMSX108.ger.corp.intel.com>


> -----Original Message-----
> From: De Lara Guarch, Pablo
> Sent: Tuesday, October 27, 2015 3:55 PM
> To: Sun, Xutao; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2] examples/vmdq: Fix the core dump issue
> when mem_pool is more than 34
> 
> Hi Xutao,
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xutao Sun
> > Sent: Tuesday, October 27, 2015 5:11 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH v2] examples/vmdq: Fix the core dump issue
> > when mem_pool is more than 34
> >
> > Macro MAX_QUEUES was defined as 128, which in theory only allows
> > 16 vmdq_pools.
> > Running the vmdq app with more than 34 vmdq_pools causes a core dump.
> > Changing MAX_QUEUES to 1024 solves this issue.
> >
> > Signed-off-by: Xutao Sun <xutao.sun@intel.com>
> > ---
> > v2:
> >  - rectify the NUM_MBUFS_PER_PORT since MAX_QUEUES has been changed
> >
> >  examples/vmdq/main.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
> > index a142d49..bba5164 100644
> > --- a/examples/vmdq/main.c
> > +++ b/examples/vmdq/main.c
> > @@ -69,12 +69,13 @@
> >  #include <rte_mbuf.h>
> >  #include <rte_memcpy.h>
> >
> > -#define MAX_QUEUES 128
> > +#define MAX_QUEUES 1024
> >  /*
> >   * For 10 GbE, 128 queues require roughly
> >   * 128*512 (RX/TX_queue_nb * RX/TX_ring_descriptors_nb) per port.
> >   */
> > -#define NUM_MBUFS_PER_PORT (128*512)
> > +#define NUM_MBUFS_PER_PORT (MAX_QUEUES * RTE_MAX(RTE_TEST_RX_DESC_DEFAULT, \
> > +	RTE_TEST_TX_DESC_DEFAULT))
> >  #define MBUF_CACHE_SIZE 64
> >
> >  #define MAX_PKT_BURST 32
> > --
> > 1.9.3
> 
> Please change the comment above as well, since you have changed the code
> it refers to, i.e. it is not 128*512 anymore.
> 
> Pablo

Hi Pablo,

I described how I changed the code in version 2, below the "v2" marker.
I think the main purpose of the patch is to change the value of MAX_QUEUES,
so I only described the other changes briefly.

Thanks,
Xutao
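
For context, here is a minimal sketch of the sizing arithmetic behind the
patch. Only MAX_QUEUES and the NUM_MBUFS_PER_PORT formula come from the
diff above; the descriptor defaults (128 RX / 512 TX descriptors) and the
simplified RTE_MAX fallback are assumptions for illustration, not code
taken from examples/vmdq/main.c:

    #include <stdio.h>

    /* The real RTE_MAX lives in DPDK's rte_common.h; this plain C
     * fallback keeps the sketch buildable without the DPDK headers. */
    #ifndef RTE_MAX
    #define RTE_MAX(a, b) ((a) > (b) ? (a) : (b))
    #endif

    /* Assumed per-ring descriptor defaults, matching the old
     * "128*512" comment (128 queues times 512 descriptors). */
    #define RTE_TEST_RX_DESC_DEFAULT 128
    #define RTE_TEST_TX_DESC_DEFAULT 512

    #define MAX_QUEUES 1024
    /* One mbuf per descriptor on every queue of a port, sized for the
     * deeper of the RX/TX rings, so the pool scales with MAX_QUEUES
     * instead of staying hard-coded at 128*512. */
    #define NUM_MBUFS_PER_PORT (MAX_QUEUES * \
            RTE_MAX(RTE_TEST_RX_DESC_DEFAULT, RTE_TEST_TX_DESC_DEFAULT))

    int main(void)
    {
        printf("old pool size: %d mbufs/port\n", 128 * 512);          /* 65536 */
        printf("new pool size: %d mbufs/port\n", NUM_MBUFS_PER_PORT); /* 524288 */
        return 0;
    }

With these assumed defaults the per-port pool grows from 65536 to 524288
mbufs per port, which is why the "128*512" comment above the define is
stale: the point of Pablo's review remark.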

Thread overview: 10+ messages
2015-10-13  7:29 [dpdk-dev] [PATCH] " Xutao Sun
2015-10-13  7:59 ` De Lara Guarch, Pablo
2015-10-15  7:54   ` Sun, Xutao
2015-10-27  5:10 ` [dpdk-dev] [PATCH v2] " Xutao Sun
2015-10-27  7:55   ` De Lara Guarch, Pablo
2015-10-27  8:22     ` Sun, Xutao [this message]
2015-10-27  8:58   ` [dpdk-dev] [PATCH v3] " Xutao Sun
2015-10-29  7:53     ` De Lara Guarch, Pablo
2015-12-07  1:46       ` Thomas Monjalon
2015-10-30  5:54   ` [dpdk-dev] [PATCH v2] " Wu, Jingjing
