DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: 张伟 <zhangwqh@126.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] dpdk multi process increase the number of mbufs, throughput gets dropped
Date: Fri, 18 Dec 2015 12:07:41 +0000	[thread overview]
Message-ID: <20151218120740.GA11116@bricha3-MOBL3> (raw)
In-Reply-To: <44e6e2b3.65ad.151ae293aa7.Coremail.zhangwqh@126.com>

On Thu, Dec 17, 2015 at 12:18:36PM +0800, 张伟 wrote:
> Hi all, 
> 
> 
> When running the multi process example, does anybody know why increasing the number of mbufs causes the performance to drop?
> 
> 
> In the multi process example, there are two macros related to the number of mbufs:
> 
> 
> #define MBUFS_PER_CLIENT 1536
> #define MBUFS_PER_PORT 1536
> 
> 
> If these two numbers are increased by 8 times, the performance drops by about 10%. Does anybody know why?
> 
> const unsigned num_mbufs = (num_clients * MBUFS_PER_CLIENT) \
>         + (ports->num_ports * MBUFS_PER_PORT);
> pktmbuf_pool = rte_mempool_create(PKTMBUF_POOL_NAME, num_mbufs,
>         MBUF_SIZE, MBUF_CACHE_SIZE,
>         sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
>         NULL, rte_pktmbuf_init, NULL, rte_socket_id(), NO_FLAGS);

One possible explanation is the memory footprint of the mempool. While the
per-lcore mempool caches operate in a LIFO (i.e. stack) manner, mbufs that are
allocated on one core and freed on another pass through a FIFO (i.e. ring)
inside the mempool. In that case you cycle through all the buffers in the
pool, which can cause a slowdown once the pool's total footprint is bigger
than your CPU cache.

/Bruce

