DPDK patches and discussions
* [dpdk-dev] dpdk multi process increase the number of mbufs, throughput gets dropped
@ 2015-12-17  4:18 张伟
  2015-12-18 12:07 ` Bruce Richardson
  0 siblings, 1 reply; 2+ messages in thread
From: 张伟 @ 2015-12-17  4:18 UTC (permalink / raw)
  To: dev

Hi all, 


When running the multi-process example, does anybody know why increasing the number of mbufs causes the throughput to drop?


In the multi-process example, there are two macros that control the number of mbufs:


#define MBUFS_PER_CLIENT 1536
#define MBUFS_PER_PORT   1536


If these two numbers are increased by a factor of 8, throughput drops by about 10%. Does anybody know why?

const unsigned num_mbufs = (num_clients * MBUFS_PER_CLIENT)
        + (ports->num_ports * MBUFS_PER_PORT);
pktmbuf_pool = rte_mempool_create(PKTMBUF_POOL_NAME, num_mbufs,
        MBUF_SIZE, MBUF_CACHE_SIZE,
        sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
        NULL, rte_pktmbuf_init, NULL, rte_socket_id(), NO_FLAGS);


* Re: [dpdk-dev] dpdk multi process increase the number of mbufs, throughput gets dropped
  2015-12-17  4:18 [dpdk-dev] dpdk multi process increase the number of mbufs, throughput gets dropped 张伟
@ 2015-12-18 12:07 ` Bruce Richardson
  0 siblings, 0 replies; 2+ messages in thread
From: Bruce Richardson @ 2015-12-18 12:07 UTC (permalink / raw)
  To: 张伟; +Cc: dev

On Thu, Dec 17, 2015 at 12:18:36PM +0800, 张伟 wrote:
> Hi all, 
> 
> 
> When running the multi-process example, does anybody know why increasing the number of mbufs causes the throughput to drop?
> 
> 
> In the multi-process example, there are two macros that control the number of mbufs:
> 
> 
> #define MBUFS_PER_CLIENT 1536
> #define MBUFS_PER_PORT   1536
> 
> 
> If these two numbers are increased by a factor of 8, throughput drops by about 10%. Does anybody know why?
> 
> const unsigned num_mbufs = (num_clients * MBUFS_PER_CLIENT)
>         + (ports->num_ports * MBUFS_PER_PORT);
> pktmbuf_pool = rte_mempool_create(PKTMBUF_POOL_NAME, num_mbufs,
>         MBUF_SIZE, MBUF_CACHE_SIZE,
>         sizeof(struct rte_pktmbuf_pool_private), rte_pktmbuf_pool_init,
>         NULL, rte_pktmbuf_init, NULL, rte_socket_id(), NO_FLAGS);

One possible explanation is the memory footprint of the mempool. While the
per-lcore mempool caches operate in a LIFO (i.e. stack) manner, mbufs that are
allocated on one core and freed on another pass through a FIFO (i.e. ring)
inside the mempool. In that case you iterate through all the buffers in the
pool, which can cause a slowdown once the mempool is bigger than your CPU
cache.

/Bruce


end of thread, other threads:[~2015-12-18 12:07 UTC | newest]

