From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
To: Olivier Matz <olivier.matz@6wind.com>
Cc: stable@dpdk.org
Subject: Re: [dpdk-stable] [PATCH] app/testpmd: fix number of mbufs in pool
Date: Sun, 21 May 2017 18:42:05 +0800
Message-ID: <20170521104205.GH2276@yliu-dev>
In-Reply-To: <20170519081302.13557-1-olivier.matz@6wind.com>
On Fri, May 19, 2017 at 10:13:02AM +0200, Olivier Matz wrote:
> [ backported from upstream commit 3ab64341daf8bae485a7e27c68f1dd80c7fd5130 ]
Thanks for the backport, applied to dpdk-stable/16.11.
--yliu
>
> The number of mbufs in the pools is not consistent: it depends on the
> options passed by the user and on the number of ports, especially
> in numa mode when the number of mbufs is specified by the user.
>
> When the user specifies the number of mbufs (per pool), it should
> override the default value.
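>
> For reference, the default n=331456 seen in the examples below follows
> from the formula in the code (RTE_TEST_RX_DESC_MAX + nb_lcores *
> mb_mempool_cache + RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST, scaled by
> RTE_MAX_ETHPORTS). A worked sketch, assuming the usual build-time
> defaults (2048 RX descriptors, 2048 TX descriptors, MAX_PKT_BURST = 512,
> mb_mempool_cache = 250, RTE_MAX_ETHPORTS = 32) and 23 forwarding lcores
> on the test machine:
>
>   nb_mbuf_per_pool = 2048 + (23 * 250) + 2048 + 512 = 10358
>   10358 * 32 (RTE_MAX_ETHPORTS)                     = 331456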
>
> - before the patch
>
> ./build/app/testpmd -- -i --numa
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
> <mbuf_pool_socket_1>: n=331456, size=2176, socket=1
>
> ./build/app/testpmd -- --total-num-mbufs=8000 -i --numa
> <mbuf_pool_socket_0>: n=256000, size=2176, socket=0
> <mbuf_pool_socket_1>: n=256000, size=2176, socket=1
> # BAD, should be n=8000 for each socket (see the note after these examples)
>
> ./build/app/testpmd -- -i
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
>
> ./build/app/testpmd -- --total-num-mbufs=8000 -i
> <mbuf_pool_socket_0>: n=8000, size=2176, socket=0
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
> -i --numa
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
> <mbuf_pool_socket_1>: n=331456, size=2176, socket=1
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
> --total-num-mbufs=8000 -i --numa
> <mbuf_pool_socket_0>: n=128000, size=2176, socket=0
> <mbuf_pool_socket_1>: n=128000, size=2176, socket=1
> # BAD, should be n=8000 for each socket (see the note after these examples)
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- -i
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
> --total-num-mbufs=8000 -i
> <mbuf_pool_socket_0>: n=8000, size=2176, socket=0
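>
> A note on the two bogus values above: as the code removed below shows,
> in numa mode the old logic divided the user-supplied --total-num-mbufs
> by the number of ports and then multiplied it by RTE_MAX_ETHPORTS.
> Assuming the default RTE_MAX_ETHPORTS of 32, that gives
> 8000 * 32 = 256000 and, with the two null vdevs, (8000 / 2) * 32 = 128000,
> instead of simply 8000 per socket.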
>
> - after the patch
>
> ./build/app/testpmd -- -i --numa
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
> <mbuf_pool_socket_1>: n=331456, size=2176, socket=1
>
> ./build/app/testpmd -- --total-num-mbufs=8000 -i --numa
> <mbuf_pool_socket_0>: n=8000, size=2176, socket=0
> <mbuf_pool_socket_1>: n=8000, size=2176, socket=1
>
> ./build/app/testpmd -- -i
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
>
> ./build/app/testpmd -- --total-num-mbufs=8000 -i
> <mbuf_pool_socket_0>: n=8000, size=2176, socket=0
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
> -i --numa
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
> <mbuf_pool_socket_1>: n=331456, size=2176, socket=1
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
> --total-num-mbufs=8000 -i --numa
> <mbuf_pool_socket_0>: n=8000, size=2176, socket=0
> <mbuf_pool_socket_1>: n=8000, size=2176, socket=1
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- -i
> <mbuf_pool_socket_0>: n=331456, size=2176, socket=0
>
> ./build/app/testpmd --vdev=eth_null0 --vdev=eth_null1 -- \
> --total-num-mbufs=8000 -i
> <mbuf_pool_socket_0>: n=8000, size=2176, socket=0
>
> Fixes: b6ea6408fbc7 ("ethdev: store numa_node per device")
> Cc: stable@dpdk.org
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Acked-by: Jingjing Wu <jingjing.wu@intel.com>
>
> Since
> commit 999b2ee0fe45 ("app/testpmd: enable NUMA support by default")
> is not present in the stable branch, the commit log has been updated
> and the tests rerun: "--no-numa" is removed from the example commands,
> and "--numa" is added where no NUMA option was specified.
>
> (cherry picked from commit 3ab64341daf8bae485a7e27c68f1dd80c7fd5130)
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
>
> Conflicts:
> app/test-pmd/testpmd.c
> ---
> app/test-pmd/testpmd.c | 65 +++++++++++++++++++++-----------------------------
> 1 file changed, 27 insertions(+), 38 deletions(-)
>
> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> index f0ac7f379..56a8aa965 100644
> --- a/app/test-pmd/testpmd.c
> +++ b/app/test-pmd/testpmd.c
> @@ -518,34 +518,6 @@ init_config(void)
>  		fwd_lcores[lc_id]->cpuid_idx = lc_id;
>  	}
> 
> -	/*
> -	 * Create pools of mbuf.
> -	 * If NUMA support is disabled, create a single pool of mbuf in
> -	 * socket 0 memory by default.
> -	 * Otherwise, create a pool of mbuf in the memory of sockets 0 and 1.
> -	 *
> -	 * Use the maximum value of nb_rxd and nb_txd here, then nb_rxd and
> -	 * nb_txd can be configured at run time.
> -	 */
> -	if (param_total_num_mbufs)
> -		nb_mbuf_per_pool = param_total_num_mbufs;
> -	else {
> -		nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX + (nb_lcores * mb_mempool_cache)
> -				+ RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST;
> -
> -		if (!numa_support)
> -			nb_mbuf_per_pool =
> -				(nb_mbuf_per_pool * RTE_MAX_ETHPORTS);
> -	}
> -
> -	if (!numa_support) {
> -		if (socket_num == UMA_NO_CONFIG)
> -			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, 0);
> -		else
> -			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
> -					 socket_num);
> -	}
> -
>  	FOREACH_PORT(pid, ports) {
>  		port = &ports[pid];
>  		rte_eth_dev_info_get(pid, &port->dev_info);
> @@ -568,20 +540,37 @@ init_config(void)
>  		port->need_reconfig_queues = 1;
>  	}
> 
> +	/*
> +	 * Create pools of mbuf.
> +	 * If NUMA support is disabled, create a single pool of mbuf in
> +	 * socket 0 memory by default.
> +	 * Otherwise, create a pool of mbuf in the memory of sockets 0 and 1.
> +	 *
> +	 * Use the maximum value of nb_rxd and nb_txd here, then nb_rxd and
> +	 * nb_txd can be configured at run time.
> +	 */
> +	if (param_total_num_mbufs)
> +		nb_mbuf_per_pool = param_total_num_mbufs;
> +	else {
> +		nb_mbuf_per_pool = RTE_TEST_RX_DESC_MAX +
> +			(nb_lcores * mb_mempool_cache) +
> +			RTE_TEST_TX_DESC_MAX + MAX_PKT_BURST;
> +		nb_mbuf_per_pool *= RTE_MAX_ETHPORTS;
> +	}
> +
>  	if (numa_support) {
>  		uint8_t i;
> -		unsigned int nb_mbuf;
> -
> -		if (param_total_num_mbufs && nb_ports != 0)
> -			nb_mbuf_per_pool = nb_mbuf_per_pool/nb_ports;
> 
> -		for (i = 0; i < max_socket; i++) {
> -			nb_mbuf = (nb_mbuf_per_pool * RTE_MAX_ETHPORTS);
> -			if (nb_mbuf)
> -				mbuf_pool_create(mbuf_data_size,
> -						nb_mbuf,i);
> -		}
> +		for (i = 0; i < max_socket; i++)
> +			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, i);
> +	} else {
> +		if (socket_num == UMA_NO_CONFIG)
> +			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool, 0);
> +		else
> +			mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
> +					 socket_num);
>  	}
> +
>  	init_port_config();
> 
>  	/*
> --
> 2.11.0