From: "Phil Yang (Arm Technology China)" <Phil.Yang@arm.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>,
Ferruh Yigit <ferruh.yigit@intel.com>,
"dev-bounces@dpdk.org" <dev-bounces@dpdk.org>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "bernard.iremonger@intel.com" <bernard.iremonger@intel.com>,
"Gavin Hu (Arm Technology China)" <Gavin.Hu@arm.com>,
"stable@dpdk.org" <stable@dpdk.org>, nd <nd@arm.com>
Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool allocation
Date: Thu, 11 Oct 2018 10:37:32 +0000 [thread overview]
Message-ID: <DB7PR08MB3385A2D96AC323D86D3801F2E9E10@DB7PR08MB3385.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <DB7PR08MB33854540DC8B02D36D233AE3E9E10@DB7PR08MB3385.eurprd08.prod.outlook.com>
Hi Anatoly, Ferruh,
I've prepared a patch to fix this issue and will send it out once the internal review is done.
Thanks,
Phil Yang
> -----Original Message-----
> From: Phil Yang (Arm Technology China)
> Sent: Thursday, October 11, 2018 3:12 PM
> To: 'Burakov, Anatoly' <anatoly.burakov@intel.com>; Ferruh Yigit
> <ferruh.yigit@intel.com>; dev-bounces@dpdk.org; dev@dpdk.org
> Cc: bernard.iremonger@intel.com; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; stable@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool
> allocation
>
> > -----Original Message-----
> > From: Burakov, Anatoly <anatoly.burakov@intel.com>
> > Sent: Monday, October 8, 2018 7:36 PM
> > To: Ferruh Yigit <ferruh.yigit@intel.com>; dev-bounces@dpdk.org;
> > dev@dpdk.org
> > Cc: bernard.iremonger@intel.com; Gavin Hu (Arm Technology China)
> > <Gavin.Hu@arm.com>; stable@dpdk.org; Phil Yang (Arm Technology China)
> > <Phil.Yang@arm.com>
> > Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool
> > allocation
> >
> > On 08-Oct-18 12:33 PM, Ferruh Yigit wrote:
> > > On 9/12/2018 2:54 AM, dev-bounces@dpdk.org wrote:
> > >> By default, testpmd creates a membuf pool for every NUMA node,
> > >> ignoring the EAL configuration.
> > >>
> > >> Count the number of available NUMA nodes according to the EAL core
> > >> mask or core list configuration, and create membuf pools only for
> > >> those nodes.
> > >>
> > >> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
> > >>
> > >> Signed-off-by: Phil Yang <phil.yang@arm.com>
> > >> Acked-by: Gavin Hu <gavin.hu@arm.com>
> > >> ---
> > >> app/test-pmd/testpmd.c | 4 ++--
> > >> 1 file changed, 2 insertions(+), 2 deletions(-)
> > >>
> > >> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > >> index ee48db2..a56af2b 100644
> > >> --- a/app/test-pmd/testpmd.c
> > >> +++ b/app/test-pmd/testpmd.c
> > >> @@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
> > >>
> > >>  	nb_lc = 0;
> > >>  	for (i = 0; i < RTE_MAX_LCORE; i++) {
> > >> +		if (!rte_lcore_is_enabled(i))
> > >> +			continue;
> > >>  		sock_num = rte_lcore_to_socket_id(i);
> > >>  		if (new_socket_id(sock_num)) {
> > >>  			if (num_sockets >= RTE_MAX_NUMA_NODES) {
> > >> @@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
> > >>  			}
> > >>  			socket_ids[num_sockets++] = sock_num;
> > >>  		}
> > >> -		if (!rte_lcore_is_enabled(i))
> > >> -			continue;
> > >>  		if (i == rte_get_master_lcore())
> > >>  			continue;
> > >>  		fwd_lcores_cpuids[nb_lc++] = i;
> > >>
> > >
> > >
> > > This causes testpmd to fail when all cores are from socket 1 and a
> > > virtual device is added, because the vdev tries to allocate memory
> > > from socket 0.
> > >
> > > $ testpmd -l<cores from socket 1> --vdev net_pcap0,iface=lo -- -i
> > > ...
> > > Failed to setup RX queue:No mempool allocation on the socket 0
> > > EAL: Error - exiting with code: 1
> > > Cause: Start ports failed
> > >
> > >
> >
> > It's an open question why the pcap driver tries to allocate on socket 0
> > when everything is on socket 1, but perhaps a better improvement would
> > be to take into account not only the socket IDs of lcores, but those of
> > ethdev devices as well?
> >
> > --
> > Thanks,
> > Anatoly
>
> Hi Anatoly,
>
> Agree.
>
> Since NUMA awareness is enabled by default in testpmd, the NUMA
> placement of vdev ports should be configurable.
>
> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo --socket-mem=64 \
>   -- --numa --port-numa-config="(0,1)" --ring-numa-config="(0,1,1),(0,2,1)" -i
>
> ...
> Configuring Port 0 (socket 0)
> Failed to setup RX queue:No mempool allocation on the socket 0
> EAL: Error - exiting with code: 1
> Cause: Start ports failed
>
> This looks like a defect.
>
> Thanks
> Phil Yang
Thread overview: 11+ messages
2018-08-27  9:33 [dpdk-dev] [PATCH] app/testpmd: Optimize membuf pool allocation Phil Yang
2018-08-27 9:39 ` Gavin Hu
2018-09-11 16:23 ` Iremonger, Bernard
2018-09-12 1:59 ` Phil Yang (Arm Technology China)
2018-09-12  1:54 ` [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool allocation Phil Yang
2018-09-12 10:15 ` Iremonger, Bernard
2018-09-19 13:38 ` Thomas Monjalon
2018-10-08 11:33 ` Ferruh Yigit
2018-10-08 11:35 ` Burakov, Anatoly
2018-10-11 7:11 ` Phil Yang (Arm Technology China)
2018-10-11 10:37 ` Phil Yang (Arm Technology China) [this message]