From: Jerin Jacob <jerinjacobk@gmail.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: dpdk-dev <dev@dpdk.org>, Bruce Richardson <bruce.richardson@intel.com>
Subject: Re: [dpdk-dev] [PATCH] config: increase default maximum number of NUMA nodes
Date: Thu, 4 Feb 2021 17:09:02 +0530
Message-ID: <CALBAE1MFbtLaPR0KaFe2xutOm=xWyCeyS4dr=hgLd5pT8GdpSg@mail.gmail.com>
In-Reply-To: <2455385.Mn0tqrlHz1@thomas>
On Thu, Feb 4, 2021 at 3:58 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 04/02/2021 07:19, Jerin Jacob:
> > On Thu, Feb 4, 2021 at 2:49 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >
> > > AMD CPUs can present a high number of NUMA nodes.
> > > The default should be 32 for better compatibility.
> >
> > The typical configuration is 4 nodes [1] for AMD. Just wondering, is it
> > an exceptional case? If so, do we need to consume more memory for the
> > normal cases?
> >
> > [1]
> > https://developer.amd.com/wp-content/resources/56308-NUMA%20Topology%20for%20AMD%20EPYC%E2%84%A2%20Naples%20Family%20Processors.PDF
>
> As you can read in
> https://www.dell.com/support/kbdoc/fr-fr/000137696/amd-rome-is-it-for-real-architecture-and-initial-hpc-performance
> there is an option "CCX as NUMA Domain.
> This option exposes each CCX as a NUMA node.
> On a system with dual-socket CPUs with 16 CCXs per CPU,
> this setting will expose 32 NUMA domains."
> and
> "Enabling this option is expected to help virtualized environments."
I see.
>
> I would not say it is exceptional.
> And in my understanding, the memory cost is not so high for DPDK.
> Do you see some large arrays depending on RTE_MAX_NUMA_NODES?
Not really. These are the arrays that depend on RTE_MAX_NUMA_NODES
(a rough cost estimate follows the list):
lib/librte_efd/rte_efd.c: struct efd_online_chunk *chunks[RTE_MAX_NUMA_NODES];
lib/librte_eal/linux/eal_memory.c: uint64_t memory[RTE_MAX_NUMA_NODES];
lib/librte_eal/linux/eal.c: char * arg[RTE_MAX_NUMA_NODES];
lib/librte_eal/common/eal_common_dynmem.c: uint64_t memory[RTE_MAX_NUMA_NODES];
lib/librte_eal/common/eal_common_dynmem.c: int cpu_per_socket[RTE_MAX_NUMA_NODES];
lib/librte_eal/common/eal_private.h: uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
lib/librte_eal/common/eal_internal_cfg.h: uint32_t num_pages[RTE_MAX_NUMA_NODES];
lib/librte_eal/common/eal_internal_cfg.h: volatile uint64_t socket_mem[RTE_MAX_NUMA_NODES]; /**< amount of memory per socket */
lib/librte_eal/common/eal_internal_cfg.h: volatile uint64_t socket_limit[RTE_MAX_NUMA_NODES]; /**< limit amount of memory per socket */
lib/librte_eal/windows/eal_lcore.c: struct socket_map sockets[RTE_MAX_NUMA_NODES];
lib/librte_node/ip4_lookup.c: struct rte_lpm *lpm_tbl[RTE_MAX_NUMA_NODES];
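For a rough feel of the cost, here is a back-of-the-envelope sketch
(not authoritative: the previous default of 4 nodes and the 8-byte
per-element size are assumptions, please verify against meson_options.txt;
the array count comes from the list above):

/*
 * Rough estimate of the extra memory these arrays need, per instance of
 * their containing structures, when RTE_MAX_NUMA_NODES grows to 32.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

int main(void)
{
	const size_t old_nodes = 4;   /* assumed previous default */
	const size_t new_nodes = 32;
	const size_t nb_arrays = 11;  /* arrays listed above */
	const size_t elem_size = sizeof(uint64_t); /* worst case: pointer/uint64_t element */

	size_t growth = (new_nodes - old_nodes) * elem_size * nb_arrays;
	printf("approx. extra memory: %zu bytes\n", growth);
	return 0;
}

Even if a few of the element types are somewhat larger (struct socket_map,
for instance), the growth stays in the low-kilobyte range per process.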
Acked-by: Jerin Jacob <jerinj@marvell.com>