From: Thomas Monjalon <thomas.monjalon@6wind.com>
To: venky.venkatesan@intel.com
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH v2] eal: change default per socket memory allocation
Date: Tue, 13 May 2014 18:27:07 +0200
Message-ID: <63642020.erThIODFlC@xps13>
In-Reply-To: <1399642242-19725-1-git-send-email-david.marchand@6wind.com>
Hi Venky,
There were comments on the first version of this patch, and you suggested
trying this new implementation.
So do you acknowledge this patch?
Thanks for your review.
2014-05-09 15:30, David Marchand:
> From: Didier Pallard <didier.pallard@6wind.com>
>
> Currently, if there is more memory in hugepages than the amount
> requested by the DPDK application, the memory is allocated by taking as
> much memory as possible from each socket, starting from the first one.
> For example, if a system is configured with 8 GB in 2 sockets (4 GB per
> socket), and DPDK requests only 4 GB of memory, all memory will be
> taken from socket 0 (which has exactly 4 GB of free hugepages), even if
> some cores are configured on socket 1 and there are free hugepages on
> socket 1.
>
> Change this behaviour to allocate memory on all sockets where some cores
> are configured, spreading the memory amongst sockets using the following
> ratio per socket:
> (number of cores configured on the socket / total number of configured
> cores) * requested memory
>
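(A worked example with made-up numbers, for illustration only: with 6 cores
on socket 0, 2 cores on socket 1 and "-m 4096", this ratio targets
4096 * 6/8 = 3072 MB on socket 0 and 4096 * 2/8 = 1024 MB on socket 1,
each amount then being capped by the free hugepages actually available on
that socket.)
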
> This algorithm is used when the memory amount is specified globally with
> the -m option. Per-socket memory allocation can always be done with the
> --socket-mem option.
>
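(For illustration, hypothetical command lines; the application name, core
mask and sizes are invented, but -c, -n, -m and --socket-mem are the EAL
options in question:

  ./app -c 0xff -n 4 -m 4096                  # global amount, spread by core ratio
  ./app -c 0xff -n 4 --socket-mem 2048,2048   # explicit per-socket amounts
)
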
> Changes included in v2:
> - only update the Linux implementation, as BSD does not look ready for NUMA
> - if the new algorithm fails, fall back to the previous behaviour
>
> Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
> Signed-off-by: David Marchand <david.marchand@6wind.com>
> ---
>  lib/librte_eal/linuxapp/eal/eal_memory.c |   50 +++++++++++++++++++++++++++---
>  1 file changed, 45 insertions(+), 5 deletions(-)
>
> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> index 73a6394..471dcfd 100644
> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> @@ -881,13 +881,53 @@ calc_num_pages_per_socket(uint64_t * memory,
>  	if (num_hp_info == 0)
>  		return -1;
>  
> -	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0; socket++) {
> -		/* if specific memory amounts per socket weren't requested */
> -		if (internal_config.force_sockets == 0) {
> +	/* if specific memory amounts per socket weren't requested */
> +	if (internal_config.force_sockets == 0) {
> +		int cpu_per_socket[RTE_MAX_NUMA_NODES];
> +		size_t default_size, total_size;
> +		unsigned lcore_id;
> +
> +		/* Compute number of cores per socket */
> +		memset(cpu_per_socket, 0, sizeof(cpu_per_socket));
> +		RTE_LCORE_FOREACH(lcore_id) {
> +			cpu_per_socket[rte_lcore_to_socket_id(lcore_id)]++;
> +		}
> +
> +		/*
> +		 * Automatically spread requested memory amongst detected sockets according
> +		 * to number of cores from cpu mask present on each socket
> +		 */
> +		total_size = internal_config.memory;
> +		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
> +
> +			/* Set memory amount per socket */
> +			default_size = (internal_config.memory * cpu_per_socket[socket])
> +					/ rte_lcore_count();
> +
> +			/* Limit to maximum available memory on socket */
> +			default_size = RTE_MIN(default_size, get_socket_mem_size(socket));
> +
> +			/* Update sizes */
> +			memory[socket] = default_size;
> +			total_size -= default_size;
> +		}
> +
> +		/*
> +		 * If some memory is remaining, try to allocate it by getting all
> +		 * available memory from sockets, one after the other
> +		 */
> +		for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0; socket++) {
>  			/* take whatever is available */
> -			memory[socket] = RTE_MIN(get_socket_mem_size(socket),
> -					total_mem);
> +			default_size = RTE_MIN(get_socket_mem_size(socket) - memory[socket],
> +					total_size);
> +
> +			/* Update sizes */
> +			memory[socket] += default_size;
> +			total_size -= default_size;
>  		}
> +	}
> +
> +	for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_mem != 0; socket++) {
> +		/* skips if the memory on specific socket wasn't requested */
>  		for (i = 0; i < num_hp_info && memory[socket] != 0; i++){
>  			hp_used[i].hugedir = hp_info[i].hugedir;
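
To see the two passes end to end, below is a minimal standalone sketch with
hypothetical numbers; MAX_SOCKETS, socket_free[] and MIN() stand in for
RTE_MAX_NUMA_NODES, get_socket_mem_size() and RTE_MIN(), and it illustrates
the logic rather than being the actual EAL code.

/*
 * Sketch of the two-pass spreading with made-up values: 6 lcores on
 * socket 0, 2 on socket 1, 4096 MB of free hugepages per socket, and
 * "-m 6144" requested.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define MAX_SOCKETS 2
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	uint64_t socket_free[MAX_SOCKETS] = { 4096, 4096 };	/* MB */
	int cpu_per_socket[MAX_SOCKETS] = { 6, 2 };
	int total_cores = 6 + 2;
	uint64_t requested = 6144;		/* MB, from -m */
	uint64_t total_size = requested;	/* remaining to place */
	uint64_t memory[MAX_SOCKETS] = { 0 };
	uint64_t size;
	int socket;

	/* pass 1: spread by core ratio, capped by what each socket has */
	for (socket = 0; socket < MAX_SOCKETS && total_size != 0; socket++) {
		size = requested * cpu_per_socket[socket] / total_cores;
		size = MIN(size, socket_free[socket]);
		memory[socket] = size;
		total_size -= size;
	}

	/* pass 2: place any remainder on the first sockets with room */
	for (socket = 0; socket < MAX_SOCKETS && total_size != 0; socket++) {
		size = MIN(socket_free[socket] - memory[socket], total_size);
		memory[socket] += size;
		total_size -= size;
	}

	for (socket = 0; socket < MAX_SOCKETS; socket++)
		printf("socket %d: %" PRIu64 " MB\n", socket, memory[socket]);
	/*
	 * Prints "socket 0: 4096 MB" and "socket 1: 2048 MB": pass 1 wants
	 * 4608/1536 MB but is capped at 4096 MB on socket 0, and pass 2
	 * moves the remaining 512 MB to socket 1.
	 */
	return 0;
}

The useful property is that the first pass never places more than the
request in total (the per-socket targets sum to at most the requested
amount), so the second pass only ever distributes a non-negative remainder.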
Thread overview: 4+ messages
2014-05-09 13:30 David Marchand
2014-05-13 16:27 ` Thomas Monjalon [this message]
2014-05-13 16:33 ` Venkatesan, Venky
2014-05-14 9:15 ` Thomas Monjalon