From: Olivier MATZ <olivier.matz@6wind.com>
To: Stephen Hemminger <stephen@networkplumber.org>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH, v2] mempool: avoid memory waste with large pagesize
Date: Thu, 10 Mar 2016 09:37:48 +0100 [thread overview]
Message-ID: <56E1325C.2030206@6wind.com> (raw)
In-Reply-To: <1457557926-4056-1-git-send-email-stephen@networkplumber.org>
Hello,
On 03/09/2016 10:12 PM, Stephen Hemminger wrote:
> If page size is large (like 64K on ARM) and object size is small
> then don't waste lots of memory by rounding up to page size.
> Instead, round up so that 1 or more objects all fit in a page.
>
> This preserves the requirement that an object must not span a page
> boundary (or virt2phys would break), and makes sure 62K is not
> wasted per mbuf.
>
> Also, fix invalid use of printf (versus log) for error reporting.
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> v2 - fix trailer_size calculation, and replace printf with log
>
> lib/librte_mempool/rte_mempool.c | 23 ++++++++++++++---------
> 1 file changed, 14 insertions(+), 9 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index f8781e1..ff08a1a 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -300,18 +300,23 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
> if (! rte_eal_has_hugepages()) {
> /*
> * compute trailer size so that pool elements fit exactly in
> - * a standard page
> + * a standard page. If elements are smaller than a page
> + * then allow multiple elements per page
> */
> - int page_size = getpagesize();
> - int new_size = page_size - sz->header_size - sz->elt_size;
> - if (new_size < 0 || (unsigned int)new_size < sz->trailer_size) {
> - printf("When hugepages are disabled, pool objects "
> - "can't exceed PAGE_SIZE: %d + %d + %d > %d\n",
> - sz->header_size, sz->elt_size, sz->trailer_size,
> - page_size);
> + unsigned new_size, orig_size, page_size;
> +
> + page_size = getpagesize();
> + orig_size = sz->header_size + sz->elt_size;
> + new_size = rte_align32pow2(orig_size);
> + if (new_size > page_size) {
> + RTE_LOG(ERR, MEMPOOL,
> + "When hugepages are disabled, pool objects "
> + "can't exceed PAGE_SIZE: %u + %u + %u > %u\n",
> + sz->header_size, sz->elt_size, sz->trailer_size,
> + page_size);
> return 0;
> }
> - sz->trailer_size = new_size;
> + sz->trailer_size = new_size - orig_size;
> }
>
> /* this is the size of an object, including header and trailer */
>
It still does not work when CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=y:

  mp = rte_mempool_create("test", 128,
          64, 0, 0, NULL, NULL, NULL, NULL, SOCKET_ID_ANY, 0);
  rte_mempool_dump(stdout, mp);

  populated_size=128
  header_size=64
  elt_size=64
  trailer_size=64
  total_obj_size=192
Regards,
Olivier
Thread overview: 5+ messages
2016-03-09 21:12 Stephen Hemminger
2016-03-10 8:37 ` Olivier MATZ [this message]
2016-03-10 10:48 ` Ferruh Yigit
2016-03-10 11:12 ` Olivier MATZ
2016-03-11 4:09 ` Stephen Hemminger