DPDK patches and discussions
From: "Xueming(Steven) Li" <xuemingl@mellanox.com>
To: Jerin Jacob <jerinjacobk@gmail.com>
Cc: Olivier Matz <olivier.matz@6wind.com>,
	Andrew Rybchenko <arybchenko@solarflare.com>,
	dpdk-dev <dev@dpdk.org>, Asaf Penso <asafp@mellanox.com>,
	Ori Kam <orika@mellanox.com>,
	Stephen Hemminger <stephen@networkplumber.org>
Subject: Re: [dpdk-dev] [RFC] mempool: introduce indexed memory pool
Date: Fri, 18 Oct 2019 10:10:46 +0000	[thread overview]
Message-ID: <DB6PR05MB3190F00F8E08C241FC7DD94BAC6C0@DB6PR05MB3190.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <CALBAE1PiZqeWNyfnETae=bZcM8Kh3eCqWVjCd7eo0M2rLJGehQ@mail.gmail.com>

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Friday, October 18, 2019 12:41 AM
> To: Xueming(Steven) Li <xuemingl@mellanox.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>; dpdk-dev <dev@dpdk.org>; Asaf Penso
> <asafp@mellanox.com>; Ori Kam <orika@mellanox.com>; Stephen
> Hemminger <stephen@networkplumber.org>
> Subject: Re: [dpdk-dev] [RFC] mempool: introduce indexed memory pool
> 
> On Thu, Oct 17, 2019 at 6:43 PM Xueming(Steven) Li
> <xuemingl@mellanox.com> wrote:
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Thursday, October 17, 2019 3:14 PM
> > > To: Xueming(Steven) Li <xuemingl@mellanox.com>
> > > Cc: Olivier Matz <olivier.matz@6wind.com>; Andrew Rybchenko
> > > <arybchenko@solarflare.com>; dpdk-dev <dev@dpdk.org>; Asaf Penso
> > > <asafp@mellanox.com>; Ori Kam <orika@mellanox.com>
> > > Subject: Re: [dpdk-dev] [RFC] mempool: introduce indexed memory pool
> > >
> > > On Thu, Oct 17, 2019 at 12:25 PM Xueming Li <xuemingl@mellanox.com>
> wrote:
> > > >
> > > > The indexed memory pool manages memory entries by index; an allocation
> > > > from the pool returns both a memory pointer and an index (ID). Users save
> > > > the ID as a u32 (or even a u16) instead of a traditional 8-byte pointer.
> > > > Memory can later be retrieved from the pool, or returned to it, by index.
> > > >
> > > > The pool allocates backing memory in chunks on demand, so the pool size
> > > > grows dynamically. A bitmap tracks entry usage within each chunk, so the
> > > > management overhead is one bit per entry.
> > > >
> > > > Standard rte_malloc imposes a malloc overhead (64B) and a minimum data
> > > > size (64B). This pool aims to save that cost, as well as the pointer size.
> > > > For scenarios like creating millions of rte_flows, each consisting of
> > > > small pieces of memory, the difference is huge.
> > > >
> > > > Like the standard memory pool, this lightweight pool only supports
> > > > fixed-size memory allocation; a separate pool should be created for
> > > > each entry size.
> > > >
> > > > To facilitate working with memory addressed by index, a set of ILIST_XXX
> > > > macros is defined to operate on entries as a regular LIST.
> > > >
> > > > By setting the entry size to zero, the pool can be used as a pure ID generator.
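
To make the usage model described above concrete, here is a minimal standalone
sketch. The ipool_*() names are hypothetical placeholders, not the RFC's API,
and a single fixed-size array stands in for the chunks the real pool would
allocate on demand:

	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	#define POOL_CAP 1024                       /* entries in this toy pool; the real pool grows by chunk */

	struct ipool {
		size_t entry_size;                  /* fixed entry size, one pool per size */
		uint64_t bitmap[POOL_CAP / 64];     /* one bit of usage tracking per entry */
		uint8_t *data;                      /* backing memory, POOL_CAP * entry_size bytes */
	};

	static struct ipool *ipool_create(size_t entry_size)
	{
		struct ipool *p = calloc(1, sizeof(*p));

		if (p == NULL)
			return NULL;
		p->entry_size = entry_size;
		p->data = entry_size ? calloc(POOL_CAP, entry_size) : NULL;
		return p;
	}

	/* Allocate one entry: the caller gets back a pointer *and* a small index.
	 * With entry_size == 0 the pool degenerates into a pure ID generator:
	 * the returned pointer is NULL while *idx is still valid. */
	static void *ipool_alloc(struct ipool *p, uint32_t *idx)
	{
		uint32_t i;

		for (i = 0; i < POOL_CAP; i++) {
			if (!(p->bitmap[i / 64] & (1ULL << (i % 64)))) {
				p->bitmap[i / 64] |= 1ULL << (i % 64);
				*idx = i + 1;       /* index 0 then means "none", like a NULL pointer */
				return p->data ? p->data + (size_t)i * p->entry_size : NULL;
			}
		}
		return NULL;                        /* toy pool is full */
	}

	/* Translate a saved 32-bit index back into the entry's memory. */
	static void *ipool_get(struct ipool *p, uint32_t idx)
	{
		if (idx == 0 || p->data == NULL)
			return NULL;
		return p->data + (size_t)(idx - 1) * p->entry_size;
	}

	static void ipool_free(struct ipool *p, uint32_t idx)
	{
		if (idx != 0)
			p->bitmap[(idx - 1) / 64] &= ~(1ULL << ((idx - 1) % 64));
	}

	int main(void)
	{
		struct ipool *p = ipool_create(16); /* a pool of 16-byte entries */
		uint32_t id;
		char *e = ipool_alloc(p, &id);      /* caller stores the u32 id, not the 8-byte pointer */

		strcpy(e, "flow-0");
		printf("id=%u data=%s\n", id, (char *)ipool_get(p, id));
		ipool_free(p, id);
		free(p->data);
		free(p);
		return 0;
	}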
> > > >
> > > > Signed-off-by: Xueming Li <xuemingl@mellanox.com>
> > > > ---
> > > >  lib/librte_mempool/Makefile                |   3 +-
> > > >  lib/librte_mempool/rte_indexed_pool.c      | 289 +++++++++++++++++++++
> > > >  lib/librte_mempool/rte_indexed_pool.h      | 224 ++++++++++++++++
> > >
> > > Can this be abstracted over the driver interface instead of creating new
> > > APIs? i.e. using drivers/mempool/
> >
> > The driver interface manages memory entries with pointers, while this API
> > uses a u32 index as the key...
> 
> I see. As a use case, it makes sense to me.

> Have you checked the possibility of reusing/extending
> lib/librte_eal/common/include/rte_bitmap.h for bitmap management,
> instead of rolling a new one?

Yes, rte_bitmap is designed for a fixed bitmap size; to grow it, almost the entire bitmap (array1 + array2) has to be copied.
This pool distributes array2 into each trunk, and the trunk array effectively plays the array1 role.
When growing, only array1 is reallocated, which is much smaller; the existing array2 in each trunk is left untouched.
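
A rough layout sketch of that idea (hypothetical types and names, not the
actual RFC code): the trunk-pointer array plays the array1 role, each trunk
embeds its own bitmap word (the array2 part), and growing touches only the
small pointer array:

	#include <stdint.h>
	#include <stdlib.h>
	#include <string.h>

	#define TRUNK_ENTRIES 64                    /* one 64-bit bitmap word per trunk */

	struct trunk {
		uint64_t bitmap;                    /* per-trunk usage bits: the "array2" part */
		uint8_t data[];                     /* TRUNK_ENTRIES fixed-size entries */
	};

	struct pool {
		size_t entry_size;
		uint32_t n_trunks;
		struct trunk **trunks;              /* the "array1": only this grows */
	};

	/* Grow by one trunk: reallocate the small trunk-pointer array and add a
	 * fresh trunk; existing trunks and their bitmaps are never touched. */
	static int pool_grow(struct pool *p)
	{
		struct trunk **t = realloc(p->trunks, (p->n_trunks + 1) * sizeof(*t));

		if (t == NULL)
			return -1;
		p->trunks = t;
		t[p->n_trunks] = calloc(1, sizeof(struct trunk) +
					   TRUNK_ENTRIES * p->entry_size);
		if (t[p->n_trunks] == NULL)
			return -1;
		p->n_trunks++;
		return 0;
	}

	/* An index encodes (trunk, slot), so lookup is just two dereferences. */
	static void *pool_get(struct pool *p, uint32_t idx)
	{
		uint32_t t = idx / TRUNK_ENTRIES;
		uint32_t s = idx % TRUNK_ENTRIES;

		if (t >= p->n_trunks)
			return NULL;
		return p->trunks[t]->data + (size_t)s * p->entry_size;
	}

	int main(void)
	{
		struct pool p = { .entry_size = 16, .n_trunks = 0, .trunks = NULL };

		pool_grow(&p);                      /* first trunk allocated on demand */
		memcpy(pool_get(&p, 0), "entry-0", 8);
		return 0;
	}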

The map_xxx() naming might confuse people, so I'll make the following changes in the next version:
	map_get()/map_set(): only used once and the code is simple, so move the code into the caller.
	map_is_empty()/map_clear(): unused, remove them.
	map_clear_any(): relatively simple, embed it into the caller.


Thread overview: 13+ messages
2019-10-17  6:55 Xueming Li
2019-10-17  7:13 ` Jerin Jacob
2019-10-17 13:13   ` Xueming(Steven) Li
2019-10-17 16:40     ` Jerin Jacob
2019-10-18 10:10       ` Xueming(Steven) Li [this message]
2019-10-19 12:28         ` Jerin Jacob
2019-10-25 15:29           ` Xueming(Steven) Li
2019-10-25 16:28             ` Jerin Jacob
2019-12-26 11:05 ` Olivier Matz
2020-03-05  7:43   ` Suanming Mou
2020-03-05  9:52     ` Morten Brørup
2020-03-06  7:27       ` Suanming Mou
2020-03-06  8:57         ` Morten Brørup
