DPDK patches and discussions
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Wenfeng Liu <liuwf@arraynetworks.com.cn>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] mempool: try to get objects from cache when the mempool is single consumer and multiple producer
Date: Mon, 9 Jan 2017 10:36:09 +0000	[thread overview]
Message-ID: <2601191342CEEE43887BDE71AB9772583F102895@irsmsx105.ger.corp.intel.com> (raw)
In-Reply-To: <1483957487-92635-1-git-send-email-liuwf@arraynetworks.com.cn>

Hi,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wenfeng Liu
> Sent: Monday, January 9, 2017 10:25 AM
> To: olivier.matz@6wind.com
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] mempool: try to get objects from cache when the mempool is single consumer and multiple producer
> 
> We put objects into the cache when the mempool has multiple producers, but the cache is not used when it has a single
> consumer. With this patch we can get objects from the cache when the single consumer happens to be one of the producers,
> which improves performance.
> 
> Signed-off-by: Wenfeng Liu <liuwf@arraynetworks.com.cn>
> ---
>  lib/librte_mempool/rte_mempool.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index d315d42..4ab5a95 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1250,8 +1250,9 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
>  	uint32_t index, len;
>  	void **cache_objs;
> 
> -	/* No cache provided or single consumer */
> -	if (unlikely(cache == NULL || flags & MEMPOOL_F_SC_GET ||
> +	/* No cache provided or single consumer and single producer */
> +	if (unlikely(cache == NULL ||
> +		     (flags & MEMPOOL_F_SC_GET) && (flags & MEMPOOL_F_SP_PUT) ||


I suppose that's a good thing to do...
Maybe go one step further and not check the flags at all?
if (unlikely(cache == NULL || n >= cache->size))
   goto ring_dequeue;
If people don't want a mempool with a cache,
they can just specify that at mempool creation time.
Besides, the cache might improve performance even in the SC|SP case.
Konstantin

>  		     n >= cache->size))
>  		goto ring_dequeue;
> 
> --
> 2.7.4

Thread overview: 8+ messages
2017-01-09 10:24 Wenfeng Liu
2017-01-09 10:36 ` Ananyev, Konstantin [this message]
2017-01-10  7:14 ` [dpdk-dev] [PATCH v2] mempool: don't check mempool flags when cache is enabled Wenfeng Liu
2017-01-10  8:26   ` [dpdk-dev] [PATCH v3] " Wenfeng Liu
2017-01-10 15:14     ` Olivier MATZ
2017-01-11  2:25     ` [dpdk-dev] [PATCH v4] mempool: use cache in single producer or consumer mode Wenfeng Liu
2017-01-13 15:23       ` Olivier Matz
2017-01-13 15:34         ` Thomas Monjalon
