From: Olivier Matz <olivier.matz@6wind.com>
To: Andrew Rybchenko <arybchenko@solarflare.com>
Cc: Xiao Wang <xiao.w.wang@intel.com>, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] mempool: optimize copy in cache get
Date: Mon, 1 Jul 2019 15:11:03 +0200 [thread overview]
Message-ID: <20190701131103.zi72h3me63mzg73v@platinum> (raw)
In-Reply-To: <98702cfa-66ed-e13f-a5ae-c85eb6d64c2f@solarflare.com>
Hi,
On Tue, May 21, 2019 at 12:34:55PM +0300, Andrew Rybchenko wrote:
> On 5/21/19 12:03 PM, Xiao Wang wrote:
> > Use rte_memcpy to improve the pointer array copy. This optimization method
> > has already been applied to __mempool_generic_put() [1], this patch applies
> > it to __mempool_generic_get(). Slight performance gain can be observed in
> > testpmd txonly test.
> >
> > [1] 863bfb47449 ("mempool: optimize copy in cache")
> >
> > Signed-off-by: Xiao Wang <xiao.w.wang@intel.com>
> > ---
> > lib/librte_mempool/rte_mempool.h | 7 +------
> > 1 file changed, 1 insertion(+), 6 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > index 8053f7a04..975da8d22 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -1344,15 +1344,11 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> > unsigned int n, struct rte_mempool_cache *cache)
> > {
> > int ret;
> > - uint32_t index, len;
> > - void **cache_objs;
> > /* No cache provided or cannot be satisfied from cache */
> > if (unlikely(cache == NULL || n >= cache->size))
> > goto ring_dequeue;
> > - cache_objs = cache->objs;
> > -
> > /* Can this be satisfied from the cache? */
> > if (cache->len < n) {
> > /* No. Backfill the cache first, and then fill from it */
> > @@ -1375,8 +1371,7 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> > }
> > /* Now fill in the response ... */
> > - for (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)
> > - *obj_table = cache_objs[len];
> > + rte_memcpy(obj_table, &cache->objs[cache->len - n], sizeof(void *) * n);
> > cache->len -= n;
>
> I think the idea of the loop above is to get objects in reverse order in
> order to reuse the cache top objects (put last) first. It should improve
> cache hit rate etc.
> So, the performance effect of the patch could be very different on various
> CPUs (with different cache sizes) and various workloads.
>
> So, I doubt that it is a step in the right direction.
For reference, this was already discussed 3 years ago:
https://mails.dpdk.org/archives/dev/2016-May/039873.html
https://mails.dpdk.org/archives/dev/2016-June/040029.html
I'm still not convinced that reversing object addresses (as it's done
today) is really important. But Andrew is probably right, the impact of
this kind of patch probably varies depending on many factors. More
performance numbers on real-life use-cases would help to decide what to
do.
Regards,
Olivier
next prev parent reply other threads:[~2019-07-01 13:11 UTC|newest]
Thread overview: 5+ messages / expand[flat|nested] mbox.gz Atom feed top
2019-05-21 9:03 Xiao Wang
2019-05-21 9:34 ` Andrew Rybchenko
2019-07-01 13:11 ` Olivier Matz [this message]
2019-07-01 14:21 ` Wang, Xiao W
2019-07-01 15:00 ` Olivier Matz