From: Andrew Rybchenko
To: Olivier MATZ
CC: dev@dpdk.org, "Artem V. Andreev"
Subject: Re: [dpdk-dev] [RFC PATCH 4/6] mempool: add a function to flush default cache
Date: Wed, 17 Jan 2018 18:07:00 +0300
Message-ID: <11152b14-0c57-3df6-67de-f9ed671eb9c6@solarflare.com>
In-Reply-To: <20171214133837.y5feayfpoxeou6z3@platinum>
References: <1511539591-20966-1-git-send-email-arybchenko@solarflare.com> <1511539591-20966-5-git-send-email-arybchenko@solarflare.com> <20171214133837.y5feayfpoxeou6z3@platinum>

On 12/14/2017 04:38 PM, Olivier MATZ wrote:
> On Fri, Nov 24, 2017 at 04:06:29PM +0000, Andrew Rybchenko wrote:
>> From: "Artem V. Andreev"
>>
>> The mempool get/put API takes care of the cache itself, but sometimes
>> it is required to flush the cache explicitly.
> I don't disagree, but do you have some use-case in mind?

Ideally, mempool objects should be reused as soon as possible. A
block/bucket dequeue bypasses the cache, since the cache is not
block-aware, so the cache should be flushed before a block dequeue.
Initially we had the cache flush inside the block dequeue wrapper, but
decoupling it gives more freedom for optimizations.
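
To illustrate the intended usage, a rough sketch of the caller side
(rte_mempool_get_block() below is just a placeholder name for the block
get API to be added later in the series, not an existing function):

	void *block;
	int rc;

	/*
	 * Objects sitting in this lcore's default cache are invisible to
	 * a block-aware dequeue, so push them back to the pool first.
	 */
	rte_mempool_ensure_cache_flushed(mp);

	/* Placeholder for the block get API from the following patches. */
	rc = rte_mempool_get_block(mp, &block);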

>> Also, a dedicated API allows decoupling it from the block get API (to
>> be added) and provides more fine-grained control.
>>
>> Signed-off-by: Artem V. Andreev
>> Signed-off-by: Andrew Rybchenko
>> ---
>>  lib/librte_mempool/rte_mempool.h | 16 ++++++++++++++++
>>  1 file changed, 16 insertions(+)
>>
>> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
>> index 9bcb8b7..3a52b93 100644
>> --- a/lib/librte_mempool/rte_mempool.h
>> +++ b/lib/librte_mempool/rte_mempool.h
>> @@ -1161,6 +1161,22 @@ rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
>>  }
>>
>>  /**
>> + * Ensure that a default per-lcore mempool cache is flushed, if it is present
>> + *
>> + * @param mp
>> + *   A pointer to the mempool structure.
>> + */
>> +static __rte_always_inline void
>> +rte_mempool_ensure_cache_flushed(struct rte_mempool *mp)
>> +{
>> +	struct rte_mempool_cache *cache;
>> +	cache = rte_mempool_default_cache(mp, rte_lcore_id());
>> +	if (cache != NULL && cache->len > 0)
>> +		rte_mempool_cache_flush(cache, mp);
>> +}
>> +
> We already have rte_mempool_cache_flush().
> Why not just extend it instead of adding a new function?
>
> I mean:
>
> static __rte_always_inline void
> rte_mempool_cache_flush(struct rte_mempool_cache *cache,
> 			struct rte_mempool *mp)
> {
> +	if (cache == NULL)
> +		cache = rte_mempool_default_cache(mp, rte_lcore_id());
> +	if (cache == NULL || cache->len == 0)
> +		return;
> 	rte_mempool_ops_enqueue_bulk(mp, cache->objs, cache->len);
> 	cache->len = 0;
> }

Thanks, good idea.
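
For completeness, with that change the caller side for flushing the
default cache would simply be (sketch only):

	/*
	 * With cache == NULL the function looks up the current lcore's
	 * default cache itself and returns early if there is nothing to
	 * flush.
	 */
	rte_mempool_cache_flush(NULL, mp);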