Date: Thu, 7 Apr 2022 11:43:54 +0100
From: Bruce Richardson <bruce.richardson@intel.com>
To: Morten Brørup <mb@smartsharesystems.com>
Cc: olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru, thomas@monjalon.net,
 jerinjacobk@gmail.com, dev@dpdk.org
Subject: Re: [PATCH v4] mempool: fix mempool cache flushing algorithm

On Thu, Apr 07, 2022 at 11:32:12AM +0100, Bruce Richardson wrote:
> On Thu, Apr 07, 2022 at 11:26:53AM +0200, Morten Brørup wrote:
> > > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > > Sent: Thursday, 7 April 2022 11.14
> > >
> > > On Thu, Apr 07, 2022 at 11:04:53AM +0200, Morten Brørup wrote:
> > > > > From: Morten Brørup [mailto:mb@smartsharesystems.com]
> > > > > Sent: Wednesday, 2 February 2022 11.34
> > > > >
> > > > > This patch fixes the rte_mempool_do_generic_put() caching
> > > > > algorithm, which was fundamentally wrong, causing multiple
> > > > > performance issues when flushing.
> > > > >
> > > >
> > > > [...]
> > > >
> > > > Olivier,
> > > >
> > > > Will you please consider this patch [1] and the other one [2].
> > > >
> > > > The primary bug here is this: When a mempool cache becomes full
> > > > (i.e. exceeds the "flush threshold") and is flushed to the backing
> > > > ring, it is still full afterwards, but it should be empty. It is
> > > > not flushed entirely; only the elements exceeding "size" are
> > > > flushed.
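To make the two behaviours being discussed concrete, below is a minimal,
self-contained C sketch of the flush step. It is not the DPDK code:
struct cache_model, ring_enqueue(), flush_current() and flush_proposed()
are invented for illustration, and only the field names (len, size,
flush_thresh) mirror the terms used above. flush_current() models the
existing behaviour, where only the elements above "size" are flushed;
flush_proposed() models what the patch asks for, where the cache is emptied.

#include <stdio.h>

#define CACHE_MAX 512

/* Illustrative stand-in only; this is not struct rte_mempool_cache. */
struct cache_model {
	unsigned int size;         /* nominal cache size */
	unsigned int flush_thresh; /* larger than size */
	unsigned int len;          /* objects currently held in the cache */
	void *objs[CACHE_MAX];
};

/* Hypothetical stand-in for enqueuing objects to the backing ring. */
static void ring_enqueue(void * const *objs, unsigned int n)
{
	(void)objs;
	printf("enqueued %u objects to the ring\n", n);
}

/* Existing behaviour: only the elements above 'size' are flushed, so the
 * cache still holds 'size' objects afterwards. */
static void flush_current(struct cache_model *c)
{
	if (c->len >= c->flush_thresh) {
		ring_enqueue(&c->objs[c->size], c->len - c->size);
		c->len = c->size;   /* cache left full to 'size' */
	}
}

/* Proposed behaviour: empty the cache completely. */
static void flush_proposed(struct cache_model *c)
{
	if (c->len >= c->flush_thresh) {
		ring_enqueue(&c->objs[0], c->len);
		c->len = 0;         /* cache left empty */
	}
}

int main(void)
{
	struct cache_model a = { .size = 256, .flush_thresh = 384, .len = 384 };
	struct cache_model b = a;

	flush_current(&a);   /* enqueues 128; a.len == 256, still full */
	flush_proposed(&b);  /* enqueues 384; b.len == 0, empty */
	printf("current: len=%u, proposed: len=%u\n", a.len, b.len);
	return 0;
}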
> > >
> > > I don't believe it should be flushed entirely; there should always be
> > > some elements left, so that even after a flush we can still allocate
> > > an additional burst. We want to avoid the situation where a flush of
> > > all elements is immediately followed by a refill of new elements.
> > > However, we can flush to maybe size/2 and improve things. In short,
> > > not emptying the cache is by design rather than a bug, though we can
> > > look to tweak the behaviour.
> > >
> >
> > I initially agreed with you about flushing to size/2.
> >
> > However, I did think further about it when I wrote the patch, and came
> > to this conclusion: If an application thread repeatedly puts objects
> > into the mempool, and does it so often that the cache overflows (i.e.
> > reaches the flush threshold) and needs to be flushed, it is far more
> > likely that the application thread will continue doing that than start
> > getting objects from the mempool. This argues for flushing the cache
> > entirely.
> >
> > Both solutions are better than flushing to size, so if there is a
> > preference for keeping some objects in the cache after flushing, I can
> > update the patch accordingly.
> >
>
> Would it be worth looking at adding per-core hinting to the mempool?
> Indicate for a core whether it allocates only (i.e. an RX thread), frees
> only (i.e. a TX thread), or does both alloc and free (the default)? That
> hint could then be used on flush or refill to specify whether to flush
> all or only part of the cache, and similarly whether to refill to the
> maximum possible or just to size.
>
Actually, taking the idea further, we could always track per-core whether a
core has ever done a flush/refill and use that as the hint instead. It could
even be done in a branch-free manner if we want. For example, on flush:

	keep_entries = (size >> 1) & (never_refills - 1);

which will set the number of entries to keep to 0 if we have never had to
refill, or to half of size if the thread has previously done refills.

/Bruce
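The branch-free hint above can be made concrete with a short, self-contained
sketch. The per-lcore never_refills flag and the keep_on_flush() and
note_refill() helpers are hypothetical, made up here for illustration; only
the keep_entries expression itself comes from the mail.

#include <stdio.h>

/* Hypothetical per-lcore state; the mempool cache has no such field. */
struct lcore_hint {
	unsigned int never_refills; /* 1 until the first refill, then 0 */
};

/* Branch-free count of entries to keep on flush: 0 if this lcore has
 * never refilled (it only frees), size/2 if it has refilled before
 * (it both allocates and frees). */
static unsigned int keep_on_flush(const struct lcore_hint *h, unsigned int size)
{
	return (size >> 1) & (h->never_refills - 1U);
}

/* Called on the refill path; afterwards, flushes keep size/2 entries. */
static void note_refill(struct lcore_hint *h)
{
	h->never_refills = 0;
}

int main(void)
{
	struct lcore_hint h = { .never_refills = 1 };

	printf("keep before any refill: %u\n", keep_on_flush(&h, 256)); /* 0 */
	note_refill(&h);
	printf("keep after a refill:    %u\n", keep_on_flush(&h, 256)); /* 128 */
	return 0;
}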