From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id A5D9FA0C4A;
	Wed,  7 Jul 2021 12:04:45 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 8CAF0406FF;
	Wed,  7 Jul 2021 12:04:45 +0200 (CEST)
Received: from mga03.intel.com (mga03.intel.com [134.134.136.65])
 by mails.dpdk.org (Postfix) with ESMTP id 6ED9D406B4
 for <dev@dpdk.org>; Wed,  7 Jul 2021 12:04:43 +0200 (CEST)
X-IronPort-AV: E=McAfee;i="6200,9189,10037"; a="209317065"
X-IronPort-AV: E=Sophos;i="5.83,331,1616482800"; d="scan'208";a="209317065"
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 07 Jul 2021 03:04:41 -0700
X-IronPort-AV: E=Sophos;i="5.83,331,1616482800"; d="scan'208";a="482114184"
Received: from dhunt5-mobl5.ger.corp.intel.com (HELO [10.252.24.114])
 ([10.252.24.114])
 by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 07 Jul 2021 03:04:39 -0700
To: Anatoly Burakov <anatoly.burakov@intel.com>, dev@dpdk.org
Cc: ciara.loftus@intel.com, konstantin.ananyev@intel.com
References: <cover.1624981669.git.anatoly.burakov@intel.com>
 <cover.1625498488.git.anatoly.burakov@intel.com>
 <5d3a791ba126c53f1077c5f7aaa4bb55e3d90c8a.1625498488.git.anatoly.burakov@intel.com>
From: David Hunt <david.hunt@intel.com>
Message-ID: <fb52a5f3-4004-d7a3-3b9b-058de30c085f@intel.com>
Date: Wed, 7 Jul 2021 11:04:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.11.0
MIME-Version: 1.0
In-Reply-To: <5d3a791ba126c53f1077c5f7aaa4bb55e3d90c8a.1625498488.git.anatoly.burakov@intel.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
Subject: Re: [dpdk-dev] [PATCH v6 5/7] power: support callbacks for multiple
 Rx queues
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>


On 5/7/2021 4:22 PM, Anatoly Burakov wrote:
> Currently, PMD power management has a hard limitation that allows it to
> support only a single queue per lcore. This is not ideal, as most DPDK
> use cases will poll multiple queues per core.
>
> The PMD power management mechanism relies on ethdev Rx callbacks, so it
> is very difficult to implement such support because callbacks are
> effectively stateless and have no visibility into what the other ethdev
> devices are doing. This places limitations on what we can do within the
> framework of Rx callbacks, but the basics of this implementation are as
> follows:
>
> - Replace per-queue structures with per-lcore ones, so that any device
>    polled from the same lcore can share data
> - Any queue that is going to be polled from a specific lcore has to be
>    added to the list of queues to poll, so that the callback is aware of
>    other queues being polled by the same lcore
> - Both the empty poll counter and the actual power saving mechanism are
>    shared between all queues polled on a particular lcore, and are only
>    activated when all queues in the list have been polled and determined
>    to have no traffic.
> - The limitation on UMWAIT-based polling is not removed because UMWAIT
>    is incapable of monitoring more than one address.
>
> Also, while we're at it, update and improve the docs.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>
> Notes:
>      v6:
>      - Track each individual queue sleep status (Konstantin)
>      - Fix segfault (Dave)
>      
>      v5:
>      - Remove the "power save queue" API and replace it with mechanism suggested by
>        Konstantin
>      
>      v3:
>      - Move the list of supported NICs to NIC feature table
>      
>      v2:
>      - Use a TAILQ for queues instead of a static array
>      - Address feedback from Konstantin
>      - Add additional checks for stopped queues
>
>   doc/guides/nics/features.rst           |  10 +
>   doc/guides/prog_guide/power_man.rst    |  65 ++--
>   doc/guides/rel_notes/release_21_08.rst |   3 +
>   lib/power/rte_power_pmd_mgmt.c         | 452 +++++++++++++++++++------
>   4 files changed, 394 insertions(+), 136 deletions(-)
>

--snip--
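For reference while reading the snippets below, the per-lcore layout the cover
text describes could be sketched roughly as follows. This is my own sketch, not
the patch itself: field and function names mirror the patch, but the actual
definitions in rte_power_pmd_mgmt.c may differ.

```c
#include <assert.h>
#include <stdint.h>
#include <sys/queue.h>

struct queue_list_entry {
	TAILQ_ENTRY(queue_list_entry) next;
	uint16_t portid;
	uint16_t qid;
	uint64_t n_empty_polls; /* consecutive empty polls on this queue */
	uint64_t n_sleeps;      /* sleeps taken while this queue was idle */
};

struct pmd_core_cfg {
	/* all queues polled from this lcore share one config (v2 moved
	 * this from a static array to a TAILQ) */
	TAILQ_HEAD(queue_list_head, queue_list_entry) head;
	uint16_t n_queues;                /* queues polled by this lcore */
	uint16_t n_queues_ready_to_sleep; /* queues past EMPTYPOLL_MAX */
};

/* power saving kicks in only when every queue on the lcore is idle */
static int
lcore_can_sleep(const struct pmd_core_cfg *cfg)
{
	return cfg->n_queues > 0 &&
	       cfg->n_queues_ready_to_sleep == cfg->n_queues;
}
```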


>   
> +static inline void
> +queue_reset(struct pmd_core_cfg *cfg, struct queue_list_entry *qcfg)
> +{
> +	const bool is_ready_to_sleep = qcfg->n_empty_polls > EMPTYPOLL_MAX;
> +
> +	/* reset empty poll counter for this queue */
> +	qcfg->n_empty_polls = 0;
> +	/* reset the queue sleep counter as well */
> +	qcfg->n_sleeps = 0;
> +	/* remove the queue from list of cores ready to sleep */
> +	if (is_ready_to_sleep)
> +		cfg->n_queues_ready_to_sleep--;


Hi Anatoly,

    I don't think the logic around this is bulletproof yet; in my 
testing I'm seeing n_queues_ready_to_sleep wrap around (i.e. it gets 
decremented while already zero).
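
To illustrate, here is a minimal standalone sketch (not the DPDK code itself)
of one possible defensive fix: only decrement when the counter is nonzero.
Names mirror the patch, but the structs are stripped down and EMPTYPOLL_MAX is
an arbitrary value here; the real bug may of course need the bookkeeping fixed
at the point where queues are counted, rather than just a guard.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define EMPTYPOLL_MAX 512 /* arbitrary for this sketch */

struct queue_list_entry {
	uint64_t n_empty_polls;
	uint64_t n_sleeps;
};

struct pmd_core_cfg {
	uint16_t n_queues_ready_to_sleep;
};

/* guarded variant of queue_reset(): never decrement past zero */
static inline void
queue_reset(struct pmd_core_cfg *cfg, struct queue_list_entry *qcfg)
{
	const bool is_ready_to_sleep = qcfg->n_empty_polls > EMPTYPOLL_MAX;

	/* reset empty poll counter and sleep counter for this queue */
	qcfg->n_empty_polls = 0;
	qcfg->n_sleeps = 0;
	/* remove the queue from the ready-to-sleep count, with a guard */
	if (is_ready_to_sleep && cfg->n_queues_ready_to_sleep > 0)
		cfg->n_queues_ready_to_sleep--;
}

/* queue looks ready to sleep but was never counted: without the
 * guard, n_queues_ready_to_sleep would wrap to 65535 here */
static uint16_t
demo_underflow_guard(void)
{
	struct pmd_core_cfg cfg = { .n_queues_ready_to_sleep = 0 };
	struct queue_list_entry q = { .n_empty_polls = EMPTYPOLL_MAX + 1 };

	queue_reset(&cfg, &q);
	return cfg.n_queues_ready_to_sleep;
}
```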

Rgds,
Dave.


--snip--