From: "Burakov, Anatoly"
To: "Ananyev, Konstantin" , "dev@dpdk.org" , "Hunt, David"
Cc: "Loftus, Ciara"
Date: Wed, 7 Jul 2021 11:06:57 +0100
Subject: Re: [dpdk-dev] [PATCH v6 5/7] power: support callbacks for multiple Rx queues

On 06-Jul-21 7:50 PM, Ananyev, Konstantin wrote:
> 
>> Currently, there is a hard limitation on the PMD power management
>> support that only allows it to support a single queue per lcore. This is
>> not ideal, as most DPDK use cases will poll multiple queues per core.
>>
>> The PMD power management mechanism relies on ethdev Rx callbacks, so it
>> is very difficult to implement such support because callbacks are
>> effectively stateless and have no visibility into what the other ethdev
>> devices are doing. This places limitations on what we can do within the
>> framework of Rx callbacks, but the basics of this implementation are as
>> follows:
>>
>> - Replace per-queue structures with per-lcore ones, so that any device
>>   polled from the same lcore can share data
>> - Any queue that is going to be polled from a specific lcore has to be
>>   added to the list of queues to poll, so that the callback is aware of
>>   other queues being polled by the same lcore
>> - Both the empty poll counter and the actual power saving mechanism are
>>   shared between all queues polled on a particular lcore, and are only
>>   activated when all queues in the list were polled and were determined
>>   to have no traffic.
>> - The limitation on UMWAIT-based polling is not removed because UMWAIT
>>   is incapable of monitoring more than one address.
>>
>> Also, while we're at it, update and improve the docs.
>>
>> Signed-off-by: Anatoly Burakov
>> ---
>>
>> Notes:
>>     v6:
>>     - Track each individual queue sleep status (Konstantin)
>>     - Fix segfault (Dave)
>>
>>     v5:
>>     - Remove the "power save queue" API and replace it with mechanism suggested by
>>       Konstantin
>>
>>     v3:
>>     - Move the list of supported NICs to NIC feature table
>>
>>     v2:
>>     - Use a TAILQ for queues instead of a static array
>>     - Address feedback from Konstantin
>>     - Add additional checks for stopped queues
>>
> ....
>> +static inline void
>> +queue_reset(struct pmd_core_cfg *cfg, struct queue_list_entry *qcfg)
>> +{
>> +	const bool is_ready_to_sleep = qcfg->n_empty_polls > EMPTYPOLL_MAX;
>> +
>> +	/* reset empty poll counter for this queue */
>> +	qcfg->n_empty_polls = 0;
>> +	/* reset the queue sleep counter as well */
>> +	qcfg->n_sleeps = 0;
>> +	/* remove the queue from list of cores ready to sleep */
>> +	if (is_ready_to_sleep)
>> +		cfg->n_queues_ready_to_sleep--;
>> +	/*
>> +	 * no need to change the lcore sleep target counter because this lcore will
>> +	 * reach the n_sleeps anyway, and the other cores are already counted so
>> +	 * there's no need to do anything else.
>> +	 */
>> +}
>> +
>> +static inline bool
>> +queue_can_sleep(struct pmd_core_cfg *cfg, struct queue_list_entry *qcfg)
>> +{
>> +	/* this function is called - that means we have an empty poll */
>> +	qcfg->n_empty_polls++;
>> +
>> +	/* if we haven't reached threshold for empty polls, we can't sleep */
>> +	if (qcfg->n_empty_polls <= EMPTYPOLL_MAX)
>> +		return false;
>> +
>> +	/*
>> +	 * we've reached a point where we are able to sleep, but we still need
>> +	 * to check if this queue has already been marked for sleeping.
>> +	 */
>> +	if (qcfg->n_sleeps == cfg->sleep_target)
>> +		return true;
>> +
>> +	/* mark this queue as ready for sleep */
>> +	qcfg->n_sleeps = cfg->sleep_target;
>> +	cfg->n_queues_ready_to_sleep++;
>
> So, assuming there is no incoming traffic, should it be:
> 1) poll_all_queues(times=EMPTYPOLL_MAX); sleep; poll_all_queues(times=1); sleep; poll_all_queues(times=1); sleep; ...
> OR
> 2) poll_all_queues(times=EMPTYPOLL_MAX); sleep; poll_all_queues(times=EMPTYPOLL_MAX); sleep; poll_all_queues(times=EMPTYPOLL_MAX); sleep; ...
> ?
>
> My initial thought was 2), but perhaps the intention is 1)?

The intent is 1), not 2). There's no need to wait for more empty polls once
we pass the threshold - we keep sleeping until there's traffic.

-- 
Thanks,
Anatoly
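
For illustration only, here is a minimal, self-contained sketch of behaviour 1)
as described above. It is not the actual rte_power implementation; lcore_cfg,
poll_all_queues(), lcore_sleep() and the EMPTYPOLL_MAX value used here are
placeholder names and numbers assumed for the example.

	/*
	 * Standalone sketch: after EMPTYPOLL_MAX consecutive empty poll
	 * passes, the lcore sleeps after every further empty pass until
	 * traffic arrives (behaviour 1 above).
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define EMPTYPOLL_MAX 512 /* illustrative threshold, not the library's value */

	struct lcore_cfg {
		uint64_t n_empty_polls; /* consecutive passes where all queues were empty */
	};

	/* pretend to poll all queues on this lcore; returns number of packets received */
	static unsigned int
	poll_all_queues(void)
	{
		return 0; /* simulate no incoming traffic */
	}

	static void
	lcore_sleep(void)
	{
		/* stand-in for a pause/umwait/frequency-scaling action */
		printf("sleeping\n");
	}

	int
	main(void)
	{
		struct lcore_cfg cfg = { .n_empty_polls = 0 };
		int iter;

		for (iter = 0; iter < EMPTYPOLL_MAX + 5; iter++) {
			unsigned int nb_rx = poll_all_queues();

			if (nb_rx > 0) {
				/* traffic resets the empty poll counter, no sleep */
				cfg.n_empty_polls = 0;
				continue;
			}
			cfg.n_empty_polls++;
			/*
			 * Behaviour 1): once the threshold is crossed, sleep on
			 * every empty pass; EMPTYPOLL_MAX is not re-accumulated.
			 */
			if (cfg.n_empty_polls > EMPTYPOLL_MAX)
				lcore_sleep();
		}
		return 0;
	}

Built and run as-is, this sleeps once per pass after the threshold has been
crossed, i.e. it never waits for another EMPTYPOLL_MAX empty polls between
sleeps, matching the intent stated above.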