From: "Ananyev, Konstantin"
To: "Burakov, Anatoly", dev@dpdk.org, "Hunt, David", Ray Kinsella, Neil Horman
CC: "Loftus, Ciara"
Date: Tue, 22 Jun 2021 09:41:11 +0000
Subject: Re: [dpdk-dev] [PATCH v1 5/7] power: support callbacks for multiple Rx queues
In-Reply-To: <601f09ad562f97ca1e0077cb36ba46e448df382d.1622548381.git.anatoly.burakov@intel.com>
> Currently, there is a hard limitation on the PMD power management
> support that only allows it to support a single queue per lcore. This is
> not ideal as most DPDK use cases will poll multiple queues per core.
>
> The PMD power management mechanism relies on ethdev Rx callbacks, so it
> is very difficult to implement such support because callbacks are
> effectively stateless and have no visibility into what the other ethdev
> devices are doing. This places limitations on what we can do within the
> framework of Rx callbacks, but the basics of this implementation are as
> follows:
>
> - Replace per-queue structures with per-lcore ones, so that any device
>   polled from the same lcore can share data
> - Any queue that is going to be polled from a specific lcore has to be
>   added to the list of cores to poll, so that the callback is aware of
>   other queues being polled by the same lcore
> - Both the empty poll counter and the actual power saving mechanism are
>   shared between all queues polled on a particular lcore, and are only
>   activated when a special designated "power saving" queue is polled. To
>   put it another way, we have no idea which queue the user will poll in
>   what order, so we rely on them telling us that queue X is the last one
>   in the polling loop, so any power management should happen there.
> - A new API is added to mark a specific Rx queue as "power saving".

Honestly, I don't understand the logic behind this new function.
I understand that, depending on the HW, we can monitor either one or multiple queues.
That's OK, but why do we now need to mark one queue as a 'very special' one?
Why can't rte_power_ethdev_pmgmt_queue_enable() just:
check whether the number of monitored queues exceeds the HW/SW capabilities, and if so, simply return a failure.
Otherwise, add the queue to the list and treat them all equally, i.e. go to
power-save mode when the number of sequential empty polls on all monitored
queues exceeds the EMPTYPOLL_MAX threshold?

> Failing to call this API will result in no power management; however,
> when having only one queue per core it is obvious which queue is the
> "power saving" one, so things will still work without this new API for
> use cases that were previously working without it.
> - The limitation on UMWAIT-based polling is not removed because UMWAIT
>   is incapable of monitoring more than one address.
>
> Signed-off-by: Anatoly Burakov
> ---
>  lib/power/rte_power_pmd_mgmt.c | 335 ++++++++++++++++++++++++++-------
>  lib/power/rte_power_pmd_mgmt.h |  34 ++++
>  lib/power/version.map          |   3 +
>  3 files changed, 306 insertions(+), 66 deletions(-)
>
> diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
> index 0707c60a4f..60dd21a19c 100644
> --- a/lib/power/rte_power_pmd_mgmt.c
> +++ b/lib/power/rte_power_pmd_mgmt.c
> @@ -33,7 +33,19 @@ enum pmd_mgmt_state {
> 	PMD_MGMT_ENABLED
> };
>
> -struct pmd_queue_cfg {
> +struct queue {
> +	uint16_t portid;
> +	uint16_t qid;
> +};

Just a thought: if that would help somehow, it can be changed to:

union queue {
	uint32_t raw;
	struct {
		uint16_t portid, qid;
	};
};

That way, in the queue find/cmp functions below, you can operate on single raw 32-bit values.
Probably not that important, as all these functions are on the slow path, but it might look nicer.

> +struct pmd_core_cfg {
> +	struct queue queues[RTE_MAX_ETHPORTS];

If we'll have the ability to monitor multiple queues per lcore, would it always be enough?
From the other side, it is updated on the control path only.
Wouldn't a normal list with malloc (or rte_malloc) be more suitable here?

> +	/**< Which port-queue pairs are associated with this lcore?
> +	 */
> +	struct queue power_save_queue;
> +	/**< When polling multiple queues, all but this one will be ignored */
> +	bool power_save_queue_set;
> +	/**< When polling multiple queues, power save queue must be set */
> +	size_t n_queues;
> +	/**< How many queues are in the list? */
> 	volatile enum pmd_mgmt_state pwr_mgmt_state;
> 	/**< State of power management for this queue */
> 	enum rte_power_pmd_mgmt_type cb_mode;
> @@ -43,8 +55,97 @@ struct pmd_queue_cfg {
> 	uint64_t empty_poll_stats;
> 	/**< Number of empty polls */
> } __rte_cache_aligned;
> +static struct pmd_core_cfg lcore_cfg[RTE_MAX_LCORE];
>
> -static struct pmd_queue_cfg port_cfg[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];
> +static inline bool
> +queue_equal(const struct queue *l, const struct queue *r)
> +{
> +	return l->portid == r->portid && l->qid == r->qid;
> +}
> +
> +static inline void
> +queue_copy(struct queue *dst, const struct queue *src)
> +{
> +	dst->portid = src->portid;
> +	dst->qid = src->qid;
> +}
> +
> +static inline bool
> +queue_is_power_save(const struct pmd_core_cfg *cfg, const struct queue *q) {

Here and in other places: any reason why the standard DPDK coding style is not used?

> +	const struct queue *pwrsave = &cfg->power_save_queue;
> +
> +	/* if there's only single queue, no need to check anything */
> +	if (cfg->n_queues == 1)
> +		return true;
> +	return cfg->power_save_queue_set && queue_equal(q, pwrsave);
> +}
> +
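[Editor's note] The two suggestions in the review above, the raw-value queue comparison and the "treat all monitored queues equally" empty-poll policy, can be sketched as below. This is a minimal standalone model, not DPDK code: the struct layout, the function names (`lcore_empty_poll`, `demo_*`), the round-robin polling assumption, and the EMPTYPOLL_MAX value of 512 are all illustrative assumptions, not taken from the patch.

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed threshold; the actual value is not specified in this thread. */
#define EMPTYPOLL_MAX 512

/* The union suggested in the review: port/queue pair aliased to one
 * 32-bit value, so comparisons collapse to a single integer compare. */
union queue {
	uint32_t raw;
	struct {
		uint16_t portid, qid;
	};
};

/* Simplified per-lcore state, loosely mirroring pmd_core_cfg. */
struct lcore_pmgmt {
	uint16_t n_queues;         /* queues monitored on this lcore */
	uint16_t n_empty;          /* queues seen empty in the current pass */
	uint64_t empty_poll_stats; /* consecutive all-empty passes */
};

/* Model of the proposed policy: called after each Rx burst on any
 * monitored queue (assuming round-robin polling); returns true when the
 * lcore should enter power-save. All queues are equal: the shared
 * counter advances only once every queue has completed an empty poll,
 * and traffic on any queue resets it. */
static bool
lcore_empty_poll(struct lcore_pmgmt *cfg, uint16_t nb_rx)
{
	if (nb_rx != 0) {
		cfg->n_empty = 0;
		cfg->empty_poll_stats = 0;
		return false;
	}
	if (++cfg->n_empty < cfg->n_queues)
		return false;
	/* one full pass with every queue empty */
	cfg->n_empty = 0;
	return ++cfg->empty_poll_stats > EMPTYPOLL_MAX;
}

/* Drive EMPTYPOLL_MAX + 1 all-empty passes over n_queues queues and
 * report whether the final poll requested power-save. */
static bool
demo_all_empty(uint16_t n_queues)
{
	struct lcore_pmgmt cfg = { .n_queues = n_queues };
	bool sleep_now = false;
	uint32_t i;

	for (i = 0; i < (uint32_t)n_queues * (EMPTYPOLL_MAX + 1); i++)
		sleep_now = lcore_empty_poll(&cfg, 0);
	return sleep_now;
}

/* Compare two port/queue pairs via the single raw value. */
static bool
demo_raw_equal(uint16_t pa, uint16_t qa, uint16_t pb, uint16_t qb)
{
	union queue a = { .raw = 0 }, b = { .raw = 0 };

	a.portid = pa; a.qid = qa;
	b.portid = pb; b.qid = qb;
	return a.raw == b.raw;
}
```

Note the design consequence of the policy sketch: no queue needs to be designated as "special", since power management triggers wherever the last empty poll of a full pass happens to land.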