From: "Carrillo, Erik G"
To: "Ananyev, Konstantin"
CC: "dev@dpdk.org", "rsanford@akamai.com", Stephen Hemminger, "Wiles, Keith"
Subject: Re: [dpdk-dev] [PATCH v2 1/3] timer: add per-installer pending lists for each lcore
Date: Tue, 5 Sep 2017 21:52:24 +0000

Hi all,

Another approach that I'd like to put out for consideration is as follows:

Let's say we introduce one flag per lcore - multi_pendlists. This flag indicates whether that lcore supports multiple pending lists (one per source lcore) or not, and by default it is set to false.

At rte_timer_subsystem_init() time, each lcore will be configured to use a single pending list (rather than multiple).

A new API, rte_timer_subsystem_set_multi_pendlists(unsigned lcore_id), can be called to enable multi_pendlists for a particular lcore. It should be called after rte_timer_subsystem_init(), and before any timers are started for that lcore.

When a timer is started for a particular lcore, that lcore's multi_pendlists flag will be inspected to determine whether the timer should go into a single list, or into one of several lists.

When an lcore processes its timers with rte_timer_manage(), it will look at the multi_pendlists flag. If the flag is false, it will only process a single list, which should bring the overhead nearly back down to what it was originally. If the flag is true, it will break out the run lists from the multiple pending lists in sequence and process them, as in the current patch.
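In rough C terms, the proposal might look something like the sketch below. It is illustrative only: the struct and field names do not match the actual rte_timer internals, and rte_timer_subsystem_set_multi_pendlists() is the new API being proposed here, not an existing DPDK call.

/*
 * Sketch of the proposed per-lcore multi_pendlists behavior (illustrative
 * names only, not the real rte_timer internals).
 */
#include <errno.h>
#include <stdbool.h>

#include <rte_lcore.h>          /* RTE_MAX_LCORE */
#include <rte_timer.h>          /* struct rte_timer */

struct lcore_timers_sketch {
	/* sentinel heads: up to one pending skiplist per installer lcore */
	struct rte_timer pending_head[RTE_MAX_LCORE];
	bool multi_pendlists;       /* proposed flag, false by default */
	/* ... per-list locks, stats, etc. ... */
};

static struct lcore_timers_sketch lcore_timers[RTE_MAX_LCORE];

/* Proposed API: enable multiple pending lists for one target lcore.
 * Call after rte_timer_subsystem_init() and before starting any timer
 * on that lcore. */
int
rte_timer_subsystem_set_multi_pendlists(unsigned int lcore_id)
{
	if (lcore_id >= RTE_MAX_LCORE)
		return -EINVAL;
	lcore_timers[lcore_id].multi_pendlists = true;
	return 0;
}

/* When a timer is started, the installer picks the pending list based on
 * the target lcore's flag: always list 0 in single-list mode, or the list
 * that corresponds to the installing lcore in multi-list mode. */
static unsigned int
pendlist_index(unsigned int target_lcore, unsigned int installer_lcore)
{
	if (!lcore_timers[target_lcore].multi_pendlists)
		return 0;
	return installer_lcore;
}

With this, an lcore left in the default single-list mode would have rte_timer_manage() walk only list 0, so its fast path would match the original behavior.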
Thoughts or comments?

Thanks,
Gabriel

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, August 29, 2017 5:57 AM
> To: Carrillo, Erik G; rsanford@akamai.com
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 1/3] timer: add per-installer pending lists for each lcore
>
> Hi Gabriel,
>
> > Instead of each priv_timer struct containing a single skiplist, this
> > commit adds a skiplist for each enabled lcore to priv_timer. In the
> > case that multiple lcores repeatedly install timers on the same target
> > lcore, this change reduces lock contention for the target lcore's
> > skiplists and increases performance.
>
> I am not an rte_timer expert, but there is one thing that worries me:
> it seems that the complexity of timer_manage() has increased quite a bit with
> that patch: it now has to check/process up to RTE_MAX_LCORE skiplists instead
> of one, and it also has to somehow properly sort up to RTE_MAX_LCORE lists of
> retrieved (ready-to-run) timers.
> Wouldn't all that affect its running time?
>
> I understand your intention to reduce lock contention, but I suppose at least
> it could be done in a configurable way.
> Let's say we allow the user to specify the dimension of pending_lists[] at the
> init phase or so. Then a timer from lcore_id=N will end up in
> pending_lists[N % RTE_DIM(pending_lists)].
>
> Another thought - it might be better to divide the pending timers list not by
> client (lcore) id, but by expiration time - some analog of a timer wheel or so.
> That, I think, might greatly decrease the probability that timer_manage() and
> timer_add() will try to access the same list.
> On the other hand, timer_manage() would still have to consume the skip-lists
> one by one.
> Though I suppose that's quite a radical change from what we have right now.
> Konstantin
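For reference, the configurable-dimension idea quoted above might look roughly like the following sketch. All names here (MAX_PENDING_LISTS, struct pend_list, pick_pending_list) are hypothetical and only illustrate the pending_lists[N % RTE_DIM(pending_lists)] selection; they are not part of the patch or of DPDK.

/*
 * Sketch of a user-configurable number of pending lists per target lcore,
 * with installer lcores folded onto them by a modulo.
 */
#include <rte_common.h>         /* RTE_DIM */

#define MAX_PENDING_LISTS 8     /* dimension the user would choose at init */

struct pend_list {
	/* skiplist head, lock, statistics, ... */
	int placeholder;
};

static struct pend_list pending_lists[MAX_PENDING_LISTS];

/* A timer installed from lcore N lands in list N % RTE_DIM(pending_lists),
 * spreading contention without needing one list per possible lcore. */
static struct pend_list *
pick_pending_list(unsigned int installer_lcore)
{
	return &pending_lists[installer_lcore % RTE_DIM(pending_lists)];
}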