From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Iremonger, Bernard"
To: "Ananyev, Konstantin", Stephen Hemminger, "Doherty, Declan"
Cc: dev@dpdk.org, "Iremonger, Bernard"
Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock
Date: Thu, 26 May 2016 16:24:17 +0000
Message-ID: <8CEF83825BEC744B83065625E567D7C21A007EA4@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB97725836B5098C@irsmsx105.ger.corp.intel.com>
References: <1462461300-9962-1-git-send-email-bernard.iremonger@intel.com>
 <1462461300-9962-2-git-send-email-bernard.iremonger@intel.com>
 <20160505101233.191151ac@xeon-e3>
 <7f47b47d-945a-c265-4db3-dc0d6850a348@intel.com>
 <20160506085539.1ece142c@xeon-e3>
 <2601191342CEEE43887BDE71AB97725836B50971@irsmsx105.ger.corp.intel.com>
 <2601191342CEEE43887BDE71AB97725836B5098C@irsmsx105.ger.corp.intel.com>

Hi Konstantin,

> > > > On 05/05/16 18:12, Stephen Hemminger wrote:
> > > > > On Thu, 5 May 2016 16:14:56 +0100 Bernard Iremonger wrote:
> > > > >
> > > > >> Fixes: a45b288ef21a ("bond: support link status polling")
> > > > >> Signed-off-by: Bernard Iremonger
> > > > >
> > > > > You know an uncontested reader/writer lock is significantly
> > > > > slower than a spinlock.
> > > > >
> > > >
> > > > As we can have multiple readers of the active slave list / primary
> > > > slave, basically any tx/rx burst call needs to protect against a
> > > > device being removed/closed during its operation now that we
> > > > support hotplugging. In the worst case this could mean we have
> > > > 2 (rx+tx) * queues possibly using the active slave list
> > > > simultaneously; in that case I would have thought that a spinlock
> > > > would have a much more significant effect on performance?
> > > Right, but the window where the shared variable is accessed is very
> > > small, and it is actually faster to use a spinlock for that.
> >
> > I don't think the window we hold the lock is that small. Let's say we
> > have a burst of 32 packets * (say) 50 cycles/pkt = ~1500 cycles - each
> > IO thread would stall.
> > For me that's long enough to justify rwlock usage here, especially as
> > the DPDK rwlock price is not much bigger (as I remember) than a
> > spinlock - it is basically 1 CAS operation.
>
> As another alternative we can have a spinlock per queue, then different
> IO threads doing RX/TX over different queues will not contend at all.
> Though the control thread would need to grab the locks for all
> configured queues :)
>
> Konstantin
>

I am preparing a v2 patchset which uses a spinlock per queue.

Regards,

Bernard.
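
P.S. As a rough sketch of the direction only (the struct and function
names below are made up for illustration and are not the actual bonding
PMD code; only the rte_spinlock_* calls are the real DPDK API), the
per-queue lock would look something like this:

/*
 * Illustrative sketch: one rte_spinlock_t per RX queue. The data path
 * holds its own queue's lock only for the duration of a burst, so IO
 * threads on different queues never contend. The control path (e.g.
 * removing a slave) grabs every queue's lock before it touches the
 * active slave list.
 */
#include <stdint.h>
#include <rte_spinlock.h>
#include <rte_mbuf.h>

struct sketch_rx_queue {
	rte_spinlock_t lock;	/* initialise with rte_spinlock_init() */
	/* ... per-queue state: active slave snapshot, stats, ... */
};

static uint16_t
sketch_rx_burst(struct sketch_rx_queue *q,
		struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t nb_rx = 0;

	rte_spinlock_lock(&q->lock);
	/* ... poll the active slaves, fill pkts[] up to nb_pkts ... */
	rte_spinlock_unlock(&q->lock);

	return nb_rx;
}

static void
sketch_remove_slave(struct sketch_rx_queue *queues, uint16_t nb_queues)
{
	uint16_t i;

	/* Control path: quiesce every queue before changing the slave list. */
	for (i = 0; i < nb_queues; i++)
		rte_spinlock_lock(&queues[i].lock);

	/* ... update the active slave list ... */

	for (i = nb_queues; i > 0; i--)
		rte_spinlock_unlock(&queues[i - 1].lock);
}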