From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <konstantin.ananyev@intel.com>
Received: from mga14.intel.com (mga14.intel.com [192.55.52.115])
 by dpdk.org (Postfix) with ESMTP id 748AF9A91
 for <dev@dpdk.org>; Fri, 13 May 2016 19:19:07 +0200 (CEST)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by fmsmga103.fm.intel.com with ESMTP; 13 May 2016 10:19:03 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.24,614,1455004800"; d="scan'208";a="702364650"
Received: from irsmsx103.ger.corp.intel.com ([163.33.3.157])
 by FMSMGA003.fm.intel.com with ESMTP; 13 May 2016 10:18:36 -0700
Received: from irsmsx105.ger.corp.intel.com ([169.254.7.130]) by
 IRSMSX103.ger.corp.intel.com ([169.254.3.54]) with mapi id 14.03.0248.002;
 Fri, 13 May 2016 18:18:35 +0100
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, Stephen Hemminger
 <stephen@networkplumber.org>, "Doherty, Declan" <declan.doherty@intel.com>
CC: "Iremonger, Bernard" <bernard.iremonger@intel.com>, "dev@dpdk.org"
 <dev@dpdk.org>
Thread-Topic: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with
 read/write lock
Thread-Index: AQHRpvF5zY5KwBchMEiw1zqtXVzCN5+rpnuAgABaV4CACySCgIAAAhsw
Date: Fri, 13 May 2016 17:18:35 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836B5098C@irsmsx105.ger.corp.intel.com>
References: <1462461300-9962-1-git-send-email-bernard.iremonger@intel.com>
 <1462461300-9962-2-git-send-email-bernard.iremonger@intel.com>
 <20160505101233.191151ac@xeon-e3>
 <7f47b47d-945a-c265-4db3-dc0d6850a348@intel.com>
 <20160506085539.1ece142c@xeon-e3>
 <2601191342CEEE43887BDE71AB97725836B50971@irsmsx105.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB97725836B50971@irsmsx105.ger.corp.intel.com>
Accept-Language: en-IE, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [163.33.239.180]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with
 read/write lock
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Fri, 13 May 2016 17:19:07 -0000



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> Sent: Friday, May 13, 2016 6:11 PM
> To: Stephen Hemminger; Doherty, Declan
> Cc: Iremonger, Bernard; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock
>
>
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
> > Sent: Friday, May 06, 2016 4:56 PM
> > To: Doherty, Declan
> > Cc: Iremonger, Bernard; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock
> >
> > On Fri, 6 May 2016 11:32:19 +0100
> > Declan Doherty <declan.doherty@intel.com> wrote:
> >
> > > On 05/05/16 18:12, Stephen Hemminger wrote:
> > > > On Thu,  5 May 2016 16:14:56 +0100
> > > > Bernard Iremonger <bernard.iremonger@intel.com> wrote:
> > > >
> > > >> Fixes: a45b288ef21a ("bond: support link status polling")
> > > >> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > > >
> > > > You know an uncontested reader/writer lock is significantly slower
> > > > than a spinlock.
> > > >
> > >
> > > As we can have multiple readers of the active slave list / primary
> > > slave, basically any tx/rx burst call needs to protect against a
> > > device being removed/closed during its operation now that we support
> > > hotplugging. In the worst case this could mean we have 2 (rx+tx) *
> > > queues possibly using the active slave list simultaneously; in that
> > > case I would have thought that a spinlock would have a much more
> > > significant effect on performance?
> >
> > Right, but the window where the shared variable is accessed is very
> > small, and it is actually faster to use spinlock for that.
>
> I don't think the window we hold the lock for is that small: say we have
> a burst of 32 packets * (say) 50 cycles/pkt = ~1500 cycles that each IO
> thread would stall.
> For me that's long enough to justify rwlock usage here, especially as the
> DPDK rwlock price is not much bigger (as I remember) than a spinlock's -
> it is basically 1 CAS operation.
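To illustrate the read-side path I have in mind - just a rough sketch
against the rte_rwlock/rte_ethdev API, with an invented struct and
invented field names, not the actual bonding code:

#include <rte_ethdev.h>
#include <rte_rwlock.h>

/* Illustrative stand-in for the bonding PMD private data. */
struct bond_priv {
	rte_rwlock_t lock;              /* protects the active slave list */
	uint16_t active_slaves[8];
	uint16_t active_slave_count;
};

static uint16_t
bond_rx_burst_sketch(struct bond_priv *priv, uint16_t queue_id,
		     struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t i, nb_rx = 0;

	/* Many IO threads can hold the read lock at once, so bursts on
	 * different queues don't serialize each other; uncontended
	 * acquisition is roughly one CAS. */
	rte_rwlock_read_lock(&priv->lock);
	for (i = 0; i < priv->active_slave_count && nb_rx < nb_pkts; i++)
		nb_rx += rte_eth_rx_burst(priv->active_slaves[i], queue_id,
					  pkts + nb_rx, nb_pkts - nb_rx);
	rte_rwlock_read_unlock(&priv->lock);

	/* The control path would take the write lock only when the slave
	 * list actually changes (slave add/remove, hotplug). */
	return nb_rx;
}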

As another alternative we can have a spinlock per queue; then different IO
threads doing RX/TX over different queues will not contend at all.
Though the control thread would need to grab the locks for all configured
queues :)
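
Roughly like this (again only a sketch, with invented names, assuming one
rte_spinlock_t per queue):

#include <rte_spinlock.h>

#define BOND_MAX_QUEUES 16              /* illustrative bound */

struct bond_priv_pq {
	rte_spinlock_t queue_lock[BOND_MAX_QUEUES]; /* one lock per queue */
	uint16_t nb_queues;
	/* ... active slave list, etc. ... */
};

/* IO thread: takes only its own queue's lock, so threads polling
 * different queues never contend with each other. */
static void
bond_rx_queue_sketch(struct bond_priv_pq *priv, uint16_t queue_id)
{
	rte_spinlock_lock(&priv->queue_lock[queue_id]);
	/* ... rx/tx burst over the active slaves for this queue ... */
	rte_spinlock_unlock(&priv->queue_lock[queue_id]);
}

/* Control thread: has to grab every queue's lock before changing the
 * shared slave list - the downside mentioned above. */
static void
bond_slave_update_sketch(struct bond_priv_pq *priv)
{
	uint16_t q;

	for (q = 0; q < priv->nb_queues; q++)
		rte_spinlock_lock(&priv->queue_lock[q]);
	/* ... add/remove slave ... */
	for (q = 0; q < priv->nb_queues; q++)
		rte_spinlock_unlock(&priv->queue_lock[q]);
}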

Konstantin

>
> Konstantin
>