From: Matan Azrad
To: Chas Williams <3chas3@gmail.com>, dev@dpdk.org
CC: declan.doherty@intel.com, ehkinzie@gmail.com, Chas Williams, stable@dpdk.org
Date: Thu, 20 Sep 2018 06:28:48 +0000
In-Reply-To: <20180919154825.5183-1-3chas3@gmail.com>
References: <20180919154825.5183-1-3chas3@gmail.com>
Subject: Re: [dpdk-dev] [PATCH] net/bonding: ensure fairness among slaves

Hi Chas

Please see a few small comments below.

> From: Chas Williams
> Some PMDs, especially ones with vector receives, require a minimum
> number of receive buffers in order to receive any packets. If the first slave
> read leaves less than this number available, a read from the next slave may
> return 0, implying that the slave doesn't have any packets, which results in
> skipping over that slave as the next active slave.

This is true not only for the zero-packet case.
In general, the first slave polled gets the majority of the burst while the
later slaves get only the smaller remainder.
I suggest rephrasing the description to cover this general issue (a rough
sketch below the quoted patch illustrates it).

>
> To fix this, implement round robin for the slaves during receive that is only
> advanced to the next slave at the end of each receive burst.
> This should also provide some additional fairness in processing in
> bond_ethdev_rx_burst as well.
>
> Fixes: 2efb58cbab6e ("bond: new link bonding library")

If it is a fix, why not use a fix title?
Maybe "net/bonding: fix the slaves Rx fairness"?

> Cc: stable@dpdk.org
>
> Signed-off-by: Chas Williams

Besides that:
Acked-by: Matan Azrad

> ---
>  drivers/net/bonding/rte_eth_bond_pmd.c | 50 ++++++++++++++++++++++------------
>  1 file changed, 32 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index b84f32263..f25faa75c 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -58,28 +58,33 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
>  {
>  	struct bond_dev_private *internals;
>
> -	uint16_t num_rx_slave = 0;
>  	uint16_t num_rx_total = 0;
> -
> +	uint16_t slave_count;
> +	uint16_t active_slave;
>  	int i;
>
>  	/* Cast to structure, containing bonded device's port id and queue id */
>  	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
> -
>  	internals = bd_rx_q->dev_private;
> +	slave_count = internals->active_slave_count;
> +	active_slave = internals->active_slave;
>
> +	for (i = 0; i < slave_count && nb_pkts; i++) {
> +		uint16_t num_rx_slave;
>
> -	for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
>  		/* Offset of pointer to *bufs increases as packets are received
>  		 * from other slaves */
> -		num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
> +		num_rx_slave = rte_eth_rx_burst(
> +				internals->active_slaves[active_slave],
>  				bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
> -		if (num_rx_slave) {
> -			num_rx_total += num_rx_slave;
> -			nb_pkts -= num_rx_slave;
> -		}
> +		num_rx_total += num_rx_slave;
> +		nb_pkts -= num_rx_slave;
> +		if (++active_slave == slave_count)
> +			active_slave = 0;
>  	}
>
> +	if (++internals->active_slave == slave_count)
> +		internals->active_slave = 0;
>  	return num_rx_total;
>  }
>
> @@ -258,25 +263,32 @@ bond_ethdev_rx_burst_8023ad_fast_queue(void *queue, struct rte_mbuf **bufs,
>  	uint16_t num_rx_total = 0;	/* Total number of received packets */
>  	uint16_t slaves[RTE_MAX_ETHPORTS];
>  	uint16_t slave_count;
> -
> -	uint16_t i, idx;
> +	uint16_t active_slave;
> +	uint16_t i;
>
>  	/* Copy slave list to protect against slave up/down changes during tx
>  	 * bursting */
>  	slave_count = internals->active_slave_count;
> +	active_slave = internals->active_slave;
>  	memcpy(slaves, internals->active_slaves,
>  			sizeof(internals->active_slaves[0]) * slave_count);
>
> -	for (i = 0, idx = internals->active_slave;
> -			i < slave_count && num_rx_total < nb_pkts; i++, idx++) {
> -		idx = idx % slave_count;
> +	for (i = 0; i < slave_count && nb_pkts; i++) {
> +		uint16_t num_rx_slave;
>
>  		/* Read packets from this slave */
> -		num_rx_total += rte_eth_rx_burst(slaves[idx], bd_rx_q->queue_id,
> -				&bufs[num_rx_total], nb_pkts - num_rx_total);
> +		num_rx_slave = rte_eth_rx_burst(slaves[active_slave],
> +				bd_rx_q->queue_id,
> +				bufs + num_rx_total, nb_pkts);
> +		num_rx_total += num_rx_slave;
> +		nb_pkts -= num_rx_slave;
> +
> +		if (++active_slave == slave_count)
> +			active_slave = 0;
>  	}
>
> -	internals->active_slave = idx;
> +	if (++internals->active_slave == slave_count)
> +		internals->active_slave = 0;
>
>  	return num_rx_total;
>  }
>
> @@ -459,7 +471,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
>  			idx = 0;
>  	}
>
> -	internals->active_slave = idx;
> +	if (++internals->active_slave == slave_count)
> +		internals->active_slave = 0;
> +
>  	return num_rx_total;
>  }
>
> --
> 2.14.4
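
To illustrate the general fairness point above, here is a rough,
self-contained sketch (not part of the patch, and not DPDK code).
SLAVES, BURST, VEC_MIN, PER_ROUND and fake_rx_burst() are all made-up
stand-ins that only mimic the behavior described in the commit message.

/*
 * Rough standalone illustration: why a fixed polling order starves the
 * later slaves, and how rotating the starting slave once per burst
 * evens things out.  A "vector" Rx path is mimicked by returning
 * nothing when fewer than VEC_MIN buffers are requested.
 */
#include <stdio.h>
#include <stdint.h>

#define SLAVES    3
#define BURST     32	/* buffers offered per Rx burst */
#define VEC_MIN   4	/* hypothetical minimum request for a vector Rx path */
#define PER_ROUND 30	/* packets waiting on each slave before every burst */
#define ROUNDS    1000

/* Pretend per-slave Rx: hand out up to 'max' of the waiting packets. */
static uint16_t
fake_rx_burst(uint16_t avail[], unsigned int slave, uint16_t max)
{
	uint16_t got;

	if (max < VEC_MIN)	/* too few buffers for the vector path */
		return 0;
	got = avail[slave] < max ? avail[slave] : max;
	avail[slave] -= got;
	return got;
}

static void
run(int rotate, unsigned long rx_per_slave[SLAVES])
{
	unsigned int start = 0;

	for (int r = 0; r < ROUNDS; r++) {
		uint16_t avail[SLAVES];
		uint16_t left = BURST;
		unsigned int idx = start;

		for (int s = 0; s < SLAVES; s++)
			avail[s] = PER_ROUND;

		for (unsigned int i = 0; i < SLAVES && left; i++) {
			uint16_t got = fake_rx_burst(avail, idx, left);

			rx_per_slave[idx] += got;
			left -= got;
			if (++idx == SLAVES)	/* same wrap pattern as the patch */
				idx = 0;
		}
		if (rotate && ++start == SLAVES)	/* advance once per burst */
			start = 0;
	}
}

int main(void)
{
	unsigned long fixed[SLAVES] = {0}, rr[SLAVES] = {0};

	run(0, fixed);
	run(1, rr);
	for (int s = 0; s < SLAVES; s++)
		printf("slave %d: fixed order %lu pkts, round robin %lu pkts\n",
				s, fixed[s], rr[s]);
	return 0;
}

With a fixed polling order the first slave takes nearly the whole burst
every round and the small remainder is below the vector minimum, so the
other slaves return nothing; rotating the starting slave once per burst
spreads the load roughly evenly, which is what the patch does with
internals->active_slave.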