From: Gavin Hu
To: "He, Jia", "dev@dpdk.org"
CC: Honnappa Nagarahalli, Steve Capper, Ola Liljedahl, "jerin.jacob@caviumnetworks.com", "hemant.agrawal@nxp.com", "stable@dpdk.org"
Date: Tue, 7 Aug 2018 07:56:08 +0000
Subject: Re: [dpdk-dev] [PATCH v2] ring: fix c11 memory ordering issue
Hi Jia,

Thanks for your feedback; let's see if there are requests from others to split the fix.

There is a race condition between updating the tail and getting the free/avail_entries, which depend on the tails. They should be synchronized by load-acquire and store-release. In simple words, in the table below, step #1 and step #5 must synchronize with each other, mutually; otherwise the free_entries/avail_entries calculation can go wrong.

On each lcore, whether enqueueing or dequeuing, the order of step #1 and step #2 must also be maintained, as #2 has a dependency on #1. That is why acquire ordering is necessary.

Please raise new questions if I haven't got this across clearly.

Ring enqueue / lcore #0              Ring dequeue / lcore #1
1. Load-acquire prod_tail            1. Load-acquire cons_tail
2. Get free_entries                  2. Get avail_entries
3. Move prod_head accordingly        3. Move cons_head accordingly
4. Do enqueue operations             4. Do dequeue operations
5. Store-release prod_tail           5. Store-release cons_tail

Best Regards,
Gavin

-----Original Message-----
From: He, Jia
Sent: Tuesday, August 7, 2018 1:57 PM
To: Gavin Hu; dev@dpdk.org
Cc: Honnappa Nagarahalli; Steve Capper; Ola Liljedahl; jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; stable@dpdk.org
Subject: RE: [PATCH v2] ring: fix c11 memory ordering issue

Hi Gavin

> -----Original Message-----
> From: Gavin Hu [mailto:gavin.hu@arm.com]
> Sent: August 7, 2018 11:20
> To: dev@dpdk.org
> Cc: gavin.hu@arm.com; Honnappa.Nagarahalli@arm.com;
> steve.capper@arm.com; Ola.Liljedahl@arm.com;
> jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; He, Jia;
> stable@dpdk.org
> Subject: [PATCH v2] ring: fix c11 memory ordering issue
>
> This patch includes two bug fixes (#1 and #2) and two optimisations (#3 and #4).

Maybe you need to split this into small parts.
> 1) In update_tail, read ht->tail using __atomic_load. Although the
> compiler currently seems to be doing the right thing even without
> __atomic_load, we don't want to give the compiler freedom to optimise
> what should be an atomic load; it should not be arbitrarily moved
> around.
> 2) Synchronize the load-acquire of the tail and the store-release
> within update_tail. The store-release ensures all the ring operations,
> enqueue or dequeue, are seen by the observers as soon as they see
> the updated tail. The load-acquire is required to correctly compute
> the free_entries or avail_entries, respectively for enqueue and
> dequeue operations; the data dependency is not reliable for ordering,
> as the compiler might break it by saving to temporary values to boost
> performance.

Could you describe the race condition in detail? e.g.

cpu1                 cpu2
code1                code2

Cheers, Jia

> 3) In __rte_ring_move_prod_head, move the __atomic_load_n up and out of
> the do {} while loop, as upon failure the old_head will be updated;
> another load is costly and not necessary.
> 4) When calling __atomic_compare_exchange_n, use relaxed ordering for both
> success and failure, as multiple threads can work independently on
> the same end of the ring (either enqueue or dequeue) without
> synchronization, unlike operating on the tail, which has to be finished
> in sequence.
>
> The patch was benchmarked with test/ring_perf_autotest and it
> decreases the enqueue/dequeue latency by 5% ~ 24.6% with two lcores;
> the real gains depend on the number of lcores, the depth of the ring,
> and SP/SC or MP/MC.
> For 1 lcore, it also improves a little, about 3 ~ 4%.
> It is a big improvement; in the MP/MC case with a ring size of 32, it
> saves latency up to (6.90-5.20)/6.90 = 24.6%.
>
> Test result data:
>
> SP/SC bulk enq/dequeue (size: 8): 13.19
> MP/MC bulk enq/dequeue (size: 8): 25.79
> SP/SC bulk enq/dequeue (size: 32): 3.85
> MP/MC bulk enq/dequeue (size: 32): 6.90
>
> SP/SC bulk enq/dequeue (size: 8): 12.05
> MP/MC bulk enq/dequeue (size: 8): 23.06
> SP/SC bulk enq/dequeue (size: 32): 3.62
> MP/MC bulk enq/dequeue (size: 32): 5.20
>
> Fixes: 39368ebfc6 ("ring: introduce C11 memory model barrier option")
> Cc: stable@dpdk.org
>
> Signed-off-by: Gavin Hu
> Reviewed-by: Honnappa Nagarahalli
> Reviewed-by: Steve Capper
> Reviewed-by: Ola Liljedahl
> ---
>  lib/librte_ring/rte_ring_c11_mem.h | 38 +++++++++++++++++++++++++-------------
>  1 file changed, 25 insertions(+), 13 deletions(-)
>
> diff --git a/lib/librte_ring/rte_ring_c11_mem.h b/lib/librte_ring/rte_ring_c11_mem.h
> index 94df3c4a6..cfa3be4a7 100644
> --- a/lib/librte_ring/rte_ring_c11_mem.h
> +++ b/lib/librte_ring/rte_ring_c11_mem.h
> @@ -21,7 +21,8 @@ update_tail(struct rte_ring_headtail *ht, uint32_t old_val, uint32_t new_val,
>  	 * we need to wait for them to complete
>  	 */
>  	if (!single)
> -		while (unlikely(ht->tail != old_val))
> +		while (unlikely(old_val != __atomic_load_n(&ht->tail,
> +				__ATOMIC_RELAXED)))
>  			rte_pause();
>
>  	__atomic_store_n(&ht->tail, new_val, __ATOMIC_RELEASE);
> @@ -60,20 +61,24 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
>  	unsigned int max = n;
>  	int success;
>
> +	*old_head = __atomic_load_n(&r->prod.head, __ATOMIC_RELAXED);
>  	do {
>  		/* Reset n to the initial burst count */
>  		n = max;
>
> -		*old_head = __atomic_load_n(&r->prod.head,
> -				__ATOMIC_ACQUIRE);
>
> -		/*
> -		 * The subtraction is done between two unsigned 32bits value
> +		/* load-acquire synchronize with store-release of ht->tail
> +		 * in update_tail.
> +		 */
> +		const uint32_t cons_tail = __atomic_load_n(&r->cons.tail,
> +				__ATOMIC_ACQUIRE);
> +
> +		/* The subtraction is done between two unsigned 32bits value
>  		 * (the result is always modulo 32 bits even if we have
>  		 * *old_head > cons_tail). So 'free_entries' is always between 0
>  		 * and capacity (which is < size).
>  		 */
> -		*free_entries = (capacity + r->cons.tail - *old_head);
> +		*free_entries = (capacity + cons_tail - *old_head);
>
>  		/* check that we have enough room in ring */
>  		if (unlikely(n > *free_entries))
> @@ -87,9 +92,10 @@ __rte_ring_move_prod_head(struct rte_ring *r, unsigned int is_sp,
>  		if (is_sp)
>  			r->prod.head = *new_head, success = 1;
>  		else
> +			/* on failure, *old_head is updated */
>  			success = __atomic_compare_exchange_n(&r->prod.head,
>  					old_head, *new_head,
> -					0, __ATOMIC_ACQUIRE,
> +					/*weak=*/0, __ATOMIC_RELAXED,
>  					__ATOMIC_RELAXED);
>  	} while (unlikely(success == 0));
>  	return n;
> @@ -128,18 +134,23 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>  	int success;
>
>  	/* move cons.head atomically */
> +	*old_head = __atomic_load_n(&r->cons.head, __ATOMIC_RELAXED);
>  	do {
>  		/* Restore n as it may change every loop */
>  		n = max;
> -		*old_head = __atomic_load_n(&r->cons.head,
> -				__ATOMIC_ACQUIRE);
> +
> +		/* this load-acquire synchronize with store-release of ht->tail
> +		 * in update_tail.
> +		 */
> +		const uint32_t prod_tail = __atomic_load_n(&r->prod.tail,
> +				__ATOMIC_ACQUIRE);
>
>  		/* The subtraction is done between two unsigned 32bits value
>  		 * (the result is always modulo 32 bits even if we have
>  		 * cons_head > prod_tail). So 'entries' is always between 0
>  		 * and size(ring)-1.
>  		 */
> -		*entries = (r->prod.tail - *old_head);
> +		*entries = (prod_tail - *old_head);
>
>  		/* Set the actual entries for dequeue */
>  		if (n > *entries)
> @@ -152,10 +163,11 @@ __rte_ring_move_cons_head(struct rte_ring *r, int is_sc,
>  		if (is_sc)
>  			r->cons.head = *new_head, success = 1;
>  		else
> +			/* on failure, *old_head will be updated */
>  			success = __atomic_compare_exchange_n(&r->cons.head,
> -				old_head, *new_head,
> -				0, __ATOMIC_ACQUIRE,
> -				__ATOMIC_RELAXED);
> +					old_head, *new_head,
> +					/*weak=*/0, __ATOMIC_RELAXED,
> +					__ATOMIC_RELAXED);
>  	} while (unlikely(success == 0));
>  	return n;
>  }
> --
> 2.11.0

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.