From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 7 Oct 2018 09:32:46 +0530
From: Jerin Jacob
To: Ola Liljedahl
Cc: "dev@dpdk.org", Honnappa Nagarahalli, "Ananyev, Konstantin",
 "Gavin Hu (Arm Technology China)", Steve Capper, nd, "stable@dpdk.org"
Subject: Re: [dpdk-stable] [PATCH v3 1/3] ring: read tail using atomic load
Message-ID: <20181007040243.GA1850@jerin>
References: <2601191342CEEE43887BDE71AB9772580102FE2951@IRSMSX106.ger.corp.intel.com>
 <20181005170725.GA18671@jerin>
 <1555626C-F2B8-44EB-98A3-79B1F7002587@arm.com>
 <60055965-A7C8-4E9F-8668-0AE1DCE57515@arm.com>
 <20181006074126.GA16715@jerin>
User-Agent: Mutt/1.10.1 (2018-07-13)
List-Id: patches for DPDK stable branches
-----Original Message-----
> Date: Sat, 6 Oct 2018 19:44:35 +0000
> From: Ola Liljedahl
> To: Jerin Jacob, "dev@dpdk.org"
> CC: Honnappa Nagarahalli, "Ananyev, Konstantin",
>  "Gavin Hu (Arm Technology China)", Steve Capper, nd, "stable@dpdk.org"
> Subject: Re: [PATCH v3 1/3] ring: read tail using atomic load
> User-Agent: Microsoft-MacOutlook/10.10.0.180812
>
> On 06/10/2018, 09:42, "Jerin Jacob" wrote:
>
> -----Original Message-----
> > Date: Fri, 5 Oct 2018 20:34:15 +0000
> > From: Ola Liljedahl
> > To: Honnappa Nagarahalli, Jerin Jacob
> > CC: "Ananyev, Konstantin", "Gavin Hu (Arm Technology China)",
> >  "dev@dpdk.org", Steve Capper, nd, "stable@dpdk.org"
> > Subject: Re: [PATCH v3 1/3] ring: read tail using atomic load
> > User-Agent: Microsoft-MacOutlook/10.10.0.180812
> >
> > On 05/10/2018, 22:29, "Honnappa Nagarahalli" wrote:
> >
> > > I doubt it is possible to benchmark with such precision as to see the
> > > potential difference of one ADD instruction.
> > > Just changes in function alignment can affect performance by percents.
> > > And the natural variation when not using a 100% deterministic system
> > > is going to be a lot larger than one cycle per ring buffer operation.
> > >
> > > Some of the other patches are also for correctness (e.g. load-acquire
> > > of tail).
>
> The discussion is about this patch alone. Other patches are already
> Acked.
>
> > So the benchmarking then makes zero sense.
>
> Why ?
>
> Because the noise in benchmarking (due to e.g. non-deterministic systems,
> potential changes in function and loop alignment) will be much larger
> than the 1 cycle of additional overhead per ring buffer operation. What
> will benchmarking tell us about the performance impact of this change
> that adds one ALU operation?

Yes, there will be noise in benchmarking. That is the reason why I checked
only the generated assembly code for this patch, and found the LDR vs
LDR + ADD case. How much overhead it adds depends entirely on the
microarchitecture: how many instructions it can issue, constraints on
executing LD/ST on a specific issue port, etc. In any case, LDR will be
better than LDR + ADD on any microarchitecture.

> > > so while performance measurements may be interesting, we can't skip a
> > > bug fix just because it proves to decrease performance.
>
> IMO, this patch is not a bug fix - in terms of it fixing any failures
> with the current code.
>
> It's a fix for correctness. Per C++11 (and probably C11 as well, due to
> the shared memory model), we have undefined behaviour here. If the
> compiler detects UB, it is allowed to do anything. Current compilers
> might not exploit this, but future compilers could.
>
> All I am saying is: the code is not the same, and the compiler (the very
> latest gcc 8.2) is not smart enough to understand that it is dead code.
>
> What code is dead? The ADD instruction has a purpose, it is adding an
> offset (from the ring buffer start to the tail field) to a base pointer.
> It is merely (most likely) not the optimal code sequence for any ARM
> processor.
>
> I think, the moment any gcc __builtin comes in, the compiler adds a
> predefined template which has the additional "add" instruction.
>
> I suspect the add instruction is because this atomic_load operates on a
> struct member at a non-zero offset, and GCC's "template(s)" for atomic
> operations don't support register + immediate offset (because basically,
> on the AArch64/A64 ISA, all atomic operations except atomic_load(RELAXED)
> and atomic_store(RELAXED) only support the addressing mode base register
> without offset). I would be surprised if this minor instance of
> non-optimal code generation couldn't be corrected in the compiler.
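For reference, a minimal standalone example that reproduces the extra
"add" with this gcc. The struct and function names here are hypothetical;
only the head/tail-at-a-non-zero-offset shape matches rte_ring:

    #include <stdint.h>

    struct headtail {
        uint32_t head;  /* offset 0 */
        uint32_t tail;  /* offset 4: non-zero offset from the base pointer */
    };

    /* Plain load: the compiler can use the reg+imm addressing mode,
     * e.g. "ldr w0, [x0, #4]" on AArch64. */
    uint32_t load_plain(struct headtail *ht)
    {
        return ht->tail;
    }

    /* Relaxed atomic load: gcc 8.2 goes through its atomic template,
     * which only takes a base register, so the offset is materialised
     * first: "add x0, x0, #4" then "ldr w0, [x0]". */
    uint32_t load_atomic(struct headtail *ht)
    {
        return __atomic_load_n(&ht->tail, __ATOMIC_RELAXED);
    }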
> I think, in this specific case, we ALL know that:
> a) ht->tail will be 32 bit for the life long of DPDK, and a 32-bit load
>    will be atomic on all DPDK supported processors
> b) the rte_pause() below it prevents compiler reordering, etc.
>
> For 32-bit ARM and 64-bit POWER (ppc64), the rte_pause() implementation
> looks like this (DPDK 18.08-rc0):
>
>     static inline void rte_pause(void)
>     {
>     }
>
> How does calling this function prevent compiler optimisations, e.g. of
> the loop or of surrounding memory accesses?
> Is rte_pause() supposed to behave like some kind of compiler barrier? I
> can't see anything in the DPDK documentation for rte_pause() that claims
> this.

How about fixing rte_pause() then? Meaning, issuing power-saving
instructions on the archs where they are missing.
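Something along these lines for 32-bit ARM, for example. A sketch only:
whether the compiler barrier should be part of rte_pause()'s documented
contract is exactly the question raised above, and ppc64 would need its
own hint instruction:

    /* ARMv7/ARMv8: hint that we are in a spin-wait loop. The "memory"
     * clobber additionally makes this a compiler barrier, which is the
     * property the C11 ring code would be relying on. */
    static inline void rte_pause(void)
    {
        asm volatile("yield" ::: "memory");
    }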
> so why to lose one cycle at the worst case? It is easy to lose one cycle
> and very difficult to get one back in the fastpath.
>
> I suggest you read up on why undefined behaviour is bad in C. Have a
> chat with Andrew Pinski.
>
> Not depending on the compiler memory barrier in rte_pause() would allow
> the compiler to make optimisations (e.g. schedule loads earlier) that
> actually increase performance. Since the atomic load of ht->tail here
> has relaxed MO, the compiler is allowed to hoist later loads (and
> stores) ahead of it (and also push down and/or merge stores after the
> load of ht->tail). But the compiler (memory) barrier in (what is
> supposedly part of) rte_pause() prevents such optimisations (well, some
> accesses could be pushed down between the atomic load and the compiler
> barrier, but not further than that).
>
> A C compiler that supports C11 and beyond implements the C11 memory
> model. The compiler understands the memory model and can optimise
> memory accesses according to the semantics of the model and the
> ordering directives in the code. Atomic operations using
> ATOMIC_SEQ_CST, ATOMIC_ACQUIRE and ATOMIC_RELEASE (and ATOMIC_ACQ_REL,
> ignoring ATOMIC_CONSUME here) each allow and disallow certain kinds of
> movements and other optimisations of memory accesses (loads and stores;
> I assume prefetches are also included). Atomic operations with
> ATOMIC_RELAXED don't impose any ordering constraints, so they give
> maximum flexibility to the compiler. Using a compiler memory barrier
> (e.g. asm volatile ("":::"memory")) is a much more brutal way of
> constraining the compiler.

In the arm64 case, it will have __ATOMIC_RELAXED followed by the
asm volatile ("":::"memory") of rte_pause(). I wouldn't have any issue if
the generated code were the same or better than the existing case, but
that is not the case, right?
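To make the trade-off concrete, a sketch (hypothetical function, not from
rte_ring) of the kind of motion the relaxed load alone permits but a
compiler barrier in the loop body would forbid:

    #include <stdint.h>

    /* Spin until *tail reaches old_val, then read an unrelated field.
     * With only the relaxed atomic load, the compiler may hoist the
     * load of *cap above or into the loop; with an
     * asm volatile("" ::: "memory") in the loop body (what rte_pause()
     * supposedly provides), that motion is forbidden. */
    uint32_t wait_then_read(uint32_t *tail, uint32_t old_val,
                            const uint32_t *cap)
    {
        while (old_val != __atomic_load_n(tail, __ATOMIC_RELAXED))
            ;
        return *cap;
    }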
>
> -- Ola
>
> > On 05/10/2018, 22:06, "Honnappa Nagarahalli" wrote:
> >
> > > Hi Jerin,
> > > Thank you for generating the disassembly, that is really helpful. I
> > > agree with you that we have the option of moving parts 2 and 3
> > > forward. I will let Gavin take a decision.
> > >
> > > I suggest that we run benchmarks on this patch alone and in
> > > combination with other patches in the series. We have a few Arm
> > > machines and we will run on all of them along with x86. We will take
> > > a decision based on that.
> > >
> > > Would that be a way to move forward? I think this should address
> > > both your and Ola's concerns.
> > >
> > > I am open for other suggestions as well.
> > >
> > > Thank you,
> > > Honnappa
> > >
> > > > So you don't want to write the proper C11 code because the
> > > > compiler generates one extra instruction that way?
> > > > You don't even know if that one extra instruction has any
> > > > measurable impact on performance. E.g. it could be issued the
> > > > cycle before, together with other instructions.
> > > >
> > > > We can complain to the compiler writers that the code generation
> > > > for __atomic_load_n(, __ATOMIC_RELAXED) is not optimal (at least
> > > > on ARM/A64). I think the problem is that the __atomic builtins
> > > > only accept a base address without any offset, and this is
> > > > possibly because e.g. load/store exclusive (LDX/STX), load-acquire
> > > > (LDAR) and store-release (STLR) only accept a base register with
> > > > no offset. So any offset has to be added before the actual
> > > > "atomic" instruction, LDR in this case.
> > > >
> > > > -- Ola
> > > >
> > > > On 05/10/2018, 19:07, "Jerin Jacob" wrote:
> > > >
> > > > -----Original Message-----
> > > > > Date: Fri, 5 Oct 2018 15:11:44 +0000
> > > > > From: Honnappa Nagarahalli
> > > > > To: "Ananyev, Konstantin", Ola Liljedahl,
> > > > >  "Gavin Hu (Arm Technology China)", Jerin Jacob
> > > > > CC: "dev@dpdk.org", Steve Capper, nd, "stable@dpdk.org"
> > > > > Subject: RE: [PATCH v3 1/3] ring: read tail using atomic load
> > > > >
> > > > > > Hi Jerin,
> > > > > >
> > > > > > Thanks for your review, inline comments from our internal
> > > > > > discussions.
> > > > > >
> > > > > > BR. Gavin
> > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Jerin Jacob
> > > > > > > > > Sent: Saturday, September 29, 2018 6:49 PM
> > > > > > > > > To: Gavin Hu (Arm Technology China)
> > > > > > > > > Cc: dev@dpdk.org; Honnappa Nagarahalli; Steve Capper;
> > > > > > > > >  Ola Liljedahl; nd; stable@dpdk.org
> > > > > > > > > Subject: Re: [PATCH v3 1/3] ring: read tail using
> > > > > > > > >  atomic load
> > > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > > Date: Mon, 17 Sep 2018 16:17:22 +0800
> > > > > > > > > > From: Gavin Hu
> > > > > > > > > > To: dev@dpdk.org
> > > > > > > > > > CC: gavin.hu@arm.com, Honnappa.Nagarahalli@arm.com,
> > > > > > > > > >  steve.capper@arm.com, Ola.Liljedahl@arm.com,
> > > > > > > > > >  jerin.jacob@caviumnetworks.com, nd@arm.com,
> > > > > > > > > >  stable@dpdk.org
> > > > > > > > > > Subject: [PATCH v3 1/3] ring: read tail using atomic
> > > > > > > > > >  load
> > > > > > > > > > X-Mailer: git-send-email 2.7.4
> > > > > > > > > >
> > > > > > > > > > In update_tail, read ht->tail using __atomic_load.
> > > > > > > > > > Although the compiler currently seems to be doing the
> > > > > > > > > > right thing even without __atomic_load, we don't want
> > > > > > > > > > to give the compiler freedom to optimise what should
> > > > > > > > > > be an atomic load; it should not be arbitrarily moved
> > > > > > > > > > around.
> > > > > > > > > >
> > > > > > > > > > Fixes: 39368ebfc6 ("ring: introduce C11 memory model
> > > > > > > > > >  barrier option")
> > > > > > > > > > Cc: stable@dpdk.org
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Gavin Hu
> > > > > > > > > > Reviewed-by: Honnappa Nagarahalli
> > > > > > > > > > Reviewed-by: Steve Capper
> > > > > > > > > > Reviewed-by: Ola Liljedahl
> > > > > > > > > > ---
> > > > > > > > > >  lib/librte_ring/rte_ring_c11_mem.h | 3 ++-
> > > > > > > > > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > > > > > >
> > > > > > > > The read of ht->tail needs to be atomic, a non-atomic read
> > > > > > > > would not be correct.
> > > > > > >
> > > > > > > That's a 32bit value load.
> > > > > > > AFAIK on all CPUs that we support it is an atomic operation.
> > > > > > >
> > > > > > > [Ola] But that the ordinary C load is translated to an
> > > > > > > atomic load for the target architecture is incidental.
> > > > > > >
> > > > > > > If the design requires an atomic load (which is the case
> > > > > > > here), we should use an atomic load on the language level.
> > > > > > > Then we can be sure it will always be translated to an
> > > > > > > atomic load for the target in question, or compilation will
> > > > > > > fail. We don't have to depend on assumptions.
> > > > > >
> > > > > > We all know that 32bit load/store on the cpus we support are
> > > > > > atomic. If it weren't the case, DPDK would be broken in a
> > > > > > dozen places. So what is the point of pretending that "it
> > > > > > might be not atomic" if we do know for sure that it is?
> > > > > > I do understand that you want to use atomic_load(relaxed)
> > > > > > here for consistency, and to conform with the C11 mem-model,
> > > > > > and I don't see any harm in that.
> > > > >
> > > > > We can continue to discuss the topic, it is a good discussion.
> > > > > But, as far as this patch is concerned, can I consider this as
> > > > > us having a consensus? The file rte_ring_c11_mem.h is
> > > > > specifically for the C11 memory model, and I also do not see any
> > > > > harm in having code that completely conforms to the C11 memory
> > > > > model.
> > > >
> > > > Have you guys checked the output assembly with and without the
> > > > atomic load? There is an extra "add" instruction, at least with
> > > > the code I have checked. I think the compiler is not smart enough
> > > > to understand it is dead code for arm64.
> > > >
> > > > ➜ [~] $ aarch64-linux-gnu-gcc -v
> > > > Using built-in specs.
> > > > COLLECT_GCC=aarch64-linux-gnu-gcc
> > > > COLLECT_LTO_WRAPPER=/usr/lib/gcc/aarch64-linux-gnu/8.2.0/lto-wrapper
> > > > Target: aarch64-linux-gnu
> > > > Configured with: /build/aarch64-linux-gnu-gcc/src/gcc-8.2.0/configure
> > > >  --prefix=/usr --program-prefix=aarch64-linux-gnu-
> > > >  --with-local-prefix=/usr/aarch64-linux-gnu
> > > >  --with-sysroot=/usr/aarch64-linux-gnu
> > > >  --with-build-sysroot=/usr/aarch64-linux-gnu --libdir=/usr/lib
> > > >  --libexecdir=/usr/lib --target=aarch64-linux-gnu
> > > >  --host=x86_64-pc-linux-gnu --build=x86_64-pc-linux-gnu
> > > >  --disable-nls --enable-languages=c,c++ --enable-shared
> > > >  --enable-threads=posix --with-system-zlib --with-isl
> > > >  --enable-__cxa_atexit --disable-libunwind-exceptions
> > > >  --enable-clocale=gnu --disable-libstdcxx-pch --disable-libssp
> > > >  --enable-gnu-unique-object --enable-linker-build-id --enable-lto
> > > >  --enable-plugin --enable-install-libiberty
> > > >  --with-linker-hash-style=gnu --enable-gnu-indirect-function
> > > >  --disable-multilib --disable-werror --enable-checking=release
> > > > Thread model: posix
> > > > gcc version 8.2.0 (GCC)
> > > >
> > > > # build setup
> > > > make -j 8 config T=arm64-armv8a-linuxapp-gcc CROSS=aarch64-linux-gnu-
> > > > make -j 8 test-build CROSS=aarch64-linux-gnu-
> > > >
> > > > # generate asm
> > > > aarch64-linux-gnu-gdb -batch -ex 'file build/app/test'
> > > >  -ex 'disassemble /rs bucket_enqueue_single'
> > > >
> > > > I have uploaded the generated files for your convenience.
> > > >
> > > > with_atomic_load.txt (includes patches 1, 2, 3)
> > > > -----------------------
> > > > https://pastebin.com/SQ6w1yRu
> > > >
> > > > without_atomic_load.txt (includes patches 2, 3)
> > > > -----------------------
> > > > https://pastebin.com/BpvnD0CA
> > > >
> > > > without_atomic
> > > > -------------
> > > > 23          if (!single)
> > > >    0x000000000068d290 <+240>: 85 00 00 35  cbnz w5, 0x68d2a0
> > > >    0x000000000068d294 <+244>: 82 04 40 b9  ldr  w2, [x4, #4]
> > > >    0x000000000068d298 <+248>: 5f 00 01 6b  cmp  w2, w1
> > > >    0x000000000068d29c <+252>: 21 01 00 54  b.ne 0x68d2c0  // b.any
> > > >
> > > > 24          while (unlikely(ht->tail != old_val))
> > > > 25              rte_pause();
> > > >
> > > > with_atomic
> > > > -----------
> > > > 23          if (!single)
> > > >    0x000000000068ceb0 <+240>: 00 10 04 91  add  x0, x0, #0x104
> > > >    0x000000000068ceb4 <+244>: 84 00 00 35  cbnz w4, 0x68cec4
> > > >
> > > >    0x000000000068ceb8 <+248>: 02 00 40 b9  ldr  w2, [x0]
> > > >    0x000000000068cebc <+252>: 3f 00 02 6b  cmp  w1, w2
> > > >    0x000000000068cec0 <+256>: 01 09 00 54  b.ne 0x68cfe0  // b.any
> > > >
> > > > 24          while (unlikely(old_val != __atomic_load_n(&ht->tail,
> > > >                 __ATOMIC_RELAXED)))
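For reference, the source lines annotated in the two disassemblies above
correspond to this fragment of update_tail() in rte_ring_c11_mem.h (a
simplified sketch reconstructed from those annotations, not the verbatim
patch):

    /* without_atomic (current code): */
    while (unlikely(ht->tail != old_val))
        rte_pause();
    __atomic_store_n(&ht->tail, new_val, __ATOMIC_RELEASE);

    /* with_atomic (this patch): */
    while (unlikely(old_val != __atomic_load_n(&ht->tail,
            __ATOMIC_RELAXED)))
        rte_pause();
    __atomic_store_n(&ht->tail, new_val, __ATOMIC_RELEASE);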
> > > > I don't want to block this series of patches due to this patch.
> > > > Can we respin one series with patches 2 and 3, and wait for patch
> > > > 1 to conclude?
> > > >
> > > > Thoughts?
> > > >
> > > > > > But the argument that we shouldn't assume 32bit load/store ops
> > > > > > as atomic sounds a bit flaky to me.
> > > > > > Konstantin
> > > > > >
> > > > > > > > But there are no memory ordering requirements (with
> > > > > > > > regards to other loads and/or stores by this thread), so
> > > > > > > > relaxed memory order is sufficient.
> > > > > > > > Another aspect of using __atomic_load_n() is that the
> > > > > > > > compiler cannot "optimise" this load (e.g. combine, hoist
> > > > > > > > etc), it has to be done as specified in the source code,
> > > > > > > > which is also what we need here.
> > > > > > >
> > > > > > > I think Jerin's point is that rte_pause() acts here as a
> > > > > > > compiler barrier too, so no need to worry that the compiler
> > > > > > > would optimize out the loop.
> > > > > > >
> > > > > > > [Ola] Sorry, missed that. But the barrier behaviour of
> > > > > > > rte_pause() is not part of C11, is it? Essentially it is a
> > > > > > > hand-made feature to support the legacy multithreaded memory
> > > > > > > model (which uses explicit HW and compiler barriers). I'd
> > > > > > > prefer code using the C11 memory model not to depend on such
> > > > > > > legacy features.
> > > > > > >
> > > > > > > Konstantin
> > > > > > >
> > > > > > > One point worth mentioning though is that this change is for
> > > > > > > the rte_ring_c11_mem.h file, not the legacy ring. It may be
> > > > > > > worth persisting with getting the C11 code right when people
> > > > > > > are less excited about sending a release out?
> > > > > > >
> > > > > > > We can explain that for C11 we would prefer to do loads and
> > > > > > > stores as per the C11 memory model. In the case of rte_ring,
> > > > > > > the code is separated cleanly into C11 specific files
> > > > > > > anyway.
> > > > > > >
> > > > > > > I think reading ht->tail using __atomic_load_n() is the most
> > > > > > > appropriate way. We show that ht->tail is used for
> > > > > > > synchronization, we acknowledge that ht->tail may be written
> > > > > > > by other threads without any other kind of synchronization
> > > > > > > (e.g. no lock involved), and we require an atomic load (any
> > > > > > > write to ht->tail must also be atomic).
> > > > > > >
> > > > > > > Using volatile and explicit compiler (or processor) memory
> > > > > > > barriers (fences) is the legacy pre-C11 way of accomplishing
> > > > > > > these things. There's a reason why C11/C++11 moved away from
> > > > > > > the old ways.
> > > > > > >
> > > > > > > > > > __atomic_store_n(&ht->tail, new_val, __ATOMIC_RELEASE);
> > > > > > > > > > --
> > > > > > > > > > 2.7.4