From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Stephen Hemminger, Jerin Jacob
CC: dpdk-dev <dev@dpdk.org>, Olivier Matz, drc@linux.vnet.ibm.com
Subject: Re: [dpdk-dev] [RFC 0/6] New sync modes for ring
Date: Tue, 25 Feb 2020 13:41:14 +0000
References: <20200224113515.1744-1-konstantin.ananyev@intel.com>
 <20200224085919.3e73fda7@hermes.lan> <20200224113529.4c1c94ab@hermes.lan>
In-Reply-To: <20200224113529.4c1c94ab@hermes.lan>
List-Id: DPDK patches and discussions

> > > > Upfront note - that RFC is not a complete patch.
> > > > It introduces an ABI breakage, plus it doesn't update the ring_elem
> > > > code properly, etc.
> > > > I plan to deal with all these things in later versions.
> > > > Right now I seek initial feedback on the proposed ideas.
> > > > Would also ask people to repeat the performance tests (see below)
> > > > on their platforms to confirm the impact.
> > > >
> > > > More and more customers use (or try to use) DPDK-based apps within
> > > > overcommitted systems (multiple active threads over the same
> > > > physical cores): VM, container deployments, etc.
> > > > One quite common problem they hit: Lock-Holder-Preemption with rte_ring.
> > > > LHP is quite a common problem for spin-based sync primitives
> > > > (spin-locks, etc.) on overcommitted systems.
> > > > The situation gets much worse when some sort of
> > > > fair-locking technique is used (ticket-lock, etc.),
> > > > as then not only the lock-owner's but also the lock-waiters'
> > > > scheduling order matters a lot.
> > > > This is a well-known problem for kernels within VMs:
> > > > http://www-archive.xenproject.org/files/xensummitboston08/LHP.pdf
> > > > https://www.cs.hs-rm.de/~kaiser/events/wamos2017/Slides/selcuk.pdf
> > > > The problem with rte_ring is that while head acquisition is a sort of
> > > > unfair locking, waiting on the tail is very similar to the ticket-lock
> > > > schema - the tail has to be updated in a particular order.
> > > > That makes the current rte_ring implementation perform
> > > > really poorly in some overcommitted scenarios.
> > >
> > > Rather than reform rte_ring to fit this scenario, it would make
> > > more sense to me to introduce another primitive.

I don't see much advantage it would bring us.
As disadvantages: for developers and maintainers, code duplication;
for end users, extra code churn and the lost ability to mix and match
different sync modes in one ring.

> > > The current lockless
> > > ring performs very well for the isolated thread model that DPDK
> > > was built around. This looks like a case of customers violating
> > > the usage model of the DPDK and then being surprised at the fallout.
For customers using the isolated thread model, nothing should change
(both in terms of API and performance).
The existing sync modes MP/MC and SP/SC are kept untouched, set up in the
same way (via flags at _init_), and MP/MC remains the default one.
On the other hand, I don't see why we should ignore customers that want to
use their DPDK apps in different deployment scenarios.

> > I agree with Stephen here.
> >
> > I think adding more runtime checks in enqueue() and dequeue() will
> > have a bad effect on low-end cores too.

We do have a run-time check in our current enqueue()/dequeue() implementation.
In fact we support both modes: we have the generic
rte_ring_enqueue(/dequeue)_bulk(/burst),
where sync behaviour is determined at runtime by the value of
prod(/cons).single,
or the user can call the rte_ring_(mp/sp)_enqueue_* functions directly.
This RFC follows exactly the same paradigm:
rte_ring_enqueue(/dequeue)_bulk(/burst) is kept generic and its behaviour
is determined at runtime by the value of prod(/cons).sync_type,
or the user can call enqueue/dequeue with a particular sync mode directly:
rte_ring_(mp/sp/rts/hts)_enqueue_(bulk/burst)*.
The only thing that changed: the format of prod/cons can now differ
depending on the mode selected at _init_.
So you can't create a ring for, let's say, SP mode and then in the middle
of the data-path change your mind and start using MP_RTS mode.
For the existing modes (SP/MP, SC/MC) the format remains the same, and the
user can still use them interchangeably, though of course that is an
error-prone practice.

> > But I agree with the problem statement that in the virtualization use
> > case it may be possible to have N virtual cores running on one physical
> > core.
> >
> > IMO, the best solution would be keeping the ring API the same and
> > having different flavors at compile time, something like
> > liburcu did for accommodating different flavors,
> > i.e. urcu-qsbr.h and urcu-bp.h will have identical definitions of the API.
> > The application can simply include ONE header file in a C file based on
> > the flavor.

I don't think that is a flexible enough approach.
In one app a user might need several rings with different sync modes,
or even a ring with different sync modes for enqueue and dequeue.

> > If both are needed at runtime, you need a function pointer or so in the
> > application, and define the function in a different C file by including
> > the appropriate flavor in that C file.

A big issue with function pointers here would be the DPDK multi-process model.
AFAIK, rte_ring is quite a popular mechanism for IPC between DPDK apps.
To support such a model, we'd need to split rte_ring data into 'shared'
and 'private' parts and initialize the private one for every process that
is going to use it.
That sounds like a massive change, and I am not sure the required effort
would be worth it.
BTW, if the user just calls API functions without trying to access
structure internals directly, I don't think it would make a big difference
to him what is inside: an indirect function call or an inlined
switch(...) {}.

> This would also be a good time to consider the tradeoffs of the
> heavy use of inlining that is done in rte_ring vs the impact that
> has on API/ABI stability.

Yes, hiding the rte_ring implementation inside a .c file would help a lot
in terms of ABI maintenance and would make our future life easier.
The question is what the price for it is in terms of performance,
and whether we are ready to pay it.
Not to mention that it would cause changes in many other libs/apps...
So I think it should be a subject for a separate discussion.
But I agree it would be good at least to measure the performance impact
of such a change.
If I have some spare cycles, I will give it a try.
Meanwhile, can I ask Jerin and the other guys to repeat the tests from
this RFC on their HW?
Before continuing the discussion, it would probably be good to know
whether the suggested patch works as expected across different platforms.

Thanks
Konstantin