From: "Betts, Ian"
To: Stephen Hemminger
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH v8 0/4] examples: add performance-thread
Date: Fri, 4 Dec 2015 22:10:25 +0000
Message-ID: <877C1F8553E92F43898365570816082F35C0BD13@IRSMSX103.ger.corp.intel.com>
In-Reply-To: <20151204100359.6b966aea@xeon-e3>
References: <1449159683-7092-3-git-send-email-ian.betts@intel.com> <1449225265-14480-1-git-send-email-ian.betts@intel.com> <20151204100359.6b966aea@xeon-e3>
-----Original Message-----
From: Stephen Hemminger [mailto:stephen@networkplumber.org]
Sent: Friday, December 4, 2015 6:04 PM
To: Betts, Ian
Cc: dev@dpdk.org; Richardson, Bruce
Subject: Re: [PATCH v8 0/4] examples: add performance-thread

>Looks useful, but this needs more discussion.
>Maybe it should be a separate library not tied into DPDK so it gets wider use and testing? Also what are the limitations?
>What if an lthread did a system call? What about interaction with rte_poll?
>Earlier attempts at lightweight threading (fibers) would be worth looking into. http://c2.com/cgi/wiki?CooperativeThreading
>Intel Thread Building Blocks
>IBM NGPT (now defunct)
>There are lots of hidden gotchas here, like preemption (or not), and limitations on interactions with other libraries.
>Intel may have some milestone to get it into DPDK 2.2 but really this seems too late...

These questions are valid and are the reason for making this an example application rather than a component library of DPDK.
Making it an example gives people an opportunity to evaluate the concept. If it turns out to be of value it can be taken forward, and if it turns out not to be of much interest we will not evolve it.

There is a very detailed discussion in the accompanying sample app guide, which I believe provides enough information for most interested users to comprehend the scope of what is included, in terms of the features, the limitations and porting guidance.

With respect to lthreads making system calls (which is covered at some length in the documentation, BTW), this is really the same question as "what if a DPDK EAL thread made a system call?", i.e.
lthreads introduce no danger that does not already exist in a DPDK application.

Several existing fibre libraries were evaluated before starting down this road. This work itself is heavily influenced by one of those projects: https://github.com/halayli/lthread. There are a number of negatives with the existing implementations: none of those we looked at are multicore capable, or at best they only allow isolated instances of schedulers to be run on different cores. The more sophisticated examples provide their own socket APIs with network IO via the kernel stack, which is not interesting for DPDK. Nearly all of them use a heavier context switch based on makecontext and friends.

As for there being insufficient time to consider this, it has been on the roadmap for 2.2 all along, and has been available in patchwork for anybody to look at. There has been no adverse comment so far.