From: "Van Haaren, Harry"
To: Thomas Monjalon
CC: "dev@dpdk.org", 'Jerin Jacob', "Wiles, Keith", "Richardson, Bruce"
Date: Fri, 30 Jun 2017 11:14:39 +0000
Subject: Re: [dpdk-dev] Service lcores and Application lcores

> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, June 30, 2017 11:39 AM
> To: Van Haaren, Harry
> Cc: dev@dpdk.org; 'Jerin Jacob'; Wiles, Keith; Richardson, Bruce
> Subject: Re: Service lcores and Application lcores
>
> 30/06/2017 12:18, Van Haaren, Harry:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 30/06/2017 10:52, Van Haaren, Harry:
> > > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > > > 29/06/2017 18:35, Van Haaren, Harry:
> > > > > > 3) The problem;
> > > > > > If a service core runs the SW PMD schedule() function (option 2) *AND*
> > > > > > the application lcore runs the schedule() function (option 1), the result
> > > > > > is that two threads are concurrently running a multi-thread-unsafe function.
> > > > >
> > > > > Which function is multi-thread unsafe?
> > > >
> > > > With the current design, the service-callback does not have to be multi-thread safe.
> > > > For example, the eventdev SW PMD is not multi-thread safe.
> > > >
> > > > The service library handles serializing access to the service-callback if multiple
> > > > cores are mapped to that service. This keeps the atomic complexity in one place, and
> > > > keeps services as light-weight to implement as possible.
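
(Side note, to make the serialization above concrete: conceptually it boils down to a
compare-and-set guard that is only taken when the callback is not marked multi-thread
safe. The struct and field names below are illustrative only, not the real
implementation:

    #include <errno.h>
    #include <rte_atomic.h>

    /* Illustrative service representation -- not the library's layout */
    struct service {
        void (*callback)(void *userdata);
        void *userdata;
        uint8_t mt_safe;                /* callback is multi-thread safe? */
        volatile uint32_t execute_lock; /* 0 = free, 1 = callback running */
    };

    static inline int
    service_run(struct service *s)
    {
        if (s->mt_safe) {
            /* MT-safe callbacks pay no atomic cost at all */
            s->callback(s->userdata);
            return 0;
        }
        /* MT-unsafe: at most one mapped lcore inside the callback */
        if (!rte_atomic32_cmpset(&s->execute_lock, 0, 1))
            return -EBUSY; /* another lcore is running it right now */
        s->callback(s->userdata);
        s->execute_lock = 0; /* release the guard */
        return 0;
    }

This is what allows the service-callback itself to stay multi-thread unsafe, and why
the cmpset can be skipped entirely when it is not needed.)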
> > > >
> > > > (We could consider forcing all service-callbacks to be multi-thread safe by using
> > > > atomics, but we would not be able to optimize away the atomic cmpset if it is not
> > > > required. This feels heavy-handed, and would cause useless atomic ops to execute.)
> > >
> > > OK thank you for the detailed explanation.
> > >
> > > > > Why would the same function be run by the service and by the scheduler?
> > > >
> > > > The same function can be run concurrently by the application and a service core.
> > > > The root cause of this happening is that an application can *think* it is the
> > > > only one running threads, but in reality one or more service-cores may be running
> > > > in the background.
> > > >
> > > > The service lcores and application lcores existing without knowledge of each
> > > > other's behavior is the cause of concurrent running of the multi-thread-unsafe
> > > > service function.
> > >
> > > That's the part I still don't understand.
> > > Why would an application run a function on its own core if it is already
> > > run as a service? Can we just have a check that the service API exists
> > > and that the service is running?
> >
> > The point is that it is really an application / service-core mismatch.
> > The application should never run a PMD that it knows also has a service core running it.
>
> Yes
>
> > However, porting applications to the service-core API has an overlap period where an
> > application on 17.05 will be required to call e.g. rte_eventdev_schedule() itself, and
> > depending on startup EAL flags for service-cores, it may or may not have to call
> > schedule() manually.
>
> Yes, service cores may be unavailable, depending on user configuration.
> That's why it must be possible to ask the service core API
> whether a service is being run or not.

Yep - an application can check if a service is running by calling
rte_service_is_running(struct service_spec *).
It returns true if a service core is running, is mapped to the service, and the
service is start()-ed.

> When porting an application to service cores, you just have to run this
> check, which is known to be available from DPDK 17.08 (check rte_version.h).

Ok, so as part of porting to service-cores, applications are expected to sanity-check
the services against their own lcore config.

If there's no disagreement, I will add it to the release notes of the V+1
service-cores patchset.

There is still a need for the rte_service_iterate() function as discussed in the
other branch of this thread. I'll wait for consensus on that and post the next
revision then.

Thanks for the questions / input!

> > This is pretty error prone, and mis-configuration would cause A) deadlock due to
> > no CPU cycles, B) segfault due to two cores.
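
To make those two failure modes concrete, the porting-time check would look roughly
like the sketch below. How the service_spec pointer is obtained is elided, and the
names follow the semantics discussed in this thread rather than a final API:

    /* Sketch only: rte_service_is_running() semantics per this thread.
     * 'done' is the application's own shutdown flag. */
    static void
    app_run_eventdev(struct rte_service_spec *ev_service, volatile int *done)
    {
        if (rte_service_is_running(ev_service)) {
            /* A service core is started, mapped and running the service.
             * Calling schedule() here as well would put two threads in a
             * multi-thread-unsafe function -> B) segfault. Do nothing. */
            return;
        }
        /* No service core runs it: keep the 17.05 behaviour, otherwise
         * the PMD gets no CPU cycles at all -> A) deadlock. */
        while (!*done)
            rte_eventdev_schedule();
    }

The rte_service_is_running() check itself would typically happen once at startup,
after EAL init; the loop is just the pre-17.08 behaviour kept for the
no-service-core case.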