From: "Van Haaren, Harry"
To: Jerin Jacob, "Richardson, Bruce"
Cc: dev@dpdk.org, thomas@monjalon.net, "Wiles, Keith"
Date: Fri, 30 Jun 2017 10:00:18 +0000
Subject: Re: [dpdk-dev] Service lcores and Application lcores

> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Friday, June 30, 2017 5:45 AM
> To: Richardson, Bruce
> Cc: Van Haaren, Harry; dev@dpdk.org; thomas@monjalon.net; Wiles, Keith
> Subject: Re: Service lcores and Application lcores
>
> -----Original Message-----
> > Date: Thu, 29 Jun 2017 16:57:08 +0100
> > From: Bruce Richardson
> > To: "Van Haaren, Harry"
> > CC: "dev@dpdk.org", 'Jerin Jacob', "thomas@monjalon.net", "Wiles, Keith"
> > Subject: Re: Service lcores and Application lcores
> > User-Agent: Mutt/1.8.1 (2017-04-11)
> >
> > On Thu, Jun 29, 2017 at 03:36:04PM +0100, Van Haaren, Harry wrote:
> > > Hi All,
> > >
> > > A proposal for Eventdev, to ensure Service lcores and Application lcores play nice;
> > >
> > > 1) Application lcores must not directly call rte_eventdev_schedule()
> > > 2A) Service cores are the proper method to run services
> > > 2B) If an application insists on running a service "manually" on an app lcore, we provide a function for that:
> > >         rte_service_run_from_app_lcore(struct service *srv);
> > >
> > > The above function would allow a pesky app to run services on its own (non-service core) lcores, but does so through the service-core framework, allowing the service-library atomic to keep access serialized as required for
> > > non-multi-thread-safe services.
> > >
> > > The above solution maintains the option of running the eventdev PMD as now (a single core dedicated to a single service), while providing correct serialization by using the rte_service_run_from_app_lcore() function. Given the atomic is only used when required (multiple cores mapped to the service) there should be no performance delta.
> > >
> > > Given that the application should not invoke rte_eventdev_schedule(), we could even consider removing it from the Eventdev API. A PMD that requires cycles registers a service, and an application can use a service core or the run_from_app_lcore() function if it wishes to invoke that service on an application-owned lcore.
> > >
> > > Opinions?
> >
> > I would be in favour of this proposal, except for the proposed name for
> > the new function. It would be useful for an app to be able to "adopt" a
> > service into its main loop if so desired. If we do this, I think I'd
>
> +1
>
> Agree with Harry and Bruce here.
>
> I think the adapter function should take "struct service *" and return an
> lcore_function_t so that it can run using the existing rte_eal_remote_launch().

I don't think providing a remote-launch API is actually beneficial. Remote-launching a single service is equivalent to adding that lcore as a service core and mapping it to just that single service.

The advantage of adding it as a service core is future-proofing for the case where more services need to be added to that core later, plus the statistics of the service-core infrastructure. A convenience API could be provided that performs the core_add(), service_start(), enable_on_service() and core_start() APIs in one (a sketch of such a wrapper is appended at the end of this mail).

Also, the remote-launch API doesn't solve the original problem: what if an application lcore wishes to run just one iteration of a service "manually"? A remote-launch style API does not cover that case.

Here is a much simpler API to run a service... as a counter-proposal :)

/** Runs one iteration of *service* on the calling lcore */
int rte_service_iterate(struct rte_service_spec *service);

The iterate() function can check that the service is start()-ed, check the number of mapped lcores, and use the atomic to prevent concurrent access to multi-thread-unsafe services. By exposing the function-pointer/userdata directly, we lose all of that.

Thinking about it, a function like rte_service_iterate() is the only functionally correct approach. (Exposing the callback directly brings us back to the "application thread without atomic check" problem.) A rough sketch of what iterate() could do internally is appended below.

Thoughts?

> > also support the removal of a dedicated schedule call from the eventdev
> > API, or alternatively, if it is needed by other PMDs, leave it as a
> > no-op in the sw PMD in favour of the service-cores managed function.
>
> I would be in favor of removing eventdev schedule and the
> RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability so that it is completely
> transparent to the application whether the scheduler runs on HW or SW or a
> "combination of both"

Yep, this bit sounds good!
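
One more note, to make the iterate() counter-proposal above a bit more concrete. The sketch below is purely illustrative: the internal layout of rte_service_spec and the field names used (callback, userdata, started, mt_safe, execute_lock) are assumptions made up for the example, not an existing implementation.

#include <errno.h>
#include <rte_atomic.h>

/* Illustrative only: assumed internal layout of a registered service. */
struct rte_service_spec {
	int (*callback)(void *userdata);  /* the service's work function */
	void *userdata;                   /* argument handed to callback */
	uint8_t started;                  /* set once service_start() ran */
	uint8_t mt_safe;                  /* service tolerates concurrent callers */
	rte_atomic32_t execute_lock;      /* serializes non-MT-safe services */
};

/* Run one iteration of *s* on the calling lcore. */
int
rte_service_iterate(struct rte_service_spec *s)
{
	if (!s->started)
		return -EINVAL;

	if (!s->mt_safe) {
		/* Only one lcore may run a non-MT-safe service at a time;
		 * back off instead of blocking if another lcore holds it. */
		if (!rte_atomic32_cmpset(
				(volatile uint32_t *)&s->execute_lock.cnt, 0, 1))
			return -EBUSY;
		s->callback(s->userdata);
		rte_atomic32_clear(&s->execute_lock);
		return 0;
	}

	/* MT-safe services need no serialization at all. */
	return s->callback(s->userdata);
}

An application lcore would simply call rte_service_iterate(srv) from inside its own main loop whenever it wants to donate cycles to the service; a -EBUSY return just means another lcore is already running that iteration.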
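
And the convenience API mentioned further up could look roughly like this (reusing the illustrative struct from the previous sketch). Again only a sketch: the wrapper name and the prototypes of the four underlying calls are hypothetical, the point is just the sequence core_add(), service_start(), enable_on_service(), core_start() rolled into one.

/* Hypothetical one-shot helper: add an lcore as a service core, start the
 * service, map it to that core and start the core. Prototypes are assumed. */
int
rte_service_core_adopt(struct rte_service_spec *service, uint32_t lcore_id)
{
	int ret;

	ret = rte_service_core_add(lcore_id);                 /* core_add() */
	if (ret < 0)
		return ret;

	ret = rte_service_start(service);                     /* service_start() */
	if (ret < 0)
		return ret;

	ret = rte_service_enable_on_core(service, lcore_id);  /* enable_on_service() */
	if (ret < 0)
		return ret;

	return rte_service_core_start(lcore_id);              /* core_start() */
}

That keeps the "adopt a core" use case inside the service-core framework, so the per-core statistics and any future remapping of services still work.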