From: "Ananyev, Konstantin"
To: "Van Haaren, Harry"
CC: "dev@dpdk.org"
Date: Wed, 17 May 2017 12:47:52 +0000
Subject: Re: [dpdk-dev] [RFC] service core concept header implementation
Message-ID: <2601191342CEEE43887BDE71AB9772583FAF750C@IRSMSX109.ger.corp.intel.com>
References: <1493810961-139469-1-git-send-email-harry.van.haaren@intel.com> <1493810961-139469-2-git-send-email-harry.van.haaren@intel.com>
In-Reply-To: <1493810961-139469-2-git-send-email-harry.van.haaren@intel.com>
List-Id: DPDK patches and discussions
Hi Harry,

I had a look, my questions/comments below.
From my perspective it looks like an attempt to introduce a simple scheduling library inside DPDK.
Some things are still unclear to me, but probably I just misunderstood your intentions here.
Thanks
Konstantin

> This patch adds a service header to DPDK EAL. This header is
> an RFC for a mechanism to allow DPDK components to request a
> callback function to be invoked.
>
> The application can set the number of service cores, and
> a coremask for each particular service. The implementation
> of this functionality in rte_services.c (not completed) would
> use atomics to ensure that a service can only be polled by a
> single lcore at a time.
>
> Signed-off-by: Harry van Haaren
> ---
>  lib/librte_eal/common/include/rte_service.h | 211 ++++++++++++++++++++++++++
>  1 file changed, 211 insertions(+)
>  create mode 100644 lib/librte_eal/common/include/rte_service.h
>
...
> +
> +/**
> + * Signature of a callback function to run a service.
> + */
> +typedef void (*rte_eal_service_func)(void *args);
> +
> +struct rte_service_config {
> +	/* name of the service */
> +	char name[RTE_SERVICE_NAMESIZE];
> +	/* cores that run this service */
> +	uint64_t coremask;

Why uint64_t? Even now, by default, we support more than 64 cores.
We have rte_cpuset_t for such things.

Ok, the first main question:
Is it possible to register multiple services with intersecting coremasks or not?
If not, then I suppose there is no practical difference from what we have right now with eal_remote_launch().
If yes, then several questions arise:
a) How will the service scheduling function switch from one service to another on a particular core?
   As we don't have an interrupt framework for that, I suppose the only choice is to rely on the service voluntarily giving up the CPU. Is that so?
b) Would it be possible to specify the percentage of core cycles that a service can consume?
   I suppose the answer is 'no', at least for the current iteration.

> +	/* when set, multiple lcores can run this service simultaneously without
> +	 * the need for software atomics to ensure that two cores do not
> +	 * attempt to run the service at the same time.
> +	 */
> +	uint8_t multithread_capable;

Ok, and what would happen if this flag is not set, and the user specified more than one cpu in the coremask?

> +};
> +
> +/** @internal - this will not be visible to application, defined in a separate
> + * rte_eal_service_impl.h header.

Why not?
If the application registers the service, it surely needs to know what exactly that service would do (cb func and data).

> + * The application does not need to be exposed
> + * to the actual function pointers - so hide them. */
> +struct rte_service {
> +	/* callback to be called to run the service */
> +	rte_eal_service_func cb;
> +	/* args to the callback function */
> +	void *cb_args;

If the user plans to run that service on multiple cores, then he might need a separate instance of cb_args for each core.

> +	/* configuration of the service */
> +	struct rte_service_config config;
> +};
> +
> +/**
> + * @internal - Only DPDK internal components request "service" cores.
> + *
> + * Registers a service in EAL.
> + *
> + * Registered services' configurations are exposed to an application using
> + * *rte_eal_service_get_all*. These services have names which can be understood
> + * by the application, and the application logic can decide how to allocate
> + * cores to run the various services.
> + *
> + * This function is expected to be called by a DPDK component to indicate that
> + * it requires a CPU to run a specific function in order for it to perform its
> + * processing. An example of such a component is the eventdev software PMD.
> + *
> + * The config struct should be filled in as appropriate by the PMD. For example
> + * the name field should include some level of detail (e.g. "ethdev_p1_rx_q3"
> + * might mean RX from an ethdev from port 1, on queue 3).
> + *
> + * @param service
> + *   The service structure to be registered with EAL.
> + *
> + * @return
> + *   On success, zero
> + *   On failure, a negative error
> + */
> +int rte_eal_service_register(const struct rte_service *service);
> +
> +/**
> + * Get the count of services registered in the EAL.
> + *
> + * @return the number of services registered with EAL.
> + */
> +uint32_t rte_eal_service_get_count();
> +
> +/**
> + * Writes all registered services to the application supplied array.
> + *
> + * This function can be used by the application to understand if and what
> + * services require running. Each service provides a config struct exposing
> + * attributes of the service, which can be used by the application to decide on
> + * its strategy of running services on cores.
> + *
> + * @param service_config
> + *   An array of service config structures to be filled in
> + *
> + * @param n
> + *   The size of the service config array
> + *
> + * @return 0 on successfully providing all service configs
> + *         -ENOSPC if the array passed in is too small
> + */
> +int rte_eal_service_get_all(const struct rte_service_config *service_config,
> +		uint32_t n);

I'd suggest following the snprintf() notation here: always return the number of all existing services.
Then you wouldn't need rte_eal_service_get_count() at all.

> +
> +/**
> + * Use *lcore_id* as a service core.
> + *
> + * EAL will internally remove *lcore_id* from the application accessible
> + * core list. As a result, the application will never have its code run on
> + * the service core, making the service cores totally transparent to an app.
> + *
> + * This function should be called before *rte_eal_service_set_coremask* so that
> + * when setting the core mask the value can be error checked to ensure that EAL
> + * has an lcore backing the coremask specified.
> + */
> +int rte_eal_service_use_lcore(uint16_t lcore_id);
> +
> +/**
> + * Set a coremask for a service.
> + *
> + * A configuration for a service indicates which cores run the service, and
> + * what other parameters are required to correctly handle the underlying device.
> + *
> + * Examples of advanced usage might be a HW device that supports multiple lcores
> + * transmitting packets on the same queue - the hardware provides a mechanism
> + * for multithreaded enqueue. In this case, the atomic_
> + *
> + * @return 0 Success
> + *         -ENODEV Device with *name* does not exist
> + *         -ENOTSUP Configuration is un-supported
> + */
> +int rte_eal_service_set_coremask(const char *name, uint64_t coremask);

Not sure why we need that function?
We do know a service coremask after register() is executed.
Would it be possible to add/remove cores from the service on the fly?
I.e. without stopping the actual service scheduling function first?

> 6. The application now calls remote_launch() as usual, and the application
> + *    can perform its processing as required without manually handling the
> + *    partitioning of lcore resources for DPDK functionality.

Ok, if the user has to do remote_launch() manually for each service core anyway...
Again it means the user can't start/stop individual services; he has to stop the whole service core first (and all services it is running) to stop a particular service.
Sounds not very convenient for the user from my perspective.

Wonder can we have an API like that:

/*
 * Reserves all cores in coremask as service cores.
 * In fact it would mean eal_remote_launch(service_main_func) on each of them.
 */
int rte_eal_service_start(const rte_cpuset_t *coremask);

/*
 * Stops all services and returns service lcores back to the EAL.
 */
int rte_eal_service_stop(const rte_cpuset_t *coremask);

struct rte_service_config {name; cb_func; flags; ...};

struct rte_service *rte_service_register(struct rte_service_config *cfg);
int rte_service_unregister(struct rte_service *srv);

/*
 * Adds/deletes an lcore from the list of service cpus to run this service on.
 * An open question - should we allow add/del of an lcore while the service is running or not.
 * From the user perspective, it would be much more convenient to allow it.
 */
int rte_service_add_lcore(struct rte_service *srv, uint32_t lcore_id, void *lcore_cb_arg);
int rte_service_del_lcore(struct rte_service *srv, uint32_t lcore_id);

/*
 * Starts/stops the service.
 */
int rte_service_start(struct rte_service *srv, const rte_cpuset_t *coremask);
int rte_service_stop(struct rte_service *srv, const rte_cpuset_t *coremask);