From: "Van Haaren, Harry"
To: Pavan Nikhilesh, jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com
Cc: dev@dpdk.org
Date: Mon, 23 Oct 2017 17:17:48 +0000
Subject: Re: [dpdk-dev] [PATCH v2 5/7] examples/eventdev: update sample app to use service

> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Friday, October 13, 2017 5:37 PM
> To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren, Harry
> Cc: dev@dpdk.org; Pavan Bhagavatula
> Subject: [dpdk-dev] [PATCH v2 5/7] examples/eventdev: update sample app to use service
>
> From: Pavan Bhagavatula
>
> Update the sample app eventdev_pipeline_sw_pmd to use service cores for
> event scheduling in case of sw eventdev.
>
> Signed-off-by: Pavan Nikhilesh

Comments inline - I think there are some side-effect changes in the application.
> ---
>  examples/eventdev_pipeline_sw_pmd/main.c | 51 ++++++++++++++++++++++---------
>  1 file changed, 33 insertions(+), 18 deletions(-)
>
> diff --git a/examples/eventdev_pipeline_sw_pmd/main.c
> b/examples/eventdev_pipeline_sw_pmd/main.c
> index 09b90c3..d5068d2 100644
> --- a/examples/eventdev_pipeline_sw_pmd/main.c
> +++ b/examples/eventdev_pipeline_sw_pmd/main.c
> @@ -46,6 +46,7 @@
>  #include
>  #include
>  #include
> +#include <rte_service.h>
>
>  #define MAX_NUM_STAGES 8
>  #define BATCH_SIZE 16
> @@ -233,7 +234,7 @@ producer(void)
>  }
>
>  static inline void
> -schedule_devices(uint8_t dev_id, unsigned int lcore_id)
> +schedule_devices(unsigned int lcore_id)
>  {
>  	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
>  	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
> @@ -241,16 +242,6 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)
>  		rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
>  	}
>
> -	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
> -	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
> -		rte_event_schedule(dev_id);
> -		if (cdata.dump_dev_signal) {
> -			rte_event_dev_dump(0, stdout);
> -			cdata.dump_dev_signal = 0;
> -		}
> -		rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
> -	}

See note below, about keeping the functionality provided by
fdata->sched_core[] intact.
>  	if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
>  	    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
>  		consumer();
> @@ -294,7 +285,7 @@ worker(void *arg)
>  	while (!fdata->done) {
>  		uint16_t i;
>
> -		schedule_devices(dev_id, lcore_id);
> +		schedule_devices(lcore_id);
>
>  		if (!fdata->worker_core[lcore_id]) {
>  			rte_pause();
> @@ -661,6 +652,27 @@ struct port_link {
>  };
>
>  static int
> +setup_scheduling_service(unsigned int lcore, uint8_t dev_id)
> +{
> +	int ret;
> +	uint32_t service_id;
> +	ret = rte_event_dev_service_id_get(dev_id, &service_id);
> +	if (ret == -ESRCH) {
> +		printf("Event device [%d] doesn't need scheduling service\n",
> +				dev_id);
> +		return 0;
> +	}
> +	if (!ret) {
> +		rte_service_runstate_set(service_id, 1);
> +		rte_service_lcore_add(lcore);
> +		rte_service_map_lcore_set(service_id, lcore, 1);
> +		rte_service_lcore_start(lcore);
> +	}
> +
> +	return ret;
> +}
> +
> +static int
>  setup_eventdev(struct prod_data *prod_data,
>  		struct cons_data *cons_data,
>  		struct worker_data *worker_data)
> @@ -839,6 +851,14 @@ setup_eventdev(struct prod_data *prod_data,
>  	*cons_data = (struct cons_data){.dev_id = dev_id,
>  					.port_id = i };
>
> +	for (i = 0; i < MAX_NUM_CORE; i++) {
> +		if (fdata->sched_core[i]
> +				&& setup_scheduling_service(i, dev_id)) {
> +			printf("Error setting up scheduling service on %d", i);
> +			return -1;
> +		}
> +	}

Previously, the fdata->sched_core[] array contained a "coremask" for scheduling: a core running the scheduling could *also* perform other work, i.e. a single core could perform all of RX, Sched, Worker and TX.

Because a service core must "take" the whole core, there is no longer an option for a core to split its time between schedule() and RX/TX/Worker work. This is a service-cores implementation limitation - however, it should be resolved for this sample app too.

The solution is to enable an ordinary DPDK (non-service-core) thread to run a service.
This MUST be enabled at the service-cores library level (to keep the atomics behavior of services, etc.), and hence removing rte_event_schedule() is still required.

The changes should become simpler than proposed here: instead of the wait_schedule() hack, we can just run an iteration of the SW PMD using the newly added service-core iteration function.

I have (just) sent a patch for service cores to enable running a service on an ordinary DPDK lcore, see here: http://dpdk.org/ml/archives/dev/2017-October/080022.html

Hope you can rework patches 4/7 and 5/7 to use the newly provided functionality! Let me know if the intended usage of the new function is unclear in any way.

Regards, -Harry

> +
>  	if (rte_event_dev_start(dev_id) < 0) {
>  		printf("Error starting eventdev\n");
>  		return -1;
> @@ -944,8 +964,7 @@ main(int argc, char **argv)
>
>  		if (!fdata->rx_core[lcore_id] &&
>  				!fdata->worker_core[lcore_id] &&
> -				!fdata->tx_core[lcore_id] &&
> -				!fdata->sched_core[lcore_id])
> +				!fdata->tx_core[lcore_id])
>  			continue;
>
>  		if (fdata->rx_core[lcore_id])
> @@ -958,10 +977,6 @@ main(int argc, char **argv)
>  			"[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
>  				__func__, lcore_id, cons_data.port_id);
>
> -		if (fdata->sched_core[lcore_id])
> -			printf("[%s()] lcore %d executing scheduler\n",
> -					__func__, lcore_id);
> -
>  		if (fdata->worker_core[lcore_id])
>  			printf(
>  				"[%s()] lcore %d executing worker, using eventdev port %u\n",
> --
> 2.7.4