From: "Richardson, Bruce"
To: Jerin Jacob, "Van Haaren, Harry"
CC: "dev@dpdk.org", "Eads, Gage"
Date: Tue, 22 Nov 2016 14:04:27 +0000
Message-ID: <59AF69C657FD0841A61C55336867B5B035B4D827@IRSMSX103.ger.corp.intel.com>
References: <1479319207-130646-1-git-send-email-harry.van.haaren@intel.com> <1479319207-130646-8-git-send-email-harry.van.haaren@intel.com> <20161122060208.GA27648@svelivela-lt.caveonetworks.com>
In-Reply-To: <20161122060208.GA27648@svelivela-lt.caveonetworks.com>
Subject: Re: [dpdk-dev] [PATCH 7/7] examples/eventdev_pipeline: adding example

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, November 22, 2016 6:02 AM
> To: Van Haaren, Harry
> Cc: dev@dpdk.org; Eads, Gage; Richardson, Bruce
> Subject: Re: [dpdk-dev] [PATCH 7/7] examples/eventdev_pipeline: adding
> example
>
> On Wed, Nov 16, 2016 at 06:00:07PM +0000, Harry van Haaren wrote:
> > This patch adds a sample app to the examples/ directory, which can be
> > used as a reference application and for general testing. The
> > application requires two ethdev ports and expects traffic to be
> > flowing. The application must be run with the --vdev flags as follows
> > to indicate to EAL that a virtual eventdev device called "evdev_sw0" is
> > available to be used:
> >
> >   ./build/eventdev_pipeline --vdev evdev_sw0
> >
> > The general flow of the traffic is as follows:
> >
> >   Rx core -> Atomic Queue => 4 worker cores => TX core
> >
> > A scheduler core is required to do the packet scheduling, making this
> > configuration require 7 cores (Rx, Tx, Scheduler, and 4 workers).
> > Finally a master core brings the core count to 8 for this
> > configuration. The
>
> Thanks for the example application. I will try to share my views on the
> ethdev integration and usability perspective. Hope we can converge.

Hi Jerin,

Thanks for the feedback. We'll take it on board for a subsequent version we
produce. Additional comments and queries on your feedback inline below.

/Bruce

>
> Some of the high level details first before getting into exact details.
>
> 1) From the HW and ethdev integration perspective, the integrated NIC
> controllers do not need producer core(s) to push the events/packets to the
> event queue. So, I was thinking to use the 6WIND rte_flow spec to create
> the "ethdev port to event queue wiring" connection by extending the output
> ACTION definition, which specifies the event queue it needs to be enqueued
> to for the given ethdev port (something you are doing in the application).
>
> I guess the producer part of this example can be created as common code,
> somewhere in rte_flow/ethdev, to reuse. We would also need this scheme
> when we deal with the external NICs + HW event manager use case.
>

Yes. This is something to consider.

For the pure-software model, we also might want to look at the opposite
approach, where we register an ethdev with the scheduler for it to "pull"
new packets from. This would allow it to bypass the port logic for the new
packets.

An alternative for this is to extend the schedule API to allow a burst of
packets to be passed in to be scheduled immediately as "NEW" packets. The
end results should be the same, saving cycles by bypassing unneeded
processing for the new packets.

> The complete event driven model can be verified and exercised without
> integrating with the ethdev subsystem. So I think maybe we need to focus
> on functional applications without ethdev to verify the eventdev features
> (automatic multicore scaling, dynamic load balancing, pipelining,
> packet ingress order maintenance and synchronization services) and then
> integrate with ethdev.

Yes, comprehensive unit tests will be needed too. But I also think an
example app that pulls packets from an external NIC is needed to get a feel
for the performance of the scheduler with real traffic.
>
> > +	const unsigned cores_needed = num_workers +
> > +			/*main*/1 +
> > +			/*sched*/1 +
> > +			/*TX*/1 +
> > +			/*RX*/1;
> > +
>
> 2) One of the prime aims of the event driven model is to remove the fixed
> function core mappings and enable automatic multicore scaling, dynamic
> load balancing, etc. I will try to use an example in the review section to
> show the method for removing the "consumer core" in this case.

Yes, I agree, but unfortunately, for some tasks, distributing those tasks
across multiple cores can hurt performance overall due to resource
contention.

>
> > application can be configured for various numbers of flows and worker
> > cores. Run the application with -h for details.
> >
> > Signed-off-by: Gage Eads
> > Signed-off-by: Bruce Richardson
> > Signed-off-by: Harry van Haaren
> > ---
> >  examples/eventdev_pipeline/Makefile |  49 +++
> >  examples/eventdev_pipeline/main.c   | 718 ++++++++++++++++++++++++++++++++++++
> >  2 files changed, 767 insertions(+)
> >  create mode 100644 examples/eventdev_pipeline/Makefile
> >  create mode 100644 examples/eventdev_pipeline/main.c
> >
> > +static int sched_type = RTE_SCHED_TYPE_ATOMIC;
>
> RTE_SCHED_TYPE_ORDERED makes sense as a default. Most common cases will
> have ORDERED at the first stage so that it can scale.
>
> > +
> > +
> > +static int
> > +worker(void *arg)
> > +{
> > +	struct rte_event rcv_events[BATCH_SIZE];
> > +
> > +	struct worker_data *data = (struct worker_data *)arg;
> > +	uint8_t event_dev_id = data->event_dev_id;
> > +	uint8_t event_port_id = data->event_port_id;
> > +	int32_t qid = data->qid;
> > +	size_t sent = 0, received = 0;
> > +
> > +	while (!done) {
> > +		uint16_t i;
> > +
> > +		uint16_t n = rte_event_dequeue_burst(event_dev_id,
> > +						event_port_id,
> > +						rcv_events,
> > +						RTE_DIM(rcv_events),
> > +						false);
> > +		if (n == 0) {
> > +			rte_pause();
> > +			/* Flush any buffered events */
> > +			rte_event_dequeue(event_dev_id,
> > +					event_port_id,
> > +					NULL,
> > +					false);
>
> The above can be done in the implementation. May not be a candidate for
> common code.
>
> > +			continue;
> > +		}
> > +		received += n;
> > +
> > +		for (i = 0; i < n; i++) {
> > +			struct ether_hdr *eth;
> > +			struct ether_addr addr;
> > +			struct rte_event *ev = &rcv_events[i];
> > +
> > +			ev->queue_id = qid;
> > +			ev->flow_id = 0;
>
> Another way to deal without an additional consumer core (it creates issues
> in scaling and load balancing) is:
>
> in worker:
> while (1) {
>
> 	ev = dequeue(port);
>
> 	// stage 1 app processing
> 	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> 		// identify the Ethernet port and tx queue the packet
> 		// needs to go to, and create the flow based on that
> 		ev.flow_id = flow(port_id, tx_queue_id);
> 		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
> 		ev.operation = RTE_EVENT_OP_FORWARD;
> 		ev.event_type = RTE_EVENT_TYPE_CORE;
> 	} // stage 2 app processing
> 	else if (ev.event_type == RTE_EVENT_TYPE_CORE) {
> 		port_id = function_of(ev.flow_id); // see stage 1 processing
> 		tx_queue_id = function_of(ev.flow_id); // see stage 1 processing
> 		// remaining ethdev based tx is same as yours
> 	}
> 	enqueue(ev);
> }
>

Yes, but you still need some core to do the work of pushing the packets into
the scheduler from the NIC, if you
don't have a hardware path from NIC to HW scheduler. [Features like RSS can
obviously help here with distributing that work if needed.]

In the case where you do have a HW path - which I assume is the Cavium case
- I assume that the EVENT_TYPE_ETHDEV branch above also needs to take care
of descriptor-to-mbuf processing, as is normally done by the PMD?

>
>
> > +			ev->priority = 0;
> > +			ev->sched_type = RTE_SCHED_TYPE_ATOMIC;
> > +			ev->operation = RTE_EVENT_OP_FORWARD;
> > +
> > +			uint64_t now = rte_rdtsc();
> > +			while (now + 750 > rte_rdtsc()) {}
>
> Why delay?

Simulate some work being done by the worker, which makes the app slightly
more realistic, and also helps the scheduler as there is not so much
contention on the shared cache lines.

>
> > +
> > +			/* change mac addresses on packet */
> > +			eth = rte_pktmbuf_mtod(ev->mbuf, struct ether_hdr *);
> > +			ether_addr_copy(&eth->d_addr, &addr);
> > +			ether_addr_copy(&eth->s_addr, &eth->d_addr);
> > +			ether_addr_copy(&addr, &eth->s_addr);
> > +		}
> > +		int ret = rte_event_enqueue_burst(event_dev_id, event_port_id,
> > +					rcv_events, n, false);
> > +		if (ret != n)
> > +			rte_panic("worker %u thread failed to enqueue event\n",
> > +				rte_lcore_id());
> > +	}
> > +
> > +	/* Flush the buffered events */
> > +	rte_event_dequeue(event_dev_id, event_port_id, NULL, false);
> > +
> > +	if (!quiet)
> > +		printf("  worker %u thread done. RX=%zu TX=%zu\n",
> > +			rte_lcore_id(), received, sent);
> > +
> > +	return 0;
> > +}
> > +
> > +static int
> > +scheduler(void *arg)
> > +{
>
> Maybe better to abstract this as a "service core" or something like I
> mentioned earlier, as a HW implementation does not need it.

Sure, we can look at this.

>
> > +	RTE_SET_USED(arg);
> > +	size_t loops = 0;
> > +
> > +	while (!done) {
> > +		/* Assumes an event dev ID of 0 */
> > +		rte_event_schedule(0);
> > +		loops++;
> > +	}
> > +
> > +	printf("  scheduler thread done.
> > +		loops=%zu\n", loops);
> > +
> > +	return 0;
> > +}
> > +
> > +
> > +static int
> > +producer(void *arg)
> > +{
> > +
> > +	struct prod_data *data = (struct prod_data *)arg;
> > +	size_t npackets = num_packets;
> > +	unsigned i;
> > +	uint64_t mbuf_seqno = 0;
> > +	size_t sent = 0;
> > +	uint8_t eth_port = 0;
> > +	uint8_t event_dev_id = data->event_dev_id;
> > +	uint8_t event_port_id = data->event_port_id;
> > +	int fid_counter = 0;
> > +
> > +	while (!done) {
> > +		int ret;
> > +		unsigned num_ports = data->num_ports;
> > +		int32_t qid = data->qid;
> > +		struct rte_event events[BATCH_SIZE];
> > +		struct rte_mbuf *mbufs[BATCH_SIZE];
> > +
> > +		uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
> > +		if (++eth_port == num_ports)
> > +			eth_port = 0;
> > +		if (nb_rx == 0) {
> > +			rte_pause();
> > +			/* Flush any buffered events */
> > +			rte_event_dequeue(event_dev_id,
> > +					event_port_id,
> > +					NULL,
> > +					false);
> > +			continue;
> > +		}
> > +
> > +		for (i = 0; i < nb_rx; i++) {
> > +			struct rte_mbuf *m = mbufs[i];
> > +			struct rte_event *ev = &events[i];
> > +
> > +			ev->queue_id = qid;
> > +			ev->flow_id = fid_counter++ % 6;
>
> To me, flow_id should be a function of port_id and rx queue number here,
> right?

I'd view it as app dependent. For a test app on IA, I'd expect to use the
NIC RSS value as an initial flow value.

NOTE: this is just a quick test app to demonstrate the concept for the RFC,
so not everything in it is necessarily realistic or what we'd expect in a
final version of the app.

>
> > +			ev->priority = 0;
> > +			m->udata64 = mbuf_seqno++;
>
> Why update mbuf_seqno++ here? Shouldn't it be something inside the
> implementation?

I think this was to help with verifying the packet ordering. Again, it may
not be in a final version.
>
> > +			ev->mbuf = m;
> > +			ev->sched_type = sched_type;
> > +			ev->operation = RTE_EVENT_OP_NEW;
> > +		}
> > +
> > +		do {
> > +			ret = rte_event_enqueue_burst(event_dev_id,
> > +						event_port_id,
> > +						events,
> > +						nb_rx,
> > +						false);
> > +		} while (ret == -ENOSPC);
>
> I guess -ENOSPC can be checked inside the implementation. I guess we can
> pass the info required in the configuration stage to decide the timeout.
> May not be a candidate for common code.
>
> > +		if (ret != nb_rx)
> > +			rte_panic("producer thread failed to enqueue *all* events\n");
> > +
> > +		sent += nb_rx;
> > +
> > +		if (num_packets > 0 && npackets > 0) {
> > +			npackets -= nb_rx;
> > +			if (npackets == 0)
> > +				break;
> > +		}
> > +	}
> > +
> > +	/* Flush any buffered events */
> > +	while (!done)
> > +		rte_event_dequeue(event_dev_id, event_port_id, NULL, false);
> > +
> > +	printf("  prod thread done! TX=%zu across %u flows\n", sent, num_fids);
> > +
> > +	return 0;
> > +}
> > +
>
> > +static uint8_t
> > +setup_event_dev(struct prod_data *prod_data,
> > +		struct cons_data *cons_data,
> > +		struct worker_data *worker_data)
> > +{
> > +	config.nb_events_limit = 256;
>
> In a real application, we may need to pass this on the command line.
>
> > +	config.dequeue_wait_ns = 0;
> > +
> > +	ret = rte_event_dev_configure(id, &config);
> > +	if (ret)
> > +		rte_panic("Failed to configure the event dev\n");
> > +
> > +	/* Create queues */
> > +	queue_config.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> > +	queue_config.priority = 0;
> > +
> > +	qid0 = 0;
> > +	ret = rte_event_queue_setup(id, qid0, &queue_config);
> > +	if (ret < 0)
> > +		rte_panic("Failed to create the scheduled QID\n");
> > +
> > +	queue_config.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_CONSUMER;
> > +	queue_config.priority = 0;
> > +
> > +	cons_qid = 1;
> > +	ret = rte_event_queue_setup(id, cons_qid, &queue_config);
> > +	if (ret < 0)
> > +		rte_panic("Failed to create
> > +			  the cons directed QID\n");
> > +
> > +	queue_config.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_CONSUMER;
>
> I guess it's more of an RTE_EVENT_QUEUE_CFG_SINGLE_PRODUCER case here.
> Does it make sense to add RTE_EVENT_QUEUE_CFG_SINGLE_PRODUCER to the spec,
> if you are enqueueing only through that port? See next comment.
>
> > +	queue_config.priority = 0;
> > +
> > +	prod_qid = 2;
> > +	ret = rte_event_queue_setup(id, prod_qid, &queue_config);
> > +	if (ret < 0)
> > +		rte_panic("Failed to create the prod directed QID\n");
> > +
>
> Looks like prod_qid is just created as a dummy; the actual producer is
> enqueuing on qid0. Something is not adding up.

Possibly not. We'll check it out.

>
> > +	/* Create ports */
> > +#define LB_PORT_DEPTH 16
> > +#define DIR_PORT_DEPTH 32
> > +	port_config.enqueue_queue_depth = LB_PORT_DEPTH;
> > +	port_config.dequeue_queue_depth = LB_PORT_DEPTH;
>
> We need to check the info->max_enqueue_queue_depth.
>
> Jerin