* [dpdk-dev] [PATCH 0/3] next-eventdev: RFC eventdev pipeline sample app
@ 2017-04-21 9:51 Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added " Harry van Haaren
` (2 more replies)
0 siblings, 3 replies; 64+ messages in thread
From: Harry van Haaren @ 2017-04-21 9:51 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, Harry van Haaren
This patchset introduces a sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters such as the number of stages, the
number of worker cores, which cores are assigned to specific tasks, and
the work cycles per stage in the pipeline can be configured.
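The core-assignment parameters take hexadecimal core masks (parsed by
parse_coremask() in the patch below). A simplified, standalone sketch of
that parsing, with the blank-stripping and some validation trimmed for
brevity:

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>
#include <string.h>

#define MAX_NUM_CORE 64

/* Simplified sketch of the hex core-mask parsing used for the
 * -r/-t/-e/-w options: each hex digit contributes four mask bits,
 * starting from the rightmost (least significant) digit.
 * Returns 0 on an empty or invalid mask. */
static uint64_t
parse_coremask_sketch(const char *s)
{
	uint64_t mask = 0;
	int idx = 0;

	if (s == NULL || *s == '\0')
		return 0;
	/* Optional 0x/0X prefix, as the app accepts. */
	if (s[0] == '0' && (s[1] == 'x' || s[1] == 'X'))
		s += 2;

	for (int i = (int)strlen(s) - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
		if (!isxdigit((unsigned char)s[i]))
			return 0; /* invalid character */
		int val = isdigit((unsigned char)s[i]) ? s[i] - '0'
			: tolower((unsigned char)s[i]) - 'a' + 10;
		for (int j = 0; j < 4 && idx < MAX_NUM_CORE; j++, idx++)
			if (val & (1 << j))
				mask |= (1ULL << idx);
	}
	return mask;
}
```

For example, `-w 0xF0` would assign lcores 4-7 as workers.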
Documentation for eventdev is added to the programmer's guide and the
sample app user guide, providing sample commands to run the app and the
expected output.
The sample app is presented here as an RFC to the next-eventdev tree
to work towards having generic sample applications for eventdev PMDs.
Cheers, -Harry
Harry van Haaren (3):
examples/eventdev_pipeline: added sample app
doc: add eventdev pipeline to sample app ug
doc: add eventdev library to programmers guide
doc/guides/prog_guide/eventdev.rst | 365 +++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
doc/guides/sample_app_ug/eventdev_pipeline.rst | 188 +++++
doc/guides/sample_app_ug/index.rst | 1 +
examples/eventdev_pipeline/Makefile | 49 ++
examples/eventdev_pipeline/main.c | 975 ++++++++++++++++++++++++
7 files changed, 2573 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline.rst
create mode 100644 examples/eventdev_pipeline/Makefile
create mode 100644 examples/eventdev_pipeline/main.c
--
2.7.4
* [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-04-21 9:51 [dpdk-dev] [PATCH 0/3] next-eventdev: RFC eventdev pipeline sample app Harry van Haaren
@ 2017-04-21 9:51 ` Harry van Haaren
2017-05-10 14:12 ` Jerin Jacob
` (2 more replies)
2017-04-21 9:51 ` [dpdk-dev] [PATCH 2/3] doc: add eventdev pipeline to sample app ug Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 3/3] doc: add eventdev library to programmers guide Harry van Haaren
2 siblings, 3 replies; 64+ messages in thread
From: Harry van Haaren @ 2017-04-21 9:51 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, Harry van Haaren, Gage Eads, Bruce Richardson
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
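The per-packet "processing" each worker stage performs in this app is a
source/destination MAC swap, which forces a read-modify-write of the
packet data. A standalone sketch of that swap, using stand-in types in
place of DPDK's struct ether_hdr and struct ether_addr:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for DPDK's struct ether_addr (6-byte MAC address). */
struct mac { uint8_t b[6]; };

/* Stand-in for the start of struct ether_hdr. */
struct eth_hdr { struct mac dst, src; uint16_t ethertype; };

/* Swap source and destination MAC addresses in place, as each
 * worker stage does on every packet it dequeues. */
static void
mac_swap(struct eth_hdr *eth)
{
	struct mac tmp = eth->dst;	/* save original destination */
	eth->dst = eth->src;
	eth->src = tmp;
}
```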
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
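When a role such as RX, TX, or scheduling is shared by several cores,
the app guards it with a try-lock so only one core runs it at a time;
a role owned by exactly one core skips the lock entirely. A standalone
sketch of that scheme from schedule_devices() in main.c, using C11
atomics in place of DPDK's rte_atomic32_cmpset():

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* One lock per shared role (RX shown here). */
static atomic_uint rx_lock;

static int runs;			/* how many times the role body ran */
static void dummy_role(void) { runs++; }

/* Run role_fn if this core owns the role outright (role_single) or
 * wins the compare-and-set on the role's lock; otherwise skip, since
 * another core is already servicing the role. */
static bool
try_run_role(bool role_single, atomic_uint *lock, void (*role_fn)(void))
{
	unsigned expected = 0;

	if (!role_single &&
	    !atomic_compare_exchange_strong(lock, &expected, 1))
		return false;	/* another core holds the role lock */
	role_fn();
	if (!role_single)
		atomic_store(lock, 0);
	return true;
}
```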
Various flags are available to configure numbers of stages,
cycles of work at each stage, type of scheduling, number of
worker cores, queue depths, etc. For a full explanation,
please refer to the documentation.
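One of those flags, --queue-priority, steps the priority of each
stage's queue so queues closer to TX are scheduled first (in the
eventdev API, a lower numeric priority value means higher priority). A
standalone sketch of the stepping calculation from setup_eventdev():

```c
#include <assert.h>
#include <stdint.h>

#define PRIO_LOWEST 255		/* RTE_EVENT_DEV_PRIORITY_LOWEST in DPDK */

/* Priority for a given stage's queue: step down (numerically) from
 * LOWEST as stages approach TX, leaving headroom of 1 for the
 * SINGLE_LINK TX queue. nb_queues is num_stages + 1 (the +1 being
 * the TX queue). */
static uint8_t
stage_priority(unsigned stage, unsigned nb_queues)
{
	const uint32_t prio_delta = (PRIO_LOWEST - 1) / nb_queues;

	return (uint8_t)(PRIO_LOWEST - prio_delta * stage);
}
```

With 3 stages (nb_queues = 4), the step is 63, so stages 0, 1, 2 get
priorities 255, 192, 129: the last worker stage is serviced first.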
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
examples/eventdev_pipeline/Makefile | 49 ++
examples/eventdev_pipeline/main.c | 975 ++++++++++++++++++++++++++++++++++++
2 files changed, 1024 insertions(+)
create mode 100644 examples/eventdev_pipeline/Makefile
create mode 100644 examples/eventdev_pipeline/main.c
diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
new file mode 100644
index 0000000..bab8916
--- /dev/null
+++ b/examples/eventdev_pipeline/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline/main.c b/examples/eventdev_pipeline/main.c
new file mode 100644
index 0000000..618e078
--- /dev/null
+++ b/examples/eventdev_pipeline/main.c
@@ -0,0 +1,975 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+static unsigned int active_cores;
+static unsigned int num_workers;
+static unsigned long num_packets = (1L << 25); /* do ~32M packets */
+static unsigned int num_fids = 512;
+static unsigned int num_priorities = 1;
+static unsigned int num_stages = 1;
+static unsigned int worker_cq_depth = 16;
+static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
+static int16_t qid[MAX_NUM_STAGES] = {-1};
+static int worker_cycles;
+static int enable_queue_priorities;
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned num_nic_ports;
+};
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+};
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+};
+
+static unsigned *enqueue_cnt;
+static unsigned *dequeue_cnt;
+
+static volatile int done;
+static volatile int prod_stop;
+static int quiet;
+static int dump_dev;
+static int dump_dev_signal;
+
+static uint32_t rx_lock;
+static uint32_t tx_lock;
+static uint32_t sched_lock;
+static bool rx_single;
+static bool tx_single;
+static bool sched_single;
+
+static unsigned rx_core[MAX_NUM_CORE];
+static unsigned tx_core[MAX_NUM_CORE];
+static unsigned sched_core[MAX_NUM_CORE];
+static unsigned worker_core[MAX_NUM_CORE];
+
+static bool
+core_in_use(unsigned lcore_id) {
+ return (rx_core[lcore_id] || sched_core[lcore_id] ||
+ tx_core[lcore_id] || worker_core[lcore_id]);
+}
+
+static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+
+static void
+rte_eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t npackets;
+ static uint64_t received;
+ static uint64_t received_printed;
+ static uint64_t time_printed;
+ static uint64_t start_time;
+ unsigned i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ if (!npackets)
+ npackets = num_packets;
+
+ do {
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ time_printed = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ if (!quiet && received >= received_printed + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t delta_cycles = now - start_time;
+ const uint64_t elapsed_ms = delta_cycles / freq_khz;
+ const uint64_t interval_ms =
+ (now - time_printed) / freq_khz;
+
+ uint64_t rx_noprint = received - received_printed;
+ printf("# consumer RX=%"PRIu64", time %"PRIu64
+ "ms, avg %.3f mpps [current %.3f mpps]\n",
+ received, elapsed_ms,
+ (received) / (elapsed_ms * 1000.0),
+ rx_noprint / (interval_ms * 1000.0));
+ received_printed = received;
+ time_printed = now;
+ }
+
+ dequeue_cnt[0] += n;
+
+ if (num_packets > 0 && npackets > 0) {
+ npackets -= n;
+ if (npackets == 0 || npackets > num_packets)
+ done = 1;
+ }
+ } while (0);
+
+ return 0;
+}
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE];
+ struct rte_event ev[BATCH_SIZE];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_CPU;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+ enqueue_cnt[0] += nb_tx;
+
+ if (unlikely(prod_stop))
+ done = 1;
+
+ return 0;
+}
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned lcore_id)
+{
+ if (rx_core[lcore_id] && (rx_single ||
+ rte_atomic32_cmpset(&rx_lock, 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
+ }
+
+ if (sched_core[lcore_id] && (sched_single ||
+ rte_atomic32_cmpset(&sched_lock, 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
+ }
+
+ if (tx_core[lcore_id] && (tx_single ||
+ rte_atomic32_cmpset(&tx_lock, 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
+ }
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned lcore_id = rte_lcore_id();
+
+ while (!done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+ struct rte_mbuf *m = events[i].mbuf;
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == qid[0])
+ events[i].flow_id = m->hash.rss % num_fids;
+
+ events[i].queue_id = next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(&eth->d_addr, &addr);
+ ether_addr_copy(&eth->s_addr, &eth->d_addr);
+ ether_addr_copy(&addr, &eth->s_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + worker_cycles)
+ rte_pause();
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove all blank characters before and after the mask.
+ * Remove the 0x/0X prefix if it exists.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_demo [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 512)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
+ " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W --work-cycles=N Worker cycles (default 0)\n"
+ " -P --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse cli options*/
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ enable_queue_priorities = 1;
+ break;
+ case 'o':
+ queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ quiet = 1;
+ break;
+ case 'D':
+ dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("A part of the pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ rx_core[i] = !!(rx_lcore_mask & (1UL << i));
+ tx_core[i] = !!(tx_lcore_mask & (1UL << i));
+ sched_core[i] = !!(sched_lcore_mask & (1UL << i));
+ worker_core[i] = !!(worker_lcore_mask & (1UL << i));
+
+ if (worker_core[i])
+ num_workers++;
+ if (core_in_use(i))
+ active_cores++;
+ }
+}
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned num_ports)
+{
+ uint8_t portid;
+ unsigned i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ tx_buf[i] = rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(tx_buf[i],
+ rte_eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* +1 stages is for a SINGLE_LINK TX stage */
+ const uint8_t nb_queues = num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = num_workers + 2;
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ const struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ const struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0)
+ printf("%d: Error configuring device\n", __LINE__);
+
+ /* Q creation - one load balanced per pipeline stage*/
+ printf(" Stages:\n");
+ for (i = 0; i < num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ qid[i] = i;
+ next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ const struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ enqueue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
+ sizeof(enqueue_cnt[0]), 0);
+ dequeue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
+ sizeof(dequeue_cnt[0]), 0);
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
+
+static void
+signal_handler(int signum)
+{
+ if (done || prod_stop)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ /* Parse cli options*/
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned cores_needed = active_cores;
+
+ if (!quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", num_workers);
+ printf("\tpackets: %lu\n", num_packets);
+ printf("\tpriorities: %u\n", num_priorities);
+ printf("\tQueue-prio: %u\n", enable_queue_priorities);
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No eventdev devices found. Pass in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0\n");
+
+ worker_data = rte_calloc(0, num_workers, sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!rx_core[lcore_id] && !worker_core[lcore_id] &&
+ !tx_core[lcore_id] && !sched_core[lcore_id])
+ continue;
+
+ if (rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err) {
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ continue;
+ }
+ if (worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!quiet) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < num_workers; i++) {
+ char statname[64];
+ snprintf(statname, sizeof(statname), "port_%u_rx",
+ worker_data[i].port_id);
+ pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
+ dev_id, statname, NULL);
+ tot_pkts += pkts_per_wkr[i];
+ }
+ for (i = 0; i < num_workers; i++) {
+ float pc = pkts_per_wkr[i] * 100 /
+ ((float)tot_pkts);
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
* [dpdk-dev] [PATCH 2/3] doc: add eventdev pipeline to sample app ug
2017-04-21 9:51 [dpdk-dev] [PATCH 0/3] next-eventdev: RFC evendev pipeline sample app Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added " Harry van Haaren
@ 2017-04-21 9:51 ` Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 3/3] doc: add eventdev library to programmers guide Harry van Haaren
2 siblings, 0 replies; 64+ messages in thread
From: Harry van Haaren @ 2017-04-21 9:51 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, Harry van Haaren
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
doc/guides/sample_app_ug/eventdev_pipeline.rst | 188 +++++++++++++++++++++++++
doc/guides/sample_app_ug/index.rst | 1 +
2 files changed, 189 insertions(+)
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline.rst
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
new file mode 100644
index 0000000..31d1006
--- /dev/null
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -0,0 +1,188 @@
+
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Eventdev Pipeline Sample Application
+====================================
+
+The eventdev pipeline sample application demonstrates the usage of the
+eventdev API. It shows how an application can configure a pipeline and
+assign a set of worker cores to perform the processing required.
+
+The application has a range of command line arguments allowing it to be
+configured for various numbers of worker cores, stages, queue depths and
+cycles of work per stage. This is useful for performance testing, as well as
+for quickly testing a particular pipeline configuration.
+
+
+Compiling the Application
+-------------------------
+
+To compile the application:
+
+#. Go to the sample application directory:
+
+ .. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}/examples/eventdev_pipeline
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application has a number of command line options, allowing specification
+of the eventdev PMD to use, and of various attributes of the processing
+pipeline.
+
+An example eventdev pipeline running with the software eventdev PMD using
+these settings is shown below:
+
+ * ``-r1``: core mask 0x1 for RX
+ * ``-t1``: core mask 0x1 for TX
+ * ``-e4``: core mask 0x4 for the software scheduler
+ * ``-w FF00``: core mask for worker cores, 8 cores from core 8 to core 15
+ * ``-s4``: 4 atomic stages
+ * ``-n0``: process infinite packets (run forever)
+ * ``-c32``: worker dequeue depth of 32
+ * ``-W1000``: do 1000 cycles of work per packet in each stage
+ * ``-D``: dump statistics on exit
+
+.. code-block:: console
+
+ ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+
+The application has some sanity checking built-in, so if there is a function
+(e.g. the RX core) which doesn't have a CPU core mask assigned, the
+application will print an error message:
+
+.. code-block:: console
+
+ Core part of pipeline was not assigned any cores. This will stall the
+ pipeline, please check core masks (use -h for details on setting core masks):
+ rx: 0
+ tx: 1
+
+Configuration of the eventdev is covered in detail in the programmers guide,
+see the Event Device Library section.
+
+
+Observing the Application
+-------------------------
+
+At runtime the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit the
+worker statistics are printed, along with a full dump of the PMD statistics if
+required. The following sections show sample output for each of the output
+types.
+
+Configuration
+~~~~~~~~~~~~~
+
+This provides an overview of the pipeline, the scheduling type at each stage,
+and parameters such as how many flows are in use and which eventdev PMD is
+in use. See the following sample output for details:
+
+.. code-block:: console
+
+ Config:
+ ports: 2
+ workers: 8
+ packets: 0
+ priorities: 1
+ Queue-prio: 0
+ qid0 type: atomic
+ Cores available: 44
+ Cores used: 10
+ Eventdev 0: event_sw
+ Stages:
+ Stage 0, Type Atomic Priority = 128
+ Stage 1, Type Atomic Priority = 128
+ Stage 2, Type Atomic Priority = 128
+ Stage 3, Type Atomic Priority = 128
+
+Runtime
+~~~~~~~
+
+At runtime, the statistics of the consumer are printed, stating the number of
+packets received, runtime in milliseconds, average mpps, and current mpps.
+
+.. code-block:: console
+
+ # consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
+
+Shutdown
+~~~~~~~~
+
+At shutdown, the application prints the number of packets received and
+transmitted, and an overview of the distribution of work across worker cores.
+
+.. code-block:: console
+
+ Signal 2 received, preparing to exit...
+ worker 12 thread done. RX=4966581 TX=4966581
+ worker 13 thread done. RX=4963329 TX=4963329
+ worker 14 thread done. RX=4953614 TX=4953614
+ worker 0 thread done. RX=0 TX=0
+ worker 11 thread done. RX=4970549 TX=4970549
+ worker 10 thread done. RX=4986391 TX=4986391
+ worker 9 thread done. RX=4970528 TX=4970528
+ worker 15 thread done. RX=4974087 TX=4974087
+ worker 8 thread done. RX=4979908 TX=4979908
+ worker 2 thread done. RX=0 TX=0
+
+ Port Workload distribution:
+ worker 0 : 12.5 % (4979876 pkts)
+ worker 1 : 12.5 % (4970497 pkts)
+ worker 2 : 12.5 % (4986359 pkts)
+ worker 3 : 12.5 % (4970517 pkts)
+ worker 4 : 12.5 % (4966566 pkts)
+ worker 5 : 12.5 % (4963297 pkts)
+ worker 6 : 12.5 % (4953598 pkts)
+ worker 7 : 12.5 % (4974055 pkts)
+
+To get a full dump of the state of the eventdev PMD, pass the ``-D`` flag to
+this application. When the app is terminated using ``Ctrl+C``, the
+``rte_event_dev_dump()`` function is called, resulting in a dump of the
+statistics that the PMD provides. The statistics provided depend on the PMD
+used, see the Event Device Drivers section for a list of eventdev PMDs.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..11f5781 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline
dist_app
vm_power_management
tep_termination
--
2.7.4
* [dpdk-dev] [PATCH 3/3] doc: add eventdev library to programmers guide
2017-04-21 9:51 [dpdk-dev] [PATCH 0/3] next-eventdev: RFC evendev pipeline sample app Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added " Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 2/3] doc: add eventdev pipeline to sample app ug Harry van Haaren
@ 2017-04-21 9:51 ` Harry van Haaren
2017-04-21 11:14 ` Bruce Richardson
2 siblings, 1 reply; 64+ messages in thread
From: Harry van Haaren @ 2017-04-21 9:51 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, Harry van Haaren
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walkthrough of a simple two-stage atomic pipeline
provides the reader with a step-by-step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
doc/guides/prog_guide/eventdev.rst | 365 ++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1360 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..4f6088e
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,365 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture,
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader. The eventdev framework is provided as a DPDK library, allowing
+applications to use it if they wish, without requiring its use.
+
+The goal of this library is to enable applications to build processing
+pipelines where the load balancing and scheduling is handled by the eventdev.
+A step-by-step walk through of the eventdev design is available in the `API
+Walkthrough`_ section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct, which contains a
+payload and metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16 byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, e.g. RTE_EVENT_TYPE_ETHDEV or
+  RTE_EVENT_TYPE_CPU.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+  the same event_type (see above).
+* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
+  eventdev about the status of the event - valid values are NEW, FORWARD and
+  RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+  on this event - valid values are RTE_SCHED_TYPE_ORDERED, ATOMIC and
+  PARALLEL.
+* ``queue_id`` - The identifier of the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see RTE_EVENT_DEV_PRIORITY.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for payload, allowing flexibility in what
+the actual event being scheduled is. The payload is a union of the following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three items in a union occupy the same 64 bits at the end of the rte_event
+structure. The application can utilize the 64 bits directly by accessing the
+u64 variable, while the event_ptr and mbuf are provided as convenience
+variables. For example, the mbuf pointer in the union can be used to schedule
+a DPDK packet.
+
+Queues
+~~~~~~
+
+A queue is a logical "stage" of a packet processing graph, where each stage
+has a specified scheduling type. The application configures each queue for a
+specific type of scheduling, and just enqueues all events to the eventdev.
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must ensure that on egress the packets are returned to
+ingress order. Parallel allows sending all flows to all CPU cores, without any
+re-ordering guarantees.
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can only be
+linked to a single port (see below for port and queue linking details).
+
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walkthrough
+---------------
+
+This section will introduce the reader to the eventdev API, showing how to
+create and configure an eventdev and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walkthrough:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high level overview of the setup steps is:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues
+and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link,
+while the 6 ports consist of 4 workers, 1 RX and 1 TX.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walkthrough assumes that dev_id is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walkthrough assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
+
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once queues are set up successfully, create the ports as required. Each port
+should be set up with its corresponding port_conf type, worker for worker cores,
+rx and tx for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ int err = rte_event_port_setup(dev_id, port_id, &CORE_FUNCTION_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
+
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes
+input from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t port_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for (int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing
+events. Note that all queues must be linked to before the instance is
+started: if any queue is not linked to, enqueuing to that queue will cause
+the application to backpressure and eventually stall, due to there being no
+space in the eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received from ``rte_eth_rx_burst()`` as
+normal.
+The following code shows how those packets can be enqueued into the eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_CPU;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for(i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which
+should indicate to the worker what workload needs to be performed on the
+event. Once done, the worker can update the ``event.queue_id`` to a new
+value, to send the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, events, nb_rx);
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when the packet is ready for egress or needs to be dropped, we need
+to inform the eventdev that the packet is no longer being handled by the
+application. This can be done by calling dequeue() or dequeue_burst(), which
+indicates that the previous burst of packets is no longer in use by the
+application.
+
+.. code-block:: c
+
+ struct rte_event events[BATCH_SIZE];
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 : now tx or use the packets */
+ n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 is now no longer valid to use in the application, as
+ the eventdev has dropped any locks or released re-ordered packets */
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, using either a run-to-completion or a pipeline processing model. The
+queues and ports abstract the logical functionality of an eventdev, providing
+the application with a generic method to schedule events. With the flexible
+PMD infrastructure, applications benefit from improvements to existing
+eventdevs, and from the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH 3/3] doc: add eventdev library to programmers guide
2017-04-21 9:51 ` [dpdk-dev] [PATCH 3/3] doc: add eventdev library to programmers guide Harry van Haaren
@ 2017-04-21 11:14 ` Bruce Richardson
2017-04-21 14:00 ` Jerin Jacob
0 siblings, 1 reply; 64+ messages in thread
From: Bruce Richardson @ 2017-04-21 11:14 UTC (permalink / raw)
To: Harry van Haaren; +Cc: dev, jerin.jacob
On Fri, Apr 21, 2017 at 10:51:39AM +0100, Harry van Haaren wrote:
> This commit adds an entry in the programmers guide
> explaining the eventdev library.
>
> The rte_event struct, queues and ports are explained.
> An API walkthrough of a simple two-stage atomic pipeline
> provides the reader with a step by step overview of the
> expected usage of the Eventdev API.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> ---
> doc/guides/prog_guide/eventdev.rst | 365 ++++++++++
> doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
> doc/guides/prog_guide/index.rst | 1 +
> 3 files changed, 1360 insertions(+)
> create mode 100644 doc/guides/prog_guide/eventdev.rst
> create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
>
Hi Harry,
I think this should be a separate patch, since it is suitable for
inclusion in 17.05, rather than 17.08 for the new sample app.
/Bruce
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH 3/3] doc: add eventdev library to programmers guide
2017-04-21 11:14 ` Bruce Richardson
@ 2017-04-21 14:00 ` Jerin Jacob
0 siblings, 0 replies; 64+ messages in thread
From: Jerin Jacob @ 2017-04-21 14:00 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Harry van Haaren, dev
-----Original Message-----
> Date: Fri, 21 Apr 2017 12:14:41 +0100
> From: Bruce Richardson <bruce.richardson@intel.com>
> To: Harry van Haaren <harry.van.haaren@intel.com>
> CC: dev@dpdk.org, jerin.jacob@caviumnetworks.com
> Subject: Re: [dpdk-dev] [PATCH 3/3] doc: add eventdev library to
> programmers guide
> User-Agent: Mutt/1.8.0 (2017-02-23)
>
> On Fri, Apr 21, 2017 at 10:51:39AM +0100, Harry van Haaren wrote:
> > This commit adds an entry in the programmers guide
> > explaining the eventdev library.
> >
> > The rte_event struct, queues and ports are explained.
> > An API walkthrough of a simple two-stage atomic pipeline
> > provides the reader with a step by step overview of the
> > expected usage of the Eventdev API.
> >
> > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> > ---
> > doc/guides/prog_guide/eventdev.rst | 365 ++++++++++
> > doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
> > doc/guides/prog_guide/index.rst | 1 +
> > 3 files changed, 1360 insertions(+)
> > create mode 100644 doc/guides/prog_guide/eventdev.rst
> > create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
> >
> Hi Harry,
>
> I think this should be a separate patch, since it is suitable for
> inclusion in 17.05, rather than 17.08 for the new sample app.
IMHO, it is better to add the programmer's guide along with the generic eventdev PMD
sample applications. Since the programming guide also talks about ethdev
+ eventdev as a use case, I think we can address this for v17.08, along
with a generic way of handling some of the ethdev + eventdev capabilities.
http://dpdk.org/ml/archives/dev/2017-April/064204.html
http://dpdk.org/ml/archives/dev/2017-April/064526.html
>
> /Bruce
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-04-21 9:51 ` [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added " Harry van Haaren
@ 2017-05-10 14:12 ` Jerin Jacob
2017-05-10 16:40 ` Eads, Gage
` (2 more replies)
2017-05-17 18:03 ` Jerin Jacob
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 0/3] next-eventdev: evendev pipeline " David Hunt
2 siblings, 3 replies; 64+ messages in thread
From: Jerin Jacob @ 2017-05-10 14:12 UTC (permalink / raw)
To: Harry van Haaren; +Cc: dev, Gage Eads, Bruce Richardson
-----Original Message-----
> Date: Fri, 21 Apr 2017 10:51:37 +0100
> From: Harry van Haaren <harry.van.haaren@intel.com>
> To: dev@dpdk.org
> CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
> <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
> Richardson <bruce.richardson@intel.com>
> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> X-Mailer: git-send-email 2.7.4
>
> This commit adds a sample app for the eventdev library.
> The app has been tested with DPDK 17.05-rc2, hence this
> release (or later) is recommended.
>
> The sample app showcases a pipeline processing use-case,
> with event scheduling and processing defined per stage.
> The application receives traffic as normal, with each
> packet traversing the pipeline. Once the packet has
> been processed by each of the pipeline stages, it is
> transmitted again.
>
> The app provides a framework to utilize cores for a single
> role or multiple roles. Examples of roles are the RX core,
> TX core, Scheduling core (in the case of the event/sw PMD),
> and worker cores.
>
> Various flags are available to configure numbers of stages,
> cycles of work at each stage, type of scheduling, number of
> worker cores, queue depths etc. For a full explanation,
> please refer to the documentation.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Thanks for the example application sharing the SW view.
I could make it run on HW after some tweaking (not optimized, though).
[...]
> +#define MAX_NUM_STAGES 8
> +#define BATCH_SIZE 16
> +#define MAX_NUM_CORE 64
How about RTE_MAX_LCORE?
> +
> +static unsigned int active_cores;
> +static unsigned int num_workers;
> +static unsigned long num_packets = (1L << 25); /* do ~32M packets */
> +static unsigned int num_fids = 512;
> +static unsigned int num_priorities = 1;
Looks like it's not used.
> +static unsigned int num_stages = 1;
> +static unsigned int worker_cq_depth = 16;
> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> +static int16_t qid[MAX_NUM_STAGES] = {-1};
Moving all fastpath-related variables into a cache-aligned structure
will help.
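For reference, a minimal sketch of what that grouping could look like (field names are illustrative, taken from the globals above; in DPDK code the `__rte_cache_aligned` macro would be used instead of the raw attribute):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: collect the fastpath globals into one struct and align it to a
 * 64-byte cache line, so hot fields share lines deliberately instead of
 * by linker accident. DPDK code would use __rte_cache_aligned here. */
struct fastpath_data {
    volatile int done;
    unsigned int num_workers;
    unsigned int num_fids;
    unsigned int worker_cq_depth;
    int16_t next_qid[8 + 1];
    int16_t qid[8];
} __attribute__((aligned(64)));

static struct fastpath_data fdata;
```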
> +static int worker_cycles;
> +static int enable_queue_priorities;
> +
> +struct prod_data {
> + uint8_t dev_id;
> + uint8_t port_id;
> + int32_t qid;
> + unsigned num_nic_ports;
> +};
cache aligned ?
> +
> +struct cons_data {
> + uint8_t dev_id;
> + uint8_t port_id;
> +};
> +
cache aligned ?
> +static struct prod_data prod_data;
> +static struct cons_data cons_data;
> +
> +struct worker_data {
> + uint8_t dev_id;
> + uint8_t port_id;
> +};
cache aligned ?
> +
> +static unsigned *enqueue_cnt;
> +static unsigned *dequeue_cnt;
> +
> +static volatile int done;
> +static volatile int prod_stop;
No one updates prod_stop.
> +static int quiet;
> +static int dump_dev;
> +static int dump_dev_signal;
> +
> +static uint32_t rx_lock;
> +static uint32_t tx_lock;
> +static uint32_t sched_lock;
> +static bool rx_single;
> +static bool tx_single;
> +static bool sched_single;
> +
> +static unsigned rx_core[MAX_NUM_CORE];
> +static unsigned tx_core[MAX_NUM_CORE];
> +static unsigned sched_core[MAX_NUM_CORE];
> +static unsigned worker_core[MAX_NUM_CORE];
> +
> +static bool
> +core_in_use(unsigned lcore_id) {
> + return (rx_core[lcore_id] || sched_core[lcore_id] ||
> + tx_core[lcore_id] || worker_core[lcore_id]);
> +}
> +
> +static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
> +
> +static void
> +rte_eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
> + void *userdata)
IMO, it is better not to use the rte_eth_* prefix for application functions.
> +{
> + int port_id = (uintptr_t) userdata;
> + unsigned _sent = 0;
> +
> + do {
> + /* Note: hard-coded TX queue */
> + _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
> + unsent - _sent);
> + } while (_sent != unsent);
> +}
> +
> +static int
> +consumer(void)
> +{
> + const uint64_t freq_khz = rte_get_timer_hz() / 1000;
> + struct rte_event packets[BATCH_SIZE];
> +
> + static uint64_t npackets;
> + static uint64_t received;
> + static uint64_t received_printed;
> + static uint64_t time_printed;
> + static uint64_t start_time;
> + unsigned i, j;
> + uint8_t dev_id = cons_data.dev_id;
> + uint8_t port_id = cons_data.port_id;
> +
> + if (!npackets)
> + npackets = num_packets;
> +
> + do {
> + uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
> + packets, RTE_DIM(packets), 0);
const uint16_t n =
> +
> + if (n == 0) {
> + for (j = 0; j < rte_eth_dev_count(); j++)
> + rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
> + return 0;
> + }
> + if (start_time == 0)
> + time_printed = start_time = rte_get_timer_cycles();
> +
> + received += n;
> + for (i = 0; i < n; i++) {
> + uint8_t outport = packets[i].mbuf->port;
> + rte_eth_tx_buffer(outport, 0, tx_buf[outport],
> + packets[i].mbuf);
> + }
> +
> + if (!quiet && received >= received_printed + (1<<22)) {
> + const uint64_t now = rte_get_timer_cycles();
> + const uint64_t delta_cycles = now - start_time;
> + const uint64_t elapsed_ms = delta_cycles / freq_khz;
> + const uint64_t interval_ms =
> + (now - time_printed) / freq_khz;
> +
> + uint64_t rx_noprint = received - received_printed;
> + printf("# consumer RX=%"PRIu64", time %"PRIu64
> + "ms, avg %.3f mpps [current %.3f mpps]\n",
> + received, elapsed_ms,
> + (received) / (elapsed_ms * 1000.0),
> + rx_noprint / (interval_ms * 1000.0));
> + received_printed = received;
> + time_printed = now;
> + }
> +
> + dequeue_cnt[0] += n;
> +
> + if (num_packets > 0 && npackets > 0) {
> + npackets -= n;
> + if (npackets == 0 || npackets > num_packets)
> + done = 1;
> + }
This logic looks very complicated; I think we can simplify it.
> + } while (0);
Is the do { ... } while (0); really required here?
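As a sketch, the countdown could be reduced to something like this (names are illustrative, not from the patch; a negative counter stands for "run forever"):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a simpler shutdown countdown: one signed counter, where a
 * negative value means "unlimited" and crossing zero sets done. */
static int64_t remaining = -1;   /* packets still expected; < 0 = unlimited */
static volatile int done;

static void
account_rx(uint16_t n)
{
    if (remaining < 0)
        return;                  /* unlimited mode: never stop */
    remaining -= n;
    if (remaining <= 0)
        done = 1;
}
```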
> +
> + return 0;
> +}
> +
> +static int
> +producer(void)
> +{
> + static uint8_t eth_port;
> + struct rte_mbuf *mbufs[BATCH_SIZE];
> + struct rte_event ev[BATCH_SIZE];
> + uint32_t i, num_ports = prod_data.num_nic_ports;
> + int32_t qid = prod_data.qid;
> + uint8_t dev_id = prod_data.dev_id;
> + uint8_t port_id = prod_data.port_id;
> + uint32_t prio_idx = 0;
> +
> + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
> + if (++eth_port == num_ports)
> + eth_port = 0;
> + if (nb_rx == 0) {
> + rte_pause();
> + return 0;
> + }
> +
> + for (i = 0; i < nb_rx; i++) {
> + ev[i].flow_id = mbufs[i]->hash.rss;
Would prefetching mbufs[i+1] help here?
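Something along these lines — a self-contained sketch with a toy mbuf stand-in; real code would call rte_prefetch0() on the actual rte_mbuf:

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for rte_mbuf: only the field this loop touches. */
struct mbuf { uint32_t rss_hash; };

/* Sketch of the suggestion: while event i is being filled in, prefetch
 * mbuf i+1 so its cache line is warm on the next iteration. */
static void
fill_flow_ids(struct mbuf **mbufs, uint32_t *flow_ids, int nb_rx)
{
    int i;

    for (i = 0; i < nb_rx; i++) {
        if (i + 1 < nb_rx)
            __builtin_prefetch(mbufs[i + 1]);
        flow_ids[i] = mbufs[i]->rss_hash;
    }
}
```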
> + ev[i].op = RTE_EVENT_OP_NEW;
> + ev[i].sched_type = queue_type;
The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED, so we
cannot assign queue_type directly to .sched_type.
I think one option to avoid the translation in the application would be to:
- Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES and RTE_EVENT_QUEUE_CFG_*_ONLY
- Introduce a new RTE_EVENT_DEV_CAP_ flag to denote the
RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
- Add sched_type to struct rte_event_queue_conf; if the capability flag is
not set, the implementation takes the sched_type value for the queue.
Any thoughts?
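A hypothetical shape for that last point — illustrative only, not an actual DPDK definition:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical rte_event_queue_conf extension: carry sched_type in the
 * queue config so the application sets it once at configure time and no
 * longer translates a queue cfg value into ev.sched_type per event. */
struct event_queue_conf_sketch {
    uint32_t nb_atomic_flows;
    uint32_t nb_atomic_order_sequences;
    uint32_t event_queue_cfg;   /* would lose the *_ONLY flags */
    uint8_t  sched_type;        /* used when the ALL_TYPES cap is unset */
    uint8_t  priority;
};
```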
> + ev[i].queue_id = qid;
> + ev[i].event_type = RTE_EVENT_TYPE_CPU;
IMO, RTE_EVENT_TYPE_ETHERNET is the better option here, as these events
carry Ethernet packets.
> + ev[i].sub_event_type = 0;
> + ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
> + ev[i].mbuf = mbufs[i];
> + RTE_SET_USED(prio_idx);
> + }
> +
> + const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
For the producer pattern, i.e. a burst of RTE_EVENT_OP_NEW, OcteonTX can do a
burst operation, unlike the FORWARD case (which is one event at a time).
Earlier I thought I could abstract the producer pattern in the PMD, but it
looks like we are going with an application-driven producer model based on the
latest RFC. So I think we can add one flag to rte_event_enqueue_burst to hint
that all the events are of type RTE_EVENT_OP_NEW; the SW driver can ignore it.
I can send a patch for the same.
Any thoughts?
> + if (nb_tx != nb_rx) {
> + for (i = nb_tx; i < nb_rx; i++)
> + rte_pktmbuf_free(mbufs[i]);
> + }
> + enqueue_cnt[0] += nb_tx;
> +
> + if (unlikely(prod_stop))
I think no one updates prod_stop.
> + done = 1;
> +
> + return 0;
> +}
> +
> +static inline void
> +schedule_devices(uint8_t dev_id, unsigned lcore_id)
> +{
> + if (rx_core[lcore_id] && (rx_single ||
> + rte_atomic32_cmpset(&rx_lock, 0, 1))) {
This pattern (rte_atomic32_cmpset) means the application can inject only
"one core" worth of packets, which is not enough for low-end cores. Maybe we
need multiple producer options; I think the new RFC is addressing it.
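For reference, the pattern in question is a per-iteration try-lock: one lcore wins the RX service via compare-and-swap and runs it while the others skip. Sketched here with C11 atomics in place of rte_atomic32_cmpset() (names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the single-service lock: each iteration, at most one lcore
 * acquires the RX service; since only one winner runs producer() at a
 * time, only one core's worth of packets can be injected. */
static atomic_int rx_lock;
static int rx_runs;

static void producer_stub(void) { rx_runs++; }

static int
try_run_rx(void)
{
    int expected = 0;

    if (!atomic_compare_exchange_strong(&rx_lock, &expected, 1))
        return 0;            /* another core holds the service */
    producer_stub();
    atomic_store(&rx_lock, 0);
    return 1;
}
```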
> + producer();
> + rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
> + }
> +
> + if (sched_core[lcore_id] && (sched_single ||
> + rte_atomic32_cmpset(&sched_lock, 0, 1))) {
> + rte_event_schedule(dev_id);
> + if (dump_dev_signal) {
> + rte_event_dev_dump(0, stdout);
> + dump_dev_signal = 0;
> + }
> + rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
> + }
There is a lot of unwanted code here when RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED
is set. I think we can keep the code common and choose the worker at runtime
based on the flag, i.e.
rte_eal_remote_launch(worker_x, &worker_data[worker_idx], lcore_id);
rte_eal_remote_launch(worker_y, &worker_data[worker_idx], lcore_id);
Maybe we can improve this after the initial version.
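For example, the selection could be made once at launch time rather than per iteration (function names and the flag value are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: choose between two worker loops once, based on the device
 * capability, instead of testing the scheduler condition every loop. */
typedef int (*worker_fn_t)(void *arg);

static int worker_no_sched(void *arg)   { (void)arg; return 0; }
static int worker_with_sched(void *arg) { (void)arg; return 1; }

static worker_fn_t
pick_worker(uint32_t dev_caps, uint32_t distributed_sched_cap)
{
    /* A device that schedules internally needs no software scheduler. */
    return (dev_caps & distributed_sched_cap) ?
        worker_no_sched : worker_with_sched;
}
```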
> +
> + if (tx_core[lcore_id] && (tx_single ||
> + rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> + consumer();
Does consumer() need to follow this pattern? I am thinking that if an event
comes from the last stage, worker() could call consumer() directly.
I think the above scheme works better when the _same_ worker code needs to
handle both cases:
1) the ethdev HW is capable of enqueuing packets to the same txq from
multiple threads
2) the ethdev is not capable of doing so.
Both cases can then be addressed at configuration time, when we link
the queues to ports:
case 1) link all workers to the last queue
case 2) link only one worker to the last queue
while keeping the worker code common.
The HW implementation has functional and performance issues if "two" ports are
assigned to one lcore for dequeue. The above scheme fixes that problem too.
> + rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> + }
> +}
> +
> +static int
> +worker(void *arg)
> +{
> + struct rte_event events[BATCH_SIZE];
> +
> + struct worker_data *data = (struct worker_data *)arg;
> + uint8_t dev_id = data->dev_id;
> + uint8_t port_id = data->port_id;
> + size_t sent = 0, received = 0;
> + unsigned lcore_id = rte_lcore_id();
> +
> + while (!done) {
> + uint16_t i;
> +
> + schedule_devices(dev_id, lcore_id);
> +
> + if (!worker_core[lcore_id]) {
> + rte_pause();
> + continue;
> + }
> +
> + uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
> + events, RTE_DIM(events), 0);
> +
> + if (nb_rx == 0) {
> + rte_pause();
> + continue;
> + }
> + received += nb_rx;
> +
> + for (i = 0; i < nb_rx; i++) {
> + struct ether_hdr *eth;
> + struct ether_addr addr;
> + struct rte_mbuf *m = events[i].mbuf;
> +
> + /* The first worker stage does classification */
> + if (events[i].queue_id == qid[0])
> + events[i].flow_id = m->hash.rss % num_fids;
Not sure why we need to do this (shrinking the flows) in worker() in a
queue-based pipeline. If a PMD has any specific requirement on num_fids, I
think we can move this to the configuration stage, or the PMD can choose an
optimum fid internally, to avoid the modulus-operation tax in the fast path
of all PMDs.
Does struct rte_event_queue_conf.nb_atomic_flows help here?
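One way to cut the fast-path cost if the modulus stays in the application: round num_fids up to a power of two at configuration time, so the per-packet `%` becomes a mask. A stand-alone sketch (names are illustrative):

```c
#include <stdint.h>

/* Round v up to the next power of two (v > 0, fits in 32 bits). */
static uint32_t
round_up_pow2(uint32_t v)
{
    v--;
    v |= v >> 1; v |= v >> 2; v |= v >> 4;
    v |= v >> 8; v |= v >> 16;
    return v + 1;
}

/* With fid_mask = pow2(num_fids) - 1, this equals rss_hash % num_fids
 * but compiles to a single AND in the per-packet path. */
static inline uint32_t
flow_from_hash(uint32_t rss_hash, uint32_t fid_mask)
{
    return rss_hash & fid_mask;
}
```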
> +
> + events[i].queue_id = next_qid[events[i].queue_id];
> + events[i].op = RTE_EVENT_OP_FORWARD;
events[i].sched_type is missing here; the HW PMD does not work without it.
I think we can use a scheme similar to next_qid for a next_sched_type.
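The next_sched_type idea could look like this: a second table alongside next_qid, both indexed by the current queue id and filled at configuration time. A stand-alone sketch (table names, sizes, and the wrap-around for the last stage are all illustrative assumptions):

```c
#include <stdint.h>

#define NUM_QUEUES 4

static uint8_t next_qid[NUM_QUEUES];
static uint8_t next_sched_type[NUM_QUEUES];  /* RTE_SCHED_TYPE_* values */

/* stage_sched[s] is the sched type configured for stage s. */
static void
setup_next_tables(const uint8_t *stage_sched, int nb_stages)
{
    for (int q = 0; q < nb_stages; q++) {
        next_qid[q] = (uint8_t)(q + 1);
        /* sched type the event needs when forwarded out of queue q */
        next_sched_type[q] = stage_sched[(q + 1) % nb_stages];
    }
}
```

The worker then does both lookups per event: `ev.queue_id = next_qid[ev.queue_id]` and `ev.sched_type = next_sched_type[old_qid]`, with no branches in the fast path.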
> +
> + /* change mac addresses on packet (to use mbuf data) */
> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> + ether_addr_copy(ð->d_addr, &addr);
> + ether_addr_copy(ð->s_addr, ð->d_addr);
> + ether_addr_copy(&addr, ð->s_addr);
IMO, we can make the packet processing code a "static inline" function so
different worker types can reuse it.
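For example, the MAC-swap above could be factored out like this. The header struct here is a reduced stand-in for DPDK's struct ether_hdr, so the sketch is self-contained:

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for struct ether_hdr: destination MAC, source MAC, type. */
struct eth_hdr {
    uint8_t  d_addr[6];
    uint8_t  s_addr[6];
    uint16_t ether_type;
};

/* Shared per-packet work: swap source and destination MAC addresses.
 * Any worker variant can call this on its dequeued mbuf data. */
static inline void
exchange_mac(struct eth_hdr *eth)
{
    uint8_t tmp[6];

    memcpy(tmp, eth->d_addr, 6);
    memcpy(eth->d_addr, eth->s_addr, 6);
    memcpy(eth->s_addr, tmp, 6);
}
```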
> +
> + /* do a number of cycles of work per packet */
> + volatile uint64_t start_tsc = rte_rdtsc();
> + while (rte_rdtsc() < start_tsc + worker_cycles)
> + rte_pause();
Ditto.
I think all worker-specific variables like "worker_cycles" can be moved into
one structure and used from there.
> + }
> + uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
> + events, nb_rx);
> + while (nb_tx < nb_rx && !done)
> + nb_tx += rte_event_enqueue_burst(dev_id, port_id,
> + events + nb_tx,
> + nb_rx - nb_tx);
> + sent += nb_tx;
> + }
> +
> + if (!quiet)
> + printf(" worker %u thread done. RX=%zu TX=%zu\n",
> + rte_lcore_id(), received, sent);
> +
> + return 0;
> +}
> +
> +/*
> + * Parse the coremask given as argument (hexadecimal string) and fill
> + * the global configuration (core role and core count) with the parsed
> + * value.
> + */
> +static int xdigit2val(unsigned char c)
There are multiple instances of "xdigit2val" in the DPDK repo. Maybe we can
push this as common code.
> +{
> + int val;
> +
> + if (isdigit(c))
> + val = c - '0';
> + else if (isupper(c))
> + val = c - 'A' + 10;
> + else
> + val = c - 'a' + 10;
> + return val;
> +}
> +
> +
> +static void
> +usage(void)
> +{
> + const char *usage_str =
> + " Usage: eventdev_demo [options]\n"
> + " Options:\n"
> + " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
> + " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
I think this parameter now affects the application fast-path code. I think it
should be an eventdev configuration parameter.
> + " -s, --num_stages=N Use N atomic stages (default 1)\n"
> + " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
> + " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
> + " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
> + " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
> + " -c --cq-depth=N Worker CQ depth (default 16)\n"
> + " -W --work-cycles=N Worker cycles (default 0)\n"
> + " -P --queue-priority Enable scheduler queue prioritization\n"
> + " -o, --ordered Use ordered scheduling\n"
> + " -p, --parallel Use parallel scheduling\n"
IMO, all stages being "parallel" or "ordered" or "atomic" is just one mode of
operation. It is valid to have any combination. We need to express that on the
command line, for example:
3 stages with
O->A->P
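A per-stage option could be parsed roughly like this. The option format (one letter per stage, comma-separated) and all names are illustrative suggestions, not an agreed interface:

```c
#include <stddef.h>

enum sched { SCHED_ORDERED, SCHED_ATOMIC, SCHED_PARALLEL };

/* Parse e.g. "O,A,P" into out[]; returns the number of stages parsed,
 * or -1 on an unrecognized character. */
static int
parse_stage_types(const char *arg, enum sched *out, int max)
{
    int n = 0;

    for (; *arg && n < max; arg++) {
        switch (*arg) {
        case 'O': out[n++] = SCHED_ORDERED;  break;
        case 'A': out[n++] = SCHED_ATOMIC;   break;
        case 'P': out[n++] = SCHED_PARALLEL; break;
        case ',': break;                     /* separator */
        default:  return -1;
        }
    }
    return n;
}
```

The resulting array would then drive per-queue event_queue_cfg at setup time instead of one global queue_type.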
> + " -q, --quiet Minimize printed output\n"
> + " -D, --dump Print detailed statistics before exit"
> + "\n";
> + fprintf(stderr, "%s", usage_str);
> + exit(1);
> +}
> +
[...]
> + rx_single = (popcnt == 1);
> + break;
> + case 't':
> + tx_lcore_mask = parse_coremask(optarg);
> + popcnt = __builtin_popcountll(tx_lcore_mask);
> + tx_single = (popcnt == 1);
> + break;
> + case 'e':
> + sched_lcore_mask = parse_coremask(optarg);
> + popcnt = __builtin_popcountll(sched_lcore_mask);
> + sched_single = (popcnt == 1);
> + break;
> + default:
> + usage();
> + }
> + }
> +
> + if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
> + sched_lcore_mask == 0 || tx_lcore_mask == 0) {
This needs to honor RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED, i.e. a
sched_lcore_mask of zero can be a valid case.
> + printf("Core part of pipeline was not assigned any cores. "
> + "This will stall the pipeline, please check core masks "
> + "(use -h for details on setting core masks):\n"
> + "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
> + "\n\tworkers: %"PRIu64"\n",
> + rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
> + worker_lcore_mask);
> + rte_exit(-1, "Fix core masks\n");
> + }
> + if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
> + usage();
> +
> + for (i = 0; i < MAX_NUM_CORE; i++) {
> + rx_core[i] = !!(rx_lcore_mask & (1UL << i));
> + tx_core[i] = !!(tx_lcore_mask & (1UL << i));
> + sched_core[i] = !!(sched_lcore_mask & (1UL << i));
> + worker_core[i] = !!(worker_lcore_mask & (1UL << i));
> +
> + if (worker_core[i])
> + num_workers++;
> + if (core_in_use(i))
> + active_cores++;
> + }
> +}
> +
> +
> +struct port_link {
> + uint8_t queue_id;
> + uint8_t priority;
> +};
> +
> +static int
> +setup_eventdev(struct prod_data *prod_data,
> + struct cons_data *cons_data,
> + struct worker_data *worker_data)
> +{
> + const uint8_t dev_id = 0;
> + /* +1 stages is for a SINGLE_LINK TX stage */
> + const uint8_t nb_queues = num_stages + 1;
> + /* + 2 is one port for producer and one for consumer */
> + const uint8_t nb_ports = num_workers + 2;
The selection of the number of ports is a function of rte_event_has_producer().
I think it will be addressed with the RFC.
> + const struct rte_event_dev_config config = {
> + .nb_event_queues = nb_queues,
> + .nb_event_ports = nb_ports,
> + .nb_events_limit = 4096,
> + .nb_event_queue_flows = 1024,
> + .nb_event_port_dequeue_depth = 128,
> + .nb_event_port_enqueue_depth = 128,
The OCTEONTX PMD has .nb_event_port_dequeue_depth = 1 and
.nb_event_port_enqueue_depth = 1, and struct
rte_event_dev_info.min_dequeue_timeout_ns = 853.
I think we need to check rte_event_dev_info_get() first to get sane values,
and take RTE_MIN or RTE_MAX based on the use case.
or
I can ignore this value in the OCTEONTX PMD. But I am not sure about the NXP
case; any thoughts from NXP folks?
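The query-then-clamp approach could look like the sketch below. The structs are reduced stand-ins for rte_event_dev_info / rte_event_port_conf so the example is self-contained; in the app the values would come from rte_event_dev_info_get():

```c
#include <stdint.h>

/* Stand-ins for the relevant fields of the real DPDK structs. */
struct dev_info  { uint32_t max_event_port_dequeue_depth;
                   uint32_t max_event_port_enqueue_depth; };
struct port_conf { uint32_t dequeue_depth;
                   uint32_t enqueue_depth; };

#define MIN_U32(a, b) ((a) < (b) ? (a) : (b))

/* Clamp the application's preferred depths to what the PMD supports,
 * instead of hard-coding 128 and failing on devices with depth 1. */
static void
clamp_port_conf(struct port_conf *conf, const struct dev_info *info)
{
    conf->dequeue_depth =
        MIN_U32(conf->dequeue_depth, info->max_event_port_dequeue_depth);
    conf->enqueue_depth =
        MIN_U32(conf->enqueue_depth, info->max_event_port_enqueue_depth);
}
```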
> + };
> + const struct rte_event_port_conf wkr_p_conf = {
> + .dequeue_depth = worker_cq_depth,
Same as above
> + .enqueue_depth = 64,
Same as above
> + .new_event_threshold = 4096,
> + };
> + struct rte_event_queue_conf wkr_q_conf = {
> + .event_queue_cfg = queue_type,
> + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
> + .nb_atomic_flows = 1024,
> + .nb_atomic_order_sequences = 1024,
> + };
> + const struct rte_event_port_conf tx_p_conf = {
> + .dequeue_depth = 128,
Same as above
> + .enqueue_depth = 128,
Same as above
> + .new_event_threshold = 4096,
> + };
> + const struct rte_event_queue_conf tx_q_conf = {
> + .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
> + .event_queue_cfg =
> + RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
> + RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
> + .nb_atomic_flows = 1024,
> + .nb_atomic_order_sequences = 1024,
> + };
> +
> + struct port_link worker_queues[MAX_NUM_STAGES];
> + struct port_link tx_queue;
> + unsigned i;
> +
> + int ret, ndev = rte_event_dev_count();
> + if (ndev < 1) {
> + printf("%d: No Eventdev Devices Found\n", __LINE__);
> + return -1;
> + }
> +
> + struct rte_event_dev_info dev_info;
> + ret = rte_event_dev_info_get(dev_id, &dev_info);
> + printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
> +
> + ret = rte_event_dev_configure(dev_id, &config);
> + if (ret < 0)
> + printf("%d: Error configuring device\n", __LINE__);
Don't process further with failed configure.
> +
> + /* Q creation - one load balanced per pipeline stage*/
> +
> + /* set up one port per worker, linking to all stage queues */
> + for (i = 0; i < num_workers; i++) {
> + struct worker_data *w = &worker_data[i];
> + w->dev_id = dev_id;
> + if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
> + printf("Error setting up port %d\n", i);
> + return -1;
> + }
> +
> + uint32_t s;
> + for (s = 0; s < num_stages; s++) {
> + if (rte_event_port_link(dev_id, i,
> + &worker_queues[s].queue_id,
> + &worker_queues[s].priority,
> + 1) != 1) {
> + printf("%d: error creating link for port %d\n",
> + __LINE__, i);
> + return -1;
> + }
> + }
> + w->port_id = i;
> + }
> + /* port for consumer, linked to TX queue */
> + if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
If the ethdev supports MT txq enqueue then this port could be linked to the
workers too; something to consider for the future.
> + printf("Error setting up port %d\n", i);
> + return -1;
> + }
> + if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
> + &tx_queue.priority, 1) != 1) {
> + printf("%d: error creating link for port %d\n",
> + __LINE__, i);
> + return -1;
> + }
> + /* port for producer, no links */
> + const struct rte_event_port_conf rx_p_conf = {
> + .dequeue_depth = 8,
> + .enqueue_depth = 8,
Same issue as above. You could get the default config first and then configure.
> + .new_event_threshold = 1200,
> + };
> + if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
> + printf("Error setting up port %d\n", i);
> + return -1;
> + }
> +
> + *prod_data = (struct prod_data){.dev_id = dev_id,
> + .port_id = i + 1,
> + .qid = qid[0] };
> + *cons_data = (struct cons_data){.dev_id = dev_id,
> + .port_id = i };
> +
> + enqueue_cnt = rte_calloc(0,
> + RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
> + sizeof(enqueue_cnt[0]), 0);
> + dequeue_cnt = rte_calloc(0,
> + RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
> + sizeof(dequeue_cnt[0]), 0);
Why an array? It looks like enqueue_cnt[1] and dequeue_cnt[1] are not used anywhere.
> +
> + if (rte_event_dev_start(dev_id) < 0) {
> + printf("Error starting eventdev\n");
> + return -1;
> + }
> +
> + return dev_id;
> +}
> +
> +static void
> +signal_handler(int signum)
> +{
> + if (done || prod_stop)
I think no one is updating prod_stop.
> + rte_exit(1, "Exiting on signal %d\n", signum);
> + if (signum == SIGINT || signum == SIGTERM) {
> + printf("\n\nSignal %d received, preparing to exit...\n",
> + signum);
> + done = 1;
> + }
> + if (signum == SIGTSTP)
> + rte_event_dev_dump(0, stdout);
> +}
> +
> +int
> +main(int argc, char **argv)
[...]
> + RTE_LCORE_FOREACH_SLAVE(lcore_id) {
> + if (lcore_id >= MAX_NUM_CORE)
> + break;
> +
> + if (!rx_core[lcore_id] && !worker_core[lcore_id] &&
> + !tx_core[lcore_id] && !sched_core[lcore_id])
> + continue;
> +
> + if (rx_core[lcore_id])
> + printf(
> + "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
> + __func__, lcore_id, prod_data.port_id);
These prints won't show if rx, tx, or the scheduler is running on the master
core (as we are only iterating through RTE_LCORE_FOREACH_SLAVE).
> +
> + if (!quiet) {
> + printf("\nPort Workload distribution:\n");
> + uint32_t i;
> + uint64_t tot_pkts = 0;
> + uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
> + for (i = 0; i < num_workers; i++) {
> + char statname[64];
> + snprintf(statname, sizeof(statname), "port_%u_rx",
> + worker_data[i].port_id);
Please check the availability of the "port_%u_rx" xstat with the PMD first.
> + pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
> + dev_id, statname, NULL);
> + tot_pkts += pkts_per_wkr[i];
> + }
> + for (i = 0; i < num_workers; i++) {
> + float pc = pkts_per_wkr[i] * 100 /
> + ((float)tot_pkts);
> + printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
> + i, pc, pkts_per_wkr[i]);
> + }
> +
> + }
> +
> + return 0;
> +}
As a final note, considering the different options in the fast path, I was
thinking of introducing app/test-eventdev, like app/testpmd, with a set of
function pointers# for different modes like "macswap" and "txonly" in testpmd,
to exercise different options and provide a framework for adding new use
cases. I will work on that to check the feasibility.
##
struct fwd_engine {
const char *fwd_mode_name; /**< Forwarding mode name. */
port_fwd_begin_t port_fwd_begin; /**< NULL if nothing special to do. */
port_fwd_end_t port_fwd_end; /**< NULL if nothing special to do. */
packet_fwd_t packet_fwd; /**< Mandatory. */
};
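To make the analogy concrete, the testpmd-style engine table would be selected by mode name roughly as below. This is a self-contained sketch; the two dummy engines and find_engine() are illustrative, only the struct shape comes from testpmd:

```c
#include <string.h>
#include <stddef.h>

typedef int (*packet_fwd_t)(void);

struct fwd_engine {
    const char *fwd_mode_name;   /* e.g. "macswap", "txonly" */
    packet_fwd_t packet_fwd;     /* mandatory fast-path function */
};

static int macswap_fwd(void) { return 0; }
static int txonly_fwd(void)  { return 0; }

static const struct fwd_engine engines[] = {
    { "macswap", macswap_fwd },
    { "txonly",  txonly_fwd  },
};

/* Look up an engine by its command-line mode name. */
static const struct fwd_engine *
find_engine(const char *name)
{
    for (size_t i = 0; i < sizeof(engines) / sizeof(engines[0]); i++)
        if (strcmp(engines[i].fwd_mode_name, name) == 0)
            return &engines[i];
    return NULL;
}
```

New use cases are then added by appending one row to the table, without touching the launch framework.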
> --
> 2.7.4
>
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-05-10 14:12 ` Jerin Jacob
@ 2017-05-10 16:40 ` Eads, Gage
2017-05-10 20:16 ` Eads, Gage
2017-06-26 14:46 ` Hunt, David
2 siblings, 0 replies; 64+ messages in thread
From: Eads, Gage @ 2017-05-10 16:40 UTC (permalink / raw)
To: Jerin Jacob, Van Haaren, Harry; +Cc: dev, Richardson, Bruce
> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Wednesday, May 10, 2017 9:12 AM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Eads, Gage <gage.eads@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Subject: Re: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>
> -----Original Message-----
> > Date: Fri, 21 Apr 2017 10:51:37 +0100
> > From: Harry van Haaren <harry.van.haaren@intel.com>
> > To: dev@dpdk.org
> > CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
> > <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
> > Richardson <bruce.richardson@intel.com>
> > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> > X-Mailer: git-send-email 2.7.4
> >
> > This commit adds a sample app for the eventdev library.
> > The app has been tested with DPDK 17.05-rc2, hence this release (or
> > later) is recommended.
> >
<snip>
>
> > + ev[i].op = RTE_EVENT_OP_NEW;
> > + ev[i].sched_type = queue_type;
>
> The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY !=
> RTE_SCHED_TYPE_ORDERED. So, we cannot assign .sched_type as
> queue_type.
>
> I think, one option could be to avoid translation in application is to
> - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES,
> RTE_EVENT_QUEUE_CFG_*_ONLY
> - Introduce a new RTE_EVENT_DEV_CAP_ to denote
> RTE_EVENT_QUEUE_CFG_ALL_TYPES cap ability
> - add sched_type in struct rte_event_queue_conf. If capability flag is
> not set then implementation takes sched_type value for the queue.
>
> Any thoughts?
I'm not sure this change is needed. We could create a sched_type[] array, indexed by queue ID, for assigning the event's sched type.
With the proposed approach, the sched_type assignment would still be needed for "all types"-capable PMDs, so I'm not sure it buys us anything in the application.
>
>
> > + ev[i].queue_id = qid;
> > + ev[i].event_type = RTE_EVENT_TYPE_CPU;
>
> IMO, RTE_EVENT_TYPE_ETHERNET is the better option here as it is producing
> the Ethernet packets/events.
>
> > + ev[i].sub_event_type = 0;
> > + ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
> > + ev[i].mbuf = mbufs[i];
> > + RTE_SET_USED(prio_idx);
> > + }
> > +
> > + const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev,
> > +nb_rx);
>
> For producer pattern i.e a burst of RTE_EVENT_OP_NEW, OcteonTX can do
> burst operation unlike FORWARD case(which is one event at a time).Earlier, I
> thought I can abstract the producer pattern in PMD, but it looks like we are
> going with application driven producer model based on latest RFC.So I think,
> we can add one flag to rte_event_enqueue_burst to denote all the events are
> of type RTE_EVENT_OP_NEW as hint.SW driver can ignore this.
>
> I can send a patch for the same.
>
> Any thoughts?
>
Makes sense, though I'm a little hesitant about putting this sort of PMD-specific hint in the enqueue API. Perhaps we can use the impl_opaque field, or have the PMD inspect the event_type (if TYPE_ETHDEV, assume all packets in the burst are NEWs)?
>
> > + if (nb_tx != nb_rx) {
> > + for (i = nb_tx; i < nb_rx; i++)
> > + rte_pktmbuf_free(mbufs[i]);
> > + }
> > + enqueue_cnt[0] += nb_tx;
> > +
> > + if (unlikely(prod_stop))
>
> I think, No one updating the prod_stop
>
> > + done = 1;
> > +
> > + return 0;
> > +}
> > +
> > +static inline void
> > +schedule_devices(uint8_t dev_id, unsigned lcore_id) {
> > + if (rx_core[lcore_id] && (rx_single ||
> > + rte_atomic32_cmpset(&rx_lock, 0, 1))) {
>
> This pattern(rte_atomic32_cmpset) makes application can inject only "one
> core" worth of packets. Not enough for low-end cores. May be we need
> multiple producer options. I think, new RFC is addressing it.
>
Right, in the "wimpy" core case one can partition the Rx queues across multiple adapters, and assign the adapters to different service cores. The proposal doesn't explicitly state this, but rte_eth_rx_event_adapter_run() is not intended to be MT-safe -- so the service core implementation would need something along the lines of the cmpset if a service is affinitized to multiple service cores.
Thanks,
Gage
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-05-10 14:12 ` Jerin Jacob
2017-05-10 16:40 ` Eads, Gage
@ 2017-05-10 20:16 ` Eads, Gage
2017-06-26 14:46 ` Hunt, David
2 siblings, 0 replies; 64+ messages in thread
From: Eads, Gage @ 2017-05-10 20:16 UTC (permalink / raw)
To: Jerin Jacob, Van Haaren, Harry; +Cc: dev, Richardson, Bruce
> -----Original Message-----
> From: Eads, Gage
> Sent: Wednesday, May 10, 2017 11:41 AM
> To: 'Jerin Jacob' <jerin.jacob@caviumnetworks.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>
> Subject: RE: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Wednesday, May 10, 2017 9:12 AM
> > To: Van Haaren, Harry <harry.van.haaren@intel.com>
> > Cc: dev@dpdk.org; Eads, Gage <gage.eads@intel.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>
> > Subject: Re: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> >
> > -----Original Message-----
> > > Date: Fri, 21 Apr 2017 10:51:37 +0100
> > > From: Harry van Haaren <harry.van.haaren@intel.com>
> > > To: dev@dpdk.org
> > > CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
> > > <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
> > > Richardson <bruce.richardson@intel.com>
> > > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> > > X-Mailer: git-send-email 2.7.4
> > >
> > > This commit adds a sample app for the eventdev library.
> > > The app has been tested with DPDK 17.05-rc2, hence this release (or
> > > later) is recommended.
> > >
>
> <snip>
>
> >
> > > + ev[i].op = RTE_EVENT_OP_NEW;
> > > + ev[i].sched_type = queue_type;
> >
> > The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY !=
> > RTE_SCHED_TYPE_ORDERED. So, we cannot assign .sched_type as
> > queue_type.
> >
> > I think, one option could be to avoid translation in application is
> > to
> > - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES,
> RTE_EVENT_QUEUE_CFG_*_ONLY
> > - Introduce a new RTE_EVENT_DEV_CAP_ to denote
> > RTE_EVENT_QUEUE_CFG_ALL_TYPES cap ability
> > - add sched_type in struct rte_event_queue_conf. If capability flag is
> > not set then implementation takes sched_type value for the queue.
> >
> > Any thoughts?
>
> I'm not sure this change is needed. We could create a sched_type[] array,
> indexed by queue ID, for assigning the event's sched type.
>
> With the proposed approach, the sched_type assignment would still be needed
> for "all types"-capable PMDs, so I'm not sure it buys us anything in the
> application.
>
> >
> >
> > > + ev[i].queue_id = qid;
> > > + ev[i].event_type = RTE_EVENT_TYPE_CPU;
> >
> > IMO, RTE_EVENT_TYPE_ETHERNET is the better option here as it is
> > producing the Ethernet packets/events.
> >
> > > + ev[i].sub_event_type = 0;
> > > + ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
> > > + ev[i].mbuf = mbufs[i];
> > > + RTE_SET_USED(prio_idx);
> > > + }
> > > +
> > > + const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev,
> > > +nb_rx);
> >
> > For producer pattern i.e a burst of RTE_EVENT_OP_NEW, OcteonTX can do
> > burst operation unlike FORWARD case(which is one event at a
> > time).Earlier, I thought I can abstract the producer pattern in PMD,
> > but it looks like we are going with application driven producer model
> > based on latest RFC.So I think, we can add one flag to
> > rte_event_enqueue_burst to denote all the events are of type
> RTE_EVENT_OP_NEW as hint.SW driver can ignore this.
> >
> > I can send a patch for the same.
> >
> > Any thoughts?
> >
>
> Makes sense, though I'm a little hesitant about putting this sort of PMD-specific
> hint in the enqueue API. Perhaps we can use the impl_opaque field, or have the
> PMD inspect the event_type (if TYPE_ETHDEV, assume all packets in the burst
> are NEWs)?
>
Another idea -- what if the enqueue burst argument had these four values: Mixed, all RTE_EVENT_OP_NEW, all RTE_EVENT_OP_FWD, all RTE_EVENT_OP_RELEASE?
In the "all _" cases, the app doesn't need to set the event's op field and the PMD can potentially optimize for that type of operation.
In the "mixed" case the PMD would inspect each event's op field and handle them accordingly.
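The four-value idea could be expressed as a burst-level hint argument like the following. The enum and helper are hypothetical, not existing DPDK API; they only illustrate that a PMD could branch once per burst instead of once per event:

```c
/* Hypothetical burst-level op hint passed alongside the event array. */
enum burst_op_hint {
    BURST_OP_MIXED,       /* PMD must inspect each event's op field */
    BURST_OP_ALL_NEW,     /* every event is RTE_EVENT_OP_NEW */
    BURST_OP_ALL_FWD,     /* every event is RTE_EVENT_OP_FORWARD */
    BURST_OP_ALL_RELEASE, /* every event is RTE_EVENT_OP_RELEASE */
};

/* PMD-side dispatch: only the mixed case pays the per-event check. */
static int
burst_needs_per_event_check(enum burst_op_hint h)
{
    return h == BURST_OP_MIXED;
}
```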
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-04-21 9:51 ` [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added " Harry van Haaren
2017-05-10 14:12 ` Jerin Jacob
@ 2017-05-17 18:03 ` Jerin Jacob
2017-05-18 10:13 ` Bruce Richardson
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 0/3] next-eventdev: evendev pipeline " David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-05-17 18:03 UTC (permalink / raw)
To: Harry van Haaren; +Cc: dev, Gage Eads, Bruce Richardson
-----Original Message-----
> Date: Fri, 21 Apr 2017 10:51:37 +0100
> From: Harry van Haaren <harry.van.haaren@intel.com>
> To: dev@dpdk.org
> CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
> <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
> Richardson <bruce.richardson@intel.com>
> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> X-Mailer: git-send-email 2.7.4
>
> This commit adds a sample app for the eventdev library.
> The app has been tested with DPDK 17.05-rc2, hence this
> release (or later) is recommended.
>
> The sample app showcases a pipeline processing use-case,
> with event scheduling and processing defined per stage.
> The application recieves traffic as normal, with each
> packet traversing the pipeline. Once the packet has
> been processed by each of the pipeline stages, it is
> transmitted again.
>
> The app provides a framework to utilize cores for a single
> role or multiple roles. Examples of roles are the RX core,
> TX core, Scheduling core (in the case of the event/sw PMD),
> and worker cores.
>
> Various flags are available to configure numbers of stages,
> cycles of work at each stage, type of scheduling, number of
> worker cores, queue depths etc. For a full explaination,
> please refer to the documentation.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> ---
> +
> +static inline void
> +schedule_devices(uint8_t dev_id, unsigned lcore_id)
> +{
> + if (rx_core[lcore_id] && (rx_single ||
> + rte_atomic32_cmpset(&rx_lock, 0, 1))) {
> + producer();
> + rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
> + }
> +
> + if (sched_core[lcore_id] && (sched_single ||
> + rte_atomic32_cmpset(&sched_lock, 0, 1))) {
> + rte_event_schedule(dev_id);
One question here:
Is rte_event_schedule()'s SW PMD implementation capable of running
concurrently on multiple cores?
Context:
Currently I am writing a testpmd-like test framework to realize different use
cases, along with performance test cases like throughput and latency, and
making sure it works on the SW and HW drivers.
I see the following segfault when rte_event_schedule() is invoked on multiple
cores. Is it expected?
#0 0x000000000043e945 in __pull_port_lb (allow_reorder=0, port_id=2,
sw=0x7ff93f3cb540) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406:11647:beg:0x43e945
[Current thread is 1 (Thread 0x7ff9fbd34700 (LWP 796))]
(gdb) bt
#0 0x000000000043e945 in __pull_port_lb (allow_reorder=0, port_id=2,
sw=0x7ff93f3cb540) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406
#1 sw_schedule_pull_port_no_reorder (port_id=2, sw=0x7ff93f3cb540) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:495
#2 sw_event_schedule (dev=<optimized out>) at
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:566
#3 0x000000000040b4af in rte_event_schedule (dev_id=<optimized out>) at
/export/dpdk-thunderx/build/include/rte_eventdev.h:1092
#4 worker (arg=<optimized out>) at
/export/dpdk-thunderx/app/test-eventdev/test_queue_order.c:200
#5 0x000000000042d14b in eal_thread_loop (arg=<optimized out>) at
/export/dpdk-thunderx/lib/librte_eal/linuxapp/eal/eal_thread.c:184
#6 0x00007ff9fd8e32e7 in start_thread () from /usr/lib/libpthread.so.0
#7 0x00007ff9fd62454f in clone () from /usr/lib/libc.so.6
(gdb) list
401		 */
402		uint32_t iq_num = PRIO_TO_IQ(qe->priority);
403		struct sw_qid *qid = &sw->qids[qe->queue_id];
404
405		if ((flags & QE_FLAG_VALID) &&
406		    iq_ring_free_count(qid->iq[iq_num]) == 0)
407			break;
408
409		/* now process based on flags. Note that for directed
410		 * queues, the enqueue_flush masks off all but the
(gdb)
> + if (dump_dev_signal) {
> + rte_event_dev_dump(0, stdout);
> + dump_dev_signal = 0;
> + }
> + rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
> + }
> +
> + if (tx_core[lcore_id] && (tx_single ||
> + rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> + consumer();
> + rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> + }
> +}
> +
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-05-17 18:03 ` Jerin Jacob
@ 2017-05-18 10:13 ` Bruce Richardson
0 siblings, 0 replies; 64+ messages in thread
From: Bruce Richardson @ 2017-05-18 10:13 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Harry van Haaren, dev, Gage Eads
On Wed, May 17, 2017 at 11:33:16PM +0530, Jerin Jacob wrote:
> -----Original Message-----
> > Date: Fri, 21 Apr 2017 10:51:37 +0100
> > From: Harry van Haaren <harry.van.haaren@intel.com>
> > To: dev@dpdk.org
> > CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
> > <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
> > Richardson <bruce.richardson@intel.com>
> > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> > X-Mailer: git-send-email 2.7.4
> >
> > This commit adds a sample app for the eventdev library.
> > The app has been tested with DPDK 17.05-rc2, hence this
> > release (or later) is recommended.
> >
> > The sample app showcases a pipeline processing use-case,
> > with event scheduling and processing defined per stage.
> > The application recieves traffic as normal, with each
> > packet traversing the pipeline. Once the packet has
> > been processed by each of the pipeline stages, it is
> > transmitted again.
> >
> > The app provides a framework to utilize cores for a single
> > role or multiple roles. Examples of roles are the RX core,
> > TX core, Scheduling core (in the case of the event/sw PMD),
> > and worker cores.
> >
> > Various flags are available to configure numbers of stages,
> > cycles of work at each stage, type of scheduling, number of
> > worker cores, queue depths etc. For a full explaination,
> > please refer to the documentation.
> >
> > Signed-off-by: Gage Eads <gage.eads@intel.com>
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> > ---
> > +
> > +static inline void
> > +schedule_devices(uint8_t dev_id, unsigned lcore_id)
> > +{
> > + if (rx_core[lcore_id] && (rx_single ||
> > + rte_atomic32_cmpset(&rx_lock, 0, 1))) {
> > + producer();
> > + rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
> > + }
> > +
> > + if (sched_core[lcore_id] && (sched_single ||
> > + rte_atomic32_cmpset(&sched_lock, 0, 1))) {
> > + rte_event_schedule(dev_id);
>
> One question here,
>
> Does rte_event_schedule()'s SW PMD implementation capable of running
> concurrently on multiple cores?
>
No, it's not. It's designed to be called on a single (dedicated) core.
> Context:
> Currently I am writing a testpmd like test framework to realize
> different use cases along with with performance test cases like throughput
> and latency and making sure it works on SW and HW driver.
>
> I see the following segfault problem when rte_event_schedule() invoked on
> multiple core currently. Is it expected?
Yes, pretty much.
/Bruce
* [dpdk-dev] [PATCH v2 0/3] next-eventdev: evendev pipeline sample app
2017-04-21 9:51 ` [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added " Harry van Haaren
2017-05-10 14:12 ` Jerin Jacob
2017-05-17 18:03 ` Jerin Jacob
@ 2017-06-26 14:41 ` David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 1/3] examples/eventdev_pipeline: added " David Hunt
` (2 more replies)
2 siblings, 3 replies; 64+ messages in thread
From: David Hunt @ 2017-06-26 14:41 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
This patchset introduces a sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
Changes in patch v2:
* Re-work based on comments on mailing list. No major functional changes.
* Checkpatch cleanup of a couple of typos
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters like number of stages, number of
worker cores, which cores are assigned to specific tasks, and work-
cycles per-stage in the pipeline can be configured.
Documentation for eventdev is added for the programmers guide and
sample app user guide, providing sample commands to run the app with,
and expected output.
The sample app is presented here as an RFC to the next-eventdev tree
to work towards having eventdev PMD generic sample applications.
[1/3] examples/eventdev_pipeline: added sample app
[2/3] doc: add eventdev pipeline to sample app ug
[3/3] doc: add eventdev library to programmers guide
* [dpdk-dev] [PATCH v2 1/3] examples/eventdev_pipeline: added sample app
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 0/3] next-eventdev: evendev pipeline " David Hunt
@ 2017-06-26 14:41 ` David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 2/3] doc: add eventdev pipeline to sample app ug David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-06-26 14:41 UTC (permalink / raw)
To: dev
Cc: jerin.jacob, harry.van.haaren, Gage Eads, Bruce Richardson, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure numbers of stages,
cycles of work at each stage, type of scheduling, number of
worker cores, queue depths, etc. For a full explanation,
please refer to the documentation.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
examples/Makefile | 2 +
examples/eventdev_pipeline/Makefile | 49 ++
examples/eventdev_pipeline/main.c | 975 ++++++++++++++++++++++++++++++++++++
3 files changed, 1026 insertions(+)
create mode 100644 examples/eventdev_pipeline/Makefile
create mode 100644 examples/eventdev_pipeline/main.c
diff --git a/examples/Makefile b/examples/Makefile
index 6298626..a6dcc2b 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
endif
endif
+DIRS-y += eventdev_pipeline
+
include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
new file mode 100644
index 0000000..6e5350a
--- /dev/null
+++ b/examples/eventdev_pipeline/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline/main.c b/examples/eventdev_pipeline/main.c
new file mode 100644
index 0000000..1f4f283
--- /dev/null
+++ b/examples/eventdev_pipeline/main.c
@@ -0,0 +1,975 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+static unsigned int active_cores;
+static unsigned int num_workers;
+static unsigned long num_packets = (1L << 25); /* do ~32M packets */
+static unsigned int num_fids = 512;
+static unsigned int num_priorities = 1;
+static unsigned int num_stages = 1;
+static unsigned int worker_cq_depth = 16;
+static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
+static int16_t qid[MAX_NUM_STAGES] = {-1};
+static int worker_cycles;
+static int enable_queue_priorities;
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned int num_nic_ports;
+};
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+};
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+};
+
+static unsigned int *enqueue_cnt;
+static unsigned int *dequeue_cnt;
+
+static volatile int done;
+static volatile int prod_stop;
+static int quiet;
+static int dump_dev;
+static int dump_dev_signal;
+
+static uint32_t rx_lock;
+static uint32_t tx_lock;
+static uint32_t sched_lock;
+static bool rx_single;
+static bool tx_single;
+static bool sched_single;
+
+static unsigned int rx_core[MAX_NUM_CORE];
+static unsigned int tx_core[MAX_NUM_CORE];
+static unsigned int sched_core[MAX_NUM_CORE];
+static unsigned int worker_core[MAX_NUM_CORE];
+
+static bool
+core_in_use(unsigned int lcore_id) {
+ return (rx_core[lcore_id] || sched_core[lcore_id] ||
+ tx_core[lcore_id] || worker_core[lcore_id]);
+}
+
+static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+
+static void
+rte_eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned int _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t npackets;
+ static uint64_t received;
+ static uint64_t received_printed;
+ static uint64_t time_printed;
+ static uint64_t start_time;
+ unsigned int i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ if (!npackets)
+ npackets = num_packets;
+
+ do {
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ time_printed = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ if (!quiet && received >= received_printed + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t delta_cycles = now - start_time;
+ const uint64_t elapsed_ms = delta_cycles / freq_khz;
+ const uint64_t interval_ms =
+ (now - time_printed) / freq_khz;
+
+ uint64_t rx_noprint = received - received_printed;
+ printf("# consumer RX=%"PRIu64", time %"PRIu64
+ "ms, avg %.3f mpps [current %.3f mpps]\n",
+ received, elapsed_ms,
+ (received) / (elapsed_ms * 1000.0),
+ rx_noprint / (interval_ms * 1000.0));
+ received_printed = received;
+ time_printed = now;
+ }
+
+ dequeue_cnt[0] += n;
+
+ if (num_packets > 0 && npackets > 0) {
+ npackets -= n;
+ if (npackets == 0 || npackets > num_packets)
+ done = 1;
+ }
+ } while (0);
+
+ return 0;
+}
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE];
+ struct rte_event ev[BATCH_SIZE];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_CPU;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+ enqueue_cnt[0] += nb_tx;
+
+ if (unlikely(prod_stop))
+ done = 1;
+
+ return 0;
+}
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+{
+ if (rx_core[lcore_id] && (rx_single ||
+ rte_atomic32_cmpset(&rx_lock, 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
+ }
+
+ if (sched_core[lcore_id] && (sched_single ||
+ rte_atomic32_cmpset(&sched_lock, 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
+ }
+
+ if (tx_core[lcore_id] && (tx_single ||
+ rte_atomic32_cmpset(&tx_lock, 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
+ }
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned int lcore_id = rte_lcore_id();
+
+ while (!done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+ struct rte_mbuf *m = events[i].mbuf;
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == qid[0])
+ events[i].flow_id = m->hash.rss % num_fids;
+
+ events[i].queue_id = next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(&eth->d_addr, &addr);
+ ether_addr_copy(&eth->s_addr, &eth->d_addr);
+ ether_addr_copy(&addr, &eth->s_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + worker_cycles)
+ rte_pause();
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned int count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove leading and trailing blank characters.
+ * Remove the 0x/0X prefix if present.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_demo [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 512)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
+ " -w, --workers=core mask Run worker on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e, --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c, --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W, --work-cycles=N Cycles of work per packet at each stage (default 0)\n"
+ " -P, --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse cli options*/
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ enable_queue_priorities = 1;
+ break;
+ case 'o':
+ queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ quiet = 1;
+ break;
+ case 'D':
+ dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("Core part of pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ rx_core[i] = !!(rx_lcore_mask & (1UL << i));
+ tx_core[i] = !!(tx_lcore_mask & (1UL << i));
+ sched_core[i] = !!(sched_lcore_mask & (1UL << i));
+ worker_core[i] = !!(worker_lcore_mask & (1UL << i));
+
+ if (worker_core[i])
+ num_workers++;
+ if (core_in_use(i))
+ active_cores++;
+ }
+}
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned int)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned int num_ports)
+{
+ uint8_t portid;
+ unsigned int i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ tx_buf[i] = rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(tx_buf[i],
+ rte_eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* +1 stages is for a SINGLE_LINK TX stage */
+ const uint8_t nb_queues = num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = num_workers + 2;
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ const struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ const struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned int i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0)
+ printf("%d: Error configuring device\n", __LINE__);
+
+ /* Q creation - one load balanced per pipeline stage*/
+ printf(" Stages:\n");
+ for (i = 0; i < num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ qid[i] = i;
+ next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ const struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ enqueue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
+ sizeof(enqueue_cnt[0]), 0);
+ dequeue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
+ sizeof(dequeue_cnt[0]), 0);
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
+
+static void
+signal_handler(int signum)
+{
+ if (done || prod_stop)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned int num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ /* Parse cli options*/
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned int cores_needed = active_cores;
+
+ if (!quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", num_workers);
+ printf("\tpackets: %lu\n", num_packets);
+ printf("\tpriorities: %u\n", num_priorities);
+ printf("\tQueue-prio: %u\n", enable_queue_priorities);
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned int ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No eventdev devices found. Pass in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0\n");
+
+ worker_data = rte_calloc(0, num_workers, sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!rx_core[lcore_id] && !worker_core[lcore_id] &&
+ !tx_core[lcore_id] && !sched_core[lcore_id])
+ continue;
+
+ if (rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err) {
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ continue;
+ }
+ if (worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!quiet) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < num_workers; i++) {
+ char statname[64];
+ snprintf(statname, sizeof(statname), "port_%u_rx",
+ worker_data[i].port_id);
+ pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
+ dev_id, statname, NULL);
+ tot_pkts += pkts_per_wkr[i];
+ }
+ for (i = 0; i < num_workers; i++) {
+ float pc = pkts_per_wkr[i] * 100 /
+ ((float)tot_pkts);
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 2/3] doc: add eventdev pipeline to sample app ug
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-06-26 14:41 ` David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-06-26 14:41 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
doc/guides/sample_app_ug/eventdev_pipeline.rst | 188 +++++++++++++++++++++++++
doc/guides/sample_app_ug/index.rst | 1 +
2 files changed, 189 insertions(+)
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline.rst
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
new file mode 100644
index 0000000..bb09224
--- /dev/null
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -0,0 +1,188 @@
+
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Eventdev Pipeline Sample Application
+====================================
+
+The eventdev pipeline sample application demonstrates usage of the eventdev
+API. It shows how an application can configure a pipeline and assign a set of
+worker cores to perform the processing required.
+
+The application has a range of command line arguments allowing it to be
+configured for various numbers of worker cores, stages, queue depths and
+cycles per stage of work. This is useful for performance testing, as well as
+for quickly testing a particular pipeline configuration.
+
+
+Compiling the Application
+-------------------------
+
+To compile the application:
+
+#. Go to the sample application directory:
+
+ .. code-block:: console
+
 export RTE_SDK=/path/to/rte_sdk
 cd ${RTE_SDK}/examples/eventdev_pipeline
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application has many command line options, allowing the eventdev PMD to
+be specified along with a number of attributes of the processing pipeline.
+
+An example eventdev pipeline running with the software eventdev PMD using
+these settings is shown below:
+
+ * ``-r1``: core mask 0x1 for RX
+ * ``-t1``: core mask 0x1 for TX
+ * ``-e4``: core mask 0x4 for the software scheduler
+ * ``-w FF00``: core mask for worker cores, 8 cores from 8th to 16th
+ * ``-s4``: 4 atomic stages
+ * ``-n0``: process infinite packets (run forever)
+ * ``-c32``: worker dequeue depth of 32
+ * ``-W1000``: do 1000 cycles of work per packet in each stage
+ * ``-D``: dump statistics on exit
+
+.. code-block:: console
+
+ ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+
+The application has some sanity checking built-in, so if a function (e.g. the
+RX core) doesn't have a CPU core mask assigned, the application will print an
+error message:
+
+.. code-block:: console
+
+ Core part of pipeline was not assigned any cores. This will stall the
+ pipeline, please check core masks (use -h for details on setting core masks):
+ rx: 0
+ tx: 1
+
+Configuration of the eventdev is covered in detail in the Programmer's Guide,
+see the Event Device Library section.
+
+
+Observing the Application
+-------------------------
+
+At runtime the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit the
+worker statistics are printed, along with a full dump of the PMD statistics if
+required. The following sections show sample output for each of the output
+types.
+
+Configuration
+~~~~~~~~~~~~~
+
+This provides an overview of the pipeline, the scheduling type at each stage,
+and parameters such as how many flows are used and which eventdev PMD is in
+use. See the following sample output for details:
+
+.. code-block:: console
+
+ Config:
+ ports: 2
+ workers: 8
+ packets: 0
+ priorities: 1
+ Queue-prio: 0
+ qid0 type: atomic
+ Cores available: 44
+ Cores used: 10
+ Eventdev 0: event_sw
+ Stages:
+ Stage 0, Type Atomic Priority = 128
+ Stage 1, Type Atomic Priority = 128
+ Stage 2, Type Atomic Priority = 128
+ Stage 3, Type Atomic Priority = 128
+
+Runtime
+~~~~~~~
+
+At runtime, the statistics of the consumer are printed, stating the number of
+packets received, runtime in milliseconds, average mpps, and current mpps.
+
+.. code-block:: console
+
+ # consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
+
+Shutdown
+~~~~~~~~
+
+At shutdown, the application prints the number of packets received and
+transmitted, and an overview of the distribution of work across worker cores.
+
+.. code-block:: console
+
+ Signal 2 received, preparing to exit...
+ worker 12 thread done. RX=4966581 TX=4966581
+ worker 13 thread done. RX=4963329 TX=4963329
+ worker 14 thread done. RX=4953614 TX=4953614
+ worker 0 thread done. RX=0 TX=0
+ worker 11 thread done. RX=4970549 TX=4970549
+ worker 10 thread done. RX=4986391 TX=4986391
+ worker 9 thread done. RX=4970528 TX=4970528
+ worker 15 thread done. RX=4974087 TX=4974087
+ worker 8 thread done. RX=4979908 TX=4979908
+ worker 2 thread done. RX=0 TX=0
+
+ Port Workload distribution:
+ worker 0 : 12.5 % (4979876 pkts)
+ worker 1 : 12.5 % (4970497 pkts)
+ worker 2 : 12.5 % (4986359 pkts)
+ worker 3 : 12.5 % (4970517 pkts)
+ worker 4 : 12.5 % (4966566 pkts)
+ worker 5 : 12.5 % (4963297 pkts)
+ worker 6 : 12.5 % (4953598 pkts)
+ worker 7 : 12.5 % (4974055 pkts)
+
+To get a full dump of the state of the eventdev PMD, pass the ``-D`` flag to
+this application. When the app is terminated using ``Ctrl+C``, the
+``rte_event_dev_dump()`` function is called, resulting in a dump of the
+statistics that the PMD provides. The statistics provided depend on the PMD
+used; see the Event Device Drivers section for a list of eventdev PMDs.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..11f5781 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline
dist_app
vm_power_management
tep_termination
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v2 3/3] doc: add eventdev library to programmers guide
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 1/3] examples/eventdev_pipeline: added " David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 2/3] doc: add eventdev pipeline to sample app ug David Hunt
@ 2017-06-26 14:41 ` David Hunt
2 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-06-26 14:41 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walkthrough of a simple two-stage atomic pipeline
provides the reader with a step-by-step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
doc/guides/prog_guide/eventdev.rst | 365 ++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1360 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..4f6088e
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,365 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture,
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader. The eventdev framework is provided as a DPDK library, so applications
+can use it if they wish, but are not required to.
+
+The goal of this library is to enable applications to build processing
+pipelines where the load balancing and scheduling is handled by the eventdev.
+Step-by-step instructions for the eventdev design are available in the `API
+Walktrough`_ section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct, which contains a
+payload and metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16 byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, eg RTE_EVENT_TYPE_ETHDEV or CPU.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+ the same event_type (see above).
+* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
+ eventdev about the status of the event - valid values are NEW, FORWARD or
+ RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+ on this event, valid values are the RTE_SCHED_TYPE_ORDERED, ATOMIC and
+ PARALLEL.
+* ``queue_id`` - The identifier for the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see RTE_EVENT_DEV_PRIORITY.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for payload, allowing flexibility in what
+the actual event being scheduled is. The payload is a union of the following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three items in a union occupy the same 64 bits at the end of the
+rte_event structure. The application can utilize the 64 bits directly by
+accessing the u64 variable, while the event_ptr and mbuf are provided as
+convenience variables. For example the mbuf pointer in the union can be used
+to schedule a DPDK packet.
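+
+For example, an application might carry an mbuf through the eventdev as the
+event payload, and read it back after scheduling. A minimal sketch (assuming
+``m`` is a valid ``struct rte_mbuf *``):
+
+.. code-block:: c
+
+    struct rte_event ev;
+    ev.mbuf = m;                    /* store the packet as the payload */
+    /* ... event is enqueued, scheduled and later dequeued ... */
+    struct rte_mbuf *pkt = ev.mbuf; /* read the payload back */
+    uint64_t raw = ev.u64;          /* the same 64 bits, viewed as an integer */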
+
+Queues
+~~~~~~
+
+A queue is a logical "stage" of a packet processing graph, where each stage
+has a specified scheduling type. The application configures each queue for a
+specific type of scheduling, and just enqueues all events to the eventdev.
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must ensure that on egress the packets are returned to
+ingress order. Parallel allows sending all flows to all CPU cores, without any
+re-ordering guarantees.
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can only be
+linked to a single port (see below for port and queue linking details).
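+
+As a sketch, a single-link queue configuration might look as follows (the
+field values here are illustrative only):
+
+.. code-block:: c
+
+    struct rte_event_queue_conf single_link_conf = {
+        .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+        .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+    };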
+
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walktrough
+--------------
+
+This section will introduce the reader to the eventdev API, showing how to
+create and configure an eventdev and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walkthrough:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high level overview of the setup steps is:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues
+and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link, while the
+6 ports consist of 4 workers, 1 RX and 1 TX.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walkthrough assumes that dev_id is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walkthrough assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
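+
+A sketch of setting up all three queues in this layout, assuming the
+``atomic_conf`` from above plus an analogous ``single_link_conf`` (a
+hypothetical struct using ``RTE_EVENT_QUEUE_CFG_SINGLE_LINK``, not shown
+here):
+
+.. code-block:: c
+
+    for (int q = 0; q < 3; q++) {
+        const struct rte_event_queue_conf *conf =
+                (q == 2) ? &single_link_conf : &atomic_conf;
+        int err = rte_event_queue_setup(dev_id, q, conf);
+        if (err < 0)
+            rte_panic("queue %d setup failed: %d\n", q, err);
+    }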
+
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once queues are set up successfully, create the ports as required. Each port
+should be set up with its corresponding port_conf type: worker for worker
+cores, rx and tx for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ int err = rte_event_port_setup(dev_id, port_id, &CORE_FUNCTION_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
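+
+With that layout, port setup can be expressed as a loop selecting the
+appropriate configuration per port (a sketch, assuming the ``rx_conf``,
+``worker_conf`` and ``tx_conf`` structs above):
+
+.. code-block:: c
+
+    for (int p = 0; p < 6; p++) {
+        const struct rte_event_port_conf *conf;
+        if (p == 0)
+            conf = &rx_conf;        /* port 0: RX core */
+        else if (p == 5)
+            conf = &tx_conf;        /* port 5: TX core */
+        else
+            conf = &worker_conf;    /* ports 1-4: workers */
+        int err = rte_event_port_setup(dev_id, p, conf);
+        if (err < 0)
+            rte_panic("port %d setup failed: %d\n", p, err);
+    }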
+
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes
+input from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t port_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for(int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing
+events. Note that all queues must be linked before the instance is started:
+if any queue is left unlinked, enqueuing to that queue will back-pressure the
+application, which eventually stalls due to no space in the eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received from rte_eth_rx_burst() as
+normal. The following code shows how those packets can be enqueued into the
+eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_CPU;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for(i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which
+should indicate to the worker what workload needs to be performed on the
+event. Once done, the worker can update ``event.queue_id`` to a new value, to
+send the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, events, nb_rx);
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when the packet is ready for egress or needs to be dropped, we need
+to inform the eventdev that the packet is no longer being handled by the
+application. This can be done by calling dequeue() or dequeue_burst(), which
+indicates that the previous burst of packets is no longer in use by the
+application.
+
+.. code-block:: c
+
+ struct rte_event events[BATCH_SIZE];
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 : now tx or use the packets */
+ n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 is now no longer valid to use in the application, as
+ the eventdev has dropped any locks or released re-ordered packets */
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, using either a run-to-completion or a pipeline processing model.
+The queues and ports abstract the logical functionality of an eventdev,
+providing the application with a generic method to schedule events. With the
+flexible PMD infrastructure, applications benefit from improvements to
+existing eventdevs and from the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-05-10 14:12 ` Jerin Jacob
2017-05-10 16:40 ` Eads, Gage
2017-05-10 20:16 ` Eads, Gage
@ 2017-06-26 14:46 ` Hunt, David
2017-06-27 9:35 ` Jerin Jacob
2 siblings, 1 reply; 64+ messages in thread
From: Hunt, David @ 2017-06-26 14:46 UTC (permalink / raw)
To: Jerin Jacob, Harry van Haaren; +Cc: dev, Gage Eads, Bruce Richardson
Hi Jerin,
I'm assisting Harry on the sample app, and have just pushed up a V2
patch based on your feedback. I've addressed most of your suggestions;
comments below. There may still be a couple of outstanding questions that
need further discussion.
Regards,
Dave
On 10/5/2017 3:12 PM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Fri, 21 Apr 2017 10:51:37 +0100
>> From: Harry van Haaren <harry.van.haaren@intel.com>
>> To: dev@dpdk.org
>> CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
>> <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
>> Richardson <bruce.richardson@intel.com>
>> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>> X-Mailer: git-send-email 2.7.4
>>
>> This commit adds a sample app for the eventdev library.
>> The app has been tested with DPDK 17.05-rc2, hence this
>> release (or later) is recommended.
>>
>> The sample app showcases a pipeline processing use-case,
>> with event scheduling and processing defined per stage.
>> The application receives traffic as normal, with each
>> packet traversing the pipeline. Once the packet has
>> been processed by each of the pipeline stages, it is
>> transmitted again.
>>
>> The app provides a framework to utilize cores for a single
>> role or multiple roles. Examples of roles are the RX core,
>> TX core, Scheduling core (in the case of the event/sw PMD),
>> and worker cores.
>>
>> Various flags are available to configure numbers of stages,
>> cycles of work at each stage, type of scheduling, number of
>> worker cores, queue depths etc. For a full explanation,
>> please refer to the documentation.
>>
>> Signed-off-by: Gage Eads <gage.eads@intel.com>
>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
>
> Thanks for the example application to share the SW view.
> I could make it run on HW after some tweaking (not optimized though)
>
> [...]
>> +#define MAX_NUM_STAGES 8
>> +#define BATCH_SIZE 16
>> +#define MAX_NUM_CORE 64
>
> How about RTE_MAX_LCORE?
Core usage in the sample app is held in a uint64_t. Adding arrays would
be possible, but I feel the extra effort would not give much benefit.
I've left it as is for the moment, unless you see a strong
requirement to go beyond 64 cores?
>
>> +
>> +static unsigned int active_cores;
>> +static unsigned int num_workers;
>> +static unsigned long num_packets = (1L << 25); /* do ~32M packets */
>> +static unsigned int num_fids = 512;
>> +static unsigned int num_priorities = 1;
>
> looks like its not used.
Yes, Removed.
>
>> +static unsigned int num_stages = 1;
>> +static unsigned int worker_cq_depth = 16;
>> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
>> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
>> +static int16_t qid[MAX_NUM_STAGES] = {-1};
>
> Moving all fastpath related variables under a structure with cache
> aligned will help.
I tried a few different combinations of this and saw no gains, some
losses. So I will leave it as is for the moment, if that's OK.
>
>> +static int worker_cycles;
>> +static int enable_queue_priorities;
>> +
>> +struct prod_data {
>> + uint8_t dev_id;
>> + uint8_t port_id;
>> + int32_t qid;
>> + unsigned num_nic_ports;
>> +};
Yes, I saw a percent or two of gain when this plus the following two
data structs were cache aligned.
>
> cache aligned ?
>
>> +
>> +struct cons_data {
>> + uint8_t dev_id;
>> + uint8_t port_id;
>> +};
>> +
>
> cache aligned ?
Yes, see comment above
>
>> +static struct prod_data prod_data;
>> +static struct cons_data cons_data;
>> +
>> +struct worker_data {
>> + uint8_t dev_id;
>> + uint8_t port_id;
>> +};
>
> cache aligned ?
Yes, see comment above
>
>> +
>> +static unsigned *enqueue_cnt;
>> +static unsigned *dequeue_cnt;
>> +
>> +static volatile int done;
>> +static volatile int prod_stop;
>
> No one is updating prod_stop.
Old var, removed.
>
>> +static int quiet;
>> +static int dump_dev;
>> +static int dump_dev_signal;
>> +
>> +static uint32_t rx_lock;
>> +static uint32_t tx_lock;
>> +static uint32_t sched_lock;
>> +static bool rx_single;
>> +static bool tx_single;
>> +static bool sched_single;
>> +
>> +static unsigned rx_core[MAX_NUM_CORE];
>> +static unsigned tx_core[MAX_NUM_CORE];
>> +static unsigned sched_core[MAX_NUM_CORE];
>> +static unsigned worker_core[MAX_NUM_CORE];
>> +
>> +static bool
>> +core_in_use(unsigned lcore_id) {
>> + return (rx_core[lcore_id] || sched_core[lcore_id] ||
>> + tx_core[lcore_id] || worker_core[lcore_id]);
>> +}
>> +
>> +static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
>> +
>> +static void
>> +rte_eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
>> + void *userdata)
>
> IMO, it is better not to use rte_eth_* for application functions.
Sure, removed the 'rte_' part of the function name.
>
>> +{
>> + int port_id = (uintptr_t) userdata;
>> + unsigned _sent = 0;
>> +
>> + do {
>> + /* Note: hard-coded TX queue */
>> + _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
>> + unsent - _sent);
>> + } while (_sent != unsent);
>> +}
>> +
>> +static int
>> +consumer(void)
>> +{
>> + const uint64_t freq_khz = rte_get_timer_hz() / 1000;
>> + struct rte_event packets[BATCH_SIZE];
>> +
>> + static uint64_t npackets;
>> + static uint64_t received;
>> + static uint64_t received_printed;
>> + static uint64_t time_printed;
>> + static uint64_t start_time;
>> + unsigned i, j;
>> + uint8_t dev_id = cons_data.dev_id;
>> + uint8_t port_id = cons_data.port_id;
>> +
>> + if (!npackets)
>> + npackets = num_packets;
>> +
>> + do {
>> + uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
>> + packets, RTE_DIM(packets), 0);
>
> const uint16_t n =
sure.
>
>> +
>> + if (n == 0) {
>> + for (j = 0; j < rte_eth_dev_count(); j++)
>> + rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
>> + return 0;
>> + }
>> + if (start_time == 0)
>> + time_printed = start_time = rte_get_timer_cycles();
>> +
>> + received += n;
>> + for (i = 0; i < n; i++) {
>> + uint8_t outport = packets[i].mbuf->port;
>> + rte_eth_tx_buffer(outport, 0, tx_buf[outport],
>> + packets[i].mbuf);
>> + }
>> +
>> + if (!quiet && received >= received_printed + (1<<22)) {
>> + const uint64_t now = rte_get_timer_cycles();
>> + const uint64_t delta_cycles = now - start_time;
>> + const uint64_t elapsed_ms = delta_cycles / freq_khz;
>> + const uint64_t interval_ms =
>> + (now - time_printed) / freq_khz;
>> +
>> + uint64_t rx_noprint = received - received_printed;
>> + printf("# consumer RX=%"PRIu64", time %"PRIu64
>> + "ms, avg %.3f mpps [current %.3f mpps]\n",
>> + received, elapsed_ms,
>> + (received) / (elapsed_ms * 1000.0),
>> + rx_noprint / (interval_ms * 1000.0));
>> + received_printed = received;
>> + time_printed = now;
>> + }
>> +
>> + dequeue_cnt[0] += n;
>> +
>> + if (num_packets > 0 && npackets > 0) {
>> + npackets -= n;
>> + if (npackets == 0 || npackets > num_packets)
>> + done = 1;
>> + }
>
> Looks like very complicated logic. I think we can simplify it.
I've simplified this.
>
>> + } while (0);
>
> do while(0); really required here?
Removed.
>
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +producer(void)
>> +{
>> + static uint8_t eth_port;
>> + struct rte_mbuf *mbufs[BATCH_SIZE];
>> + struct rte_event ev[BATCH_SIZE];
>> + uint32_t i, num_ports = prod_data.num_nic_ports;
>> + int32_t qid = prod_data.qid;
>> + uint8_t dev_id = prod_data.dev_id;
>> + uint8_t port_id = prod_data.port_id;
>> + uint32_t prio_idx = 0;
>> +
>> + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
>> + if (++eth_port == num_ports)
>> + eth_port = 0;
>> + if (nb_rx == 0) {
>> + rte_pause();
>> + return 0;
>> + }
>> +
>> + for (i = 0; i < nb_rx; i++) {
>> + ev[i].flow_id = mbufs[i]->hash.rss;
>
> prefetching the buff[i+1] may help here?
I tried it, but it didn't make much difference.
>
>> + ev[i].op = RTE_EVENT_OP_NEW;
>> + ev[i].sched_type = queue_type;
>
> The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED.
> So, we cannot assign .sched_type as queue_type.
>
> I think, one option to avoid the translation in the application is to
> - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES, RTE_EVENT_QUEUE_CFG_*_ONLY
> - Introduce a new RTE_EVENT_DEV_CAP_ to denote the
> RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
> - add sched_type in struct rte_event_queue_conf. If the capability flag is
> not set, then the implementation takes the sched_type value for the queue.
>
> Any thoughts?
Not sure here, would it be ok for the moment, and we can work on a patch
in the future?
>
>
>> + ev[i].queue_id = qid;
>> + ev[i].event_type = RTE_EVENT_TYPE_CPU;
>
> IMO, RTE_EVENT_TYPE_ETHERNET is the better option here as it is
> producing the Ethernet packets/events.
Changed to RTE_EVENT_TYPE_ETHDEV.
>
>> + ev[i].sub_event_type = 0;
>> + ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
>> + ev[i].mbuf = mbufs[i];
>> + RTE_SET_USED(prio_idx);
>> + }
>> +
>> + const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
>
> For the producer pattern, i.e. a burst of RTE_EVENT_OP_NEW, OcteonTX can
> do a burst operation, unlike the FORWARD case (which is one event at a
> time). Earlier, I thought I could abstract the producer pattern in the
> PMD, but it looks like we are going with an application-driven producer
> model based on the latest RFC. So I think
> we can add one flag to rte_event_enqueue_burst to denote that all the
> events are of type RTE_EVENT_OP_NEW as a hint. The SW driver can ignore this.
>
> I can send a patch for the same.
>
> Any thoughts?
I think this comment is closed now, as your patch is upstreamed, afaik?
>
>
>> + if (nb_tx != nb_rx) {
>> + for (i = nb_tx; i < nb_rx; i++)
>> + rte_pktmbuf_free(mbufs[i]);
>> + }
>> + enqueue_cnt[0] += nb_tx;
>> +
>> + if (unlikely(prod_stop))
>
> I think no one is updating prod_stop
Removed.
>
>> + done = 1;
>> +
>> + return 0;
>> +}
>> +
>> +static inline void
>> +schedule_devices(uint8_t dev_id, unsigned lcore_id)
>> +{
>> + if (rx_core[lcore_id] && (rx_single ||
>> + rte_atomic32_cmpset(&rx_lock, 0, 1))) {
>
> This pattern (rte_atomic32_cmpset) means the application can inject only
> "one core" worth of packets. Not enough for low-end cores. Maybe we need
> multiple producer options. I think the new RFC is addressing it.
OK, will leave this to the RFC.
>
>> + producer();
>> + rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
>> + }
>> +
>> + if (sched_core[lcore_id] && (sched_single ||
>> + rte_atomic32_cmpset(&sched_lock, 0, 1))) {
>> + rte_event_schedule(dev_id);
>> + if (dump_dev_signal) {
>> + rte_event_dev_dump(0, stdout);
>> + dump_dev_signal = 0;
>> + }
>> + rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
>> + }
>
> There is a lot of unwanted code if RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED
> is set.
>
> I think we can make the code common, compile-time aware, and launch
> runtime workers based on the flag,
> i.e
> rte_eal_remote_launch(worker_x, &worker_data[worker_idx], lcore_id);
> rte_eal_remote_launch(worker_y, &worker_data[worker_idx], lcore_id);
>
> Maybe we can improve this after the initial version.
Yes, we can clean up after initial version.
>
>> +
>> + if (tx_core[lcore_id] && (tx_single ||
>> + rte_atomic32_cmpset(&tx_lock, 0, 1))) {
>> + consumer();
>
> Does consumer() need to come in this pattern? I am thinking that
> if events are from the last stage, we call consumer() in worker().
>
> I think the above scheme works better when the _same_ worker code needs
> to run in the cases where
> 1) the ethdev HW is capable of enqueuing packets to the same txq from
> multiple threads
> 2) the ethdev is not capable of doing so.
>
> So, the above cases can be addressed at configuration time, where we link
> the queues to ports:
> case 1) Link all workers to last queue
> case 2) Link only worker to last queue
>
> and keeping the common worker code.
>
> The HW implementation has functional and performance issues if "two"
> ports are assigned to one lcore for dequeue. The above scheme fixes that
> problem too.
Can we have a bit more discussion on this item? Is this needed for this
sample app, or can we perhaps work on a patch for this later? Harry?
>
>> + rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
>> + }
>> +}
>> +
>> +static int
>> +worker(void *arg)
>> +{
>> + struct rte_event events[BATCH_SIZE];
>> +
>> + struct worker_data *data = (struct worker_data *)arg;
>> + uint8_t dev_id = data->dev_id;
>> + uint8_t port_id = data->port_id;
>> + size_t sent = 0, received = 0;
>> + unsigned lcore_id = rte_lcore_id();
>> +
>> + while (!done) {
>> + uint16_t i;
>> +
>> + schedule_devices(dev_id, lcore_id);
>> +
>> + if (!worker_core[lcore_id]) {
>> + rte_pause();
>> + continue;
>> + }
>> +
>> + uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
>> + events, RTE_DIM(events), 0);
>> +
>> + if (nb_rx == 0) {
>> + rte_pause();
>> + continue;
>> + }
>> + received += nb_rx;
>> +
>> + for (i = 0; i < nb_rx; i++) {
>> + struct ether_hdr *eth;
>> + struct ether_addr addr;
>> + struct rte_mbuf *m = events[i].mbuf;
>> +
>> + /* The first worker stage does classification */
>> + if (events[i].queue_id == qid[0])
>> + events[i].flow_id = m->hash.rss % num_fids;
>
> Not sure why we need to do this (shrinking the flows) in worker() in a
> queue-based pipeline.
> If a PMD has any specific requirement on num_fids, I think we
> can move this to the configuration stage, or the PMD can choose an
> optimum fid internally to
> avoid the modulus-operation tax in the fast path in all PMDs.
>
> Does struct rte_event_queue_conf.nb_atomic_flows help here?
In my tests the modulus makes very little difference in the throughput.
And I think it's good to have a way of varying the number of flows for
testing different scenarios, even if it's not the most performant.
>
>> +
>> + events[i].queue_id = next_qid[events[i].queue_id];
>> + events[i].op = RTE_EVENT_OP_FORWARD;
>
> missing events[i].sched_type.HW PMD does not work with this.
> I think, we can use similar scheme like next_qid for next_sched_type.
Done. added events[i].sched_type = queue_type.
>
>> +
>> + /* change mac addresses on packet (to use mbuf data) */
>> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
>> + ether_addr_copy(ð->d_addr, &addr);
>> + ether_addr_copy(ð->s_addr, ð->d_addr);
>> + ether_addr_copy(&addr, ð->s_addr);
>
> IMO, we can make the packet processing code a "static inline"
> function so
> different worker types can reuse it.
Done. moved out to a work() function.
>
>> +
>> + /* do a number of cycles of work per packet */
>> + volatile uint64_t start_tsc = rte_rdtsc();
>> + while (rte_rdtsc() < start_tsc + worker_cycles)
>> + rte_pause();
>
> Ditto.
Done. moved out to a work() function.
>
> I think all worker-specific variables like "worker_cycles" can be moved
> into
> one structure and used from there.
>
>> + }
>> + uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
>> + events, nb_rx);
>> + while (nb_tx < nb_rx && !done)
>> + nb_tx += rte_event_enqueue_burst(dev_id, port_id,
>> + events + nb_tx,
>> + nb_rx - nb_tx);
>> + sent += nb_tx;
>> + }
>> +
>> + if (!quiet)
>> + printf(" worker %u thread done. RX=%zu TX=%zu\n",
>> + rte_lcore_id(), received, sent);
>> +
>> + return 0;
>> +}
>> +
>> +/*
>> + * Parse the coremask given as argument (hexadecimal string) and fill
>> + * the global configuration (core role and core count) with the parsed
>> + * value.
>> + */
>> +static int xdigit2val(unsigned char c)
>
> multiple instance of "xdigit2val" in DPDK repo. May be we can push this
> as common code.
Sure, that's something we can look at in a separate patch, now that it's
being used more and more.
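A sketch of what the shared helper (quoted below) might look like as common code, with the coremask parser built on top of it; the names and eventual placement are assumptions, to be decided in that separate patch.

```c
#include <ctype.h>
#include <stdint.h>

/* Hex digit to value; assumes the caller has validated with isxdigit(). */
static int
xdigit2val(unsigned char c)
{
	if (isdigit(c))
		return c - '0';
	if (isupper(c))
		return c - 'A' + 10;
	return c - 'a' + 10;
}

/* Parse a hexadecimal coremask string (optionally "0x"-prefixed) into a
 * 64-bit mask; returns 0 on any invalid character. */
static uint64_t
parse_coremask(const char *mask)
{
	uint64_t cm = 0;

	if (mask[0] == '0' && (mask[1] == 'x' || mask[1] == 'X'))
		mask += 2;
	for (; *mask != '\0'; mask++) {
		if (!isxdigit((unsigned char)*mask))
			return 0; /* invalid mask */
		cm = (cm << 4) | (uint64_t)xdigit2val((unsigned char)*mask);
	}
	return cm;
}
```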
>
>> +{
>> + int val;
>> +
>> + if (isdigit(c))
>> + val = c - '0';
>> + else if (isupper(c))
>> + val = c - 'A' + 10;
>> + else
>> + val = c - 'a' + 10;
>> + return val;
>> +}
>> +
>> +
>> +static void
>> +usage(void)
>> +{
>> + const char *usage_str =
>> + " Usage: eventdev_demo [options]\n"
>> + " Options:\n"
>> + " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
>> + " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
>
> I think this parameter now affects the application fast path code. I
> think it should be an eventdev configuration parameter.
See above comment on num_fids.
>
>> + " -s, --num_stages=N Use N atomic stages (default 1)\n"
>> + " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
>> + " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
>> + " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
>> + " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
>> + " -c --cq-depth=N Worker CQ depth (default 16)\n"
>> + " -W --work-cycles=N Worker cycles (default 0)\n"
>> + " -P --queue-priority Enable scheduler queue prioritization\n"
>> + " -o, --ordered Use ordered scheduling\n"
>> + " -p, --parallel Use parallel scheduling\n"
>
> IMO, all stages being "parallel" or "ordered" or "atomic" is one mode of
> operation. It is valid to have any combination. We need to express that
> in the command line.
> example:
> 3 stage with
> O->A->P
How about we add an option that specifies the mode of operation for each
stage in a string? Maybe have a '-m' option (modes), e.g. '-m appo' for 4
stages with atomic, parallel, parallel, ordered. Or maybe reuse your
test-eventdev parameter style?
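A rough sketch of parsing such a hypothetical '-m' string; the enum values below are local stand-ins for illustration, not the real RTE_SCHED_TYPE_* constants.

```c
#include <stdint.h>

/* Local stand-ins for the per-stage schedule types. */
enum stage_sched {
	SCHED_ORDERED,
	SCHED_ATOMIC,
	SCHED_PARALLEL,
};

/* Parse a mode string like "oap" into per-stage schedule types.
 * Returns the number of stages parsed, or -1 on an unknown character. */
static int
parse_stage_modes(const char *modes, enum stage_sched *out, int max_stages)
{
	int n = 0;

	for (; *modes != '\0' && n < max_stages; modes++, n++) {
		switch (*modes) {
		case 'o': out[n] = SCHED_ORDERED; break;
		case 'a': out[n] = SCHED_ATOMIC; break;
		case 'p': out[n] = SCHED_PARALLEL; break;
		default: return -1; /* unknown mode character */
		}
	}
	return n;
}
```

The returned count could then set num_stages directly, replacing the separate -s/-o/-p flags.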
>
>> + " -q, --quiet Minimize printed output\n"
>> + " -D, --dump Print detailed statistics before exit"
>> + "\n";
>> + fprintf(stderr, "%s", usage_str);
>> + exit(1);
>> +}
>> +
>
> [...]
>
>> + rx_single = (popcnt == 1);
>> + break;
>> + case 't':
>> + tx_lcore_mask = parse_coremask(optarg);
>> + popcnt = __builtin_popcountll(tx_lcore_mask);
>> + tx_single = (popcnt == 1);
>> + break;
>> + case 'e':
>> + sched_lcore_mask = parse_coremask(optarg);
>> + popcnt = __builtin_popcountll(sched_lcore_mask);
>> + sched_single = (popcnt == 1);
>> + break;
>> + default:
>> + usage();
>> + }
>> + }
>> +
>> + if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
>> + sched_lcore_mask == 0 || tx_lcore_mask == 0) {
>
> Need to honor RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED, i.e. a zero
> sched_lcore_mask can be a valid case.
I'll separate this out into a check for RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
It needs to be done later, in eventdev_setup(), after the eventdev instance
is created.
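A minimal sketch of that deferred check; the capability flag value below is a local stand-in for the real RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED bit, which would come from rte_event_dev_info_get() once the device exists.

```c
#include <stdint.h>

/* Stand-in for RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (illustrative value). */
#define CAP_DISTRIBUTED_SCHED (1ULL << 0)

/* An empty scheduler core mask is only acceptable when the eventdev
 * schedules internally (distributed/HW scheduling); otherwise the
 * pipeline would stall. Returns 1 when the mask is acceptable. */
static int
sched_mask_ok(uint64_t sched_lcore_mask, uint64_t dev_caps)
{
	if (sched_lcore_mask != 0)
		return 1;
	return (dev_caps & CAP_DISTRIBUTED_SCHED) ? 1 : 0;
}
```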
>
>> + printf("Core part of pipeline was not assigned any cores. "
>> + "This will stall the pipeline, please check core masks "
>> + "(use -h for details on setting core masks):\n"
>> + "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
>> + "\n\tworkers: %"PRIu64"\n",
>> + rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
>> + worker_lcore_mask);
>> + rte_exit(-1, "Fix core masks\n");
>> + }
>> + if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
>> + usage();
>> +
>> + for (i = 0; i < MAX_NUM_CORE; i++) {
>> + rx_core[i] = !!(rx_lcore_mask & (1UL << i));
>> + tx_core[i] = !!(tx_lcore_mask & (1UL << i));
>> + sched_core[i] = !!(sched_lcore_mask & (1UL << i));
>> + worker_core[i] = !!(worker_lcore_mask & (1UL << i));
>> +
>> + if (worker_core[i])
>> + num_workers++;
>> + if (core_in_use(i))
>> + active_cores++;
>> + }
>> +}
>> +
>> +
>> +struct port_link {
>> + uint8_t queue_id;
>> + uint8_t priority;
>> +};
>> +
>> +static int
>> +setup_eventdev(struct prod_data *prod_data,
>> + struct cons_data *cons_data,
>> + struct worker_data *worker_data)
>> +{
>> + const uint8_t dev_id = 0;
>> + /* +1 stages is for a SINGLE_LINK TX stage */
>> + const uint8_t nb_queues = num_stages + 1;
>> + /* + 2 is one port for producer and one for consumer */
>> + const uint8_t nb_ports = num_workers + 2;
>
> selection of number of ports is a function of rte_event_has_producer().
> I think, it will be addressed with RFC.
>
>> + const struct rte_event_dev_config config = {
>> + .nb_event_queues = nb_queues,
>> + .nb_event_ports = nb_ports,
>> + .nb_events_limit = 4096,
>> + .nb_event_queue_flows = 1024,
>> + .nb_event_port_dequeue_depth = 128,
>> + .nb_event_port_enqueue_depth = 128,
>
> OCTEONTX PMD driver has .nb_event_port_dequeue_depth = 1 and
> .nb_event_port_enqueue_depth = 1, and struct
> rte_event_dev_info.min_dequeue_timeout_ns = 853.
> I think we need to check rte_event_dev_info_get() first to get sane
> values, and take RTE_MIN or RTE_MAX based on the use case.
>
> or
>
> I can ignore this value in the OCTEONTX PMD. But I am not sure about the
> NXP case. Any thoughts from NXP folks?
>
>
Added in code to do the relevant checks. It should now limit the
enqueue/dequeue depths to the configured depth for the port if it's less.
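A sketch of that clamping logic, assuming the device maxima come from rte_event_dev_info_get(); the struct and field names below are simplified stand-ins for the eventdev port config fields.

```c
#include <stdint.h>

/* Simplified stand-in for the port-config depth fields. */
struct port_depths {
	uint16_t dequeue_depth;
	uint16_t enqueue_depth;
};

/* Reduce the requested port depths to the device's advertised maxima,
 * so a config written for the SW PMD still works on e.g. OCTEONTX
 * (which advertises a maximum depth of 1). */
static void
clamp_port_depths(struct port_depths *p, uint16_t max_deq, uint16_t max_enq)
{
	if (p->dequeue_depth > max_deq)
		p->dequeue_depth = max_deq;
	if (p->enqueue_depth > max_enq)
		p->enqueue_depth = max_enq;
}
```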
>> + };
>> + const struct rte_event_port_conf wkr_p_conf = {
>> + .dequeue_depth = worker_cq_depth,
>
> Same as above
See previous comment.
>
>> + .enqueue_depth = 64,
>
> Same as above
See previous comment.
>
>> + .new_event_threshold = 4096,
>> + };
>> + struct rte_event_queue_conf wkr_q_conf = {
>> + .event_queue_cfg = queue_type,
>> + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
>> + .nb_atomic_flows = 1024,
>> + .nb_atomic_order_sequences = 1024,
>> + };
>> + const struct rte_event_port_conf tx_p_conf = {
>> + .dequeue_depth = 128,
>
> Same as above
See previous comment.
>> + .enqueue_depth = 128,
>
> Same as above
See previous comment.
>> + .new_event_threshold = 4096,
>> + };
>> + const struct rte_event_queue_conf tx_q_conf = {
>> + .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
>> + .event_queue_cfg =
>> + RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
>> + RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
>> + .nb_atomic_flows = 1024,
>> + .nb_atomic_order_sequences = 1024,
>> + };
>> +
>> + struct port_link worker_queues[MAX_NUM_STAGES];
>> + struct port_link tx_queue;
>> + unsigned i;
>> +
>> + int ret, ndev = rte_event_dev_count();
>> + if (ndev < 1) {
>> + printf("%d: No Eventdev Devices Found\n", __LINE__);
>> + return -1;
>> + }
>> +
>> + struct rte_event_dev_info dev_info;
>> + ret = rte_event_dev_info_get(dev_id, &dev_info);
>> + printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
>> +
>> + ret = rte_event_dev_configure(dev_id, &config);
>> + if (ret < 0)
>> + printf("%d: Error configuring device\n", __LINE__);
>
> Don't process further with failed configure.
>
Done.
>> +
>> + /* Q creation - one load balanced per pipeline stage*/
>> +
>> + /* set up one port per worker, linking to all stage queues */
>> + for (i = 0; i < num_workers; i++) {
>> + struct worker_data *w = &worker_data[i];
>> + w->dev_id = dev_id;
>> + if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
>> + printf("Error setting up port %d\n", i);
>> + return -1;
>> + }
>> +
>> + uint32_t s;
>> + for (s = 0; s < num_stages; s++) {
>> + if (rte_event_port_link(dev_id, i,
>> + &worker_queues[s].queue_id,
>> + &worker_queues[s].priority,
>> + 1) != 1) {
>> + printf("%d: error creating link for port %d\n",
>> + __LINE__, i);
>> + return -1;
>> + }
>> + }
>> + w->port_id = i;
>> + }
>> + /* port for consumer, linked to TX queue */
>> + if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
>
> If ethdev supports MT txq support, then this port can be linked to a
> worker too. Something to consider for the future.
>
Sure. No change for now.
>> + printf("Error setting up port %d\n", i);
>> + return -1;
>> + }
>> + if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
>> + &tx_queue.priority, 1) != 1) {
>> + printf("%d: error creating link for port %d\n",
>> + __LINE__, i);
>> + return -1;
>> + }
>> + /* port for producer, no links */
>> + const struct rte_event_port_conf rx_p_conf = {
>> + .dequeue_depth = 8,
>> + .enqueue_depth = 8,
>
> same as above issue. You could get the default config first and then configure.
>
Done. Checking config.nb_event_port_dequeue_depth and reducing if less.
>> + .new_event_threshold = 1200,
>> + };
>> + if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
>> + printf("Error setting up port %d\n", i);
>> + return -1;
>> + }
>> +
>> + *prod_data = (struct prod_data){.dev_id = dev_id,
>> + .port_id = i + 1,
>> + .qid = qid[0] };
>> + *cons_data = (struct cons_data){.dev_id = dev_id,
>> + .port_id = i };
>> +
>> + enqueue_cnt = rte_calloc(0,
>> + RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
>> + sizeof(enqueue_cnt[0]), 0);
>> + dequeue_cnt = rte_calloc(0,
>> + RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
>> + sizeof(dequeue_cnt[0]), 0);
>
> Why an array? It looks like enqueue_cnt[1] and dequeue_cnt[1] are not
> used anywhere.
>
Looks like there was an intention to extend this more, but all the app
does is increment the counts without using them. I've removed these two
vars to clean up.
>> +
>> + if (rte_event_dev_start(dev_id) < 0) {
>> + printf("Error starting eventdev\n");
>> + return -1;
>> + }
>> +
>> + return dev_id;
>> +}
>> +
>> +static void
>> +signal_handler(int signum)
>> +{
>> + if (done || prod_stop)
>
> I think no one is updating prod_stop.
>
Removed.
>> + rte_exit(1, "Exiting on signal %d\n", signum);
>> + if (signum == SIGINT || signum == SIGTERM) {
>> + printf("\n\nSignal %d received, preparing to exit...\n",
>> + signum);
>> + done = 1;
>> + }
>> + if (signum == SIGTSTP)
>> + rte_event_dev_dump(0, stdout);
>> +}
>> +
>> +int
>> +main(int argc, char **argv)
>
> [...]
>
>> + RTE_LCORE_FOREACH_SLAVE(lcore_id) {
>> + if (lcore_id >= MAX_NUM_CORE)
>> + break;
>> +
>> + if (!rx_core[lcore_id] && !worker_core[lcore_id] &&
>> + !tx_core[lcore_id] && !sched_core[lcore_id])
>> + continue;
>> +
>> + if (rx_core[lcore_id])
>> + printf(
>> + "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
>> + __func__, lcore_id, prod_data.port_id);
>
> These prints won't show if rx, tx, or the scheduler are running on the
> master core (as we are browsing through RTE_LCORE_FOREACH_SLAVE).
OK, changed to RTE_LCORE_FOREACH, which also includes the master core.
>
>> +
>> + if (!quiet) {
>> + printf("\nPort Workload distribution:\n");
>> + uint32_t i;
>> + uint64_t tot_pkts = 0;
>> + uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
>> + for (i = 0; i < num_workers; i++) {
>> + char statname[64];
>> + snprintf(statname, sizeof(statname), "port_%u_rx",
>> + worker_data[i].port_id);
>
> Please check "port_%u_rx" xstat availability with PMD first.
Added a check after rte_event_dev_xstats_by_name_get() to see if it's
not -ENOTSUP.
>> + pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
>> + dev_id, statname, NULL);
>> + tot_pkts += pkts_per_wkr[i];
>> + }
>> + for (i = 0; i < num_workers; i++) {
>> + float pc = pkts_per_wkr[i] * 100 /
>> + ((float)tot_pkts);
>> + printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
>> + i, pc, pkts_per_wkr[i]);
>> + }
>> +
>> + }
>> +
>> + return 0;
>> +}
>
> As a final note, considering the different options in the fastpath, I was
> thinking of introducing app/test-eventdev like app/testpmd, with a set of
> function pointers# for different modes like "macswap" and "txonly" in
> testpmd, to exercise different options and a framework for adding new use
> cases. I will work on that to check the feasibility.
>
> ##
> struct fwd_engine {
> const char *fwd_mode_name; /**< Forwarding mode name. */
> port_fwd_begin_t port_fwd_begin; /**< NULL if nothing special to do. */
> port_fwd_end_t port_fwd_end; /**< NULL if nothing special to do. */
> packet_fwd_t packet_fwd; /**< Mandatory. */
> };
>
>> --
>> 2.7.4
>>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-06-26 14:46 ` Hunt, David
@ 2017-06-27 9:35 ` Jerin Jacob
2017-06-27 13:12 ` Hunt, David
0 siblings, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-06-27 9:35 UTC (permalink / raw)
To: Hunt, David; +Cc: Harry van Haaren, dev, Gage Eads, Bruce Richardson
-----Original Message-----
> Date: Mon, 26 Jun 2017 15:46:47 +0100
> From: "Hunt, David" <david.hunt@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, Harry van Haaren
> <harry.van.haaren@intel.com>
> CC: dev@dpdk.org, Gage Eads <gage.eads@intel.com>, Bruce Richardson
> <bruce.richardson@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
> sample app
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> Hi Jerin,
Hi David,
Looks like you have sent the old version. The below mentioned comments
are not addressed in v2.
>
> I'm assisting Harry on the sample app, and have just pushed up a V2 patch
> based on your feedback. I've addressed most of your suggestions, comments
> below. There may still be a couple of outstanding questions that need
> further discussion.
A few general comments:
1) Nikhil/Gage's proposal on an ethdev rx to eventdev adapter will change the
major portion (eventdev setup and producer()) of this application.
2) Producing one lcore's worth of packets really can't show this as an
example eventdev application, as it will perform pretty badly on a low-end
machine. At the least, the application infrastructure should not be the limit.
Considering the above points, should we wait for the rx adapter to complete
first? I would like to show this as a real-world application using eventdev.
Thoughts?
On the same note:
Can we finalize the rx adapter proposal? I can work on a v1 of the patch and
the common code if Nikhil or Gage don't have the bandwidth. Let me know.
last followup:
http://dpdk.org/ml/archives/dev/2017-June/068776.html
>
> Regards,
> Dave
>
> On 10/5/2017 3:12 PM, Jerin Jacob wrote:
> > -----Original Message-----
> >> Date: Fri, 21 Apr 2017 10:51:37 +0100
> >> From: Harry van Haaren <harry.van.haaren@intel.com>
> >> To: dev@dpdk.org
> >> CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
> >> <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
> >> Richardson <bruce.richardson@intel.com>
> >> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> >> X-Mailer: git-send-email 2.7.4
> >>
> >> This commit adds a sample app for the eventdev library.
> >> The app has been tested with DPDK 17.05-rc2, hence this
> >> release (or later) is recommended.
> >>
> >> The sample app showcases a pipeline processing use-case,
> >> with event scheduling and processing defined per stage.
> >> The application receives traffic as normal, with each
> >> packet traversing the pipeline. Once the packet has
> >> been processed by each of the pipeline stages, it is
> >> transmitted again.
> >>
> >> The app provides a framework to utilize cores for a single
> >> role or multiple roles. Examples of roles are the RX core,
> >> TX core, Scheduling core (in the case of the event/sw PMD),
> >> and worker cores.
> >>
> >> Various flags are available to configure numbers of stages,
> >> cycles of work at each stage, type of scheduling, number of
> >> worker cores, queue depths etc. For a full explanation,
> >> please refer to the documentation.
> >>
> >> Signed-off-by: Gage Eads <gage.eads@intel.com>
> >> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> >> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> >
> > Thanks for the example application to share the SW view.
> > I could make it run on HW after some tweaking(not optimized though)
> >
> > [...]
> >> +#define MAX_NUM_STAGES 8
> >> +#define BATCH_SIZE 16
> >> +#define MAX_NUM_CORE 64
> >
> > How about RTE_MAX_LCORE?
>
> Core usage in the sample app is held in a uint64_t. Adding arrays would be
> possible, but I feel that the extra effort would not give that much benefit.
> I've left as is for the moment, unless you see any strong requirement to go
> beyond 64 cores?
I think, it is OK. Again with service core infrastructure this will change.
>
> >
> >> +
> >> +static unsigned int active_cores;
> >> +static unsigned int num_workers;
> >> +static unsigned long num_packets = (1L << 25); /* do ~32M packets */
> >> +static unsigned int num_fids = 512;
> >> +static unsigned int num_priorities = 1;
> >
> > looks like its not used.
>
> Yes, Removed.
>
> >
> >> +static unsigned int num_stages = 1;
> >> +static unsigned int worker_cq_depth = 16;
> >> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> >> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> >> +static int16_t qid[MAX_NUM_STAGES] = {-1};
> >
> > Moving all fastpath related variables under a structure with cache
> > aligned will help.
>
> I tried a few different combinations of this, and saw no gains, some losses.
> So will leave as is for the moment, if that's OK.
I think the ones used in the fastpath are better allocated from huge pages
using rte_malloc().
>
> >
> >> +static int worker_cycles;
> >> +static int enable_queue_priorities;
> >> +
> >> +struct prod_data {
> >> + uint8_t dev_id;
> >> + uint8_t port_id;
> >> + int32_t qid;
> >> + unsigned num_nic_ports;
> >> +};
>
> Yes, saw a percent or two gain when this plus following two data structs
> cache aligned.
It looks like this is not fixed in v2; it seems you have sent the old
version.
>
> >
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int
> >> +producer(void)
> >> +{
> >> + static uint8_t eth_port;
> >> + struct rte_mbuf *mbufs[BATCH_SIZE];
> >> + struct rte_event ev[BATCH_SIZE];
> >> + uint32_t i, num_ports = prod_data.num_nic_ports;
> >> + int32_t qid = prod_data.qid;
> >> + uint8_t dev_id = prod_data.dev_id;
> >> + uint8_t port_id = prod_data.port_id;
> >> + uint32_t prio_idx = 0;
> >> +
> >> + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs,
> BATCH_SIZE);
> >> + if (++eth_port == num_ports)
> >> + eth_port = 0;
> >> + if (nb_rx == 0) {
> >> + rte_pause();
> >> + return 0;
> >> + }
> >> +
> >> + for (i = 0; i < nb_rx; i++) {
> >> + ev[i].flow_id = mbufs[i]->hash.rss;
> >
> > prefetching the buff[i+1] may help here?
>
> I tried, didn't make much difference.
OK.
>
> >
> >> + ev[i].op = RTE_EVENT_OP_NEW;
> >> + ev[i].sched_type = queue_type;
> >
> > The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED.
> > So, we cannot assign .sched_type as queue_type.
> >
> > I think, one option to avoid the translation in the application is to:
> > - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES, RTE_EVENT_QUEUE_CFG_*_ONLY
> > - Introduce a new RTE_EVENT_DEV_CAP_ to denote the
> > RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
> > - Add sched_type in struct rte_event_queue_conf. If the capability flag
> > is not set, then the implementation takes the sched_type value for the
> > queue.
> >
> > Any thoughts?
>
>
> Not sure here, would it be ok for the moment, and we can work on a patch in
> the future?
OK
> >> +
> >> + if (tx_core[lcore_id] && (tx_single ||
> >> + rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> >> + consumer();
> >
> > Does consumer() need to come in this pattern? I am thinking that
> > if the event is from the last stage, then call consumer() in worker().
> >
> > I think the above scheme works better when the _same_ worker code needs
> > to run in the cases where:
> > 1) ethdev HW is capable of enqueuing packets to the same txq from
> > multiple threads
> > 2) ethdev is not capable of doing so.
> >
> > So, the above cases can be addressed at configuration time, where we
> > link the queues to ports:
> > case 1) Link all workers to the last queue
> > case 2) Link only one worker to the last queue
> >
> > and keep the common worker code.
> >
> > The HW implementation has functional and performance issues if "two"
> > ports are assigned to one lcore for dequeue. The above scheme fixes
> > that problem too.
>
>
> Can we have a bit more discussion on this item? Is this needed for this
> sample app, or can we perhaps work a patch for this later? Harry?
As explained above, is there any issue in keeping consumer() for the last
stage?
>
>
> >
> >> + rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> >> + }
> >> +}
> >> +
> >> +static int
> >> +worker(void *arg)
> >> +{
> >> + struct rte_event events[BATCH_SIZE];
> >> +
> >> + struct worker_data *data = (struct worker_data *)arg;
> >> + uint8_t dev_id = data->dev_id;
> >> + uint8_t port_id = data->port_id;
> >> + size_t sent = 0, received = 0;
> >> + unsigned lcore_id = rte_lcore_id();
> >> +
> >> + while (!done) {
> >> + uint16_t i;
> >> +
> >> + schedule_devices(dev_id, lcore_id);
> >> +
> >> + if (!worker_core[lcore_id]) {
> >> + rte_pause();
> >> + continue;
> >> + }
> >> +
> >> + uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
> >> + events, RTE_DIM(events), 0);
> >> +
> >> + if (nb_rx == 0) {
> >> + rte_pause();
> >> + continue;
> >> + }
> >> + received += nb_rx;
> >> +
> >> + for (i = 0; i < nb_rx; i++) {
> >> + struct ether_hdr *eth;
> >> + struct ether_addr addr;
> >> + struct rte_mbuf *m = events[i].mbuf;
> >> +
> >> + /* The first worker stage does classification */
> >> + if (events[i].queue_id == qid[0])
> >> + events[i].flow_id = m->hash.rss % num_fids;
> >
> > Not sure why we need to do this (shrinking the flows) in worker() in a
> > queue based pipeline.
> > If a PMD has any specific requirement on num_fids, I think we can move
> > this to the configuration stage, or the PMD can choose an optimum fid
> > internally to avoid the modulus operation tax in the fastpath in all PMDs.
> >
> > Does struct rte_event_queue_conf.nb_atomic_flows help here?
>
> In my tests the modulus makes very little difference in the throughput. And
> I think it's good to have a way of varying the number of flows for testing
> different scenarios, even if it's not the most performant.
Not sure.
>
> >
> >> +
> >> + events[i].queue_id = next_qid[events[i].queue_id];
> >> + events[i].op = RTE_EVENT_OP_FORWARD;
> >
> > missing events[i].sched_type. The HW PMD does not work without this.
> > I think, we can use similar scheme like next_qid for next_sched_type.
>
> Done. added events[i].sched_type = queue_type.
>
> >
> >> +
> >> + /* change mac addresses on packet (to use mbuf data) */
> >> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> >> + ether_addr_copy(&eth->d_addr, &addr);
> >> + ether_addr_copy(&eth->s_addr, &eth->d_addr);
> >> + ether_addr_copy(&addr, &eth->s_addr);
> >
> > IMO, we can make the packet processing code a "static inline function"
> > so different worker types can reuse it.
>
> Done. moved out to a work() function.
I think the mac swap should be done in the last stage, not on each forward;
i.e. with the existing code, a 2-stage forward leaves the addresses in the
original order.
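A small self-contained demonstration of that parity point, under the assumption that the only per-stage work is the mac swap (simplified 6-byte-address header, not the real struct ether_hdr): the swap is an involution, so an even number of stages leaves the addresses as they arrived.

```c
#include <stdint.h>
#include <string.h>

struct demo_ether_hdr {
	uint8_t d_addr[6];
	uint8_t s_addr[6];
};

static void
mac_swap(struct demo_ether_hdr *eth)
{
	uint8_t tmp[6];

	memcpy(tmp, eth->d_addr, 6);
	memcpy(eth->d_addr, eth->s_addr, 6);
	memcpy(eth->s_addr, tmp, 6);
}

/* Apply the per-stage swap num_stages times and report whether the
 * addresses are back in their original order. */
static int
addrs_in_original_order(unsigned int num_stages)
{
	struct demo_ether_hdr h = { {1, 1, 1, 1, 1, 1}, {2, 2, 2, 2, 2, 2} };
	unsigned int i;

	for (i = 0; i < num_stages; i++)
		mac_swap(&h);
	return h.d_addr[0] == 1 && h.s_addr[0] == 2;
}
```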
>
> >
> >> +
> >> + /* do a number of cycles of work per packet */
> >> + volatile uint64_t start_tsc = rte_rdtsc();
> >> + while (rte_rdtsc() < start_tsc + worker_cycles)
> >> + rte_pause();
> >
> > Ditto.
>
> Done. moved out to a work() function.
>
> >
> > I think all worker specific variables like "worker_cycles" can be moved
> > into one structure and used from there.
> >
> >> + }
> >> + uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
> >> + events, nb_rx);
> >> + while (nb_tx < nb_rx && !done)
> >> + nb_tx += rte_event_enqueue_burst(dev_id, port_id,
> >> + events + nb_tx,
> >> + nb_rx - nb_tx);
> >> + sent += nb_tx;
> >> + }
> >> +
> >> + if (!quiet)
> >> + printf(" worker %u thread done. RX=%zu TX=%zu\n",
> >> + rte_lcore_id(), received, sent);
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +/*
> >> + * Parse the coremask given as argument (hexadecimal string) and fill
> >> + * the global configuration (core role and core count) with the parsed
> >> + * value.
> >> + */
> >> +static int xdigit2val(unsigned char c)
> >
> > multiple instance of "xdigit2val" in DPDK repo. May be we can push this
> > as common code.
>
> Sure, that's something we can look at in a separate patch, now that it's
> being used more and more.
make sense.
>
> >
> >> +{
> >> + int val;
> >> +
> >> + if (isdigit(c))
> >> + val = c - '0';
> >> + else if (isupper(c))
> >> + val = c - 'A' + 10;
> >> + else
> >> + val = c - 'a' + 10;
> >> + return val;
> >> +}
> >> +
> >> +
> >> +static void
> >> +usage(void)
> >> +{
> >> + const char *usage_str =
> >> + " Usage: eventdev_demo [options]\n"
> >> + " Options:\n"
> >> + " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
> >> + " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
> >
> > I think this parameter now affects the application fast path code. I
> > think it should be an eventdev configuration parameter.
>
> See above comment on num_fids
>
> >
> >> + " -s, --num_stages=N Use N atomic stages (default 1)\n"
> >> + " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
> >> + " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
> >> + " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
> >> + " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
> >> + " -c --cq-depth=N Worker CQ depth (default 16)\n"
> >> + " -W --work-cycles=N Worker cycles (default 0)\n"
> >> + " -P --queue-priority Enable scheduler queue prioritization\n"
> >> + " -o, --ordered Use ordered scheduling\n"
> >> + " -p, --parallel Use parallel scheduling\n"
> >
> > IMO, all stages being "parallel" or "ordered" or "atomic" is one mode of
> > operation. It is valid to have any combination. We need to express that
> > in the command line.
> > example:
> > 3 stage with
> > O->A->P
>
> How about we add an option that specifies the mode of operation for each
> stage in a string? Maybe have a '-m' option (modes), e.g. '-m appo' for 4
> stages with atomic, parallel, parallel, ordered. Or maybe reuse your
> test-eventdev parameter style?
Any scheme is fine.
>
> >
> >> + " -q, --quiet Minimize printed output\n"
> >> + " -D, --dump Print detailed statistics before exit"
> >> + "\n";
> >> + fprintf(stderr, "%s", usage_str);
> >> + exit(1);
> >> +}
> >> +
> >
> > [...]
> >
> >> + rx_single = (popcnt == 1);
> >> + break;
> >> + case 't':
> >> + tx_lcore_mask = parse_coremask(optarg);
> >> + popcnt = __builtin_popcountll(tx_lcore_mask);
> >> + tx_single = (popcnt == 1);
> >> + break;
> >> + case 'e':
> >> + sched_lcore_mask = parse_coremask(optarg);
> >> + popcnt = __builtin_popcountll(sched_lcore_mask);
> >> + sched_single = (popcnt == 1);
> >> + break;
> >> + default:
> >> + usage();
> >> + }
> >> + }
> >> +
> >> + if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
> >> + sched_lcore_mask == 0 || tx_lcore_mask == 0) {
> >
> >> +
> >> + /* Q creation - one load balanced per pipeline stage*/
> >> +
> >> + /* set up one port per worker, linking to all stage queues */
> >> + for (i = 0; i < num_workers; i++) {
> >> + struct worker_data *w = &worker_data[i];
> >> + w->dev_id = dev_id;
> >> + if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
> >> + printf("Error setting up port %d\n", i);
> >> + return -1;
> >> + }
> >> +
> >> + uint32_t s;
> >> + for (s = 0; s < num_stages; s++) {
> >> + if (rte_event_port_link(dev_id, i,
> >> + &worker_queues[s].queue_id,
> >> + &worker_queues[s].priority,
> >> + 1) != 1) {
> >> + printf("%d: error creating link for port %d\n",
> >> + __LINE__, i);
> >> + return -1;
> >> + }
> >> + }
> >> + w->port_id = i;
> >> + }
> >> + /* port for consumer, linked to TX queue */
> >> + if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
> >
> > If ethdev supports MT txq support, then this port can be linked to a
> > worker too. Something to consider for the future.
> >
>
> Sure. No change for now.
OK
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 0/3] next-eventdev: evendev pipeline sample app
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-06-27 12:54 ` David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 1/3] examples/eventdev_pipeline: added " David Hunt
` (2 more replies)
0 siblings, 3 replies; 64+ messages in thread
From: David Hunt @ 2017-06-27 12:54 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
This patchset introduces a sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
Changes in patch v2:
* None, incorrect patch upload
Changes in patch v3:
* Re-work based on comments on mailing list. No major functional changes.
* Checkpatch cleanup of a couple of typos
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters like number of stages, number of
worker cores, which cores are assigned to specific tasks, and work-
cycles per-stage in the pipeline can be configured.
Documentation for eventdev is added for the programmers guide and
sample app user guide, providing sample commands to run the app with,
and expected output.
The sample app is presented here as an RFC to the next-eventdev tree
to work towards having eventdev PMD generic sample applications.
[1/3] examples/eventdev_pipeline: added sample app
[2/3] doc: add eventdev pipeline to sample app ug
[3/3] doc: add eventdev library to programmers guide
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 1/3] examples/eventdev_pipeline: added sample app
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 0/3] next-eventdev: evendev pipeline " David Hunt
@ 2017-06-27 12:54 ` David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 2/3] doc: add eventdev pipeline to sample app ug David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-06-27 12:54 UTC (permalink / raw)
To: dev
Cc: jerin.jacob, harry.van.haaren, Gage Eads, Bruce Richardson, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure numbers of stages,
cycles of work at each stage, type of scheduling, number of
worker cores, queue depths etc. For a full explanation,
please refer to the documentation.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
examples/Makefile | 2 +
examples/eventdev_pipeline/Makefile | 49 ++
examples/eventdev_pipeline/main.c | 999 ++++++++++++++++++++++++++++++++++++
3 files changed, 1050 insertions(+)
create mode 100644 examples/eventdev_pipeline/Makefile
create mode 100644 examples/eventdev_pipeline/main.c
diff --git a/examples/Makefile b/examples/Makefile
index 6298626..a6dcc2b 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
endif
endif
+DIRS-y += eventdev_pipeline
+
include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
new file mode 100644
index 0000000..6e5350a
--- /dev/null
+++ b/examples/eventdev_pipeline/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline/main.c b/examples/eventdev_pipeline/main.c
new file mode 100644
index 0000000..f1386a4
--- /dev/null
+++ b/examples/eventdev_pipeline/main.c
@@ -0,0 +1,999 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <stdio.h>
+#include <string.h>
+#include <ctype.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+static unsigned int active_cores;
+static unsigned int num_workers;
+static long num_packets = (1L << 25); /* do ~32M packets */
+static unsigned int num_fids = 512;
+static unsigned int num_stages = 1;
+static unsigned int worker_cq_depth = 16;
+static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
+static int16_t qid[MAX_NUM_STAGES] = {-1};
+static int worker_cycles;
+static int enable_queue_priorities;
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned int num_nic_ports;
+} __rte_cache_aligned;
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static unsigned int *enqueue_cnt;
+static unsigned int *dequeue_cnt;
+
+static volatile int done;
+static int quiet;
+static int dump_dev;
+static int dump_dev_signal;
+
+static uint32_t rx_lock;
+static uint32_t tx_lock;
+static uint32_t sched_lock;
+static bool rx_single;
+static bool tx_single;
+static bool sched_single;
+
+static unsigned int rx_core[MAX_NUM_CORE];
+static unsigned int tx_core[MAX_NUM_CORE];
+static unsigned int sched_core[MAX_NUM_CORE];
+static unsigned int worker_core[MAX_NUM_CORE];
+
+static bool
+core_in_use(unsigned int lcore_id) {
+ return (rx_core[lcore_id] || sched_core[lcore_id] ||
+ tx_core[lcore_id] || worker_core[lcore_id]);
+}
+
+static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+
+static void
+eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned int _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t received;
+ static uint64_t last_pkts;
+ static uint64_t last_time;
+ static uint64_t start_time;
+ unsigned int i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ last_time = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ /* Print out mpps every 1<<22 packets */
+ if (!quiet && received >= last_pkts + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t total_ms = (now - start_time) / freq_khz;
+ const uint64_t delta_ms = (now - last_time) / freq_khz;
+ uint64_t delta_pkts = received - last_pkts;
+
+ printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
+ "avg %.3f mpps [current %.3f mpps]\n",
+ received,
+ total_ms,
+ received / (total_ms * 1000.0),
+ delta_pkts / (delta_ms * 1000.0));
+ last_pkts = received;
+ last_time = now;
+ }
+
+ dequeue_cnt[0] += n;
+
+ /* -n0 means no limit, so only count down when a limit was set */
+ if (num_packets > 0) {
+ num_packets -= n;
+ if (num_packets <= 0)
+ done = 1;
+ }
+
+ return 0;
+}
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE+2];
+ struct rte_event ev[BATCH_SIZE+2];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+ enqueue_cnt[0] += nb_tx;
+
+ return 0;
+}
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+{
+ if (rx_core[lcore_id] && (rx_single ||
+ rte_atomic32_cmpset(&rx_lock, 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
+ }
+
+ if (sched_core[lcore_id] && (sched_single ||
+ rte_atomic32_cmpset(&sched_lock, 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
+ }
+
+ if (tx_core[lcore_id] && (tx_single ||
+ rte_atomic32_cmpset(&tx_lock, 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
+ }
+}
+
+
+
+static inline void
+work(struct rte_mbuf *m)
+{
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(&eth->d_addr, &addr);
+ ether_addr_copy(&eth->s_addr, &eth->d_addr);
+ ether_addr_copy(&addr, &eth->s_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + worker_cycles)
+ rte_pause();
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned int lcore_id = rte_lcore_id();
+
+ while (!done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ const uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == qid[0])
+ events[i].flow_id = events[i].mbuf->hash.rss
+ % num_fids;
+
+ events[i].queue_id = next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+ events[i].sched_type = queue_type;
+
+ work(events[i].mbuf);
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned int count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove all blank characters ahead and after .
+ * Remove 0x/0X if exists.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_demo [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
+ " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W --work-cycles=N Worker cycles (default 0)\n"
+ " -P --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse cli options*/
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ enable_queue_priorities = 1;
+ break;
+ case 'o':
+ queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ quiet = 1;
+ break;
+ case 'D':
+ dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("Core part of pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ rx_core[i] = !!(rx_lcore_mask & (1UL << i));
+ tx_core[i] = !!(tx_lcore_mask & (1UL << i));
+ sched_core[i] = !!(sched_lcore_mask & (1UL << i));
+ worker_core[i] = !!(worker_lcore_mask & (1UL << i));
+
+ if (worker_core[i])
+ num_workers++;
+ if (core_in_use(i))
+ active_cores++;
+ }
+}
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned int)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned int num_ports)
+{
+ uint8_t portid;
+ unsigned int i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ tx_buf[i] = rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(tx_buf[i],
+ eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* the +1 queue is for the SINGLE_LINK TX stage */
+ const uint8_t nb_queues = num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = num_workers + 2;
+ struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned int i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ if (dev_info.max_event_port_dequeue_depth <
+ config.nb_event_port_dequeue_depth)
+ config.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+ if (dev_info.max_event_port_enqueue_depth <
+ config.nb_event_port_enqueue_depth)
+ config.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0) {
+ printf("%d: Error configuring device\n", __LINE__);
+ return -1;
+ }
+
+ /* Queue creation - one load-balanced queue per pipeline stage */
+ printf(" Stages:\n");
+ for (i = 0; i < num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ qid[i] = i;
+ next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+
+ if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+
+ if (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ rx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (rx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ rx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ enqueue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
+ sizeof(enqueue_cnt[0]), 0);
+ dequeue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
+ sizeof(dequeue_cnt[0]), 0);
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
+
+static void
+signal_handler(int signum)
+{
+ if (done)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned int num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ /* Parse cli options*/
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned int cores_needed = active_cores;
+
+ if (!quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", num_workers);
+ printf("\tpackets: %lu\n", num_packets);
+ printf("\tQueue-prio: %u\n", enable_queue_priorities);
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned int ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No dev_id devs found. Pasl in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0");
+
+ worker_data = rte_calloc(0, num_workers, sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!rx_core[lcore_id] && !worker_core[lcore_id] &&
+ !tx_core[lcore_id] && !sched_core[lcore_id])
+ continue;
+
+ if (rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err) {
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ continue;
+ }
+ if (worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!quiet) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < num_workers; i++) {
+ char statname[64];
+ snprintf(statname, sizeof(statname), "port_%u_rx",
+ worker_data[i].port_id);
+ pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
+ dev_id, statname, NULL);
+ tot_pkts += pkts_per_wkr[i];
+ }
+ for (i = 0; i < num_workers; i++) {
+ float pc = pkts_per_wkr[i] * 100 /
+ ((float)tot_pkts);
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 2/3] doc: add eventdev pipeline to sample app ug
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-06-27 12:54 ` David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-06-27 12:54 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
doc/guides/sample_app_ug/eventdev_pipeline.rst | 188 +++++++++++++++++++++++++
doc/guides/sample_app_ug/index.rst | 1 +
2 files changed, 189 insertions(+)
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline.rst
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
new file mode 100644
index 0000000..bb09224
--- /dev/null
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -0,0 +1,188 @@
+
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Eventdev Pipeline Sample Application
+====================================
+
+The eventdev pipeline sample application demonstrates the usage of the
+eventdev API. It shows how an application can configure a pipeline and
+assign a set of worker cores to perform the processing required.
+
+The application has a range of command line arguments allowing it to be
+configured for various numbers of worker cores, stages, queue depths and
+cycles of work per stage. This is useful for performance testing, as well
+as for quickly testing a particular pipeline configuration.
+
+
+Compiling the Application
+-------------------------
+
+To compile the application:
+
+#. Go to the sample application directory:
+
+ .. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}/examples/eventdev_pipeline
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application has a number of command line options, which allow the
+eventdev PMD to be specified, along with various attributes of the processing
+pipeline.
+
+An example eventdev pipeline running with the software eventdev PMD using
+these settings is shown below:
+
+ * ``-r1``: core mask 0x1 for RX
+ * ``-t1``: core mask 0x1 for TX
+ * ``-e4``: core mask 0x4 for the software scheduler
+ * ``-w FF00``: core mask for worker cores, 8 cores, from core 8 to core 15
+ * ``-s4``: 4 atomic stages
+ * ``-n0``: process infinite packets (run forever)
+ * ``-c32``: worker dequeue depth of 32
+ * ``-W1000``: do 1000 cycles of work per packet in each stage
+ * ``-D``: dump statistics on exit
+
+.. code-block:: console
+
+ ./build/eventdev_pipeline --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+
+The application has some sanity checking built-in, so if there is a function
+(e.g. the RX core) which doesn't have a CPU core mask assigned, the application
+will print an error message:
+
+.. code-block:: console
+
+ Core part of pipeline was not assigned any cores. This will stall the
+ pipeline, please check core masks (use -h for details on setting core masks):
+ rx: 0
+ tx: 1
+
+Configuration of the eventdev is covered in detail in the programmers guide,
+see the Event Device Library section.
+
+
+Observing the Application
+-------------------------
+
+At runtime the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit the
+worker statistics are printed, along with a full dump of the PMD statistics if
+required. The following sections show sample output for each of the output
+types.
+
+Configuration
+~~~~~~~~~~~~~
+
+This provides an overview of the pipeline, the
+scheduling type at each stage, and parameters such as how many
+flows are in use and which eventdev PMD is in use. See the following sample
+output for details:
+
+.. code-block:: console
+
+ Config:
+ ports: 2
+ workers: 8
+ packets: 0
+ priorities: 1
+ Queue-prio: 0
+ qid0 type: atomic
+ Cores available: 44
+ Cores used: 10
+ Eventdev 0: event_sw
+ Stages:
+ Stage 0, Type Atomic Priority = 128
+ Stage 1, Type Atomic Priority = 128
+ Stage 2, Type Atomic Priority = 128
+ Stage 3, Type Atomic Priority = 128
+
+Runtime
+~~~~~~~
+
+At runtime, the statistics of the consumer are printed, stating the number of
+packets received, runtime in milliseconds, average mpps, and current mpps.
+
+.. code-block:: console
+
+ # consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
+
+Shutdown
+~~~~~~~~
+
+At shutdown, the application prints the number of packets received and
+transmitted, and an overview of the distribution of work across worker cores.
+
+.. code-block:: console
+
+ Signal 2 received, preparing to exit...
+ worker 12 thread done. RX=4966581 TX=4966581
+ worker 13 thread done. RX=4963329 TX=4963329
+ worker 14 thread done. RX=4953614 TX=4953614
+ worker 0 thread done. RX=0 TX=0
+ worker 11 thread done. RX=4970549 TX=4970549
+ worker 10 thread done. RX=4986391 TX=4986391
+ worker 9 thread done. RX=4970528 TX=4970528
+ worker 15 thread done. RX=4974087 TX=4974087
+ worker 8 thread done. RX=4979908 TX=4979908
+ worker 2 thread done. RX=0 TX=0
+
+ Port Workload distribution:
+ worker 0 : 12.5 % (4979876 pkts)
+ worker 1 : 12.5 % (4970497 pkts)
+ worker 2 : 12.5 % (4986359 pkts)
+ worker 3 : 12.5 % (4970517 pkts)
+ worker 4 : 12.5 % (4966566 pkts)
+ worker 5 : 12.5 % (4963297 pkts)
+ worker 6 : 12.5 % (4953598 pkts)
+ worker 7 : 12.5 % (4974055 pkts)
+
+To get a full dump of the state of the eventdev PMD, pass the ``-D`` flag to
+this application. When the app is terminated using ``Ctrl+C``, the
+``rte_event_dev_dump()`` function is called, resulting in a dump of the
+statistics that the PMD provides. The statistics provided depend on the PMD
+used, see the Event Device Drivers section for a list of eventdev PMDs.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..11f5781 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline
dist_app
vm_power_management
tep_termination
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v3 3/3] doc: add eventdev library to programmers guide
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 1/3] examples/eventdev_pipeline: added " David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 2/3] doc: add eventdev pipeline to sample app ug David Hunt
@ 2017-06-27 12:54 ` David Hunt
2 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-06-27 12:54 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walkthrough of a simple two stage atomic pipeline
provides the reader with a step by step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
doc/guides/prog_guide/eventdev.rst | 365 ++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1360 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..4f6088e
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,365 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader. The eventdev framework is provided as a DPDK library, allowing
+applications to use it if they wish, but not requiring its use.
+
+The goal of this library is to enable applications to build processing
+pipelines where the load balancing and scheduling is handled by the eventdev.
+Step-by-step instructions on using the eventdev API are available in the `API
+Walkthrough`_ section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct, which contains a
+payload and metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16 byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, e.g. RTE_EVENT_TYPE_ETHDEV or
+ RTE_EVENT_TYPE_CPU.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+ the same event_type (see above).
+* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
+ eventdev about the status of the event - valid values are NEW, FORWARD or
+ RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+ on this event - valid values are RTE_SCHED_TYPE_ORDERED, ATOMIC and
+ PARALLEL.
+* ``queue_id`` - The identifier of the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see the RTE_EVENT_DEV_PRIORITY_*
+ values.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for payload, allowing flexibility in what
+the actual event being scheduled is. The payload is a union of the following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three items in a union occupy the same 64 bits at the end of the rte_event
+structure. The application can utilize the 64 bits directly by accessing the
+u64 variable, while the event_ptr and mbuf are provided as convenience
+variables. For example the mbuf pointer in the union can be used to schedule a
+DPDK packet.
+
+Queues
+~~~~~~
+
+A queue is a logical "stage" of a packet processing graph, where each stage
+has a specified scheduling type. The application configures each queue for a
+specific type of scheduling, and just enqueues all events to the eventdev.
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must ensure that on egress the packets are returned to
+ingress order. Parallel allows sending all flows to all CPU cores, without any
+re-ordering guarantees.
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can only be
+linked to a single port (see below for port and queue linking details).
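
As a sketch of how such a queue could be configured (this is a fragment, not a complete program; the ``RTE_EVENT_QUEUE_CFG_SINGLE_LINK`` flag name is assumed from ``rte_eventdev.h``, and error handling is omitted):

```c
/* Sketch only: configure queue 2 as a single-link queue, assuming
 * dev_id has already been configured as in the walkthrough below. */
struct rte_event_queue_conf single_link_conf = {
	.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
	.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
};
int err = rte_event_queue_setup(dev_id, 2 /* queue_id */, &single_link_conf);
```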
+
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walkthrough
+---------------
+
+This section will introduce the reader to the eventdev API, showing how to
+create and configure an eventdev and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walkthrough:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high-level overview of the setup steps is:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues
+and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link,
+while the 6 ports consist of 4 workers, 1 RX and 1 TX.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walkthrough assumes that dev_id is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walkthrough assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
+
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once queues are set up successfully, create the ports as required. Each port
+should be set up with its corresponding port_conf type, worker for worker cores,
+rx and tx for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ /* CORE_FUNCTION_conf is a placeholder: pass the rx_conf, worker_conf or
+ tx_conf struct depending on the role of the port being set up */
+ int err = rte_event_port_setup(dev_id, port_id, &CORE_FUNCTION_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
+
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes input
+from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t port_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for(int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing
+events. Note that all queues must be linked to a port before the instance is
+started; if any queue is left unlinked, enqueuing to that queue will cause
+back-pressure, and the application will eventually stall due to lack of space
+in the eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up, and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received via ``rte_eth_rx_burst()`` as
+normal.
+The following code shows how those packets can be enqueued into the eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_CPU;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for(i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which
+should indicate to the worker what workload needs to be performed on the event.
+Once done, the worker can update the ``event.queue_id`` to a new value, to send
+the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, events, nb_rx);
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when the packet is ready for egress or needs to be dropped, we need
+to inform the eventdev that the packet is no longer being handled by the
+application. This can be done by calling dequeue() or dequeue_burst(), which
+indicates that the previous burst of packets is no longer in use by the
+application.
+
+.. code-block:: c
+
+ struct rte_event events[BATCH_SIZE];
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 : now tx or use the packets */
+ n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 is now no longer valid to use in the application, as
+ the eventdev has dropped any locks or released re-ordered packets */
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, either using a run-to-completion or pipeline processing model. The
+queues and ports abstract the logical functionality of an eventdev, providing
+the application with a generic method to schedule events. With the flexible
+PMD infrastructure, applications benefit from improvements to existing
+eventdevs and from the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-06-27 9:35 ` Jerin Jacob
@ 2017-06-27 13:12 ` Hunt, David
2017-06-29 7:17 ` Jerin Jacob
0 siblings, 1 reply; 64+ messages in thread
From: Hunt, David @ 2017-06-27 13:12 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Harry van Haaren, dev, Gage Eads, Bruce Richardson
Hi Jerin:
On 27/6/2017 10:35 AM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Mon, 26 Jun 2017 15:46:47 +0100
>> From: "Hunt, David" <david.hunt@intel.com>
>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, Harry van Haaren
>> <harry.van.haaren@intel.com>
>> CC: dev@dpdk.org, Gage Eads <gage.eads@intel.com>, Bruce Richardson
>> <bruce.richardson@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
>> sample app
>> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>> Thunderbird/45.8.0
>>
>> Hi Jerin,
> Hi David,
>
> Looks like you have sent the old version. The below mentioned comments
> are not addressed in v2.
Oops. Glitch in the Matrix. I've just pushed a V3 with the changes.
>> I'm assisting Harry on the sample app, and have just pushed up a V2 patch
>> based on your feedback. I've addressed most of your suggestions, comments
>> below. There may still be a couple of outstanding questions that need further
>> discussion.
> A few general comments:
> 1) Nikhil/Gage's proposal on an ethdev rx to eventdev adapter will change the
> major portion (eventdev setup and producer()) of this application
> 2) Producing only one lcore's worth of packets really can't be shown as an
> example eventdev application, as it will perform poorly on a low-end machine.
> At least the application infrastructure should not impose that limit.
>
> Considering the above points, should we wait for the rx adapter to be
> completed first? I would like to show this as a real-world application using
> eventdev.
>
> Thoughts?
>
> On the same note:
> Can we finalize on rx adapter proposal? I can work on v1 of patch and
> common code if Nikhil or Gage don't have bandwidth. Let me know?
>
> last followup:
> http://dpdk.org/ml/archives/dev/2017-June/068776.html
I had a quick chat with Harry, and wonder if we'd be as well to merge
the app as it is now; then, as the new frameworks become available, the
app can be updated to make use of them. I feel it would be better to
have something out there for people to play with than waiting for 17.11.
Also, if you have bandwidth to patch the app for your desired use cases,
that would be a good contribution. I'd only be guessing for some of it :)
>> Regards,
>> Dave
>>
>> On 10/5/2017 3:12 PM, Jerin Jacob wrote:
>>> -----Original Message-----
>>>> Date: Fri, 21 Apr 2017 10:51:37 +0100
>>>> From: Harry van Haaren <harry.van.haaren@intel.com>
>>>> To: dev@dpdk.org
>>>> CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
>>>> <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
>>>> Richardson <bruce.richardson@intel.com>
>>>> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
>>>> X-Mailer: git-send-email 2.7.4
>>>>
>>>> This commit adds a sample app for the eventdev library.
>>>> The app has been tested with DPDK 17.05-rc2, hence this
>>>> release (or later) is recommended.
>>>>
>>>> The sample app showcases a pipeline processing use-case,
>>>> with event scheduling and processing defined per stage.
>>>> The application receives traffic as normal, with each
>>>> packet traversing the pipeline. Once the packet has
>>>> been processed by each of the pipeline stages, it is
>>>> transmitted again.
>>>>
>>>> The app provides a framework to utilize cores for a single
>>>> role or multiple roles. Examples of roles are the RX core,
>>>> TX core, Scheduling core (in the case of the event/sw PMD),
>>>> and worker cores.
>>>>
>>>> Various flags are available to configure numbers of stages,
>>>> cycles of work at each stage, type of scheduling, number of
>>>> worker cores, queue depths etc. For a full explanation,
>>>> please refer to the documentation.
>>>>
>>>> Signed-off-by: Gage Eads <gage.eads@intel.com>
>>>> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
>>>> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
>>> Thanks for the example application sharing the SW view.
>>> I could make it run on HW after some tweaking (not optimized though).
>>>
>>> [...]
>>>> +#define MAX_NUM_STAGES 8
>>>> +#define BATCH_SIZE 16
>>>> +#define MAX_NUM_CORE 64
>>> How about RTE_MAX_LCORE?
>> Core usage in the sample app is held in a uint64_t. Adding arrays would be
>> possible, but I feel that the extra effort would not give that much benefit.
>> I've left as is for the moment, unless you see any strong requirement to go
>> beyond 64 cores?
> I think, it is OK. Again with service core infrastructure this will change.
>
>>>> +
>>>> +static unsigned int active_cores;
>>>> +static unsigned int num_workers;
>>>> +static unsigned long num_packets = (1L << 25); /* do ~32M packets */
>>>> +static unsigned int num_fids = 512;
>>>> +static unsigned int num_priorities = 1;
>>> Looks like it's not used.
>> Yes, Removed.
>>
>>>> +static unsigned int num_stages = 1;
>>>> +static unsigned int worker_cq_depth = 16;
>>>> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
>>>> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
>>>> +static int16_t qid[MAX_NUM_STAGES] = {-1};
>>> Moving all fastpath related variables under a structure with cache
>>> aligned will help.
>> I tried a few different combinations of this, and saw no gains, some losses.
>> So will leave as is for the moment, if that's OK.
> I think the ones used in the fastpath are better allocated from huge pages
> using rte_malloc().
>
>>>> +static int worker_cycles;
>>>> +static int enable_queue_priorities;
>>>> +
>>>> +struct prod_data {
>>>> + uint8_t dev_id;
>>>> + uint8_t port_id;
>>>> + int32_t qid;
>>>> + unsigned num_nic_ports;
>>>> +};
>> Yes, I saw a percent or two gain when this plus the following two data
>> structs were cache aligned.
> Looks like it is not fixed in v2. Looks like you have sent the old
> version.
>
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static int
>>>> +producer(void)
>>>> +{
>>>> + static uint8_t eth_port;
>>>> + struct rte_mbuf *mbufs[BATCH_SIZE];
>>>> + struct rte_event ev[BATCH_SIZE];
>>>> + uint32_t i, num_ports = prod_data.num_nic_ports;
>>>> + int32_t qid = prod_data.qid;
>>>> + uint8_t dev_id = prod_data.dev_id;
>>>> + uint8_t port_id = prod_data.port_id;
>>>> + uint32_t prio_idx = 0;
>>>> +
>>>> + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
>>>> + if (++eth_port == num_ports)
>>>> + eth_port = 0;
>>>> + if (nb_rx == 0) {
>>>> + rte_pause();
>>>> + return 0;
>>>> + }
>>>> +
>>>> + for (i = 0; i < nb_rx; i++) {
>>>> + ev[i].flow_id = mbufs[i]->hash.rss;
>>> prefetching the buff[i+1] may help here?
>> I tried, didn't make much difference.
> OK.
>
>>>> + ev[i].op = RTE_EVENT_OP_NEW;
>>>> + ev[i].sched_type = queue_type;
>>> The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED.
>>> So, we cannot assign .sched_type as queue_type.
>>>
>>> I think one option to avoid the translation in the application is to:
>>> - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES, RTE_EVENT_QUEUE_CFG_*_ONLY
>>> - Introduce a new RTE_EVENT_DEV_CAP_ to denote the
>>> RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
>>> - add sched_type in struct rte_event_queue_conf. If capability flag is
>>> not set then implementation takes sched_type value for the queue.
>>>
>>> Any thoughts?
>>
>> Not sure here, would it be ok for the moment, and we can work on a patch in
>> the future?
> OK
>
>>>> +
>>>> + if (tx_core[lcore_id] && (tx_single ||
>>>> + rte_atomic32_cmpset(&tx_lock, 0, 1))) {
>>>> + consumer();
>>> Does consumer() need to come in this pattern? I am thinking that
>>> if the event is from the last stage then worker() should call consumer().
>>>
>>> I think the above scheme works better when the _same_ worker code needs
>>> to run in the cases where
>>> 1) the ethdev HW is capable of enqueuing packets to the same txq from
>>> multiple threads
>>> 2) the ethdev is not capable of doing so.
>>>
>>> So, the above cases can be addressed at configuration time, where we link
>>> the queues to ports:
>>> case 1) Link all workers to last queue
>>> case 2) Link only worker to last queue
>>>
>>> and keeping the common worker code.
>>>
>>> The HW implementation has functional and performance issues if two ports
>>> are assigned to one lcore for dequeue. The above scheme fixes that problem
>>> too.
>>
>>
>> Can we have a bit more discussion on this item? Is this needed for this
>> sample app, or can we perhaps work a patch for this later? Harry?
> As explained above, is there any issue in keeping consumer() for the last
> stage?
I would probably see this as a future enhancement as per my initial
comments above. Any hardware or new framework additions are welcome as
future patches to the app.
>>
>>>> + rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
>>>> + }
>>>> +}
>>>> +
>>>> +static int
>>>> +worker(void *arg)
>>>> +{
>>>> + struct rte_event events[BATCH_SIZE];
>>>> +
>>>> + struct worker_data *data = (struct worker_data *)arg;
>>>> + uint8_t dev_id = data->dev_id;
>>>> + uint8_t port_id = data->port_id;
>>>> + size_t sent = 0, received = 0;
>>>> + unsigned lcore_id = rte_lcore_id();
>>>> +
>>>> + while (!done) {
>>>> + uint16_t i;
>>>> +
>>>> + schedule_devices(dev_id, lcore_id);
>>>> +
>>>> + if (!worker_core[lcore_id]) {
>>>> + rte_pause();
>>>> + continue;
>>>> + }
>>>> +
>>>> + uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
>>>> + events, RTE_DIM(events), 0);
>>>> +
>>>> + if (nb_rx == 0) {
>>>> + rte_pause();
>>>> + continue;
>>>> + }
>>>> + received += nb_rx;
>>>> +
>>>> + for (i = 0; i < nb_rx; i++) {
>>>> + struct ether_hdr *eth;
>>>> + struct ether_addr addr;
>>>> + struct rte_mbuf *m = events[i].mbuf;
>>>> +
>>>> + /* The first worker stage does classification */
>>>> + if (events[i].queue_id == qid[0])
>>>> + events[i].flow_id = m->hash.rss % num_fids;
>>> Not sure why we need to do this (shrinking the flows) in worker() in a
>>> queue-based pipeline.
>>> If a PMD has any specific requirement on num_fids, I think we can move this
>>> to the configuration stage, or the PMD can choose an optimum fid internally
>>> to avoid the modulus operation tax in the fastpath in all PMDs.
>>>
>>> Does struct rte_event_queue_conf.nb_atomic_flows help here?
>> In my tests the modulus makes very little difference in the throughput. And
>> I think it's good to have a way of varying the number of flows for testing
>> different scenarios, even if it's not the most performant.
> Not sure.
>
>>>> +
>>>> + events[i].queue_id = next_qid[events[i].queue_id];
>>>> + events[i].op = RTE_EVENT_OP_FORWARD;
>>> events[i].sched_type is missing. The HW PMD does not work with this.
>>> I think we can use a similar scheme like next_qid for next_sched_type.
>> Done. added events[i].sched_type = queue_type.
>>
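For illustration, the next_qid-style lookup extended to sched_type could look roughly like the self-contained sketch below. The names and stand-in types here are illustrative only; the real app uses struct rte_event and the RTE_SCHED_TYPE_* values.

```c
#include <assert.h>

#define SKETCH_MAX_STAGES 8

/* Per-queue lookup tables, indexed by the current queue id. These are
 * filled in at configuration time in the real app; the values used
 * here are illustrative. */
static int next_qid_tbl[SKETCH_MAX_STAGES + 1];
static int next_sched_tbl[SKETCH_MAX_STAGES + 1];

/* Stand-in for struct rte_event: only the fields this sketch needs. */
struct sketch_event {
	int queue_id;
	int sched_type;
};

/* Forward an event to its next stage, setting both the destination
 * queue and the scheduling type expected for that queue. */
static void
forward_event(struct sketch_event *ev)
{
	const int cur = ev->queue_id;

	ev->queue_id = next_qid_tbl[cur];
	ev->sched_type = next_sched_tbl[cur];
}
```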
>>>> +
>>>> + /* change mac addresses on packet (to use mbuf data) */
>>>> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
>>>> + ether_addr_copy(&eth->d_addr, &addr);
>>>> + ether_addr_copy(&eth->s_addr, &eth->d_addr);
>>>> + ether_addr_copy(&addr, &eth->s_addr);
>>> IMO, we can make the packet processing code a "static inline" function so
>>> different worker types can reuse it.
>> Done. moved out to a work() function.
> I think the mac swap should be done in the last stage, not on each forward.
> i.e. with the existing code, a 2-stage pipeline swaps twice, leaving the
> addresses in their original order.
>
>>>> +
>>>> + /* do a number of cycles of work per packet */
>>>> + volatile uint64_t start_tsc = rte_rdtsc();
>>>> + while (rte_rdtsc() < start_tsc + worker_cycles)
>>>> + rte_pause();
>>> Ditto.
>> Done. moved out to a work() function.
>>
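The factored-out work() mentioned above could be sketched as below. This is a simplified, self-contained illustration: a plain byte buffer stands in for rte_mbuf/struct ether_hdr, and the function name is hypothetical.

```c
#include <stdint.h>
#include <string.h>

#define SKETCH_MAC_LEN 6

/* Sketch of the per-packet work factored out of worker(): swap the
 * destination and source MAC addresses at the start of the frame. */
static inline void
work_mac_swap(uint8_t *frame)
{
	uint8_t tmp[SKETCH_MAC_LEN];

	memcpy(tmp, frame, SKETCH_MAC_LEN);                    /* save dst */
	memcpy(frame, frame + SKETCH_MAC_LEN, SKETCH_MAC_LEN); /* src -> dst */
	memcpy(frame + SKETCH_MAC_LEN, tmp, SKETCH_MAC_LEN);   /* old dst -> src */
}
```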
>>> I think all worker-specific variables like "worker_cycles" can be moved
>>> into one structure and used from there.
>>>
>>>> + }
>>>> + uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
>>>> + events, nb_rx);
>>>> + while (nb_tx < nb_rx && !done)
>>>> + nb_tx += rte_event_enqueue_burst(dev_id, port_id,
>>>> + events + nb_tx,
>>>> + nb_rx - nb_tx);
>>>> + sent += nb_tx;
>>>> + }
>>>> +
>>>> + if (!quiet)
>>>> + printf(" worker %u thread done. RX=%zu TX=%zu\n",
>>>> + rte_lcore_id(), received, sent);
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +/*
>>>> + * Parse the coremask given as argument (hexadecimal string) and fill
>>>> + * the global configuration (core role and core count) with the parsed
>>>> + * value.
>>>> + */
>>>> +static int xdigit2val(unsigned char c)
>>> There are multiple instances of "xdigit2val" in the DPDK repo. Maybe we
>>> can push this as common code.
>> Sure, that's something we can look at in a separate patch, now that it's
>> being used more and more.
> make sense.
>
>>>> +{
>>>> + int val;
>>>> +
>>>> + if (isdigit(c))
>>>> + val = c - '0';
>>>> + else if (isupper(c))
>>>> + val = c - 'A' + 10;
>>>> + else
>>>> + val = c - 'a' + 10;
>>>> + return val;
>>>> +}
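Building on xdigit2val(), a minimal parse_coremask() could look like the sketch below. This is a simplified stand-in, not the app's actual parser (which, among other things, may accept a "0x" prefix and reject masks wider than 64 bits):

```c
#include <ctype.h>
#include <stdint.h>

/* Same logic as the app's xdigit2val(), repeated here so the sketch
 * is self-contained. */
static int
xdigit2val_sketch(unsigned char c)
{
	if (isdigit(c))
		return c - '0';
	if (isupper(c))
		return c - 'A' + 10;
	return c - 'a' + 10;
}

/* Turn a hex string such as "f0" into a 64-bit core bitmask.
 * Returns 0 on any invalid character. */
static uint64_t
parse_coremask_sketch(const char *arg)
{
	uint64_t mask = 0;

	for (; *arg != '\0'; arg++) {
		if (!isxdigit((unsigned char)*arg))
			return 0;
		mask = (mask << 4) |
			(uint64_t)xdigit2val_sketch((unsigned char)*arg);
	}
	return mask;
}
```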
>>>> +
>>>> +
>>>> +static void
>>>> +usage(void)
>>>> +{
>>>> + const char *usage_str =
>>>> + " Usage: eventdev_demo [options]\n"
>>>> + " Options:\n"
>>>> + " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
>>>> + " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
>>> I think this parameter now affects the application fast path code. I
>>> think it should be an eventdev configuration parameter.
>> See above comment on num_fids
>>
>>>> + " -s, --num_stages=N Use N atomic stages (default 1)\n"
>>>> + " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
>>>> + " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
>>>> + " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
>>>> + " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
>>>> + " -c --cq-depth=N Worker CQ depth (default 16)\n"
>>>> + " -W --work-cycles=N Worker cycles (default 0)\n"
>>>> + " -P --queue-priority Enable scheduler queue prioritization\n"
>>>> + " -o, --ordered Use ordered scheduling\n"
>>>> + " -p, --parallel Use parallel scheduling\n"
>>> IMO, all stages being "parallel" or "ordered" or "atomic" is one mode of
>>> operation. It is valid to have any combination. We need to express that on
>>> the command line, for example:
>>> example:
>>> 3 stage with
>>> O->A->P
>> How about we add an option that specifies the mode of operation for each
>> stage in a string? Maybe have a '-m' option (modes) e.g. '-m appo' for 4
>> stages with atomic, parallel, parallel, ordered. Or maybe reuse your
>> test-eventdev parameter style?
> Any scheme is fine.
>
>>>> + " -q, --quiet Minimize printed output\n"
>>>> + " -D, --dump Print detailed statistics before exit"
>>>> + "\n";
>>>> + fprintf(stderr, "%s", usage_str);
>>>> + exit(1);
>>>> +}
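The '-m' per-stage mode string floated above could be parsed along the lines of the sketch below. The function name and mode enum are hypothetical; a real patch would map the characters onto the RTE_SCHED_TYPE_* values instead.

```c
/* Hypothetical per-stage scheduling modes; stand-ins for the
 * RTE_SCHED_TYPE_* values. */
enum stage_mode {
	MODE_ATOMIC,
	MODE_ORDERED,
	MODE_PARALLEL,
};

/* Parse a string such as "appo" into one mode per pipeline stage.
 * Returns the number of stages parsed, or -1 on an invalid character. */
static int
parse_stage_modes(const char *s, enum stage_mode *modes, int max_stages)
{
	int n = 0;

	for (; *s != '\0' && n < max_stages; s++, n++) {
		switch (*s) {
		case 'a':
			modes[n] = MODE_ATOMIC;
			break;
		case 'o':
			modes[n] = MODE_ORDERED;
			break;
		case 'p':
			modes[n] = MODE_PARALLEL;
			break;
		default:
			return -1;
		}
	}
	return n;
}
```

With this scheme, '-m appo' would configure four stages: atomic, parallel, parallel, ordered.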
>>>> +
>>> [...]
>>>
>>>> + rx_single = (popcnt == 1);
>>>> + break;
>>>> + case 't':
>>>> + tx_lcore_mask = parse_coremask(optarg);
>>>> + popcnt = __builtin_popcountll(tx_lcore_mask);
>>>> + tx_single = (popcnt == 1);
>>>> + break;
>>>> + case 'e':
>>>> + sched_lcore_mask = parse_coremask(optarg);
>>>> + popcnt = __builtin_popcountll(sched_lcore_mask);
>>>> + sched_single = (popcnt == 1);
>>>> + break;
>>>> + default:
>>>> + usage();
>>>> + }
>>>> + }
>>>> +
>>>> + if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
>>>> + sched_lcore_mask == 0 || tx_lcore_mask == 0) {
>>>> +
>>>> + /* Q creation - one load balanced per pipeline stage*/
>>>> +
>>>> + /* set up one port per worker, linking to all stage queues */
>>>> + for (i = 0; i < num_workers; i++) {
>>>> + struct worker_data *w = &worker_data[i];
>>>> + w->dev_id = dev_id;
>>>> + if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
>>>> + printf("Error setting up port %d\n", i);
>>>> + return -1;
>>>> + }
>>>> +
>>>> + uint32_t s;
>>>> + for (s = 0; s < num_stages; s++) {
>>>> + if (rte_event_port_link(dev_id, i,
>>>> + &worker_queues[s].queue_id,
>>>> + &worker_queues[s].priority,
>>>> + 1) != 1) {
>>>> + printf("%d: error creating link for port %d\n",
>>>> + __LINE__, i);
>>>> + return -1;
>>>> + }
>>>> + }
>>>> + w->port_id = i;
>>>> + }
>>>> + /* port for consumer, linked to TX queue */
>>>> + if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
>>> If the ethdev supports MT txq enqueue then this port can be linked to a
>>> worker too. Something to consider for the future.
>>>
>> Sure. No change for now.
> OK
Just to add a note on the remaining comments above: we would hope that none
of them are blockers for the merge of the current version, as they can be
patched in the future as the infrastructure changes.
Rgds,
Dave.
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-06-27 13:12 ` Hunt, David
@ 2017-06-29 7:17 ` Jerin Jacob
2017-06-29 12:51 ` Hunt, David
0 siblings, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-06-29 7:17 UTC (permalink / raw)
To: Hunt, David; +Cc: Harry van Haaren, dev, Gage Eads, Bruce Richardson
-----Original Message-----
> Date: Tue, 27 Jun 2017 14:12:20 +0100
> From: "Hunt, David" <david.hunt@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: Harry van Haaren <harry.van.haaren@intel.com>, dev@dpdk.org, Gage Eads
> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
> sample app
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> Hi Jerin:
>
>
> On 27/6/2017 10:35 AM, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Mon, 26 Jun 2017 15:46:47 +0100
> > > From: "Hunt, David" <david.hunt@intel.com>
> > > To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, Harry van Haaren
> > > <harry.van.haaren@intel.com>
> > > CC: dev@dpdk.org, Gage Eads <gage.eads@intel.com>, Bruce Richardson
> > > <bruce.richardson@intel.com>
> > > Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
> > > sample app
> > > User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> > > Thunderbird/45.8.0
> > >
> > > Hi Jerin,
> > Hi David,
> >
> > Looks like you have sent the old version. The below mentioned comments
> > are not addressed in v2.
>
> Oops. Glitch in the Matrix. I've just pushed a V3 with the changes.
>
> > > I'm assisting Harry on the sample app, and have just pushed up a V2 patch
> > > based on your feedback. I've addressed most of your suggestions, comments
> > > below. There may still be a couple of outstanding questions that need further
> > > discussion.
> > A few general comments:
> > 1) Nikhil/Gage's proposal on ethdev rx to eventdev adapter will change the major
> > portion(eventdev setup and producer()) of this application
> > 2) Producing one lcore worth of packets really can't serve as an example
> > eventdev application, as it will be pretty bad on a low-end machine.
> > At least application infrastructure should not limit.
> >
> > Considering above points, Should we wait for rx adapter to complete
> > first? I would like to show this as real world application to use eventdev.
> >
> > Thoughts?
> >
> > On the same note:
> > Can we finalize on rx adapter proposal? I can work on v1 of patch and
> > common code if Nikhil or Gage don't have bandwidth. Let me know?
> >
> > last followup:
> > http://dpdk.org/ml/archives/dev/2017-June/068776.html
>
> I had a quick chat with Harry, and wonder if we'd be as well to merge the
> app as it is now, and as the new frameworks become available, the app can be
> updated to make use of them? I feel it would be better to have something out
> there for people to play with than waiting for 17.11.
I agree with your concern.
How about renaming the test and doc specific to the SW PMD; then, once we
fix the known issues with HW eventdev + ethdev (Rx adapter) integration,
rename the application to generic eventdev.
>
> Also, if you have bandwidth to patch the app for your desired use cases,
> that would be a good contribution. I'd only be guessing for some of it :)
>
>
> > > Regards,
> > > Dave
> > >
> > > On 10/5/2017 3:12 PM, Jerin Jacob wrote:
> > > > -----Original Message-----
> > > > > Date: Fri, 21 Apr 2017 10:51:37 +0100
> > > > > From: Harry van Haaren <harry.van.haaren@intel.com>
> > > > > To: dev@dpdk.org
> > > > > CC: jerin.jacob@caviumnetworks.com, Harry van Haaren
> > > > > <harry.van.haaren@intel.com>, Gage Eads <gage.eads@intel.com>, Bruce
> > > > > Richardson <bruce.richardson@intel.com>
> > > > > Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> > > > > X-Mailer: git-send-email 2.7.4
> > > > >
> > > > > This commit adds a sample app for the eventdev library.
> > > > > The app has been tested with DPDK 17.05-rc2, hence this
> > > > > release (or later) is recommended.
> > > > >
> > > > > The sample app showcases a pipeline processing use-case,
> > > > > with event scheduling and processing defined per stage.
> > > > > The application receives traffic as normal, with each
> > > > > packet traversing the pipeline. Once the packet has
> > > > > been processed by each of the pipeline stages, it is
> > > > > transmitted again.
> > > > >
> > > > > The app provides a framework to utilize cores for a single
> > > > > role or multiple roles. Examples of roles are the RX core,
> > > > > TX core, Scheduling core (in the case of the event/sw PMD),
> > > > > and worker cores.
> > > > >
> > > > > Various flags are available to configure numbers of stages,
> > > > > cycles of work at each stage, type of scheduling, number of
> > > > > worker cores, queue depths etc. For a full explanation,
> > > > > please refer to the documentation.
> > > > >
> > > > > Signed-off-by: Gage Eads <gage.eads@intel.com>
> > > > > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> > > > Thanks for the example application to share the SW view.
> > > > I could make it run on HW after some tweaking(not optimized though)
> > > >
> > > > [...]
> > > > > +#define MAX_NUM_STAGES 8
> > > > > +#define BATCH_SIZE 16
> > > > > +#define MAX_NUM_CORE 64
> > > > How about RTE_MAX_LCORE?
> > > Core usage in the sample app is held in a uint64_t. Adding arrays would be
> > > possible, but I feel that the extra effort would not give that much benefit.
> > > I've left as is for the moment, unless you see any strong requirement to go
> > > beyond 64 cores?
> > I think, it is OK. Again with service core infrastructure this will change.
> >
> > > > > +
> > > > > +static unsigned int active_cores;
> > > > > +static unsigned int num_workers;
> > > > > +static unsigned long num_packets = (1L << 25); /* do ~32M packets */
> > > > > +static unsigned int num_fids = 512;
> > > > > +static unsigned int num_priorities = 1;
> > > > looks like its not used.
> > > Yes, Removed.
> > >
> > > > > +static unsigned int num_stages = 1;
> > > > > +static unsigned int worker_cq_depth = 16;
> > > > > +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> > > > > +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> > > > > +static int16_t qid[MAX_NUM_STAGES] = {-1};
> > > > Moving all fastpath related variables under a structure with cache
> > > > aligned will help.
> > > I tried a few different combinations of this, and saw no gains, some losses.
> > > So will leave as is for the moment, if that's OK.
> > I think, the one are using in fastpath better to allocate from huge page
> > using rte_malloc()
> >
> > > > > +static int worker_cycles;
> > > > > +static int enable_queue_priorities;
> > > > > +
> > > > > +struct prod_data {
> > > > > + uint8_t dev_id;
> > > > > + uint8_t port_id;
> > > > > + int32_t qid;
> > > > > + unsigned num_nic_ports;
> > > > > +};
> > > Yes, saw a percent or two gain when this plus following two data structs
> > > cache aligned.
> > looks like it not fixed in v2. Looks like you have sent the old
> > version.
> >
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +static int
> > > > > +producer(void)
> > > > > +{
> > > > > + static uint8_t eth_port;
> > > > > + struct rte_mbuf *mbufs[BATCH_SIZE];
> > > > > + struct rte_event ev[BATCH_SIZE];
> > > > > + uint32_t i, num_ports = prod_data.num_nic_ports;
> > > > > + int32_t qid = prod_data.qid;
> > > > > + uint8_t dev_id = prod_data.dev_id;
> > > > > + uint8_t port_id = prod_data.port_id;
> > > > > + uint32_t prio_idx = 0;
> > > > > +
> > > > > + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
> > > > > + if (++eth_port == num_ports)
> > > > > + eth_port = 0;
> > > > > + if (nb_rx == 0) {
> > > > > + rte_pause();
> > > > > + return 0;
> > > > > + }
> > > > > +
> > > > > + for (i = 0; i < nb_rx; i++) {
> > > > > + ev[i].flow_id = mbufs[i]->hash.rss;
> > > > prefetching the buff[i+1] may help here?
> > > I tried, didn't make much difference.
> > OK.
> >
> > > > > + ev[i].op = RTE_EVENT_OP_NEW;
> > > > > + ev[i].sched_type = queue_type;
> > > > The value of RTE_EVENT_QUEUE_CFG_ORDERED_ONLY != RTE_SCHED_TYPE_ORDERED.
> > > > So, we cannot assign .sched_type as queue_type.
> > > >
> > > > I think, one option could be to avoid translation in application is to
> > > > - Remove RTE_EVENT_QUEUE_CFG_ALL_TYPES, RTE_EVENT_QUEUE_CFG_*_ONLY
> > > > - Introduce a new RTE_EVENT_DEV_CAP_ to denote the
> > > > RTE_EVENT_QUEUE_CFG_ALL_TYPES capability
> > > > - add sched_type in struct rte_event_queue_conf. If capability flag is
> > > > not set then implementation takes sched_type value for the queue.
> > > >
> > > > Any thoughts?
> > >
> > > Not sure here, would it be ok for the moment, and we can work on a patch in
> > > the future?
> > OK
> >
> > > > > +
> > > > > + if (tx_core[lcore_id] && (tx_single ||
> > > > > + rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> > > > > + consumer();
> > > > Should consumer() need to come in this pattern? I am thinking like
> > > > if events is from last stage then call consumer() in worker()
> > > >
> > > > I think, above scheme works better when the _same_ worker code need to run
> > > > the case where
> > > > 1) ethdev HW is capable to enqueuing the packets to same txq from
> > > > multiple thread
> > > > 2) ethdev is not capable to do so.
> > > >
> > > > So, The above cases can be addressed in configuration time where we link
> > > > the queues to port
> > > > case 1) Link all workers to last queue
> > > > case 2) Link only worker to last queue
> > > >
> > > > and keeping the common worker code.
> > > >
> > > > HW implementation has functional and performance issues if "two" ports are
> > > > assigned to one lcore for dequeue. The above scheme fixes that problem too.
> > >
> > >
> > > Can we have a bit more discussion on this item? Is this needed for this
> > > sample app, or can we perhaps work a patch for this later? Harry?
> > As explained above, Is there any issue in keeping consumer() for last
> > stage ?
>
> I would probably see this as a future enhancement as per my initial comments
> above. Any hardware or new framework additions are welcome as future patches
> to the app.
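The two configuration-time cases Jerin describes (MT-safe ethdev txq versus a single dedicated consumer) can be reduced to a small per-port predicate. The function and parameter names below are illustrative for this sketch only, not part of the patch:

```c
#include <stdbool.h>

/* Jerin's two configuration-time cases for the final (TX) queue:
 * case 1: ethdev txq is MT-safe  -> link every worker port to it;
 * case 2: txq is not MT-safe     -> link only the dedicated consumer port.
 * This sketch only decides, per event port, whether it should be linked;
 * the actual linking would be done with rte_event_port_link(). */
static bool
link_port_to_tx_queue(unsigned int port, unsigned int num_workers,
		      unsigned int consumer_port, bool txq_mt_safe)
{
	if (txq_mt_safe)
		return port < num_workers;	/* case 1: all workers */
	return port == consumer_port;		/* case 2: consumer only */
}
```

With this predicate the worker loop stays common code; only the link setup changes between the two hardware capabilities.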
>
> > >
> > > > > + rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> > > > > + }
> > > > > +}
> > > > > +
> > > > > +static int
> > > > > +worker(void *arg)
> > > > > +{
> > > > > + struct rte_event events[BATCH_SIZE];
> > > > > +
> > > > > + struct worker_data *data = (struct worker_data *)arg;
> > > > > + uint8_t dev_id = data->dev_id;
> > > > > + uint8_t port_id = data->port_id;
> > > > > + size_t sent = 0, received = 0;
> > > > > + unsigned lcore_id = rte_lcore_id();
> > > > > +
> > > > > + while (!done) {
> > > > > + uint16_t i;
> > > > > +
> > > > > + schedule_devices(dev_id, lcore_id);
> > > > > +
> > > > > + if (!worker_core[lcore_id]) {
> > > > > + rte_pause();
> > > > > + continue;
> > > > > + }
> > > > > +
> > > > > + uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
> > > > > + events, RTE_DIM(events), 0);
> > > > > +
> > > > > + if (nb_rx == 0) {
> > > > > + rte_pause();
> > > > > + continue;
> > > > > + }
> > > > > + received += nb_rx;
> > > > > +
> > > > > + for (i = 0; i < nb_rx; i++) {
> > > > > + struct ether_hdr *eth;
> > > > > + struct ether_addr addr;
> > > > > + struct rte_mbuf *m = events[i].mbuf;
> > > > > +
> > > > > + /* The first worker stage does classification */
> > > > > + if (events[i].queue_id == qid[0])
> > > > > + events[i].flow_id = m->hash.rss % num_fids;
> > > > Not sure why we need to do this (shrinking the flows) in worker() in a
> > > > queue based pipeline.
> > > > If a PMD has any specific requirement on num_fids, I think, we
> > > > can move this to the configuration stage, or the PMD can choose an optimum
> > > > fid internally to avoid the modulus operation tax in the fastpath in all PMDs.
> > > >
> > > > Does struct rte_event_queue_conf.nb_atomic_flows help here?
> > > In my tests the modulus makes very little difference in the throughput. And
> > > I think it's good to have a way of varying the number of flows for testing
> > > different scenarios, even if it's not the most performant.
> > Not sure.
> >
> > > > > +
> > > > > + events[i].queue_id = next_qid[events[i].queue_id];
> > > > > + events[i].op = RTE_EVENT_OP_FORWARD;
> > > > missing events[i].sched_type.HW PMD does not work with this.
> > > > I think, we can use similar scheme like next_qid for next_sched_type.
> > > Done. added events[i].sched_type = queue_type.
> > >
> > > > > +
> > > > > + /* change mac addresses on packet (to use mbuf data) */
> > > > > + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> > > > > + ether_addr_copy(&eth->d_addr, &addr);
> > > > > + ether_addr_copy(&eth->s_addr, &eth->d_addr);
> > > > > + ether_addr_copy(&addr, &eth->s_addr);
> > > > IMO, We can make packet processing code as "static inline function"
> > > > so different worker types can reuse.
> > > Done. moved out to a work() function.
> > I think, mac swap should do in last stage, not on each forward.
> > ie. With existing code, 2 stage forward makes in original order.
> >
> > > > > +
> > > > > + /* do a number of cycles of work per packet */
> > > > > + volatile uint64_t start_tsc = rte_rdtsc();
> > > > > + while (rte_rdtsc() < start_tsc + worker_cycles)
> > > > > + rte_pause();
> > > > Ditto.
> > > Done. moved out to a work() function.
> > >
> > > > I think, All worker specific variables like "worker_cycles" can moved into
> > > > one structure and use.
> > > >
> > > > > + }
> > > > > + uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
> > > > > + events, nb_rx);
> > > > > + while (nb_tx < nb_rx && !done)
> > > > > + nb_tx += rte_event_enqueue_burst(dev_id, port_id,
> > > > > + events + nb_tx,
> > > > > + nb_rx - nb_tx);
> > > > > + sent += nb_tx;
> > > > > + }
> > > > > +
> > > > > + if (!quiet)
> > > > > + printf(" worker %u thread done. RX=%zu TX=%zu\n",
> > > > > + rte_lcore_id(), received, sent);
> > > > > +
> > > > > + return 0;
> > > > > +}
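The refactor agreed above (pulling the MAC swap and the per-packet cycle burn out of worker() into a work() helper) might look roughly like the following sketch. It models the header with a plain struct so it stays free of DPDK dependencies; the work() name follows the discussion, not the final patch, and the busy-wait is modelled as a plain loop rather than rte_rdtsc():

```c
#include <stdint.h>
#include <string.h>

#define ETH_ALEN 6

/* Simplified stand-in for struct ether_hdr: destination and source MACs. */
struct eth_hdr {
	uint8_t d_addr[ETH_ALEN];
	uint8_t s_addr[ETH_ALEN];
};

/* Per-packet work pulled out of worker(): swap src/dst MACs, then burn
 * a configurable amount of "work" (modelled here as a volatile loop so
 * the sketch stays host-independent; the real app busy-waits on
 * rte_rdtsc() for worker_cycles TSC ticks). */
static void
work(struct eth_hdr *eth, unsigned int work_cycles)
{
	uint8_t tmp[ETH_ALEN];

	memcpy(tmp, eth->d_addr, ETH_ALEN);
	memcpy(eth->d_addr, eth->s_addr, ETH_ALEN);
	memcpy(eth->s_addr, tmp, ETH_ALEN);

	for (volatile unsigned int i = 0; i < work_cycles; i++)
		;
}
```

Keeping this as a static inline helper lets different worker variants reuse it, as suggested earlier in the thread.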
> > > > > +
> > > > > +/*
> > > > > + * Parse the coremask given as argument (hexadecimal string) and fill
> > > > > + * the global configuration (core role and core count) with the parsed
> > > > > + * value.
> > > > > + */
> > > > > +static int xdigit2val(unsigned char c)
> > > > multiple instance of "xdigit2val" in DPDK repo. May be we can push this
> > > > as common code.
> > > Sure, that's something we can look at in a separate patch, now that it's
> > > being used more and more.
> > make sense.
> >
> > > > > +{
> > > > > + int val;
> > > > > +
> > > > > + if (isdigit(c))
> > > > > + val = c - '0';
> > > > > + else if (isupper(c))
> > > > > + val = c - 'A' + 10;
> > > > > + else
> > > > > + val = c - 'a' + 10;
> > > > > + return val;
> > > > > +}
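For reference, a minimal coremask parser built on the xdigit2val() helper quoted above might look like this. The helper is reproduced so the sketch is self-contained, and parse_coremask_sketch() is an illustrative simplification that skips the input validation the real parser would need:

```c
#include <ctype.h>
#include <stdint.h>

/* Hex digit to value -- the helper the thread suggests moving to common code. */
static int
xdigit2val(unsigned char c)
{
	if (isdigit(c))
		return c - '0';
	if (isupper(c))
		return c - 'A' + 10;
	return c - 'a' + 10;
}

/* Minimal coremask parser: accepts an optional "0x" prefix and folds each
 * hex digit into a 64-bit mask. Error handling (non-hex characters,
 * overflow past 64 bits) is omitted for brevity. */
static uint64_t
parse_coremask_sketch(const char *arg)
{
	uint64_t mask = 0;

	if (arg[0] == '0' && (arg[1] == 'x' || arg[1] == 'X'))
		arg += 2;
	for (; *arg != '\0'; arg++)
		mask = (mask << 4) | (uint64_t)xdigit2val((unsigned char)*arg);
	return mask;
}
```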
> > > > > +
> > > > > +
> > > > > +static void
> > > > > +usage(void)
> > > > > +{
> > > > > + const char *usage_str =
> > > > > + " Usage: eventdev_demo [options]\n"
> > > > > + " Options:\n"
> > > > > + " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
> > > > > + " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
> > > > I think, this parameter now affects the application fast path code. I think,
> > > > it should be an eventdev configuration parameter.
> > > See above comment on num_fids
> > >
> > > > > + " -s, --num_stages=N Use N atomic stages (default 1)\n"
> > > > > + " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
> > > > > + " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
> > > > > + " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
> > > > > + " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
> > > > > + " -c --cq-depth=N Worker CQ depth (default 16)\n"
> > > > > + " -W --work-cycles=N Worker cycles (default 0)\n"
> > > > > + " -P --queue-priority Enable scheduler queue prioritization\n"
> > > > > + " -o, --ordered Use ordered scheduling\n"
> > > > > + " -p, --parallel Use parallel scheduling\n"
> > > > IMO, all stage being "parallel" or "ordered" or "atomic" is one mode of
> > > > operation. It is valid have to any combination. We need to express that in
> > > > command like
> > > > example:
> > > > 3 stage with
> > > > O->A->P
> > > How about we add an option that specifies the mode of operation for each
> > > stage in a string? Maybe have a '-m' option (modes) e.g. '-m appo' for 4
> > > stages with atomic, parallel, parallel, ordered. Or maybe reuse your
> > > test-eventdev parameter style?
> > Any scheme is fine.
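The proposed '-m' per-stage mode string (e.g. "appo") could be parsed along these lines. This is a sketch of the idea only; the option never took exactly this form in the patch, and the enum values below merely stand in for RTE_SCHED_TYPE_ATOMIC/ORDERED/PARALLEL:

```c
#define MAX_NUM_STAGES 8

/* Stand-ins for RTE_SCHED_TYPE_ORDERED/ATOMIC/PARALLEL. */
enum stage_sched {
	STAGE_ORDERED = 0,
	STAGE_ATOMIC = 1,
	STAGE_PARALLEL = 2,
};

/* Parse a hypothetical '-m' argument such as "appo" into one sched type
 * per stage. Returns the number of stages, or -1 on a bad character or
 * too many stages. */
static int
parse_stage_modes(const char *arg, enum stage_sched types[MAX_NUM_STAGES])
{
	int n = 0;

	for (; *arg != '\0'; arg++, n++) {
		if (n == MAX_NUM_STAGES)
			return -1;
		switch (*arg) {
		case 'a': types[n] = STAGE_ATOMIC; break;
		case 'o': types[n] = STAGE_ORDERED; break;
		case 'p': types[n] = STAGE_PARALLEL; break;
		default: return -1;
		}
	}
	return n;
}
```

With "appo" this yields four stages: atomic, parallel, parallel, ordered, matching the example floated in the discussion.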
> >
> > > > > + " -q, --quiet Minimize printed output\n"
> > > > > + " -D, --dump Print detailed statistics before exit"
> > > > > + "\n";
> > > > > + fprintf(stderr, "%s", usage_str);
> > > > > + exit(1);
> > > > > +}
> > > > > +
> > > > [...]
> > > >
> > > > > + rx_single = (popcnt == 1);
> > > > > + break;
> > > > > + case 't':
> > > > > + tx_lcore_mask = parse_coremask(optarg);
> > > > > + popcnt = __builtin_popcountll(tx_lcore_mask);
> > > > > + tx_single = (popcnt == 1);
> > > > > + break;
> > > > > + case 'e':
> > > > > + sched_lcore_mask = parse_coremask(optarg);
> > > > > + popcnt = __builtin_popcountll(sched_lcore_mask);
> > > > > + sched_single = (popcnt == 1);
> > > > > + break;
> > > > > + default:
> > > > > + usage();
> > > > > + }
> > > > > + }
> > > > > +
> > > > > + if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
> > > > > + sched_lcore_mask == 0 || tx_lcore_mask == 0) {
> > > > > +
> > > > > + /* Q creation - one load balanced per pipeline stage*/
> > > > > +
> > > > > + /* set up one port per worker, linking to all stage queues */
> > > > > + for (i = 0; i < num_workers; i++) {
> > > > > + struct worker_data *w = &worker_data[i];
> > > > > + w->dev_id = dev_id;
> > > > > + if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
> > > > > + printf("Error setting up port %d\n", i);
> > > > > + return -1;
> > > > > + }
> > > > > +
> > > > > + uint32_t s;
> > > > > + for (s = 0; s < num_stages; s++) {
> > > > > + if (rte_event_port_link(dev_id, i,
> > > > > + &worker_queues[s].queue_id,
> > > > > + &worker_queues[s].priority,
> > > > > + 1) != 1) {
> > > > > + printf("%d: error creating link for port %d\n",
> > > > > + __LINE__, i);
> > > > > + return -1;
> > > > > + }
> > > > > + }
> > > > > + w->port_id = i;
> > > > > + }
> > > > > + /* port for consumer, linked to TX queue */
> > > > > + if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
> > > > If ethdev supports MT txq queue support then this port can be linked to
> > > > worker too. something to consider for future.
> > > >
> > > Sure. No change for now.
> > OK
>
> Just to add a comment for any remaining comments above, we would hope that
> none of them are blockers for the merge of the current version, as they can
> be patched in the future as the infrastructure changes.
>
> Rgds,
> Dave.
>
>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added sample app
2017-06-29 7:17 ` Jerin Jacob
@ 2017-06-29 12:51 ` Hunt, David
0 siblings, 0 replies; 64+ messages in thread
From: Hunt, David @ 2017-06-29 12:51 UTC (permalink / raw)
To: Jerin Jacob; +Cc: Harry van Haaren, dev, Gage Eads, Bruce Richardson
On 29/6/2017 8:17 AM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Tue, 27 Jun 2017 14:12:20 +0100
>> From: "Hunt, David" <david.hunt@intel.com>
>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> CC: Harry van Haaren <harry.van.haaren@intel.com>, dev@dpdk.org, Gage Eads
>> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
>> sample app
>> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>> Thunderbird/45.8.0
>>
>> Hi Jerin:
>>
>>
>> On 27/6/2017 10:35 AM, Jerin Jacob wrote:
>>> -----Original Message-----
>>>> Date: Mon, 26 Jun 2017 15:46:47 +0100
>>>> From: "Hunt, David" <david.hunt@intel.com>
>>>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, Harry van Haaren
>>>> <harry.van.haaren@intel.com>
>>>> CC: dev@dpdk.org, Gage Eads <gage.eads@intel.com>, Bruce Richardson
>>>> <bruce.richardson@intel.com>
>>>> Subject: Re: [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added
>>>> sample app
>>>> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>>>> Thunderbird/45.8.0
>>>>
>>>> Hi Jerin,
>>> Hi David,
>>>
>>> Looks like you have sent the old version. The below mentioned comments
>>> are not addressed in v2.
>> Oops. Glitch in the Matrix. I've just pushed a V3 with the changes.
>>
>>>> I'm assisting Harry on the sample app, and have just pushed up a V2 patch
>>>> based on your feedback. I've addressed most of your suggestions, comments
>>>> below. There may still be a couple of outstanding questions that need further
>>>> discussion.
>>> A few general comments:
>>> 1) Nikhil/Gage's proposal on ethdev rx to eventdev adapter will change the major
>>> portion(eventdev setup and producer()) of this application
>>> 2) Producing one lcore worth of packets really can't serve as an example
>>> eventdev application, as it will be pretty bad on a low-end machine.
>>> At least application infrastructure should not limit.
>>>
>>> Considering above points, Should we wait for rx adapter to complete
>>> first? I would like to show this as real world application to use eventdev.
>>>
>>> Thoughts?
>>>
>>> On the same note:
>>> Can we finalize on rx adapter proposal? I can work on v1 of patch and
>>> common code if Nikhil or Gage don't have bandwidth. Let me know?
>>>
>>> last followup:
>>> http://dpdk.org/ml/archives/dev/2017-June/068776.html
>> I had a quick chat with Harry, and wonder if we'd be as well to merge the
>> app as it is now, and as the new frameworks become available, the app can be
>> updated to make use of them? I feel it would be better to have something out
>> there for people to play with than waiting for 17.11.
> I agree with your concern.
> How about renaming the test and doc specific to the SW PMD; then, once we
> fix the known issues with HW eventdev + ethdev (Rx adapter) integration,
> rename the application to generic eventdev.
>
Sure. I'll rename the app as 'eventdev_pipeline_sw' and push a v4. We
can rename back once it's generic.
Rgds,
Dave.
--snip--
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v4 0/3] next-eventdev: evendev pipeline sample app
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-06-29 15:49 ` David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 1/3] examples/eventdev_pipeline: added " David Hunt
` (2 more replies)
0 siblings, 3 replies; 64+ messages in thread
From: David Hunt @ 2017-06-29 15:49 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
This patchset introduces a sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
Changes in patch v2:
* None, incorrect patch upload
Changes in patch v3:
* Re-work based on comments on mailing list. No major functional changes.
* Checkpatch cleanup of a couple of typos
Changes in patch v4:
* Re-named the app as eventdev_pipeline_sw, as it's aimed at showing the
functionality of the software PMD.
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters like number of stages, number of
worker cores, which cores are assigned to specific tasks, and work-
cycles per-stage in the pipeline can be configured.
Documentation for eventdev is added for the programmers guide and
sample app user guide, providing sample commands to run the app with,
and expected output.
The sample app is presented here as an RFC to the next-eventdev tree
to work towards having eventdev PMD generic sample applications.
[1/3] examples/eventdev_pipeline: added sample app
[2/3] doc: add sw eventdev pipeline to sample app ug
[3/3] doc: add eventdev library to programmers guide
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v4 1/3] examples/eventdev_pipeline: added sample app
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 0/3] next-eventdev: evendev pipeline " David Hunt
@ 2017-06-29 15:49 ` David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-06-29 15:49 UTC (permalink / raw)
To: dev
Cc: jerin.jacob, harry.van.haaren, Gage Eads, Bruce Richardson, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure numbers of stages,
cycles of work at each stage, type of scheduling, number of
worker cores, queue depths etc. For a full explanation,
please refer to the documentation.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
examples/Makefile | 2 +
examples/eventdev_pipeline/Makefile | 49 ++
examples/eventdev_pipeline/main.c | 999 ++++++++++++++++++++++++++++++++++++
3 files changed, 1050 insertions(+)
create mode 100644 examples/eventdev_pipeline/Makefile
create mode 100644 examples/eventdev_pipeline/main.c
diff --git a/examples/Makefile b/examples/Makefile
index 6298626..a6dcc2b 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
endif
endif
+DIRS-y += eventdev_pipeline
+
include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
new file mode 100644
index 0000000..4c26e15
--- /dev/null
+++ b/examples/eventdev_pipeline/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline_sw
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline/main.c b/examples/eventdev_pipeline/main.c
new file mode 100644
index 0000000..f1386a4
--- /dev/null
+++ b/examples/eventdev_pipeline/main.c
@@ -0,0 +1,999 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+static unsigned int active_cores;
+static unsigned int num_workers;
+static long num_packets = (1L << 25); /* do ~32M packets */
+static unsigned int num_fids = 512;
+static unsigned int num_stages = 1;
+static unsigned int worker_cq_depth = 16;
+static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
+static int16_t qid[MAX_NUM_STAGES] = {-1};
+static int worker_cycles;
+static int enable_queue_priorities;
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned int num_nic_ports;
+} __rte_cache_aligned;
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static unsigned int *enqueue_cnt;
+static unsigned int *dequeue_cnt;
+
+static volatile int done;
+static int quiet;
+static int dump_dev;
+static int dump_dev_signal;
+
+static uint32_t rx_lock;
+static uint32_t tx_lock;
+static uint32_t sched_lock;
+static bool rx_single;
+static bool tx_single;
+static bool sched_single;
+
+static unsigned int rx_core[MAX_NUM_CORE];
+static unsigned int tx_core[MAX_NUM_CORE];
+static unsigned int sched_core[MAX_NUM_CORE];
+static unsigned int worker_core[MAX_NUM_CORE];
+
+static bool
+core_in_use(unsigned int lcore_id) {
+ return (rx_core[lcore_id] || sched_core[lcore_id] ||
+ tx_core[lcore_id] || worker_core[lcore_id]);
+}
+
+static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+
+static void
+eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned int _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
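The retry callback above shows a common DPDK pattern: keep calling the burst-TX function on the unsent tail of the array until the driver has accepted everything. A minimal standalone sketch of the same loop, using a hypothetical `send_burst()` stand-in for `rte_eth_tx_burst()` (the stand-in and its 3-packet limit are invented for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for rte_eth_tx_burst(): accepts at most 3
 * packets per call and returns how many it actually took. */
static uint16_t send_burst(void **pkts, uint16_t n)
{
	(void)pkts;
	return n < 3 ? n : 3;
}

/* Retry until every unsent packet has been handed to the driver,
 * mirroring the shape of the eth_tx_buffer_retry() callback above. */
static uint16_t retry_send(void **pkts, uint16_t unsent)
{
	uint16_t sent = 0;

	do {
		sent += send_burst(&pkts[sent], unsent - sent);
	} while (sent != unsent);
	return sent;
}
```

Note this loop spins forever if the device never drains; the sample app accepts that trade-off because dropping at the TX buffer would skew its statistics.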
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t received;
+ static uint64_t last_pkts;
+ static uint64_t last_time;
+ static uint64_t start_time;
+ unsigned int i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ last_time = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ /* Print out mpps every 1<<22 packets */
+ if (!quiet && received >= last_pkts + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t total_ms = (now - start_time) / freq_khz;
+ const uint64_t delta_ms = (now - last_time) / freq_khz;
+ uint64_t delta_pkts = received - last_pkts;
+
+ printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
+ "avg %.3f mpps [current %.3f mpps]\n",
+ received,
+ total_ms,
+ received / (total_ms * 1000.0),
+ delta_pkts / (delta_ms * 1000.0));
+ last_pkts = received;
+ last_time = now;
+ }
+
+ dequeue_cnt[0] += n;
+
+ num_packets -= n;
+ if (num_packets <= 0)
+ done = 1;
+
+ return 0;
+}
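The statistics arithmetic in `consumer()` converts timer cycles to milliseconds via the kHz frequency, then divides packets by (ms * 1000) to get millions of packets per second. A sketch of the same calculation with synthetic numbers (the 2 GHz timer frequency in the test below is an assumption, not a requirement):

```c
#include <stdint.h>

/* Mirror of consumer()'s throughput math: cycles -> ms via freq_khz,
 * then packets / (ms * 1000.0) == millions of packets per second. */
static double avg_mpps(uint64_t pkts, uint64_t cycles, uint64_t hz)
{
	const uint64_t freq_khz = hz / 1000;
	const uint64_t total_ms = cycles / freq_khz;

	/* Caller must ensure at least 1 ms has elapsed. */
	return pkts / (total_ms * 1000.0);
}
```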
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE+2];
+ struct rte_event ev[BATCH_SIZE+2];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+ enqueue_cnt[0] += nb_tx;
+
+ return 0;
+}
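`producer()` seeds each event's `flow_id` from the NIC's RSS hash; the first worker stage later folds that hash into the configured number of flows with a modulo, as a sketch (hash values in the test are made up):

```c
#include <stdint.h>

/* Fold a 32-bit RSS hash into one of num_fids flow IDs, as the first
 * pipeline stage does. Packets with the same hash map to the same
 * flow, which is what lets an atomic queue keep them ordered and on
 * one worker at a time. */
static uint32_t classify(uint32_t rss_hash, uint32_t num_fids)
{
	return rss_hash % num_fids;
}
```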
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+{
+ if (rx_core[lcore_id] && (rx_single ||
+ rte_atomic32_cmpset(&rx_lock, 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
+ }
+
+ if (sched_core[lcore_id] && (sched_single ||
+ rte_atomic32_cmpset(&sched_lock, 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
+ }
+
+ if (tx_core[lcore_id] && (tx_single ||
+ rte_atomic32_cmpset(&tx_lock, 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
+ }
+}
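`schedule_devices()` uses `rte_atomic32_cmpset()` as a try-lock so that at most one lcore runs each single-threaded role (RX, scheduler, TX) per iteration, and skips the lock entirely when exactly one core owns the role. The same idea sketched with C11 atomics rather than the DPDK primitives:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int role_lock; /* 0 == free, 1 == held */

/* Try-lock: returns true only for the caller that wins the role. */
static bool role_trylock(void)
{
	int expected = 0;

	return atomic_compare_exchange_strong(&role_lock, &expected, 1);
}

static void role_unlock(void)
{
	atomic_store(&role_lock, 0);
}
```

Losers simply move on to their next duty instead of blocking, which is why the sample app never stalls when several cores share a role.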
+
+
+
+static inline void
+work(struct rte_mbuf *m)
+{
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(&eth->d_addr, &addr);
+ ether_addr_copy(&eth->s_addr, &eth->d_addr);
+ ether_addr_copy(&addr, &eth->s_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + worker_cycles)
+ rte_pause();
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned int lcore_id = rte_lcore_id();
+
+ while (!done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ const uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == qid[0])
+ events[i].flow_id = events[i].mbuf->hash.rss
+ % num_fids;
+
+ events[i].queue_id = next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+ events[i].sched_type = queue_type;
+
+ work(events[i].mbuf);
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned int count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove leading and trailing blank characters.
+ * Remove the 0x/0X prefix if it exists.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
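`parse_coremask()` walks the hex string from its least significant digit, expanding each nibble into four mask bits. A condensed sketch of just that expansion, without the blank-stripping, `0x` prefix, and core-count handling of the full version:

```c
#include <stdint.h>
#include <string.h>
#include <ctype.h>

/* Minimal hex coremask parse: "f0" -> 0xf0. Returns 0 on invalid
 * input (the full parse_coremask() above returns -1 instead). */
static uint64_t simple_coremask(const char *s)
{
	uint64_t mask = 0;
	int idx = 0;
	int i, j;

	for (i = (int)strlen(s) - 1; i >= 0; i--, idx += 4) {
		if (!isxdigit((unsigned char)s[i]))
			return 0;
		int val = isdigit((unsigned char)s[i]) ?
			s[i] - '0' :
			tolower((unsigned char)s[i]) - 'a' + 10;
		/* Spread the nibble's four bits into the mask. */
		for (j = 0; j < 4; j++)
			if (val & (1 << j))
				mask |= 1ULL << (idx + j);
	}
	return mask;
}
```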
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_demo [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
+ " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W --work-cycles=N Worker cycles (default 0)\n"
+ " -P --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse cli options*/
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ enable_queue_priorities = 1;
+ break;
+ case 'o':
+ queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ quiet = 1;
+ break;
+ case 'D':
+ dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("Core part of pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ rx_core[i] = !!(rx_lcore_mask & (1UL << i));
+ tx_core[i] = !!(tx_lcore_mask & (1UL << i));
+ sched_core[i] = !!(sched_lcore_mask & (1UL << i));
+ worker_core[i] = !!(worker_lcore_mask & (1UL << i));
+
+ if (worker_core[i])
+ num_workers++;
+ if (core_in_use(i))
+ active_cores++;
+ }
+}
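After parsing, each per-role lcore mask is expanded into a per-core flag table so the hot path can test `rx_core[lcore_id]` with a plain array load instead of bit arithmetic. A sketch of that expansion:

```c
#include <stdint.h>

#define N_CORES 64

/* Expand a bitmask into a 0/1 per-core table, returning the number of
 * cores enabled -- the same shape as the rx/tx/sched/worker tables. */
static unsigned int expand_mask(uint64_t mask, unsigned int table[N_CORES])
{
	unsigned int i, count = 0;

	for (i = 0; i < N_CORES; i++) {
		table[i] = !!(mask & (1ULL << i));
		count += table[i];
	}
	return count;
}
```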
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned int)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned int num_ports)
+{
+ uint8_t portid;
+ unsigned int i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ tx_buf[i] = rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(tx_buf[i],
+ eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* +1 stages is for a SINGLE_LINK TX stage */
+ const uint8_t nb_queues = num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = num_workers + 2;
+ struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned int i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ if (dev_info.max_event_port_dequeue_depth <
+ config.nb_event_port_dequeue_depth)
+ config.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+ if (dev_info.max_event_port_enqueue_depth <
+ config.nb_event_port_enqueue_depth)
+ config.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0) {
+ printf("%d: Error configuring device\n", __LINE__);
+ return -1;
+ }
+
+ /* Queue creation - one load-balanced queue per pipeline stage */
+ printf(" Stages:\n");
+ for (i = 0; i < num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ qid[i] = i;
+ next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+
+ if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+
+ if (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ rx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (rx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ rx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ enqueue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
+ sizeof(enqueue_cnt[0]), 0);
+ dequeue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
+ sizeof(dequeue_cnt[0]), 0);
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
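The queue-priority stepping inside `setup_eventdev()` spaces the worker stages evenly across the priority range, leaving headroom of one step for the SINGLE_LINK TX queue. Sketched with DPDK's priority convention (0 is highest, 255 is `RTE_EVENT_DEV_PRIORITY_LOWEST`):

```c
#include <stdint.h>

#define PRIO_LOWEST 255 /* RTE_EVENT_DEV_PRIORITY_LOWEST */

/* Priority for worker stage i out of num_stages stages plus one TX
 * queue. Later stages (closer to TX) get numerically lower, i.e.
 * higher, priority -- the same stepping as setup_eventdev(). */
static uint8_t stage_priority(unsigned int i, unsigned int num_stages)
{
	const unsigned int nb_queues = num_stages + 1;
	const uint32_t prio_delta = (PRIO_LOWEST - 1) / nb_queues;

	return (uint8_t)(PRIO_LOWEST - prio_delta * i);
}
```

With four stages this yields priorities 255, 205, 155, 105, matching the "Stages:" output printed at startup when `-P` is passed.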
+
+static void
+signal_handler(int signum)
+{
+ if (done)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned int num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ /* Parse cli options*/
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned int cores_needed = active_cores;
+
+ if (!quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", num_workers);
+ printf("\tpackets: %lu\n", num_packets);
+ printf("\tQueue-prio: %u\n", enable_queue_priorities);
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned int ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No dev_id devs found. Pasl in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0");
+
+ worker_data = rte_calloc(0, num_workers, sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!rx_core[lcore_id] && !worker_core[lcore_id] &&
+ !tx_core[lcore_id] && !sched_core[lcore_id])
+ continue;
+
+ if (rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err) {
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ continue;
+ }
+ if (worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!quiet) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < num_workers; i++) {
+ char statname[64];
+ snprintf(statname, sizeof(statname), "port_%u_rx",
+ worker_data[i].port_id);
+ pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
+ dev_id, statname, NULL);
+ tot_pkts += pkts_per_wkr[i];
+ }
+ for (i = 0; i < num_workers; i++) {
+ float pc = pkts_per_wkr[i] * 100 /
+ ((float)tot_pkts);
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
* [dpdk-dev] [PATCH v4 2/3] doc: add sw eventdev pipeline to sample app ug
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-06-29 15:49 ` David Hunt
2017-06-30 12:25 ` Mcnamara, John
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-06-29 15:49 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline_sw.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
doc/guides/sample_app_ug/eventdev_pipeline_sw.rst | 190 ++++++++++++++++++++++
doc/guides/sample_app_ug/index.rst | 1 +
2 files changed, 191 insertions(+)
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
new file mode 100644
index 0000000..65c33a8
--- /dev/null
+++ b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
@@ -0,0 +1,190 @@
+
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Eventdev Pipeline SW Sample Application
+=======================================
+
+The eventdev pipeline sample application demonstrates usage of the
+eventdev API using the software PMD. It shows how an application can
+configure a pipeline and assign a set of worker cores to perform the
+processing required.
+
+The application has a range of command line arguments allowing it to be
+configured for various numbers of worker cores, stages, queue depths and
+cycles of work per stage. This is useful for performance testing, as
+well as for quickly testing a particular pipeline configuration.
+
+
+Compiling the Application
+-------------------------
+
+To compile the application:
+
+#. Go to the sample application directory:
+
+ .. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}/examples/eventdev_pipeline
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application has a lot of command line options. This allows specification of
+the eventdev PMD to use, and a number of attributes of the processing pipeline
+options.
+
+An example eventdev pipeline running with the software eventdev PMD using
+these settings is shown below:
+
+ * ``-r1``: core mask 0x1 for RX
+ * ``-t1``: core mask 0x1 for TX
+ * ``-e4``: core mask 0x4 for the software scheduler
+ * ``-w FF00``: core mask for worker cores, 8 cores from core 8 to core 15
+ * ``-s4``: 4 atomic stages
+ * ``-n0``: process infinite packets (run forever)
+ * ``-c32``: worker dequeue depth of 32
+ * ``-W1000``: do 1000 cycles of work per packet in each stage
+ * ``-D``: dump statistics on exit
+
+.. code-block:: console
+
+ ./build/eventdev_pipeline_sw --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+
+The application has some sanity checking built-in, so if there is a role
+(e.g. the RX core) which doesn't have a CPU core mask assigned, the
+application will print an error message:
+
+.. code-block:: console
+
+ Core part of pipeline was not assigned any cores. This will stall the
+ pipeline, please check core masks (use -h for details on setting core masks):
+ rx: 0
+ tx: 1
+
+Configuration of the eventdev is covered in detail in the programmers guide,
+see the Event Device Library section.
+
+
+Observing the Application
+-------------------------
+
+At runtime the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit the
+worker statistics are printed, along with a full dump of the PMD statistics if
+required. The following sections show sample output for each of the output
+types.
+
+Configuration
+~~~~~~~~~~~~~
+
+This provides an overview of the pipeline, the scheduling type at each
+stage, and the parameters of options such as how many flows to use and
+which eventdev PMD is in use. See the following sample output for
+details:
+
+.. code-block:: console
+
+ Config:
+ ports: 2
+ workers: 8
+ packets: 0
+ priorities: 1
+ Queue-prio: 0
+ qid0 type: atomic
+ Cores available: 44
+ Cores used: 10
+ Eventdev 0: event_sw
+ Stages:
+ Stage 0, Type Atomic Priority = 128
+ Stage 1, Type Atomic Priority = 128
+ Stage 2, Type Atomic Priority = 128
+ Stage 3, Type Atomic Priority = 128
+
+Runtime
+~~~~~~~
+
+At runtime, the statistics of the consumer are printed, stating the number of
+packets received, runtime in milliseconds, average mpps, and current mpps.
+
+.. code-block:: console
+
+ # consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
+
+Shutdown
+~~~~~~~~
+
+At shutdown, the application prints the number of packets received and
+transmitted, and an overview of the distribution of work across worker cores.
+
+.. code-block:: console
+
+ Signal 2 received, preparing to exit...
+ worker 12 thread done. RX=4966581 TX=4966581
+ worker 13 thread done. RX=4963329 TX=4963329
+ worker 14 thread done. RX=4953614 TX=4953614
+ worker 0 thread done. RX=0 TX=0
+ worker 11 thread done. RX=4970549 TX=4970549
+ worker 10 thread done. RX=4986391 TX=4986391
+ worker 9 thread done. RX=4970528 TX=4970528
+ worker 15 thread done. RX=4974087 TX=4974087
+ worker 8 thread done. RX=4979908 TX=4979908
+ worker 2 thread done. RX=0 TX=0
+
+ Port Workload distribution:
+ worker 0 : 12.5 % (4979876 pkts)
+ worker 1 : 12.5 % (4970497 pkts)
+ worker 2 : 12.5 % (4986359 pkts)
+ worker 3 : 12.5 % (4970517 pkts)
+ worker 4 : 12.5 % (4966566 pkts)
+ worker 5 : 12.5 % (4963297 pkts)
+ worker 6 : 12.5 % (4953598 pkts)
+ worker 7 : 12.5 % (4974055 pkts)
+
+To get a full dump of the state of the eventdev PMD, pass the ``-D`` flag to
+this application. When the app is terminated using ``Ctrl+C``, the
+``rte_event_dev_dump()`` function is called, resulting in a dump of the
+statistics that the PMD provides. The statistics provided depend on the PMD
+used, see the Event Device Drivers section for a list of eventdev PMDs.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..11f5781 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline_sw
dist_app
vm_power_management
tep_termination
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v4 3/3] doc: add eventdev library to programmers guide
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 1/3] examples/eventdev_pipeline: added " David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
@ 2017-06-29 15:49 ` David Hunt
2017-06-30 12:26 ` Mcnamara, John
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-06-29 15:49 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walkthrough of a simple two-stage atomic pipeline
provides the reader with a step-by-step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
doc/guides/prog_guide/eventdev.rst | 365 ++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1360 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..4f6088e
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,365 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture,
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader. The eventdev framework is provided as a DPDK library, allowing
+applications to use it if they wish, but not requiring its use.
+
+The goal of this library is to enable applications to build processing
+pipelines in which the load balancing and scheduling are handled by the
+eventdev. Step-by-step instructions for the eventdev design are available
+in the API Walkthrough section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct, which contains
+the payload and the metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16-byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, e.g. ``RTE_EVENT_TYPE_ETHDEV``
+ or ``RTE_EVENT_TYPE_CPU``.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+ the same event_type (see above).
+* ``op`` - This field takes one of the ``RTE_EVENT_OP_*`` values, and tells
+ the eventdev the status of the event; valid values are NEW, FORWARD and
+ RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+ on this event; valid values are ``RTE_SCHED_TYPE_ORDERED``, ``ATOMIC`` and
+ ``PARALLEL``.
+* ``queue_id`` - The identifier of the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see the
+ ``RTE_EVENT_DEV_PRIORITY_*`` values.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for the payload, allowing flexibility
+in what the scheduled event actually is. The payload is a union of the
+following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three members of the union occupy the same 64 bits at the end of the
+rte_event structure. The application can use those 64 bits directly by
+accessing the u64 member, while the event_ptr and mbuf members are provided
+as convenience variables. For example, the mbuf pointer in the union can be
+used to schedule a DPDK packet.
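+
+To make this layout concrete, the following self-contained sketch mirrors it
+with a simplified mock structure. Note this is not the real ``rte_event``
+definition (which packs the metadata fields into bitfields); it only
+illustrates that the three payload members alias the same 64 bits, and that
+8 bytes of metadata plus the 8-byte payload union give the 16-byte event
+described above (assuming a 64-bit platform):
+
+.. code-block:: c
+
+ #include <assert.h>
+ #include <stdint.h>
+ #include <stdio.h>
+
+ /* Simplified mock of rte_event - not the real DPDK definition */
+ struct mock_event {
+ uint64_t metadata; /* flow_id, op, sched_type, queue_id, ... */
+ union {
+ uint64_t u64; /* raw 64 bits */
+ void *event_ptr; /* generic pointer payload */
+ void *mbuf; /* stand-in for struct rte_mbuf * */
+ } payload;
+ };
+
+ int main(void)
+ {
+ struct mock_event ev;
+ ev.payload.u64 = 0xdeadbeef;
+ /* all three payload members occupy the same storage */
+ printf("payload: %zu bytes, event: %zu bytes\n",
+ sizeof(ev.payload), sizeof(ev));
+ assert(sizeof(ev.payload) == 8);
+ assert(sizeof(ev) == 16);
+ return 0;
+ }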
+
+Queues
+~~~~~~
+
+A queue is a logical "stage" of a packet processing graph, where each stage
+has a specified scheduling type. The application configures each queue for a
+specific type of scheduling, and just enqueues all events to the eventdev.
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must ensure that on egress the packets are returned to
+ingress order. Parallel allows sending all flows to all CPU cores, without any
+re-ordering guarantees.
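+
+One way to picture the atomic guarantee is as flow-level exclusivity: at any
+instant, each flow is being processed by at most one core. The toy sketch
+below (plain C, not eventdev code; ``worker_for_flow`` is a hypothetical
+helper) shows the simplest, static form of this property, a fixed
+flow-to-worker mapping. A real eventdev provides the same per-flow
+exclusivity dynamically, migrating flows between cores as load changes:
+
+.. code-block:: c
+
+ #include <assert.h>
+ #include <stdint.h>
+ #include <stdio.h>
+
+ #define NUM_WORKERS 4
+
+ /* toy stand-in for atomic scheduling: pin each flow to one worker */
+ static int worker_for_flow(uint32_t flow_id)
+ {
+ return flow_id % NUM_WORKERS;
+ }
+
+ int main(void)
+ {
+ /* events of one flow always land on the same worker... */
+ assert(worker_for_flow(42) == worker_for_flow(42));
+ /* ...while different flows can be spread across workers */
+ printf("flow 42 -> worker %d, flow 43 -> worker %d\n",
+ worker_for_flow(42), worker_for_flow(43));
+ return 0;
+ }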
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can be linked to
+only a single port (see below for port and queue linking details).
+
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walkthrough
+---------------
+
+This section introduces the reader to the eventdev API, showing how to
+create and configure an eventdev, and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walkthrough:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high-level overview of the setup steps is:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues
+and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link,
+while the 6 ports consist of 4 workers, 1 RX and 1 TX.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walkthrough assumes that ``dev_id`` is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walkthrough assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
+
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once queues are set up successfully, create the ports as required. Each port
+should be set up with its corresponding port_conf type: worker for worker
+cores, rx and tx for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ /* pass rx_conf, worker_conf or tx_conf as appropriate for each port */
+ int err = rte_event_port_setup(dev_id, port_id, &CORE_FUNCTION_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
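+
+The per-role selection above can be sketched as a small helper (hypothetical,
+not part of the eventdev API) that picks the right config for each of the 6
+ports in the layout just listed:
+
+.. code-block:: c
+
+ #include <assert.h>
+ #include <stdio.h>
+
+ /* hypothetical helper: map each of the 6 ports to its config kind */
+ enum conf_kind { CONF_RX, CONF_WORKER, CONF_TX };
+
+ static enum conf_kind conf_for_port(int port_id)
+ {
+ if (port_id == 0)
+ return CONF_RX; /* port 0: RX core */
+ if (port_id == 5)
+ return CONF_TX; /* port 5: TX core */
+ return CONF_WORKER; /* ports 1-4: workers */
+ }
+
+ int main(void)
+ {
+ for (int p = 0; p < 6; p++)
+ printf("port %d -> conf kind %d\n", p, conf_for_port(p));
+ assert(conf_for_port(0) == CONF_RX);
+ assert(conf_for_port(3) == CONF_WORKER);
+ assert(conf_for_port(5) == CONF_TX);
+ return 0;
+ }
+
+In a real application, a loop over the ports would pass ``&rx_conf``,
+``&worker_conf`` or ``&tx_conf`` to ``rte_event_port_setup()`` according to
+this mapping.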
+
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes
+input from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t port_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for(int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing
+events. Note that all queues must be linked to ports before the instance is
+started: if a queue is left unlinked, enqueuing to it will cause
+back-pressure, and the application will eventually stall due to no space in
+the eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received from ``rte_eth_rx_burst()`` as
+normal. The following code shows how those packets can be enqueued into the
+eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_CPU;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for(i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which
+should indicate to the worker what workload needs to be performed on the
+event. Once done, the worker can update ``event.queue_id`` to a new value,
+to send the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, events, nb_rx);
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when a packet is ready for egress or needs to be dropped, we need
+to inform the eventdev that the packet is no longer being handled by the
+application. This is done implicitly by the next call to dequeue() or
+dequeue_burst(), which indicates that the previous burst of packets is no
+longer in use by the application.
+
+.. code-block:: c
+
+ struct rte_event events[BATCH_SIZE];
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 : now tx or use the packets */
+ n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 is now no longer valid to use in the application, as
+ the eventdev has dropped any locks or released re-ordered packets */
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, using either a run-to-completion or a pipeline processing model. The
+queues and ports abstract the logical functionality of an eventdev, providing
+the application with a generic method to schedule events. With the flexible
+PMD infrastructure, applications benefit from improvements to existing
+eventdevs and the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
* Re: [dpdk-dev] [PATCH v4 2/3] doc: add sw eventdev pipeline to sample app ug
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
@ 2017-06-30 12:25 ` Mcnamara, John
0 siblings, 0 replies; 64+ messages in thread
From: Mcnamara, John @ 2017-06-30 12:25 UTC (permalink / raw)
To: Hunt, David, dev; +Cc: jerin.jacob, Van Haaren, Harry, Hunt, David
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Thursday, June 29, 2017 4:50 PM
> To: dev@dpdk.org
> Cc: jerin.jacob@caviumnetworks.com; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Hunt, David <david.hunt@intel.com>
> Subject: [dpdk-dev] [PATCH v4 2/3] doc: add sw eventdev pipeline to sample
> app ug
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
>
> Add a new entry in the sample app user-guides, which details the working
> of the eventdev_pipeline_sw.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
Good doc.
> diff --git a/doc/guides/sample_app_ug/index.rst
> b/doc/guides/sample_app_ug/index.rst
> index 02611ef..11f5781 100644
> --- a/doc/guides/sample_app_ug/index.rst
> +++ b/doc/guides/sample_app_ug/index.rst
> @@ -69,6 +69,7 @@ Sample Applications User Guides
> netmap_compatibility
> ip_pipeline
> test_pipeline
> + eventdev_pipeline
Should be eventdev_pipeline_sw. Otherwise there is a build error.
* Re: [dpdk-dev] [PATCH v4 3/3] doc: add eventdev library to programmers guide
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 3/3] doc: add eventdev library to programmers guide David Hunt
@ 2017-06-30 12:26 ` Mcnamara, John
0 siblings, 0 replies; 64+ messages in thread
From: Mcnamara, John @ 2017-06-30 12:26 UTC (permalink / raw)
To: Hunt, David, dev; +Cc: jerin.jacob, Van Haaren, Harry
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Thursday, June 29, 2017 4:50 PM
> To: dev@dpdk.org
> Cc: jerin.jacob@caviumnetworks.com; Van Haaren, Harry
> <harry.van.haaren@intel.com>
> Subject: [dpdk-dev] [PATCH v4 3/3] doc: add eventdev library to
> programmers guide
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
>
> This commit adds an entry in the programmers guide explaining the eventdev
> library.
>
> The rte_event struct, queues and ports are explained.
> An API walktrough of a simple two stage atomic pipeline provides the
> reader with a step by step overview of the expected usage of the Eventdev
> API.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> +API Walktrough
> +--------------
Should be Walkthrough or Walk-through. Here and a few other places.
Also make sure to adjust the underline when you fix the typo.
* [dpdk-dev] [PATCH v5 0/3] next-eventdev: evendev pipeline sample app
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-06-30 13:51 ` David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added " David Hunt
` (2 more replies)
0 siblings, 3 replies; 64+ messages in thread
From: David Hunt @ 2017-06-30 13:51 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
This patchset introduces a sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
Changes in patch v2:
* None, incorrect patch upload
Changes in patch v3:
* Re-work based on comments on mailing list. No major functional changes.
* Checkpatch cleanup of a couple of typos
Changes in patch v4:
* Re-named the app as eventdev_pipeline_sw, as it's aimed at showing the
functionality of the software PMD.
Changes in patch v5:
* Fixed make doc. eventdev_pipeline to eventdev_pipeline_sw
* Fixed some typos in the eventdev programmers guide
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters such as the number of stages, the
number of worker cores, which cores are assigned to specific tasks, and
the cycles of work performed per stage of the pipeline can all be configured.
Documentation for eventdev is added to the programmer's guide and the
sample app user guide, providing sample commands to run the app and the
expected output.
The sample app is presented here as an RFC to the next-eventdev tree
to work towards generic sample applications for eventdev PMDs.
[1/3] examples/eventdev_pipeline: added sample app
[2/3] doc: add sw eventdev pipeline to sample app ug
[3/3] doc: add eventdev library to programmers guide
* [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 0/3] next-eventdev: evendev pipeline " David Hunt
@ 2017-06-30 13:51 ` David Hunt
2017-07-03 3:57 ` Jerin Jacob
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 2 replies; 64+ messages in thread
From: David Hunt @ 2017-06-30 13:51 UTC (permalink / raw)
To: dev
Cc: jerin.jacob, harry.van.haaren, Gage Eads, Bruce Richardson, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure the number of stages, the
cycles of work at each stage, the type of scheduling, the number of
worker cores, queue depths, etc. For a full explanation,
please refer to the documentation.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
examples/Makefile | 2 +
examples/eventdev_pipeline/Makefile | 49 ++
examples/eventdev_pipeline/main.c | 999 ++++++++++++++++++++++++++++++++++++
3 files changed, 1050 insertions(+)
create mode 100644 examples/eventdev_pipeline/Makefile
create mode 100644 examples/eventdev_pipeline/main.c
diff --git a/examples/Makefile b/examples/Makefile
index 6298626..a6dcc2b 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
endif
endif
+DIRS-y += eventdev_pipeline
+
include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
new file mode 100644
index 0000000..4c26e15
--- /dev/null
+++ b/examples/eventdev_pipeline/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline_sw
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline/main.c b/examples/eventdev_pipeline/main.c
new file mode 100644
index 0000000..f1386a4
--- /dev/null
+++ b/examples/eventdev_pipeline/main.c
@@ -0,0 +1,999 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+static unsigned int active_cores;
+static unsigned int num_workers;
+static long num_packets = (1L << 25); /* do ~32M packets */
+static unsigned int num_fids = 512;
+static unsigned int num_stages = 1;
+static unsigned int worker_cq_depth = 16;
+static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
+static int16_t qid[MAX_NUM_STAGES] = {-1};
+static int worker_cycles;
+static int enable_queue_priorities;
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned int num_nic_ports;
+} __rte_cache_aligned;
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static unsigned int *enqueue_cnt;
+static unsigned int *dequeue_cnt;
+
+static volatile int done;
+static int quiet;
+static int dump_dev;
+static int dump_dev_signal;
+
+static uint32_t rx_lock;
+static uint32_t tx_lock;
+static uint32_t sched_lock;
+static bool rx_single;
+static bool tx_single;
+static bool sched_single;
+
+static unsigned int rx_core[MAX_NUM_CORE];
+static unsigned int tx_core[MAX_NUM_CORE];
+static unsigned int sched_core[MAX_NUM_CORE];
+static unsigned int worker_core[MAX_NUM_CORE];
+
+static bool
+core_in_use(unsigned int lcore_id) {
+ return (rx_core[lcore_id] || sched_core[lcore_id] ||
+ tx_core[lcore_id] || worker_core[lcore_id]);
+}
+
+static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+
+static void
+eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned int _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t received;
+ static uint64_t last_pkts;
+ static uint64_t last_time;
+ static uint64_t start_time;
+ unsigned int i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ last_time = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ /* Print out mpps every 1<<22 packets */
+ if (!quiet && received >= last_pkts + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t total_ms = (now - start_time) / freq_khz;
+ const uint64_t delta_ms = (now - last_time) / freq_khz;
+ uint64_t delta_pkts = received - last_pkts;
+
+ printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
+ "avg %.3f mpps [current %.3f mpps]\n",
+ received,
+ total_ms,
+ received / (total_ms * 1000.0),
+ delta_pkts / (delta_ms * 1000.0));
+ last_pkts = received;
+ last_time = now;
+ }
+
+ dequeue_cnt[0] += n;
+
+ num_packets -= n;
+ if (num_packets <= 0)
+ done = 1;
+
+ return 0;
+}
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE+2];
+ struct rte_event ev[BATCH_SIZE+2];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+ enqueue_cnt[0] += nb_tx;
+
+ return 0;
+}
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+{
+ if (rx_core[lcore_id] && (rx_single ||
+ rte_atomic32_cmpset(&rx_lock, 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
+ }
+
+ if (sched_core[lcore_id] && (sched_single ||
+ rte_atomic32_cmpset(&sched_lock, 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
+ }
+
+ if (tx_core[lcore_id] && (tx_single ||
+ rte_atomic32_cmpset(&tx_lock, 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
+ }
+}
+
+
+
+static inline void
+work(struct rte_mbuf *m)
+{
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(ð->d_addr, &addr);
+ ether_addr_copy(ð->s_addr, ð->d_addr);
+ ether_addr_copy(&addr, ð->s_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + worker_cycles)
+ rte_pause();
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned int lcore_id = rte_lcore_id();
+
+ while (!done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ const uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == qid[0])
+ events[i].flow_id = events[i].mbuf->hash.rss
+ % num_fids;
+
+ events[i].queue_id = next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+ events[i].sched_type = queue_type;
+
+ work(events[i].mbuf);
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned int count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove leading and trailing blank characters.
+ * Remove the 0x/0X prefix if it exists.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_pipeline_sw [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
+ " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W --work-cycles=N Worker cycles (default 0)\n"
+ " -P --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse cli options*/
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ enable_queue_priorities = 1;
+ break;
+ case 'o':
+ queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ quiet = 1;
+ break;
+ case 'D':
+ dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("Core part of pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ rx_core[i] = !!(rx_lcore_mask & (1ULL << i));
+ tx_core[i] = !!(tx_lcore_mask & (1ULL << i));
+ sched_core[i] = !!(sched_lcore_mask & (1ULL << i));
+ worker_core[i] = !!(worker_lcore_mask & (1ULL << i));
+
+ if (worker_core[i])
+ num_workers++;
+ if (core_in_use(i))
+ active_cores++;
+ }
+}
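The `parse_coremask()` helper called above is not part of this hunk. A minimal sketch of such a hex core-mask parser (the name `parse_coremask_sketch` and the exact error handling are assumptions for illustration, not the patch's actual implementation):

```c
#include <stdint.h>
#include <stdlib.h>

/* Parse a hex core-mask string such as "FF00" or "0xFF00" into a
 * 64-bit mask. Returns 0 when the string is not valid hex. */
static uint64_t
parse_coremask_sketch(const char *arg)
{
	char *end = NULL;
	uint64_t mask = strtoull(arg, &end, 16);

	if (end == arg || *end != '\0')
		return 0;
	return mask;
}
```

With `-w FF00` this yields a mask with bits 8 to 15 set, so `__builtin_popcountll()` reports 8 worker cores, matching the `popcnt` logic in `parse_app_args()` above.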
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned int)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned int num_ports)
+{
+ uint8_t portid;
+ unsigned int i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ tx_buf[i] = rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(tx_buf[i],
+ eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* the +1 queue is for the SINGLE_LINK TX stage */
+ const uint8_t nb_queues = num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = num_workers + 2;
+ struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned int i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ if (dev_info.max_event_port_dequeue_depth <
+ config.nb_event_port_dequeue_depth)
+ config.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+ if (dev_info.max_event_port_enqueue_depth <
+ config.nb_event_port_enqueue_depth)
+ config.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0) {
+ printf("%d: Error configuring device\n", __LINE__);
+ return -1;
+ }
+
+ /* Queue creation - one load-balanced queue per pipeline stage */
+ printf(" Stages:\n");
+ for (i = 0; i < num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ qid[i] = i;
+ next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+
+ if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+
+ if (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ rx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (rx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ rx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i + 1);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ enqueue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
+ sizeof(enqueue_cnt[0]), 0);
+ dequeue_cnt = rte_calloc(0,
+ RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
+ sizeof(dequeue_cnt[0]), 0);
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
+
+static void
+signal_handler(int signum)
+{
+ if (done)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned int num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ /* Parse cli options*/
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned int cores_needed = active_cores;
+
+ if (!quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", num_workers);
+ printf("\tpackets: %lu\n", num_packets);
+ printf("\tQueue-prio: %u\n", enable_queue_priorities);
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned int ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No event devices found. Pass in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0\n");
+
+ worker_data = rte_calloc(0, num_workers, sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!rx_core[lcore_id] && !worker_core[lcore_id] &&
+ !tx_core[lcore_id] && !sched_core[lcore_id])
+ continue;
+
+ if (rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err) {
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ continue;
+ }
+ if (worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!quiet) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < num_workers; i++) {
+ char statname[64];
+ snprintf(statname, sizeof(statname), "port_%u_rx",
+ worker_data[i].port_id);
+ pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
+ dev_id, statname, NULL);
+ tot_pkts += pkts_per_wkr[i];
+ }
+ for (i = 0; i < num_workers; i++) {
+ float pc = tot_pkts > 0 ?
+ pkts_per_wkr[i] * 100 / (float)tot_pkts : 0;
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-06-30 13:51 ` David Hunt
2017-06-30 14:37 ` Mcnamara, John
2017-07-03 5:37 ` Jerin Jacob
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 2 replies; 64+ messages in thread
From: David Hunt @ 2017-06-30 13:51 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline_sw.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
---
doc/guides/sample_app_ug/eventdev_pipeline_sw.rst | 190 ++++++++++++++++++++++
doc/guides/sample_app_ug/index.rst | 1 +
2 files changed, 191 insertions(+)
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
new file mode 100644
index 0000000..65c33a8
--- /dev/null
+++ b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
@@ -0,0 +1,190 @@
+
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Eventdev Pipeline SW Sample Application
+=======================================
+
+The eventdev pipeline sample application demonstrates the usage of the
+eventdev API using the software PMD. It shows how an
+application can configure a pipeline and assign a set of worker cores to
+perform the processing required.
+
+The application has a range of command line arguments allowing it to be
+configured for various numbers of worker cores, stages, queue depths and
+cycles of work per stage. This is useful for performance testing as well as
+quickly testing a particular pipeline configuration.
+
+
+Compiling the Application
+-------------------------
+
+To compile the application:
+
+#. Go to the sample application directory:
+
+ .. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}/examples/eventdev_pipeline
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application has a number of command line options, allowing specification
+of the eventdev PMD to use, and of various attributes of the processing
+pipeline.
+
+An example eventdev pipeline running with the software eventdev PMD using
+these settings is shown below:
+
+ * ``-r1``: core mask 0x1 for RX
+ * ``-t1``: core mask 0x1 for TX
+ * ``-e4``: core mask 0x4 for the software scheduler
+ * ``-w FF00``: core mask for worker cores, 8 cores from core 8 to core 15
+ * ``-s4``: 4 atomic stages
+ * ``-n0``: process infinite packets (run forever)
+ * ``-c32``: worker dequeue depth of 32
+ * ``-W1000``: do 1000 cycles of work per packet in each stage
+ * ``-D``: dump statistics on exit
+
+.. code-block:: console
+
+ ./build/eventdev_pipeline_sw --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+
+The application has some sanity checking built-in, so if there is a function
+(e.g. the RX core) which doesn't have a CPU core mask assigned, the
+application will print an error message:
+
+.. code-block:: console
+
+ Core part of pipeline was not assigned any cores. This will stall the
+ pipeline, please check core masks (use -h for details on setting core masks):
+ rx: 0
+ tx: 1
+
+Configuration of the eventdev is covered in detail in the programmers guide,
+see the Event Device Library section.
+
+
+Observing the Application
+-------------------------
+
+At runtime the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit the
+worker statistics are printed, along with a full dump of the PMD statistics if
+required. The following sections show sample output for each of the output
+types.
+
+Configuration
+~~~~~~~~~~~~~
+
+This provides an overview of the pipeline,
+scheduling type at each stage, and parameters to options such as how many
+flows to use and what eventdev PMD is in use. See the following sample output
+for details:
+
+.. code-block:: console
+
+ Config:
+ ports: 2
+ workers: 8
+ packets: 0
+ priorities: 1
+ Queue-prio: 0
+ qid0 type: atomic
+ Cores available: 44
+ Cores used: 10
+ Eventdev 0: event_sw
+ Stages:
+ Stage 0, Type Atomic Priority = 128
+ Stage 1, Type Atomic Priority = 128
+ Stage 2, Type Atomic Priority = 128
+ Stage 3, Type Atomic Priority = 128
+
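The ``Priority = 128`` lines above show the default (``RTE_EVENT_DEV_PRIORITY_NORMAL``) because queue prioritization is disabled. When ``-P`` is passed, the app steps the priority per stage, leaving headroom for the single-link TX queue. The arithmetic can be checked in isolation with a sketch mirroring ``setup_eventdev()`` in ``main.c`` (``255`` is the value of ``RTE_EVENT_DEV_PRIORITY_LOWEST``):

```c
#include <stdint.h>

#define PRIORITY_LOWEST 255 /* RTE_EVENT_DEV_PRIORITY_LOWEST */

/* Priority of stage i out of num_stages worker queues. nb_queues
 * includes one extra queue for the SINGLE_LINK TX stage. Lower
 * numeric values mean higher scheduling priority, so queues
 * closer to TX are scheduled first. */
static uint8_t
stage_priority(unsigned int i, unsigned int num_stages)
{
	const unsigned int nb_queues = num_stages + 1;
	const uint32_t prio_delta = (PRIORITY_LOWEST - 1) / nb_queues;

	return (uint8_t)(PRIORITY_LOWEST - prio_delta * i);
}
```

For the 4-stage run shown above this gives priorities 255, 205, 155 and 105 for stages 0 to 3.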
+Runtime
+~~~~~~~
+
+At runtime, the statistics of the consumer are printed, stating the number of
+packets received, runtime in milliseconds, average mpps, and current mpps.
+
+.. code-block:: console
+
+ # consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
+
+Shutdown
+~~~~~~~~
+
+At shutdown, the application prints the number of packets received and
+transmitted, and an overview of the distribution of work across worker cores.
+
+.. code-block:: console
+
+ Signal 2 received, preparing to exit...
+ worker 12 thread done. RX=4966581 TX=4966581
+ worker 13 thread done. RX=4963329 TX=4963329
+ worker 14 thread done. RX=4953614 TX=4953614
+ worker 0 thread done. RX=0 TX=0
+ worker 11 thread done. RX=4970549 TX=4970549
+ worker 10 thread done. RX=4986391 TX=4986391
+ worker 9 thread done. RX=4970528 TX=4970528
+ worker 15 thread done. RX=4974087 TX=4974087
+ worker 8 thread done. RX=4979908 TX=4979908
+ worker 2 thread done. RX=0 TX=0
+
+ Port Workload distribution:
+ worker 0 : 12.5 % (4979876 pkts)
+ worker 1 : 12.5 % (4970497 pkts)
+ worker 2 : 12.5 % (4986359 pkts)
+ worker 3 : 12.5 % (4970517 pkts)
+ worker 4 : 12.5 % (4966566 pkts)
+ worker 5 : 12.5 % (4963297 pkts)
+ worker 6 : 12.5 % (4953598 pkts)
+ worker 7 : 12.5 % (4974055 pkts)
+
+To get a full dump of the state of the eventdev PMD, pass the ``-D`` flag to
+this application. When the app is terminated using ``Ctrl+C``, the
+``rte_event_dev_dump()`` function is called, resulting in a dump of the
+statistics that the PMD provides. The statistics provided depend on the PMD
+used, see the Event Device Drivers section for a list of eventdev PMDs.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..90be36a 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline_sw
dist_app
vm_power_management
tep_termination
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added " David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
@ 2017-06-30 13:51 ` David Hunt
2017-06-30 14:38 ` Mcnamara, John
2017-07-02 12:08 ` Jerin Jacob
2 siblings, 2 replies; 64+ messages in thread
From: David Hunt @ 2017-06-30 13:51 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walk-through of a simple two stage atomic pipeline
provides the reader with a step-by-step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
---
doc/guides/prog_guide/eventdev.rst | 365 ++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1360 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..7c67f8a
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,365 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader. The eventdev framework is provided as a DPDK library, allowing
+applications to use it if they wish, without mandating its use.
+
+The goal of this library is to enable applications to build processing
+pipelines where the load balancing and scheduling is handled by the eventdev.
+Step-by-step instructions for the eventdev design are available in the `API
+Walk-through`_ section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct, which contains a
+payload and metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16 byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, eg RTE_EVENT_TYPE_ETHDEV or CPU.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+ the same event_type (see above).
+* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
+ eventdev about the status of the event - valid values are NEW, FORWARD or
+ RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+ on this event, valid values are the RTE_SCHED_TYPE_ORDERED, ATOMIC and
+ PARALLEL.
+* ``queue_id`` - The identifier for the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see RTE_EVENT_DEV_PRIORITY.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for payload, allowing flexibility in what
+the actual event being scheduled is. The payload is a union of the following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three items in a union occupy the same 64 bits at the end of the rte_event
+structure. The application can utilize the 64 bits directly by accessing the
+u64 variable, while the event_ptr and mbuf are provided as convenience
+variables. For example, the mbuf pointer in the union can be used to schedule
+a DPDK packet.
+
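The field layout can be illustrated with a simplified stand-in struct. This is *not* the real ``rte_event`` definition, which packs its metadata into bitfields; it only mirrors the fields described above:

```c
#include <stdint.h>

struct rte_mbuf; /* opaque here; the real type lives in rte_mbuf.h */

/* Simplified stand-in: scheduling metadata plus the 64-bit
 * payload union described in the text above. */
struct sketch_event {
	uint32_t flow_id;    /* targeted flow for the enq/deq operation */
	uint8_t  op;         /* NEW, FORWARD or RELEASE */
	uint8_t  sched_type; /* ORDERED, ATOMIC or PARALLEL */
	uint8_t  queue_id;   /* destination event queue */
	uint8_t  priority;   /* event priority */
	union {              /* three views of the same 64 payload bits */
		uint64_t u64;
		void *event_ptr;
		struct rte_mbuf *mbuf;
	};
};
```

An application forwarding a packet would set ``mbuf``, the ``queue_id`` of the next stage, and ``op`` to FORWARD before enqueuing the event.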
+Queues
+~~~~~~
+
+A queue is a logical "stage" of a packet processing graph, where each stage
+has a specified scheduling type. The application configures each queue for a
+specific type of scheduling, and just enqueues all events to the eventdev.
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must ensure that on egress the packets are returned to
+ingress order. Parallel allows sending all flows to all CPU cores, without any
+re-ordering guarantees.
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can only be
+linked to a single port (see below for port and queue linking details).
+
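As a sketch, a single-link TX queue can be configured as follows. The values mirror the ``tx_q_conf`` used in this patchset's sample application and are illustrative rather than required:

```c
/* SINGLE_LINK queue for the TX stage: highest priority, and only
 * one port (the TX core's) may be linked to it. */
const struct rte_event_queue_conf tx_q_conf = {
	.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
	.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
			   RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
	.nb_atomic_flows = 1024,
	.nb_atomic_order_sequences = 1024,
};
/* err = rte_event_queue_setup(dev_id, tx_queue_id, &tx_q_conf); */
```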
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walk-through
+----------------
+
+This section will introduce the reader to the eventdev API, showing how to
+create and configure an eventdev and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walk-through:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high level overview of the setup steps are:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues
+and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link,
+while the 6 ports consist of 4 workers, 1 RX and 1 TX.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walk-through assumes that dev_id is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walk-through assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
+
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once the queues are set up successfully, create the ports as required. Each
+port should be set up with its corresponding port_conf type: ``worker_conf``
+for worker cores, ``rx_conf`` and ``tx_conf`` for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ /* pass rx_conf, worker_conf or tx_conf as appropriate for this port */
+ int err = rte_event_port_setup(dev_id, port_id, &CORE_FUNCTION_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
+
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes input
+from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t port_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for (int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing
+events. Note that every queue must be linked to a port before the instance
+is started: if a queue has no linked port, events enqueued to it are never
+dequeued, so enqueuing to that queue causes the application to backpressure
+and eventually stall due to no space in the eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up, and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received from an eth_rx_burst() as normal.
+The following code shows how those packets can be enqueued into the eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_CPU;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which should
+indicate to the worker what workload needs to be performed on the event.
+Once done, the worker can update the ``event.queue_id`` to a new value, to send
+the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, worker_port_id, events, nb_rx);
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when the packet is ready for egress or needs to be dropped, we need
+to inform the eventdev that the packet is no longer being handled by the
+application. This can be done by calling dequeue() or dequeue_burst(), which
+indicates that the previous burst of packets is no longer in use by the
+application.
+
+.. code-block:: c
+
+ struct rte_event events[BATCH_SIZE];
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 : now tx or use the packets */
+ n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
+ /* burst #1 is now no longer valid to use in the application, as
+ the eventdev has dropped any locks or released re-ordered packets */
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, using either a run-to-completion or a pipeline processing model. The
+queues and ports abstract the logical functionality of an eventdev, providing
+the application with a generic method to schedule events. With the flexible
+PMD infrastructure, applications benefit from improvements to existing
+eventdevs and from the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
@ 2017-06-30 14:37 ` Mcnamara, John
2017-07-03 5:37 ` Jerin Jacob
1 sibling, 0 replies; 64+ messages in thread
From: Mcnamara, John @ 2017-06-30 14:37 UTC (permalink / raw)
To: Hunt, David, dev; +Cc: jerin.jacob, Van Haaren, Harry, Hunt, David
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 30, 2017 2:51 PM
> To: dev@dpdk.org
> Cc: jerin.jacob@caviumnetworks.com; Van Haaren, Harry
> <harry.van.haaren@intel.com>; Hunt, David <david.hunt@intel.com>
> Subject: [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample
> app ug
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
>
> Add a new entry in the sample app user-guides, which details the working
> of the eventdev_pipeline_sw.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide David Hunt
@ 2017-06-30 14:38 ` Mcnamara, John
2017-07-02 12:08 ` Jerin Jacob
1 sibling, 0 replies; 64+ messages in thread
From: Mcnamara, John @ 2017-06-30 14:38 UTC (permalink / raw)
To: Hunt, David, dev; +Cc: jerin.jacob, Van Haaren, Harry
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of David Hunt
> Sent: Friday, June 30, 2017 2:51 PM
> To: dev@dpdk.org
> Cc: jerin.jacob@caviumnetworks.com; Van Haaren, Harry
> <harry.van.haaren@intel.com>
> Subject: [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to
> programmers guide
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
>
> This commit adds an entry in the programmers guide explaining the eventdev
> library.
>
> The rte_event struct, queues and ports are explained.
> An API walk-through of a simple two stage atomic pipeline provides the
> reader with a step by step overview of the expected usage of the Eventdev
> API.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide David Hunt
2017-06-30 14:38 ` Mcnamara, John
@ 2017-07-02 12:08 ` Jerin Jacob
1 sibling, 0 replies; 64+ messages in thread
From: Jerin Jacob @ 2017-07-02 12:08 UTC (permalink / raw)
To: David Hunt; +Cc: dev, harry.van.haaren
-----Original Message-----
> Date: Fri, 30 Jun 2017 14:51:13 +0100
> From: David Hunt <david.hunt@intel.com>
> To: dev@dpdk.org
> CC: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com
> Subject: [PATCH v5 3/3] doc: add eventdev library to programmers guide
> X-Mailer: git-send-email 2.7.4
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
>
> This commit adds an entry in the programmers guide
> explaining the eventdev library.
>
> The rte_event struct, queues and ports are explained.
> An API walk-through of a simple two stage atomic pipeline
> provides the reader with a step by step overview of the
> expected usage of the Eventdev API.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Thanks for the document. Overall it looks good. A few comments below.
> ---
Snip
> + The eventdev framework is provided as a DPDK library, allowing
> +applications to use it if they wish, but not require its usage.
> +
> +The goal of this library is to enable applications to build processing
> +pipelines where the load balancing and scheduling is handled by the eventdev.
Adding the differences between the polling and event driven models will be
useful. I propose to change the above two paragraphs and add more
detailed text for the use case. Something like below:
----------Start--------------------
The eventdev framework introduces the event driven programming model.
In a polling model, lcores poll ethdev ports and associated Rx queues
directly to look for a packet. By contrast in an event driven model,
lcores call the scheduler that selects packets for them based on
programmer-specified criteria. The Eventdev library adds support for an
event driven programming model, which offers applications automatic
multicore scaling, dynamic load balancing, pipelining, packet ingress
order maintenance and synchronization services to simplify application
packet processing.
By introducing an event driven programming model, DPDK can support both
polling and event driven programming models for packet processing, and
applications are free to choose whatever model (or combination of the
two) best suits their needs.
-----------End--------------------
> +Step-by-step instructions of the eventdev design are available in the `API
> +Walk-through`_ section later in this document.
> +
> +Event struct
> +------------
> +
> +The eventdev API represents each event with a generic struct, which contains a
> +payload and metadata required for scheduling by an eventdev. The
> +``rte_event`` struct is a 16 byte C structure, defined in
> +``libs/librte_eventdev/rte_eventdev.h``.
> +
Snip
> +Event Payload
> +~~~~~~~~~~~~~
> +
> +The rte_event struct contains a union for payload, allowing flexibility in what
> +the actual event being scheduled is. The payload is a union of the following:
> +
> +* ``uint64_t u64``
> +* ``void *event_ptr``
> +* ``struct rte_mbuf *mbuf``
> +
> +These three items in a union occupy the same 64 bits at the end of the rte_event
> +structure. The application can utilize the 64 bits directly by accessing the
> +u64 variable, while the event_ptr and mbuf are provided as convenience
> +variables. For example, the mbuf pointer in the union can be used to schedule a
> +DPDK packet.
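For clarity, the aliasing of the three payload members can be sketched in plain, self-contained C; the ``mock_mbuf`` type below is a hypothetical stand-in for ``rte_mbuf``, not the real struct:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch only: a stand-in for the payload union at the end
 * of rte_event. "mock_mbuf" is hypothetical; the real type is
 * struct rte_mbuf, and the real union lives in rte_eventdev.h. */
struct mock_mbuf { uint64_t pkt_data; };

union event_payload {
	uint64_t u64;            /* raw 64-bit value */
	void *event_ptr;         /* generic pointer view */
	struct mock_mbuf *mbuf;  /* convenience view for packets */
};

/* Store through one member, read back through another: all three views
 * occupy the same 64 bits. */
int payload_views_alias(struct mock_mbuf *m)
{
	union event_payload p;
	p.mbuf = m;
	return p.event_ptr == (void *)m;
}
```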
> +
> +Queues
> +~~~~~~
> +
> +A queue is a logical "stage" of a packet processing graph, where each stage
> +has a specified scheduling type. The application configures each queue for a
> +specific type of scheduling, and just enqueues all events to the eventdev.
> +The Eventdev API supports the following scheduling types per queue:
IMO, the above definition fits nicely if the queue does NOT have the
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability.
I think we can take the queue definition from the header file:
http://dpdk.org/browse/next/dpdk-next-eventdev/tree/lib/librte_eventdev/rte_eventdev.h#n113
and describe both the RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES and non
RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability queues here.
> +
> +* Atomic
> +* Ordered
> +* Parallel
> +
> +Atomic, Ordered and Parallel are load-balanced scheduling types: the output
> +of the queue can be spread out over multiple CPU cores.
> +
> +Atomic scheduling on a queue ensures that a single flow is not present on two
> +different CPU cores at the same time. Ordered allows sending all flows to any
> +core, but the scheduler must ensure that on egress the packets are returned to
We can emphasize when reordering happens, i.e. "on downstream queue enqueue".
> +ingress order. Parallel allows sending all flows to all CPU cores, without any
> +re-ordering guarantees.
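The atomic guarantee can be illustrated with a toy flow-pinning model. This is only a sketch of the invariant (one flow never active on two cores at once), not the scheduler's actual algorithm:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of atomic scheduling: an event's flow is pinned to whichever
 * worker first receives it, so the same flow cannot be in flight on two
 * cores at the same time. Hypothetical helper names, not eventdev API. */
#define NUM_FLOWS 256

static int flow_owner[NUM_FLOWS]; /* 0 = unpinned, else worker_id + 1 */

/* Return the worker that must process this event of the given flow. */
int sched_atomic(uint8_t flow_id, int preferred_worker)
{
	if (flow_owner[flow_id] == 0)
		flow_owner[flow_id] = preferred_worker + 1;
	return flow_owner[flow_id] - 1;
}

/* Flow completes the stage; it may migrate to another core afterwards. */
void release_flow(uint8_t flow_id)
{
	flow_owner[flow_id] = 0;
}
```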
> +
> +Single Link Flag
> +^^^^^^^^^^^^^^^^
> +
> +There is a SINGLE_LINK flag which allows an application to indicate that only
> +one port will be connected to a queue. A queue configured with the single-link
> +flag follows a FIFO-like structure, maintaining ordering, but is only capable
> +of being linked to a single port (see below for port and queue linking details).
> +
> +
> +Ports
> +~~~~~
> +
> +Ports are the points of contact between worker cores and the eventdev. The
> +general use-case will see one CPU core using one port to enqueue and dequeue
> +events from an eventdev. Ports are linked to queues in order to retrieve events
> +from those queues (more details in `Linking Queues and Ports`_ below).
> +
> +
> +API Walk-through
> +----------------
> +
> +This section will introduce the reader to the eventdev API, showing how to
> +create and configure an eventdev and use it for a two-stage atomic pipeline
> +with a single core for TX. The diagram below shows the final state of the
> +application after this walk-through:
> +
> +.. _figure_eventdev-usage1:
> +
> +.. figure:: img/eventdev_usage.*
> +
> + Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
> +
> +
> +A high-level overview of the setup steps is:
> +
> +* rte_event_dev_configure()
> +* rte_event_queue_setup()
> +* rte_event_port_setup()
> +* rte_event_port_link()
> +* rte_event_dev_start()
> +
Good.
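The ordering implied by that list can be captured in a tiny state machine; the mock_* functions below are hypothetical stand-ins for the rte_event_* calls and only track the call sequence, with error handling elided:

```c
#include <assert.h>

/* Minimal model of the setup ordering above; not the library's real
 * checks. Each mock step succeeds only after the previous step. */
enum dev_state { DEV_NONE, DEV_CONFIGURED, DEV_QUEUES_SET,
		 DEV_PORTS_SET, DEV_LINKED, DEV_STARTED };

static enum dev_state state = DEV_NONE;

static int step(enum dev_state require, enum dev_state next)
{
	if (state != require)
		return -1;    /* out-of-order call */
	state = next;
	return 0;
}

int mock_dev_configure(void) { return step(DEV_NONE, DEV_CONFIGURED); }
int mock_queue_setup(void)   { return step(DEV_CONFIGURED, DEV_QUEUES_SET); }
int mock_port_setup(void)    { return step(DEV_QUEUES_SET, DEV_PORTS_SET); }
int mock_port_link(void)     { return step(DEV_PORTS_SET, DEV_LINKED); }
int mock_dev_start(void)     { return step(DEV_LINKED, DEV_STARTED); }
```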
> +Ingress of New Events
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +Now that the eventdev is set up, and ready to receive events, the RX core must
> +enqueue some events into the system for it to schedule. The events to be
> +scheduled are ordinary DPDK packets, received from an eth_rx_burst() as normal.
> +The following code shows how those packets can be enqueued into the eventdev:
> +
> +.. code-block:: c
> +
> + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
> +
> + for (i = 0; i < nb_rx; i++) {
> + ev[i].flow_id = mbufs[i]->hash.rss;
> + ev[i].op = RTE_EVENT_OP_NEW;
> + ev[i].sched_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> + ev[i].queue_id = 0;
> + ev[i].event_type = RTE_EVENT_TYPE_CPU;
ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
> + ev[i].sub_event_type = 0;
> + ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
> + ev[i].mbuf = mbufs[i];
> + }
> +
> + const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
> + if (nb_tx != nb_rx) {
> + for(i = nb_tx; i < nb_rx; i++)
> + rte_pktmbuf_free(mbufs[i]);
> + }
> +
> +Forwarding of Events
> +~~~~~~~~~~~~~~~~~~~~
> +
> +Now that the RX core has injected events, there is work to be done by the
> +workers. Note that each worker will dequeue as many events as it can in a burst,
> +process each one individually, and then burst the packets back into the
> +eventdev.
> +
> +The worker can look up the event's source from ``event.queue_id``, which should
> +indicate to the worker what workload needs to be performed on the event.
> +Once done, the worker can update the ``event.queue_id`` to a new value, to send
> +the event to the next stage in the pipeline.
> +
> +.. code-block:: c
> +
> + int timeout = 0;
> + struct rte_event events[BATCH_SIZE];
> + uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
> +
> + for (i = 0; i < nb_rx; i++) {
> + /* process mbuf using events[i].queue_id as pipeline stage */
> + struct rte_mbuf *mbuf = events[i].mbuf;
> + /* Send event to next stage in pipeline */
> + events[i].queue_id++;
> + }
> +
> + uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, events, nb_rx);
> +
> +
> +Egress of Events
> +~~~~~~~~~~~~~~~~
> +
> +Finally, when the packet is ready for egress or needs to be dropped, we need
> +to inform the eventdev that the packet is no longer being handled by the
> +application. This can be done by calling dequeue() or dequeue_burst(), which
> +indicates that the previous burst of packets is no longer in use by the
> +application.
Perfect.
> +
> +.. code-block:: c
> +
> + struct rte_event events[BATCH_SIZE];
> + uint16_t n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
> + /* burst #1 : now tx or use the packets */
> + n = rte_event_dequeue_burst(dev_id, port_id, events, BATCH_SIZE, 0);
> + /* burst #1 is now no longer valid to use in the application, as
> + the eventdev has dropped any locks or released re-ordered packets */
The above code snippet is a bit confusing (dequeue followed by dequeue).
How about changing it to follow the theme of
http://dpdk.org/browse/next/dpdk-next-eventdev/tree/lib/librte_eventdev/rte_eventdev.h#n226
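The implicit-release behaviour behind that suggestion can be modeled minimally; the mock function below is not the eventdev API, it only tracks what a port "owns" between bursts:

```c
#include <assert.h>

/* Toy model of implicit release: a port owns the events of its last
 * dequeued burst (atomic locks, reorder slots); the next dequeue on the
 * same port gives them back to the scheduler. Mock names, not the
 * eventdev API. */
static int held;            /* events the port currently owns */
static int released_total;  /* locks/slots returned to the scheduler */

int mock_dequeue_burst(int n)
{
	released_total += held; /* previous burst is released here */
	held = n;
	return n;
}
```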
> +
> +Summary
> +-------
> +
> +The eventdev library allows an application to easily schedule events as it
> +requires, either using a run-to-completion or pipeline processing model. The
> +queues and ports abstract the logical functionality of an eventdev, providing
> +the application with a generic method to schedule events. With the flexible
> +PMD infrastructure, applications benefit from improvements in existing eventdevs
> +and from additions of new ones without modification.
With above changes:
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-07-03 3:57 ` Jerin Jacob
2017-07-04 7:55 ` Hunt, David
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 0/3] next-eventdev: evendev pipeline " David Hunt
1 sibling, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-07-03 3:57 UTC (permalink / raw)
To: David Hunt; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
-----Original Message-----
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
>
> This commit adds a sample app for the eventdev library.
> The app has been tested with DPDK 17.05-rc2, hence this
> release (or later) is recommended.
>
> The sample app showcases a pipeline processing use-case,
> with event scheduling and processing defined per stage.
> The application receives traffic as normal, with each
> packet traversing the pipeline. Once the packet has
> been processed by each of the pipeline stages, it is
> transmitted again.
>
> The app provides a framework to utilize cores for a single
> role or multiple roles. Examples of roles are the RX core,
> TX core, Scheduling core (in the case of the event/sw PMD),
> and worker cores.
>
> Various flags are available to configure numbers of stages,
> cycles of work at each stage, type of scheduling, number of
> worker cores, queue depths etc. For a full explanation,
> please refer to the documentation.
A few comments on bugs and on avoiding future rework of the base code when
a HW PMD is introduced. As we agreed, we will keep the functionality intact to
provide an application to test ethdev + eventdev with the _SW PMD_ for 17.08.
> ---
> examples/Makefile | 2 +
> examples/eventdev_pipeline/Makefile | 49 ++
> examples/eventdev_pipeline/main.c | 999 ++++++++++++++++++++++++++++++++++++
> 3 files changed, 1050 insertions(+)
> create mode 100644 examples/eventdev_pipeline/Makefile
> create mode 100644 examples/eventdev_pipeline/main.c
Do we need to update the MAINTAINERS file?
>
> diff --git a/examples/Makefile b/examples/Makefile
> index 6298626..a6dcc2b 100644
> --- a/examples/Makefile
> +++ b/examples/Makefile
> @@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
> endif
> endif
>
> +DIRS-y += eventdev_pipeline
Can you change this to eventdev_pipeline_sw_pmd to emphasize the scope?
We will rename it to eventdev_pipeline once it works effectively on both SW and HW
PMDs with ethdev.
> +
> include $(RTE_SDK)/mk/rte.extsubdir.mk
> diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
> new file mode 100644
> index 0000000..4c26e15
> --- /dev/null
> +++ b/examples/eventdev_pipeline/Makefile
> @@ -0,0 +1,49 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2016 Intel Corporation. All rights reserved.
2016-2017
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +
> +static unsigned int active_cores;
> +static unsigned int num_workers;
> +static long num_packets = (1L << 25); /* do ~32M packets */
> +static unsigned int num_fids = 512;
> +static unsigned int num_stages = 1;
> +static unsigned int worker_cq_depth = 16;
> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> +static int16_t qid[MAX_NUM_STAGES] = {-1};
> +static int worker_cycles;
> +static int enable_queue_priorities;
> +
> +struct prod_data {
> + uint8_t dev_id;
> + uint8_t port_id;
> + int32_t qid;
> + unsigned int num_nic_ports;
> +} __rte_cache_aligned;
> +
> +struct cons_data {
> + uint8_t dev_id;
> + uint8_t port_id;
> +} __rte_cache_aligned;
> +
> +static struct prod_data prod_data;
> +static struct cons_data cons_data;
> +
> +struct worker_data {
> + uint8_t dev_id;
> + uint8_t port_id;
> +} __rte_cache_aligned;
> +
> +static unsigned int *enqueue_cnt;
> +static unsigned int *dequeue_cnt;
These are never consumed. Remove them.
> +
> +static volatile int done;
> +static int quiet;
> +static int dump_dev;
> +static int dump_dev_signal;
> +
> +static uint32_t rx_lock;
> +static uint32_t tx_lock;
> +static uint32_t sched_lock;
> +static bool rx_single;
> +static bool tx_single;
> +static bool sched_single;
> +
> +static unsigned int rx_core[MAX_NUM_CORE];
> +static unsigned int tx_core[MAX_NUM_CORE];
> +static unsigned int sched_core[MAX_NUM_CORE];
> +static unsigned int worker_core[MAX_NUM_CORE];
> +
> +static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
Could you please remove these global variables and group them under structures,
one for "command line parsing specific" data and one for "fast path specific"
data (anything that comes up in producer(), worker() and consumer())? And please
allocate the "fast path specific" structure from the huge page area,
so that we can easily add new parsing and fast-path variables in future.
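A sketch of that grouping, with hypothetical field names mirroring the app's globals; plain calloc() stands in for rte_malloc() here only so the sketch is self-contained:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* One struct for command line parsing results... */
struct cmdline_opts {
	unsigned int num_workers;
	unsigned int num_stages;
	unsigned int worker_cq_depth;
	bool quiet;
};

/* ...and one for fast-path state touched by producer(), worker() and
 * consumer(). */
struct fastpath_data {
	volatile int done;
	uint32_t rx_lock, tx_lock, sched_lock;
	bool rx_single, tx_single, sched_single;
	/* tx buffers, core masks, ... */
};

struct fastpath_data *fastpath_alloc(void)
{
	/* In the real app: rte_malloc(NULL, sizeof(struct fastpath_data),
	 * RTE_CACHE_LINE_SIZE) so the data lands in huge-page memory. */
	return calloc(1, sizeof(struct fastpath_data));
}
```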
> +
> +static int
> +consumer(void)
> +{
> + const uint64_t freq_khz = rte_get_timer_hz() / 1000;
> + struct rte_event packets[BATCH_SIZE];
> +
> + static uint64_t received;
> + static uint64_t last_pkts;
> + static uint64_t last_time;
> + static uint64_t start_time;
> + unsigned int i, j;
> + uint8_t dev_id = cons_data.dev_id;
> + uint8_t port_id = cons_data.port_id;
> +
> + uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
> + packets, RTE_DIM(packets), 0);
> +
> + if (n == 0) {
> + for (j = 0; j < rte_eth_dev_count(); j++)
> + rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
> + return 0;
> + }
> + if (start_time == 0)
> + last_time = start_time = rte_get_timer_cycles();
> +
> + received += n;
> + for (i = 0; i < n; i++) {
> + uint8_t outport = packets[i].mbuf->port;
> + rte_eth_tx_buffer(outport, 0, tx_buf[outport],
> + packets[i].mbuf);
> + }
> +
> + /* Print out mpps every 1<<22 packets */
> + if (!quiet && received >= last_pkts + (1<<22)) {
> + const uint64_t now = rte_get_timer_cycles();
> + const uint64_t total_ms = (now - start_time) / freq_khz;
> + const uint64_t delta_ms = (now - last_time) / freq_khz;
> + uint64_t delta_pkts = received - last_pkts;
> +
> + printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
> + "avg %.3f mpps [current %.3f mpps]\n",
> + received,
> + total_ms,
> + received / (total_ms * 1000.0),
> + delta_pkts / (delta_ms * 1000.0));
> + last_pkts = received;
> + last_time = now;
> + }
> +
> + dequeue_cnt[0] += n;
Not really used.
> +
> + num_packets -= n;
> + if (num_packets <= 0)
> + done = 1;
> +
> + return 0;
> +}
> +
> +static int
> +producer(void)
> +{
> + static uint8_t eth_port;
> + struct rte_mbuf *mbufs[BATCH_SIZE+2];
> + struct rte_event ev[BATCH_SIZE+2];
> + uint32_t i, num_ports = prod_data.num_nic_ports;
> + int32_t qid = prod_data.qid;
> + uint8_t dev_id = prod_data.dev_id;
> + uint8_t port_id = prod_data.port_id;
> + uint32_t prio_idx = 0;
> +
> + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
> + if (++eth_port == num_ports)
> + eth_port = 0;
> + if (nb_rx == 0) {
> + rte_pause();
> + return 0;
> + }
> +
> + for (i = 0; i < nb_rx; i++) {
> + ev[i].flow_id = mbufs[i]->hash.rss;
> + ev[i].op = RTE_EVENT_OP_NEW;
> + ev[i].sched_type = queue_type;
> + ev[i].queue_id = qid;
> + ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
> + ev[i].sub_event_type = 0;
> + ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
> + ev[i].mbuf = mbufs[i];
> + RTE_SET_USED(prio_idx);
> + }
> +
> + const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
> + if (nb_tx != nb_rx) {
> + for (i = nb_tx; i < nb_rx; i++)
> + rte_pktmbuf_free(mbufs[i]);
> + }
> + enqueue_cnt[0] += nb_tx;
Not really used.
> +
> + return 0;
> +}
> +
> +
> +static inline void
> +work(struct rte_mbuf *m)
> +{
> + struct ether_hdr *eth;
> + struct ether_addr addr;
> +
> + /* change mac addresses on packet (to use mbuf data) */
> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> + ether_addr_copy(ð->d_addr, &addr);
> + ether_addr_copy(ð->s_addr, ð->d_addr);
> + ether_addr_copy(&addr, ð->s_addr);
If there is an even number of stages (say 2), will the MAC swap be negated,
since we are swapping at each stage and NOT in the consumer?
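The point can be checked in miniature; the struct below uses 64-bit fields as hypothetical stand-ins for the real 6-byte MAC addresses:

```c
#include <assert.h>
#include <stdint.h>

/* Mock header: 64-bit fields stand in for the 6-byte ether_addr pair. */
struct mock_eth_hdr { uint64_t d_addr, s_addr; };

/* Same operation as work(): exchange destination and source addresses. */
void swap_macs(struct mock_eth_hdr *h)
{
	uint64_t tmp = h->d_addr;
	h->d_addr = h->s_addr;
	h->s_addr = tmp;
}
```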
> +
> + /* do a number of cycles of work per packet */
> + volatile uint64_t start_tsc = rte_rdtsc();
> + while (rte_rdtsc() < start_tsc + worker_cycles)
> + rte_pause();
> +}
> +
> +static int
> +worker(void *arg)
Looks good.
> +/*
> + * Initializes a given port using global settings and with the RX buffers
> + * coming from the mbuf_pool passed as a parameter.
> + */
> +static inline int
> +port_init(uint8_t port, struct rte_mempool *mbuf_pool)
Looks good.
> +static int
> +setup_eventdev(struct prod_data *prod_data,
> + struct cons_data *cons_data,
> + struct worker_data *worker_data)
> +{
> + /* final queue for sending to TX core */
> + if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
> + printf("%d: error creating qid %d\n", __LINE__, i);
> + return -1;
> + }
> + tx_queue.queue_id = i;
> + tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
> +
> + if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
> + tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
s/tx_p_conf.dequeue_depth/wkr_p_conf.dequeue_depth
> + if (wkr_p_conf.dequeue_depth > config.nb_event_port_enqueue_depth)
s/wkr_p_conf.dequeue_depth/wkr_p_conf.enqueue_depth
> + tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
s/tx_p_conf.enqueue_depth/wkr_p_conf.enqueue_depth
> +
> + /* set up one port per worker, linking to all stage queues */
> + for (i = 0; i < num_workers; i++) {
> + struct worker_data *w = &worker_data[i];
> + w->dev_id = dev_id;
> + if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
> + printf("Error setting up port %d\n", i);
> + return -1;
> + }
> +
> + uint32_t s;
> + for (s = 0; s < num_stages; s++) {
> + if (rte_event_port_link(dev_id, i,
> + &worker_queues[s].queue_id,
> + &worker_queues[s].priority,
> + 1) != 1) {
> + printf("%d: error creating link for port %d\n",
> + __LINE__, i);
> + return -1;
> + }
> + }
> + w->port_id = i;
> + }
> +
> + if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
> + tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
> + if (tx_p_conf.dequeue_depth > config.nb_event_port_enqueue_depth)
s/dequeue_depth/enqueue_depth
> + tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
> +
> + /* port for consumer, linked to TX queue */
> + if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
> + printf("Error setting up port %d\n", i);
> + return -1;
> + }
> + if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
> + &tx_queue.priority, 1) != 1) {
> + printf("%d: error creating link for port %d\n",
> + __LINE__, i);
> + return -1;
> + }
> + /* port for producer, no links */
> + struct rte_event_port_conf rx_p_conf = {
> + .dequeue_depth = 8,
> + .enqueue_depth = 8,
> + .new_event_threshold = 1200,
> + };
> +
> + if (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
> + rx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
> + if (rx_p_conf.dequeue_depth > config.nb_event_port_enqueue_depth)
s/dequeue_depth/enqueue_depth
> + rx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
> +
> + if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
> + printf("Error setting up port %d\n", i);
> + return -1;
> + }
> +
> + *prod_data = (struct prod_data){.dev_id = dev_id,
> + .port_id = i + 1,
> + .qid = qid[0] };
> + *cons_data = (struct cons_data){.dev_id = dev_id,
> + .port_id = i };
> +
> + enqueue_cnt = rte_calloc(0,
> + RTE_CACHE_LINE_SIZE/(sizeof(enqueue_cnt[0])),
> + sizeof(enqueue_cnt[0]), 0);
> + dequeue_cnt = rte_calloc(0,
> + RTE_CACHE_LINE_SIZE/(sizeof(dequeue_cnt[0])),
> + sizeof(dequeue_cnt[0]), 0);
Debugging stuff. Remove this
> +
> + if (rte_event_dev_start(dev_id) < 0) {
> + printf("Error starting eventdev\n");
> + return -1;
> + }
> +
> + return dev_id;
> +}
> +
> +static void
> +signal_handler(int signum)
> +{
> + if (done)
> + rte_exit(1, "Exiting on signal %d\n", signum);
> + if (signum == SIGINT || signum == SIGTERM) {
> + printf("\n\nSignal %d received, preparing to exit...\n",
> + signum);
> + done = 1;
> + }
> + if (signum == SIGTSTP)
> + rte_event_dev_dump(0, stdout);
> +}
> +
> +int
> +main(int argc, char **argv)
> +{
> + struct worker_data *worker_data;
> + unsigned int num_ports;
> + int lcore_id;
> + int err;
> +
> + signal(SIGINT, signal_handler);
> + signal(SIGTERM, signal_handler);
> + signal(SIGTSTP, signal_handler);
> +
> + if (!quiet) {
> + printf("\nPort Workload distribution:\n");
> + uint32_t i;
> + uint64_t tot_pkts = 0;
> + uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
> + for (i = 0; i < num_workers; i++) {
> + char statname[64];
> + snprintf(statname, sizeof(statname), "port_%u_rx",
> + worker_data[i].port_id);
> + pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
> + dev_id, statname, NULL);
As discussed, check first that the given xstat is supported on the PMD.
> + tot_pkts += pkts_per_wkr[i];
> + }
> + for (i = 0; i < num_workers; i++) {
> + float pc = pkts_per_wkr[i] * 100 /
> + ((float)tot_pkts);
> + printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
> + i, pc, pkts_per_wkr[i]);
> + }
> +
> + }
> +
> + return 0;
> +}
With above changes,
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> --
> 2.7.4
>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-06-30 14:37 ` Mcnamara, John
@ 2017-07-03 5:37 ` Jerin Jacob
2017-07-03 9:25 ` Hunt, David
1 sibling, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-07-03 5:37 UTC (permalink / raw)
To: David Hunt; +Cc: dev, harry.van.haaren
-----Original Message-----
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
> > Add a new entry in the sample app user-guides,
> which details the working of the eventdev_pipeline_sw.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> ---
> doc/guides/sample_app_ug/eventdev_pipeline_sw.rst | 190 ++++++++++++++++++++++
> doc/guides/sample_app_ug/index.rst | 1 +
> 2 files changed, 191 insertions(+)
> create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
>
> diff --git a/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
> new file mode 100644
> index 0000000..65c33a8
> --- /dev/null
> +++ b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
> @@ -0,0 +1,190 @@
> +
> +
> +Eventdev Pipeline SW Sample Application
Eventdev Pipeline SW PMD Sample Application
> +=======================================
> +
> +The eventdev pipeline sample application demonstrates the usage of the
> +eventdev API with the software PMD. It shows how an
> +application can configure a pipeline and assign a set of worker cores to
> +perform the processing required.
> +
> +The application has a range of command line arguments allowing it to be
> +configured for various numbers of worker cores, stages, queue depths and cycles per
> +stage of work. This is useful for performance testing as well as quickly testing
> +a particular pipeline configuration.
> +
> +
> +statistics that the PMD provides. The statistics provided depend on the PMD
> +used; see the Event Device Drivers section for a list of eventdev PMDs.
> diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
> index 02611ef..90be36a 100644
> --- a/doc/guides/sample_app_ug/index.rst
> +++ b/doc/guides/sample_app_ug/index.rst
> @@ -69,6 +69,7 @@ Sample Applications User Guides
> netmap_compatibility
> ip_pipeline
> test_pipeline
> + eventdev_pipeline_sw
eventdev_pipeline_sw_pmd
> dist_app
> vm_power_management
> tep_termination
With above changes:
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug
2017-07-03 5:37 ` Jerin Jacob
@ 2017-07-03 9:25 ` Hunt, David
2017-07-03 9:32 ` Jerin Jacob
0 siblings, 1 reply; 64+ messages in thread
From: Hunt, David @ 2017-07-03 9:25 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, harry.van.haaren
Hi Jerin,
On 3/7/2017 6:37 AM, Jerin Jacob wrote:
> -----Original Message-----
>> From: Harry van Haaren <harry.van.haaren@intel.com>
>>> Add a new entry in the sample app user-guides,
>> which details the working of the eventdev_pipeline_sw.
>>
>> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
>> Signed-off-by: David Hunt <david.hunt@intel.com>
>> ---
>> doc/guides/sample_app_ug/eventdev_pipeline_sw.rst | 190 ++++++++++++++++++++++
>> doc/guides/sample_app_ug/index.rst | 1 +
>> 2 files changed, 191 insertions(+)
>> create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
>>
>> diff --git a/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
>> new file mode 100644
>> index 0000000..65c33a8
>> --- /dev/null
>> +++ b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
>> @@ -0,0 +1,190 @@
>> +
>> +
>> +Eventdev Pipeline SW Sample Application
> Eventdev Pipeline SW PMD Sample Application
Sure.
>> +=======================================
>> +
>> +The eventdev pipeline sample application demonstrates the usage of the
>> +eventdev API with the software PMD. It shows how an
>> +application can configure a pipeline and assign a set of worker cores to
>> +perform the processing required.
>> +
>> +The application has a range of command line arguments allowing it to be
>> +configured for various numbers of worker cores, stages, queue depths and cycles per
>> +stage of work. This is useful for performance testing as well as quickly testing
>> +a particular pipeline configuration.
>> +
>> +
>> +statistics that the PMD provides. The statistics provided depend on the PMD
>> +used; see the Event Device Drivers section for a list of eventdev PMDs.
>> diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
>> index 02611ef..90be36a 100644
>> --- a/doc/guides/sample_app_ug/index.rst
>> +++ b/doc/guides/sample_app_ug/index.rst
>> @@ -69,6 +69,7 @@ Sample Applications User Guides
>> netmap_compatibility
>> ip_pipeline
>> test_pipeline
>> + eventdev_pipeline_sw
> eventdev_pipeline_sw_pmd
There's no need to change this; it would break 'make doc'.
The filename of the .rst is
"./doc/guides/sample_app_ug/eventdev_pipeline_sw.rst",
and the app is called eventdev_pipeline_sw.
>> dist_app
>> vm_power_management
>> tep_termination
> With above changes:
>
> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug
2017-07-03 9:25 ` Hunt, David
@ 2017-07-03 9:32 ` Jerin Jacob
2017-07-04 8:20 ` Hunt, David
0 siblings, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-07-03 9:32 UTC (permalink / raw)
To: Hunt, David; +Cc: dev, harry.van.haaren
-----Original Message-----
> Date: Mon, 3 Jul 2017 10:25:17 +0100
> From: "Hunt, David" <david.hunt@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: dev@dpdk.org, harry.van.haaren@intel.com
> Subject: Re: [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> Hi Jerin,
>
>
> On 3/7/2017 6:37 AM, Jerin Jacob wrote:
> > -----Original Message-----
> > > From: Harry van Haaren <harry.van.haaren@intel.com>
> > > > Add a new entry in the sample app user-guides,
> > > which details the working of the eventdev_pipeline_sw.
> > >
> > > Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > ---
> > > doc/guides/sample_app_ug/eventdev_pipeline_sw.rst | 190 ++++++++++++++++++++++
> > > doc/guides/sample_app_ug/index.rst | 1 +
> > > 2 files changed, 191 insertions(+)
> > > create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
> > >
> > > diff --git a/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
> > > new file mode 100644
> > > index 0000000..65c33a8
> > > --- /dev/null
> > > +++ b/doc/guides/sample_app_ug/eventdev_pipeline_sw.rst
> > > @@ -0,0 +1,190 @@
> > > +
> > > +
> > > +Eventdev Pipeline SW Sample Application
> > Eventdev Pipeline SW PMD Sample Application
>
> Sure.
>
> > > +=======================================
> > > +
> > > +The eventdev pipeline sample application is a sample app that demonstrates
> > > +the usage of the eventdev API using the software PMD. It shows how an
> > > +application can configure a pipeline and assign a set of worker cores to
> > > +perform the processing required.
> > > +
> > > +The application has a range of command line arguments allowing it to be
> > > +configured for various numbers of worker cores, stages, queue depths and cycles per
> > > +stage of work. This is useful for performance testing as well as quickly testing
> > > +a particular pipeline configuration.
> > > +
> > > +
> > > +statistics that the PMD provides. The statistics provided depend on the PMD
> > > +used, see the Event Device Drivers section for a list of eventdev PMDs.
> > > diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
> > > index 02611ef..90be36a 100644
> > > --- a/doc/guides/sample_app_ug/index.rst
> > > +++ b/doc/guides/sample_app_ug/index.rst
> > > @@ -69,6 +69,7 @@ Sample Applications User Guides
> > > netmap_compatibility
> > > ip_pipeline
> > > test_pipeline
> > > + eventdev_pipeline_sw
> > eventdev_pipeline_sw_pmd
>
> There's no need to change this; it would break 'make doc'.
> The filename of the .rst is
> "./doc/guides/sample_app_ug/eventdev_pipeline_sw.rst",
> and the app is called eventdev_pipeline_sw.
OK
>
> > > dist_app
> > > vm_power_management
> > > tep_termination
> > With above changes:
> >
> > Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >
>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-03 3:57 ` Jerin Jacob
@ 2017-07-04 7:55 ` Hunt, David
2017-07-05 5:30 ` Jerin Jacob
0 siblings, 1 reply; 64+ messages in thread
From: Hunt, David @ 2017-07-04 7:55 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
Hi Jerin,
On 3/7/2017 4:57 AM, Jerin Jacob wrote:
> -----Original Message-----
>> From: Harry van Haaren<harry.van.haaren@intel.com>
>>
>> This commit adds a sample app for the eventdev library.
>> The app has been tested with DPDK 17.05-rc2, hence this
>> release (or later) is recommended.
>>
>> The sample app showcases a pipeline processing use-case,
>> with event scheduling and processing defined per stage.
>> The application receives traffic as normal, with each
>> packet traversing the pipeline. Once the packet has
>> been processed by each of the pipeline stages, it is
>> transmitted again.
>>
>> The app provides a framework to utilize cores for a single
>> role or multiple roles. Examples of roles are the RX core,
>> TX core, Scheduling core (in the case of the event/sw PMD),
>> and worker cores.
>>
>> Various flags are available to configure numbers of stages,
>> cycles of work at each stage, type of scheduling, number of
>> worker cores, queue depths etc. For a full explanation,
>> please refer to the documentation.
> A few comments on bugs and "to avoid the future rework on base code when
> HW PMD is introduced". As we agreed, We will keep the functionality intact to
> provide an application to test ethdev + eventdev with _SW PMD_ for 17.08
>
Sure, OK. I will address.
>> ---
>> examples/Makefile | 2 +
>> examples/eventdev_pipeline/Makefile | 49 ++
>> examples/eventdev_pipeline/main.c | 999 ++++++++++++++++++++++++++++++++++++
>> 3 files changed, 1050 insertions(+)
>> create mode 100644 examples/eventdev_pipeline/Makefile
>> create mode 100644 examples/eventdev_pipeline/main.c
> Do we need to update the MAINTAINERS file?
Updated
>> diff --git a/examples/Makefile b/examples/Makefile
>> index 6298626..a6dcc2b 100644
>> --- a/examples/Makefile
>> +++ b/examples/Makefile
>> @@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
>> endif
>> endif
>>
>> +DIRS-y += eventdev_pipeline
> Can you change to eventdev_pipeline_sw_pmd to emphasis on the scope.
> We will rename to eventdev_pipeline once it working effectively on both SW and HW
> PMD with ethdev.
OK, I've updated the directory, app name and relevant docs across the
board so they're all eventdev_pipeline_sw_pmd. This should make it
clear to anyone using it that it's for the sw_pmd only, and an updated
version will be provided later.
>> +
>> include $(RTE_SDK)/mk/rte.extsubdir.mk
>> diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
>> new file mode 100644
>> index 0000000..4c26e15
>> --- /dev/null
>> +++ b/examples/eventdev_pipeline/Makefile
>> @@ -0,0 +1,49 @@
>> +# BSD LICENSE
>> +#
>> +# Copyright(c) 2016 Intel Corporation. All rights reserved.
> 2016-2017
Done.
>> +#
>> +# Redistribution and use in source and binary forms, with or without
>> +# modification, are permitted provided that the following conditions
>> +# are met:
>> +#
>> +
>> +static unsigned int active_cores;
>> +static unsigned int num_workers;
>> +static long num_packets = (1L << 25); /* do ~32M packets */
>> +static unsigned int num_fids = 512;
>> +static unsigned int num_stages = 1;
>> +static unsigned int worker_cq_depth = 16;
>> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
>> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
>> +static int16_t qid[MAX_NUM_STAGES] = {-1};
>> +static int worker_cycles;
>> +static int enable_queue_priorities;
>> +
>> +struct prod_data {
>> + uint8_t dev_id;
>> + uint8_t port_id;
>> + int32_t qid;
>> + unsigned int num_nic_ports;
>> +} __rte_cache_aligned;
>> +
>> +struct cons_data {
>> + uint8_t dev_id;
>> + uint8_t port_id;
>> +} __rte_cache_aligned;
>> +
>> +static struct prod_data prod_data;
>> +static struct cons_data cons_data;
>> +
>> +struct worker_data {
>> + uint8_t dev_id;
>> + uint8_t port_id;
>> +} __rte_cache_aligned;
>> +
>> +static unsigned int *enqueue_cnt;
>> +static unsigned int *dequeue_cnt;
> Not been consumed. Remove it.
Done.
>> +
>> +static volatile int done;
>> +static int quiet;
>> +static int dump_dev;
>> +static int dump_dev_signal;
>> +
>> +static uint32_t rx_lock;
>> +static uint32_t tx_lock;
>> +static uint32_t sched_lock;
>> +static bool rx_single;
>> +static bool tx_single;
>> +static bool sched_single;
>> +
>> +static unsigned int rx_core[MAX_NUM_CORE];
>> +static unsigned int tx_core[MAX_NUM_CORE];
>> +static unsigned int sched_core[MAX_NUM_CORE];
>> +static unsigned int worker_core[MAX_NUM_CORE];
>> +
>> +static struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
> Could you please remove this global variable and group under a structure
> for "command line parsing specific" and "fast path specific"(anything comes
> in producer(), worker() and consumer()). And please
> allocate "fast path specific" structure variable from huge page area.
> So that we can easily add new parsing and fastpath variable in future.
>
Done. Fastpath vars now allocated using rte_malloc()
>> +
>> +static int
>> +consumer(void)
>> +{
>> + const uint64_t freq_khz = rte_get_timer_hz() / 1000;
>> + struct rte_event packets[BATCH_SIZE];
>> +
>> + static uint64_t received;
>> + static uint64_t last_pkts;
>> + static uint64_t last_time;
>> + static uint64_t start_time;
>> + unsigned int i, j;
>> + uint8_t dev_id = cons_data.dev_id;
>> + uint8_t port_id = cons_data.port_id;
>> +
>> + uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
>> + packets, RTE_DIM(packets), 0);
>> +
>> + if (n == 0) {
>> + for (j = 0; j < rte_eth_dev_count(); j++)
>> + rte_eth_tx_buffer_flush(j, 0, tx_buf[j]);
>> + return 0;
>> + }
>> + if (start_time == 0)
>> + last_time = start_time = rte_get_timer_cycles();
>> +
>> + received += n;
>> + for (i = 0; i < n; i++) {
>> + uint8_t outport = packets[i].mbuf->port;
>> + rte_eth_tx_buffer(outport, 0, tx_buf[outport],
>> + packets[i].mbuf);
>> + }
>> +
>> + /* Print out mpps every 1<<22 packets */
>> + if (!quiet && received >= last_pkts + (1<<22)) {
>> + const uint64_t now = rte_get_timer_cycles();
>> + const uint64_t total_ms = (now - start_time) / freq_khz;
>> + const uint64_t delta_ms = (now - last_time) / freq_khz;
>> + uint64_t delta_pkts = received - last_pkts;
>> +
>> + printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
>> + "avg %.3f mpps [current %.3f mpps]\n",
>> + received,
>> + total_ms,
>> + received / (total_ms * 1000.0),
>> + delta_pkts / (delta_ms * 1000.0));
>> + last_pkts = received;
>> + last_time = now;
>> + }
>> +
>> + dequeue_cnt[0] += n;
> Not really used.
Removed
>> +
>> + num_packets -= n;
>> + if (num_packets <= 0)
>> + done = 1;
>> +
>> + return 0;
>> +}
>> +
>> +static int
>> +producer(void)
>> +{
>> + static uint8_t eth_port;
>> + struct rte_mbuf *mbufs[BATCH_SIZE+2];
>> + struct rte_event ev[BATCH_SIZE+2];
>> + uint32_t i, num_ports = prod_data.num_nic_ports;
>> + int32_t qid = prod_data.qid;
>> + uint8_t dev_id = prod_data.dev_id;
>> + uint8_t port_id = prod_data.port_id;
>> + uint32_t prio_idx = 0;
>> +
>> + const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
>> + if (++eth_port == num_ports)
>> + eth_port = 0;
>> + if (nb_rx == 0) {
>> + rte_pause();
>> + return 0;
>> + }
>> +
>> + for (i = 0; i < nb_rx; i++) {
>> + ev[i].flow_id = mbufs[i]->hash.rss;
>> + ev[i].op = RTE_EVENT_OP_NEW;
>> + ev[i].sched_type = queue_type;
>> + ev[i].queue_id = qid;
>> + ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
>> + ev[i].sub_event_type = 0;
>> + ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
>> + ev[i].mbuf = mbufs[i];
>> + RTE_SET_USED(prio_idx);
>> + }
>> +
>> + const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
>> + if (nb_tx != nb_rx) {
>> + for (i = nb_tx; i < nb_rx; i++)
>> + rte_pktmbuf_free(mbufs[i]);
>> + }
>> + enqueue_cnt[0] += nb_tx;
> Not really used.
Removed
>> +
>> + return 0;
>> +}
>> +
>> +
>> +static inline void
>> +work(struct rte_mbuf *m)
>> +{
>> + struct ether_hdr *eth;
>> + struct ether_addr addr;
>> +
>> + /* change mac addresses on packet (to use mbuf data) */
>> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
>> + ether_addr_copy(ð->d_addr, &addr);
>> + ether_addr_copy(ð->s_addr, ð->d_addr);
>> + ether_addr_copy(&addr, ð->s_addr);
> If there is an even number of stages (say 2), will the mac swap be negated? As we are
> swapping at each stage, NOT in the consumer.
The mac swap is just to touch the mbuf. It does not matter if it is negated.
>> +
>> + /* do a number of cycles of work per packet */
>> + volatile uint64_t start_tsc = rte_rdtsc();
>> + while (rte_rdtsc() < start_tsc + worker_cycles)
>> + rte_pause();
>> +}
>> +
>> +static int
>> +worker(void *arg)
> Looks good.
>
>> +/*
>> + * Initializes a given port using global settings and with the RX buffers
>> + * coming from the mbuf_pool passed as a parameter.
>> + */
>> +static inline int
>> +port_init(uint8_t port, struct rte_mempool *mbuf_pool)
> Looks good.
>
>> +static int
>> +setup_eventdev(struct prod_data *prod_data,
>> + struct cons_data *cons_data,
>> + struct worker_data *worker_data)
>> +{
>> + /* final queue for sending to TX core */
>> + if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
>> + printf("%d: error creating qid %d\n", __LINE__, i);
>> + return -1;
>> + }
>> + tx_queue.queue_id = i;
>> + tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
>> +
>> + if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
>> + tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
> s/tx_p_conf.dequeue_depth/wkr_p_conf.dequeue_depth
done, along with other similar coding errors.
--snip--
>> +
>> +int
>> +main(int argc, char **argv)
>> +{
>> + struct worker_data *worker_data;
>> + unsigned int num_ports;
>> + int lcore_id;
>> + int err;
>> +
>> + signal(SIGINT, signal_handler);
>> + signal(SIGTERM, signal_handler);
>> + signal(SIGTSTP, signal_handler);
>> +
>> + if (!quiet) {
>> + printf("\nPort Workload distribution:\n");
>> + uint32_t i;
>> + uint64_t tot_pkts = 0;
>> + uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
>> + for (i = 0; i < num_workers; i++) {
>> + char statname[64];
>> + snprintf(statname, sizeof(statname), "port_%u_rx",
>> + worker_data[i].port_id);
>> + pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
>> + dev_id, statname, NULL);
> As discussed, Check the the given xstat supported on the PMD first.
Checking has now been implemented. It's done by calling
rte_event_dev_xstats_by_name_get() and checking whether the result is
-ENOTSUP. However, there is a bug in the function: it is declared as
returning a uint64_t but then returns -ENOTSUP, so I have to cast
-ENOTSUP to uint64_t for the comparison. This will need to be fixed
when the function is patched.
retval = rte_event_dev_xstats_by_name_get(
                dev_id, statname, NULL);
if (retval != (uint64_t)-ENOTSUP) {
        pkts_per_wkr[i] = retval;
        tot_pkts += pkts_per_wkr[i];
}
>> + tot_pkts += pkts_per_wkr[i];
>> + }
>> + for (i = 0; i < num_workers; i++) {
>> + float pc = pkts_per_wkr[i] * 100 /
>> + ((float)tot_pkts);
>> + printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
>> + i, pc, pkts_per_wkr[i]);
>> + }
>> +
>> + }
>> +
>> + return 0;
>> +}
> With above changes,
>
> Jerin Jacob<jerin.jacob@caviumnetworks.com>
Thanks for the reviews.
Regards,
Dave.
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v6 0/3] next-eventdev: evendev pipeline sample app
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added " David Hunt
2017-07-03 3:57 ` Jerin Jacob
@ 2017-07-04 8:14 ` David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 1/3] examples/eventdev_pipeline: added " David Hunt
` (2 more replies)
1 sibling, 3 replies; 64+ messages in thread
From: David Hunt @ 2017-07-04 8:14 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
This patchset introduces a sw pmd sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
Changes in patch v2:
* None, incorrect patch upload
Changes in patch v3:
* Re-work based on comments on mailing list. No major functional changes.
* Checkpatch cleanup of a couple of typos
Changes in patch v4:
* Re-named the app as eventdev_pipeline_sw, as it's aimed at showing the
functionality of the software PMD.
Changes in patch v5:
* Fixed make doc. eventdev_pipeline to eventdev_pipeline_sw
* Fixed some typos in the eventdev programmers guide
Changes in patch v6:
* made name of dirs and app consistent - eventdev_pipeline_sw_pmd
* Added new files and dirs to MAINTAINERS
* Updated libeventdev docs based on Jerin's feedback
* Added some cleanup to eventdev_pipeline sw pmd sample app
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters like number of stages, number of
worker cores, which cores are assigned to specific tasks, and work-
cycles per-stage in the pipeline can be configured.
Documentation for eventdev is added for the programmers guide and
sample app user guide, providing sample commands to run the app with,
and expected output.
[1/3] examples/eventdev_pipeline: added sample app
[2/3] doc: add sw eventdev pipeline to sample app ug
[3/3] doc: add eventdev library to programmers guide
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v6 1/3] examples/eventdev_pipeline: added sample app
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 0/3] next-eventdev: evendev pipeline " David Hunt
@ 2017-07-04 8:14 ` David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-07-04 8:14 UTC (permalink / raw)
To: dev
Cc: jerin.jacob, harry.van.haaren, Gage Eads, Bruce Richardson, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure numbers of stages,
cycles of work at each stage, type of scheduling, number of
worker cores, queue depths etc. For a full explanation,
please refer to the documentation.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
MAINTAINERS | 1 +
examples/Makefile | 2 +
examples/eventdev_pipeline_sw_pmd/Makefile | 49 ++
examples/eventdev_pipeline_sw_pmd/main.c | 1005 ++++++++++++++++++++++++++++
4 files changed, 1057 insertions(+)
create mode 100644 examples/eventdev_pipeline_sw_pmd/Makefile
create mode 100644 examples/eventdev_pipeline_sw_pmd/main.c
diff --git a/MAINTAINERS b/MAINTAINERS
index d9dbf8f..860621a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -579,6 +579,7 @@ M: Harry van Haaren <harry.van.haaren@intel.com>
F: drivers/event/sw/
F: test/test/test_eventdev_sw.c
F: doc/guides/eventdevs/sw.rst
+F: examples/eventdev_pipeline_sw_pmd/
NXP DPAA2 Eventdev PMD
M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/examples/Makefile b/examples/Makefile
index c0e9c3b..97f12ad 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
endif
endif
+DIRS-y += eventdev_pipeline_sw_pmd
+
include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/eventdev_pipeline_sw_pmd/Makefile b/examples/eventdev_pipeline_sw_pmd/Makefile
new file mode 100644
index 0000000..de4e22c
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline_sw_pmd
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
new file mode 100644
index 0000000..c62cba2
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -0,0 +1,1005 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+static unsigned int active_cores;
+static unsigned int num_workers;
+static long num_packets = (1L << 25); /* do ~32M packets */
+static unsigned int num_fids = 512;
+static unsigned int num_stages = 1;
+static unsigned int worker_cq_depth = 16;
+static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
+static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
+static int16_t qid[MAX_NUM_STAGES] = {-1};
+static int worker_cycles;
+static int enable_queue_priorities;
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned int num_nic_ports;
+} __rte_cache_aligned;
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+struct fastpath_data {
+ volatile int done;
+ uint32_t rx_lock;
+ uint32_t tx_lock;
+ uint32_t sched_lock;
+ bool rx_single;
+ bool tx_single;
+ bool sched_single;
+ unsigned int rx_core[MAX_NUM_CORE];
+ unsigned int tx_core[MAX_NUM_CORE];
+ unsigned int sched_core[MAX_NUM_CORE];
+ unsigned int worker_core[MAX_NUM_CORE];
+ struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+};
+
+static struct fastpath_data *fdata;
+
+struct config_data {
+ int quiet;
+ int dump_dev;
+ int dump_dev_signal;
+};
+
+static struct config_data cdata;
+
+static bool
+core_in_use(unsigned int lcore_id) {
+ return (fdata->rx_core[lcore_id] || fdata->sched_core[lcore_id] ||
+ fdata->tx_core[lcore_id] || fdata->worker_core[lcore_id]);
+}
+
+
+static void
+eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned int _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t received;
+ static uint64_t last_pkts;
+ static uint64_t last_time;
+ static uint64_t start_time;
+ unsigned int i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, fdata->tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ last_time = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ /* Print out mpps every 1<<22 packets */
+ if (!cdata.quiet && received >= last_pkts + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t total_ms = (now - start_time) / freq_khz;
+ const uint64_t delta_ms = (now - last_time) / freq_khz;
+ uint64_t delta_pkts = received - last_pkts;
+
+ printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
+ "avg %.3f mpps [current %.3f mpps]\n",
+ received,
+ total_ms,
+ received / (total_ms * 1000.0),
+ delta_pkts / (delta_ms * 1000.0));
+ last_pkts = received;
+ last_time = now;
+ }
+
+ num_packets -= n;
+ if (num_packets <= 0)
+ fdata->done = 1;
+
+ return 0;
+}
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE+2];
+ struct rte_event ev[BATCH_SIZE+2];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+ return 0;
+}
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+{
+ if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
+ rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
+ }
+
+ if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
+ rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (cdata.dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ cdata.dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
+ }
+
+ if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
+ rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->tx_lock));
+ }
+}
+
+
+
+static inline void
+work(struct rte_mbuf *m)
+{
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(ð->d_addr, &addr);
+ ether_addr_copy(ð->s_addr, ð->d_addr);
+ ether_addr_copy(&addr, ð->s_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + worker_cycles)
+ rte_pause();
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned int lcore_id = rte_lcore_id();
+
+ while (!fdata->done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!fdata->worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ const uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == qid[0])
+ events[i].flow_id = events[i].mbuf->hash.rss
+ % num_fids;
+
+ events[i].queue_id = next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+ events[i].sched_type = queue_type;
+
+ work(events[i].mbuf);
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !fdata->done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!cdata.quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned int count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove all leading and trailing blank characters.
+ * Remove the 0x/0X prefix if it exists.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_demo [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
+ " -w, --workers=core mask Run workers on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W --work-cycles=N Worker cycles (default 0)\n"
+ " -P --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse CLI options */
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ enable_queue_priorities = 1;
+ break;
+ case 'o':
+ queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ cdata.quiet = 1;
+ break;
+ case 'D':
+ cdata.dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ fdata->rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ fdata->tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ fdata->sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("A part of the pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (num_stages == 0 || num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ fdata->rx_core[i] = !!(rx_lcore_mask & (1UL << i));
+ fdata->tx_core[i] = !!(tx_lcore_mask & (1UL << i));
+ fdata->sched_core[i] = !!(sched_lcore_mask & (1UL << i));
+ fdata->worker_core[i] = !!(worker_lcore_mask & (1UL << i));
+
+ if (fdata->worker_core[i])
+ num_workers++;
+ if (core_in_use(i))
+ active_cores++;
+ }
+}
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned int)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned int num_ports)
+{
+ uint8_t portid;
+ unsigned int i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ fdata->tx_buf[i] =
+ rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (fdata->tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(fdata->tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(fdata->tx_buf[i],
+ eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* the +1 stage is for a SINGLE_LINK TX stage */
+ const uint8_t nb_queues = num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = num_workers + 2;
+ struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned int i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ if (dev_info.max_event_port_dequeue_depth <
+ config.nb_event_port_dequeue_depth)
+ config.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+ if (dev_info.max_event_port_enqueue_depth <
+ config.nb_event_port_enqueue_depth)
+ config.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0) {
+ printf("%d: Error configuring device\n", __LINE__);
+ return -1;
+ }
+
+ /* Queue creation - one load-balanced queue per pipeline stage */
+ printf(" Stages:\n");
+ for (i = 0; i < num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ qid[i] = i;
+ next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+
+ if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+
+ if (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ rx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (rx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ rx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
+
+static void
+signal_handler(int signum)
+{
+ if (fdata->done)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ fdata->done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned int num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ fdata = rte_malloc(NULL, sizeof(struct fastpath_data), 0);
+ if (fdata == NULL)
+ rte_panic("Out of memory\n");
+
+ /* Parse CLI options */
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned int cores_needed = active_cores;
+
+ if (!cdata.quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", num_workers);
+ printf("\tpackets: %lu\n", num_packets);
+ printf("\tQueue-prio: %u\n", enable_queue_priorities);
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned int ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No eventdev devices found. Pass in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0\n");
+
+ worker_data = rte_calloc(0, num_workers, sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!fdata->rx_core[lcore_id] &&
+ !fdata->worker_core[lcore_id] &&
+ !fdata->tx_core[lcore_id] &&
+ !fdata->sched_core[lcore_id])
+ continue;
+
+ if (fdata->rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (fdata->tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (fdata->sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (fdata->worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err) {
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ continue;
+ }
+ if (fdata->worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (cdata.dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!cdata.quiet) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < num_workers; i++) {
+ char statname[64];
+ uint64_t retval;
+
+ snprintf(statname, sizeof(statname), "port_%u_rx",
+ worker_data[i].port_id);
+ retval = rte_event_dev_xstats_by_name_get(
+ dev_id, statname, NULL);
+ if (retval != (uint64_t)-ENOTSUP) {
+ pkts_per_wkr[i] = retval;
+ tot_pkts += pkts_per_wkr[i];
+ }
+ }
+ for (i = 0; i < num_workers; i++) {
+ float pc = pkts_per_wkr[i] * 100 /
+ ((float)tot_pkts);
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
* [dpdk-dev] [PATCH v6 2/3] doc: add sw eventdev pipeline to sample app ug
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-07-04 8:14 ` David Hunt
2017-07-05 4:30 ` Jerin Jacob
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-07-04 8:14 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline_sw.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
MAINTAINERS | 1 +
doc/guides/sample_app_ug/index.rst | 1 +
2 files changed, 2 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 860621a..4479906 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -580,6 +580,7 @@ F: drivers/event/sw/
F: test/test/test_eventdev_sw.c
F: doc/guides/eventdevs/sw.rst
F: examples/eventdev_pipeline_sw_pmd/
+F: doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
NXP DPAA2 Eventdev PMD
M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..f9239e3 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline_sw_pmd
dist_app
vm_power_management
tep_termination
--
2.7.4
* [dpdk-dev] [PATCH v6 3/3] doc: add eventdev library to programmers guide
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 1/3] examples/eventdev_pipeline: added " David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
@ 2017-07-04 8:14 ` David Hunt
2 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-07-04 8:14 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walk-through of a simple two-stage atomic pipeline
provides the reader with a step-by-step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
doc/guides/prog_guide/eventdev.rst | 394 +++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1389 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..908d123
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,394 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader.
+
+The eventdev framework introduces the event driven programming model. In a
+polling model, lcores poll ethdev ports and associated Rx queues directly
+to look for a packet. By contrast, in an event driven model, lcores call a
+scheduler, which selects packets for them based on programmer-specified criteria.
+The Eventdev library adds support for an event driven programming model, which
+offers applications automatic multicore scaling, dynamic load balancing,
+pipelining, packet ingress order maintenance and synchronization services to
+simplify application packet processing.
+
+By introducing an event driven programming model, DPDK can support both polling
+and event driven programming models for packet processing, and applications are
+free to choose whatever model (or combination of the two) best suits their
+needs.
+
+A step-by-step walk-through of the eventdev design is available in the `API
+Walk-through`_ section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct, which contains a
+payload and metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16 byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, e.g. RTE_EVENT_TYPE_ETHDEV or
+  RTE_EVENT_TYPE_CPU.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+  the same event_type (see above).
+* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
+ eventdev about the status of the event - valid values are NEW, FORWARD or
+ RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+ on this event, valid values are the RTE_SCHED_TYPE_ORDERED, ATOMIC and
+ PARALLEL.
+* ``queue_id`` - The identifier for the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see RTE_EVENT_DEV_PRIORITY.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for payload, allowing flexibility in what
+the actual event being scheduled is. The payload is a union of the following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three items in a union occupy the same 64 bits at the end of the rte_event
+structure. The application can utilize the 64 bits directly by accessing the
+u64 variable, while the event_ptr and mbuf are provided as convenience
+variables. For example, the mbuf pointer in the union can be used to schedule
+a DPDK packet.
+
+Queues
+~~~~~~
+
+An event queue is a queue of events waiting to be scheduled by the event
+device. A queue contains events of different flows, each associated with a
+scheduling type such as atomic, ordered, or parallel.
+
+Queue All Types Capable
+^^^^^^^^^^^^^^^^^^^^^^^
+
+If the RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability bit is set for the event
+device, then events of any type may be sent to any queue. Otherwise, each
+queue only supports events of the type it was created with.
+
+Queue All Types Incapable
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In this case, each stage has a specified scheduling type. The application
+configures each queue for a specific type of scheduling, and just enqueues all
+events to the eventdev. An example of a PMD of this type is the eventdev
+software PMD.
+
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must ensure that on egress the packets are returned to
+ingress order on downstream queue enqueue. Parallel allows sending all flows
+to all CPU cores, without any re-ordering guarantees.
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can only be
+linked to a single port (see below for port and queue linking details).
+
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walk-through
+----------------
+
+This section will introduce the reader to the eventdev API, showing how to
+create and configure an eventdev and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walk-through:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high-level overview of the setup steps is:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues
+and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link,
+while the 6 ports consist of 4 workers, 1 RX and 1 TX.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walk-through assumes that dev_id is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walk-through assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
+
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once queues are set up successfully, create the ports as required. Each port
+should be set up with its corresponding port_conf type, worker for worker cores,
+rx and tx for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ int err = rte_event_port_setup(dev_id, port_id, &CORE_FUNCTION_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
+
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes its
+input from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t port_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for(int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing events.
+Note that all queues must be linked before the instance is started: if a queue
+is not linked to any port, enqueues to that queue will back-pressure the
+application, which will eventually stall because there is no space left in the
+eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up, and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received from ``rte_eth_rx_burst()`` as
+normal. The following code shows how those packets can be enqueued into the
+eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+                ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+                for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which should
+indicate to the worker what workload needs to be performed on the event.
+Once done, the worker can update the ``event.queue_id`` to a new value, to send
+the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+        uint16_t nb_tx = rte_event_enqueue_burst(dev_id, worker_port_id, events, nb_rx);
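
The ``events[i].queue_id++`` step above encodes the whole pipeline topology. A
tiny self-contained sketch of that stage-advance, using the queue layout
assumed in this walk-through (queues 0 and 1 are atomic stages, queue 2 is the
single-link queue drained by the TX core):

```c
#include <stdint.h>

/* Queue ids as assumed earlier in this walk-through. */
enum { STAGE1_QUEUE = 0, STAGE2_QUEUE = 1, TX_QUEUE = 2 };

/* Advance an event to its next pipeline stage: stage 1 feeds stage 2,
 * and stage 2 feeds the single-link queue serviced by the TX core. */
static uint8_t next_queue(uint8_t queue_id)
{
    return (uint8_t)(queue_id + 1);
}
```

An event dequeued from the last atomic stage is therefore re-enqueued with
``queue_id == TX_QUEUE``, from which only the TX port can dequeue it.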
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when a packet is ready for egress or needs to be dropped, the
+application must inform the eventdev that the packet is no longer being
+handled. This is done implicitly by calling dequeue() or dequeue_burst(),
+which indicates that the previous burst of packets is no longer in use by the
+application.
+
+An event-driven worker thread has the following typical workflow on the
+fast path:
+
+.. code-block:: c
+
+ while (1) {
+ rte_event_dequeue_burst(...);
+ (event processing)
+ rte_event_enqueue_burst(...);
+ }
+
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, using either a run-to-completion or a pipeline processing model. The
+queues and ports abstract the logical functionality of an eventdev, providing
+the application with a generic method of scheduling events. Thanks to the
+flexible PMD infrastructure, applications benefit from improvements to
+existing eventdevs and from the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug
2017-07-03 9:32 ` Jerin Jacob
@ 2017-07-04 8:20 ` Hunt, David
0 siblings, 0 replies; 64+ messages in thread
From: Hunt, David @ 2017-07-04 8:20 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, harry.van.haaren
On 3/7/2017 10:32 AM, Jerin Jacob wrote:
> -----Original Message-----
>>
--snip--
>>>> @@ -69,6 +69,7 @@ Sample Applications User Guides
>>>> netmap_compatibility
>>>> ip_pipeline
>>>> test_pipeline
>>>> + eventdev_pipeline_sw
>>> eventdev_pipeline_sw_pmd
>> There's no need to change this, it would break 'make doc'
>> The filename of the .rst is
>> "./doc/guides/sample_app_ug/eventdev_pipeline_sw.rst"
>> The app called eventdev_pipeline_sw.
> OK
Jerin,
I've now changed the app name to eventdev_pipeline_sw_pmd,
including directory name, rst name, etc. So now it's all consistent.
Rgds,
Dave.
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v6 2/3] doc: add sw eventdev pipeline to sample app ug
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
@ 2017-07-05 4:30 ` Jerin Jacob
0 siblings, 0 replies; 64+ messages in thread
From: Jerin Jacob @ 2017-07-05 4:30 UTC (permalink / raw)
To: David Hunt; +Cc: dev, harry.van.haaren
-----Original Message-----
> Date: Tue, 4 Jul 2017 09:14:22 +0100
> From: David Hunt <david.hunt@intel.com>
> To: dev@dpdk.org
> CC: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com, David Hunt
> <david.hunt@intel.com>
> Subject: [PATCH v6 2/3] doc: add sw eventdev pipeline to sample app ug
> X-Mailer: git-send-email 2.7.4
>
> From: Harry van Haaren <harry.van.haaren@intel.com>
>
> Add a new entry in the sample app user-guides,
> which details the working of the eventdev_pipeline_sw.
>
> Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
> Signed-off-by: David Hunt <david.hunt@intel.com>
> Acked-by: John McNamara <john.mcnamara@intel.com>
> Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
> MAINTAINERS | 1 +
> doc/guides/sample_app_ug/index.rst | 1 +
You forgot to add the actual file.
> 2 files changed, 2 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 860621a..4479906 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -580,6 +580,7 @@ F: drivers/event/sw/
> F: test/test/test_eventdev_sw.c
> F: doc/guides/eventdevs/sw.rst
> F: examples/eventdev_pipeline_sw_pmd/
> +F: doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
>
> NXP DPAA2 Eventdev PMD
> M: Hemant Agrawal <hemant.agrawal@nxp.com>
> diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
> index 02611ef..f9239e3 100644
> --- a/doc/guides/sample_app_ug/index.rst
> +++ b/doc/guides/sample_app_ug/index.rst
> @@ -69,6 +69,7 @@ Sample Applications User Guides
> netmap_compatibility
> ip_pipeline
> test_pipeline
> + eventdev_pipeline_sw_pmd
> dist_app
> vm_power_management
> tep_termination
> --
> 2.7.4
>
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-04 7:55 ` Hunt, David
@ 2017-07-05 5:30 ` Jerin Jacob
2017-07-05 11:15 ` Hunt, David
0 siblings, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-07-05 5:30 UTC (permalink / raw)
To: Hunt, David; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
-----Original Message-----
> Date: Tue, 4 Jul 2017 08:55:25 +0100
> From: "Hunt, David" <david.hunt@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: dev@dpdk.org, harry.van.haaren@intel.com, Gage Eads
> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
> Subject: Re: [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> Hi Jerin,
Hi David,
I have checked the v6. Some comments below.
>
>
> On 3/7/2017 4:57 AM, Jerin Jacob wrote:
> > -----Original Message-----
> > > From: Harry van Haaren<harry.van.haaren@intel.com>
> > >
> > > This commit adds a sample app for the eventdev library.
> > > The app has been tested with DPDK 17.05-rc2, hence this
> > > release (or later) is recommended.
> > >
> > > The sample app showcases a pipeline processing use-case,
> > > with event scheduling and processing defined per stage.
> > > The application receives traffic as normal, with each
> > > packet traversing the pipeline. Once the packet has
> > > been processed by each of the pipeline stages, it is
> > > transmitted again.
> > >
> > > The app provides a framework to utilize cores for a single
> > > role or multiple roles. Examples of roles are the RX core,
> > > TX core, Scheduling core (in the case of the event/sw PMD),
> > > and worker cores.
> > >
> > > Various flags are available to configure numbers of stages,
> > > cycles of work at each stage, type of scheduling, number of
> > > worker cores, queue depths etc. For a full explanation,
> > > please refer to the documentation.
> > A few comments on bugs, and "to avoid future rework on the base code when
> > HW PMD is introduced". As we agreed, we will keep the functionality intact to
> > provide an application to test ethdev + eventdev with the _SW PMD_ for 17.08.
> >
>
> Sure OK. I will Address.
>
> > > ---
> > > examples/Makefile | 2 +
> > > examples/eventdev_pipeline/Makefile | 49 ++
> > > examples/eventdev_pipeline/main.c | 999 ++++++++++++++++++++++++++++++++++++
> > > 3 files changed, 1050 insertions(+)
> > > create mode 100644 examples/eventdev_pipeline/Makefile
> > > create mode 100644 examples/eventdev_pipeline/main.c
> > Do we need to update the MAINTAINERS file?
>
> Updated
> > > diff --git a/examples/Makefile b/examples/Makefile
> > > index 6298626..a6dcc2b 100644
> > > --- a/examples/Makefile
> > > +++ b/examples/Makefile
> > > @@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
> > > endif
> > > endif
> > > +DIRS-y += eventdev_pipeline
> > Can you change to eventdev_pipeline_sw_pmd to emphasize the scope?
> > We will rename to eventdev_pipeline once it is working effectively on both SW
> > and HW PMDs with ethdev.
>
> OK, I've updated the directory, app name and relevant docs across the board
> so they're all
> eventdev_pipeline_sw_pmd. This should make it clear to anyone using it that
> it's for the
> sw_pmd only, and an updated version will be provided later.
>
>
> > > +
> > > include $(RTE_SDK)/mk/rte.extsubdir.mk
> > > diff --git a/examples/eventdev_pipeline/Makefile b/examples/eventdev_pipeline/Makefile
> > > new file mode 100644
> > > index 0000000..4c26e15
> > > --- /dev/null
> > > +++ b/examples/eventdev_pipeline/Makefile
> > > @@ -0,0 +1,49 @@
> > > +# BSD LICENSE
> > > +#
> > > +# Copyright(c) 2016 Intel Corporation. All rights reserved.
> > 2016-2017
>
> Done.
>
> > > +#
> > > +# Redistribution and use in source and binary forms, with or without
> > > +# modification, are permitted provided that the following conditions
> > > +# are met:
> > > +#
> > > +
> > > +static unsigned int active_cores;
> > > +static unsigned int num_workers;
> > > +static long num_packets = (1L << 25); /* do ~32M packets */
> > > +static unsigned int num_fids = 512;
> > > +static unsigned int num_stages = 1;
> > > +static unsigned int worker_cq_depth = 16;
Any reason not to move this to struct config_data?
> > > +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> > > +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> > > +static int16_t qid[MAX_NUM_STAGES] = {-1};
> > > +static int worker_cycles;
Used in the fastpath; move to struct fastpath_data.
> > > +static int enable_queue_priorities;
struct config_data ?
> > > +
> > > +struct prod_data {
> > > + uint8_t dev_id;
> > > + uint8_t port_id;
> > > + int32_t qid;
> > > + unsigned int num_nic_ports;
> > > +} __rte_cache_aligned;
> > > +
> > > +struct cons_data {
> > > + uint8_t dev_id;
> > > + uint8_t port_id;
> > > +} __rte_cache_aligned;
> > > +
> > > +static struct prod_data prod_data;
> > > +static struct cons_data cons_data;
> > > +
> > > +struct worker_data {
> > > + uint8_t dev_id;
> > > + uint8_t port_id;
> > > +} __rte_cache_aligned;
> > > +
> > > +static unsigned int *enqueue_cnt;
> > > +static unsigned int *dequeue_cnt;
> > Not been consumed. Remove it.
>
> Done.
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +
> > > +static inline void
> > > +work(struct rte_mbuf *m)
> > > +{
> > > + struct ether_hdr *eth;
> > > + struct ether_addr addr;
> > > +
> > > + /* change mac addresses on packet (to use mbuf data) */
> > > + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> > > + ether_addr_copy(&eth->d_addr, &addr);
> > > + ether_addr_copy(&eth->s_addr, &eth->d_addr);
> > > + ether_addr_copy(&addr, &eth->s_addr);
> > If it is an even number of stages (say 2), will the mac swap be negated? As we
> > are swapping at each stage, NOT in the consumer.
>
> The mac swap is just to touch the mbuf. It does not matter if it is negated.
Are you sure, or am I missing something here? For example, if I add the
following piece of code, then irrespective of the number of stages it should
send the same packet if the source packet is the same. Right?
But that's not happening as expected (check the src and dest mac addresses):
stage == 1
00000000: 00 0F B7 11 27 2B 00 0F B7 11 27 2C 08 00 45 00 |....'+....',..E.
00000010: 00 2E 00 00 00 00 04 11 D5 B1 C0 A8 12 02 0E 01 |................
00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
stage == 2
00000000: 00 0F B7 11 27 2C 00 0F B7 11 27 2B 08 00 45 00 |....',....'+..E.
00000010: 00 2E 00 00 00 00 04 11 D5 B0 C0 A8 12 03 0E 01 |................
00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index c62cba2..a7aaf37 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -147,8 +147,9 @@ consumer(void)
if (start_time == 0)
@@ -157,6 +158,7 @@ consumer(void)
received += n;
for (i = 0; i < n; i++) {
uint8_t outport = packets[i].mbuf->port;
+ rte_pktmbuf_dump(stdout, packets[i].mbuf, 64);
rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
packets[i].mbuf);
Either fix the mac swap properly or remove it.
> > > +
> > > + if (!quiet) {
> > > + printf("\nPort Workload distribution:\n");
> > > + uint32_t i;
> > > + uint64_t tot_pkts = 0;
> > > + uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
> > > + for (i = 0; i < num_workers; i++) {
> > > + char statname[64];
> > > + snprintf(statname, sizeof(statname), "port_%u_rx",
> > > + worker_data[i].port_id);
> > > + pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
> > > + dev_id, statname, NULL);
> > As discussed, check that the given xstat is supported on the PMD first.
>
> Checking has now been implemented. It's done by calling
> rte_event_dev_xstats_by_name_get() and seeing if the result is -ENOTSUP.
> However, there is a bug in the function in that it is declared as returning
> a uint64_t, but then returns -ENOTSUP, so I have to cast the -ENOTSUP to a
> uint64_t for comparison. This will need to be fixed when the function is
> patched.
>
> retval = rte_event_dev_xstats_by_name_get(
> dev_id, statname, NULL);
> if (retval != (uint64_t)-ENOTSUP) {
> pkts_per_wkr[i] = retval;
> tot_pkts += pkts_per_wkr[i];
> }
>
>
>
> > > + tot_pkts += pkts_per_wkr[i];
> > > + }
> > > + for (i = 0; i < num_workers; i++) {
> > > + float pc = pkts_per_wkr[i] * 100 /
> > > + ((float)tot_pkts);
This will print NaN.
How about moving the specific xstat read into a static inline function:
port_stat(...)
{
char statname[64];
uint64_t retval;
snprintf(statname, sizeof(statname),"port_%u_rx",
worker_data[i].port_id);
retval = rte_event_dev_xstats_by_name_get(
dev_id, statname, NULL);
}
and add a check at the beginning of the "if" condition, i.e.
if (!cdata.quiet && port_stat(,,) != (uint64_t)-ENOTSUP) {
printf("\nPort Workload distribution:\n");
...
}
> > > + printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
> > > + i, pc, pkts_per_wkr[i]);
> > > + }
> > > +
> > > + }
> > > +
> > > + return 0;
> > > +}
^ permalink raw reply [flat|nested] 64+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-05 5:30 ` Jerin Jacob
@ 2017-07-05 11:15 ` Hunt, David
2017-07-06 3:31 ` Jerin Jacob
0 siblings, 1 reply; 64+ messages in thread
From: Hunt, David @ 2017-07-05 11:15 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
Hi Jerin,
On 5/7/2017 6:30 AM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Tue, 4 Jul 2017 08:55:25 +0100
>> From: "Hunt, David" <david.hunt@intel.com>
>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> CC: dev@dpdk.org, harry.van.haaren@intel.com, Gage Eads
>> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
>> Subject: Re: [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
>> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>> Thunderbird/45.8.0
>>
>> Hi Jerin,
> Hi David,
>
> I have checked the v6. Some comments below.
>
>>
>> On 3/7/2017 4:57 AM, Jerin Jacob wrote:
>>> -----Original Message-----
>>>
--snip--
>>>> +#
>>>> +# Redistribution and use in source and binary forms, with or without
>>>> +# modification, are permitted provided that the following conditions
>>>> +# are met:
>>>> +#
>>>> +
>>>> +static unsigned int active_cores;
>>>> +static unsigned int num_workers;
>>>> +static long num_packets = (1L << 25); /* do ~32M packets */
>>>> +static unsigned int num_fids = 512;
>>>> +static unsigned int num_stages = 1;
>>>> +static unsigned int worker_cq_depth = 16;
> Any reason not to move this to struct config_data?
Sure. All vars now moved to either config_data or fastpath_data.
>>>> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
>>>> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
>>>> +static int16_t qid[MAX_NUM_STAGES] = {-1};
>>>> +static int worker_cycles;
> Used in the fastpath; move to struct fastpath_data.
Performance drops by ~20% when I put these in fastpath_data. No
performance drop when in config_data. I think we need to leave them in
config_data for now.
>>>> +static int enable_queue_priorities;
> struct config_data ?
Done.
>>>> +
>>>> +struct prod_data {
>>>> + uint8_t dev_id;
>>>> + uint8_t port_id;
>>>> + int32_t qid;
>>>> + unsigned int num_nic_ports;
>>>> +} __rte_cache_aligned;
>>>> +
>>>> +struct cons_data {
>>>> + uint8_t dev_id;
>>>> + uint8_t port_id;
>>>> +} __rte_cache_aligned;
>>>> +
>>>> +static struct prod_data prod_data;
>>>> +static struct cons_data cons_data;
>>>> +
>>>> +struct worker_data {
>>>> + uint8_t dev_id;
>>>> + uint8_t port_id;
>>>> +} __rte_cache_aligned;
>>>> +
>>>> +static unsigned int *enqueue_cnt;
>>>> +static unsigned int *dequeue_cnt;
>>> Not been consumed. Remove it.
>> Done.
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +
>>>> +static inline void
>>>> +work(struct rte_mbuf *m)
>>>> +{
>>>> + struct ether_hdr *eth;
>>>> + struct ether_addr addr;
>>>> +
>>>> + /* change mac addresses on packet (to use mbuf data) */
>>>> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
>>>> + ether_addr_copy(&eth->d_addr, &addr);
>>>> + ether_addr_copy(&eth->s_addr, &eth->d_addr);
>>>> + ether_addr_copy(&addr, &eth->s_addr);
>>> If it is an even number of stages (say 2), will the mac swap be negated? As
>>> we are swapping at each stage, NOT in the consumer.
>> The mac swap is just to touch the mbuf. It does not matter if it is negated.
> Are you sure, or am I missing something here? For example, if I add the
> following piece of code, then irrespective of the number of stages it should
> send the same packet if the source packet is the same. Right?
> But that's not happening as expected (check the src and dest mac addresses):
>
> stage == 1
> 00000000: 00 0F B7 11 27 2B 00 0F B7 11 27 2C 08 00 45 00 |....'+....',..E.
> 00000010: 00 2E 00 00 00 00 04 11 D5 B1 C0 A8 12 02 0E 01 |................
> 00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
> 00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
>
> stage == 2
> 00000000: 00 0F B7 11 27 2C 00 0F B7 11 27 2B 08 00 45 00 |....',....'+..E.
> 00000010: 00 2E 00 00 00 00 04 11 D5 B0 C0 A8 12 03 0E 01 |................
> 00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
> 00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
>
>
> diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
> index c62cba2..a7aaf37 100644
> --- a/examples/eventdev_pipeline_sw_pmd/main.c
> +++ b/examples/eventdev_pipeline_sw_pmd/main.c
> @@ -147,8 +147,9 @@ consumer(void)
> if (start_time == 0)
> @@ -157,6 +158,7 @@ consumer(void)
> received += n;
> for (i = 0; i < n; i++) {
> uint8_t outport = packets[i].mbuf->port;
> + rte_pktmbuf_dump(stdout, packets[i].mbuf, 64);
> rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
> packets[i].mbuf);
>
> Either fix the mac swap properly or remove it.
Temporary fix added. Now reading in the addr and writing it back without
swapping. Not ideal; it will probably need more work in the future. Added a
FIXME in the code with agreement from Jerin.
>
>>>> +
>>>> + if (!quiet) {
>>>> + printf("\nPort Workload distribution:\n");
>>>> + uint32_t i;
>>>> + uint64_t tot_pkts = 0;
>>>> + uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
>>>> + for (i = 0; i < num_workers; i++) {
>>>> + char statname[64];
>>>> + snprintf(statname, sizeof(statname), "port_%u_rx",
>>>> + worker_data[i].port_id);
>>>> + pkts_per_wkr[i] = rte_event_dev_xstats_by_name_get(
>>>> + dev_id, statname, NULL);
>>> As discussed, check that the given xstat is supported on the PMD first.
>> Checking has now been implemented. It's done by calling
>> rte_event_dev_xstats_by_name_get() and seeing if the result is -ENOTSUP.
>> However, there is a bug in the function in that it is declared as returning
>> a uint64_t, but then returns -ENOTSUP, so I have to cast the -ENOTSUP to a
>> uint64_t for comparison. This will need to be fixed when the function is
>> patched.
>>
>> retval = rte_event_dev_xstats_by_name_get(
>> dev_id, statname, NULL);
>> if (retval != (uint64_t)-ENOTSUP) {
>> pkts_per_wkr[i] = retval;
>> tot_pkts += pkts_per_wkr[i];
>> }
>>
>>
>>
>>>> + tot_pkts += pkts_per_wkr[i];
>>>> + }
>>>> + for (i = 0; i < num_workers; i++) {
>>>> + float pc = pkts_per_wkr[i] * 100 /
>>>> + ((float)tot_pkts);
> This will print NaN.
>
> How about moving the specific xstat read into a static inline function:
> port_stat(...)
> {
> char statname[64];
> uint64_t retval;
> snprintf(statname, sizeof(statname),"port_%u_rx",
> worker_data[i].port_id);
> retval = rte_event_dev_xstats_by_name_get(
> dev_id, statname, NULL);
> }
> and add a check at the beginning of the "if" condition, i.e.
>
> if (!cdata.quiet && port_stat(,,) != (uint64_t)-ENOTSUP) {
> printf("\nPort Workload distribution:\n");
> ...
> }
>
Sure. Adding the port_stat simplifies a lot. Will do.
>>>> + printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
>>>> + i, pc, pkts_per_wkr[i]);
>>>> + }
>>>> +
>>>> + }
>>>> +
>>>> + return 0;
>>>> +}
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v7 0/3] next-eventdev: eventdev pipeline sample app
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-07-05 12:52 ` David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 1/3] examples/eventdev_pipeline: added " David Hunt
` (2 more replies)
0 siblings, 3 replies; 64+ messages in thread
From: David Hunt @ 2017-07-05 12:52 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
This patchset introduces a sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
Changes in patch v2:
* None, incorrect patch upload
Changes in patch v3:
* Re-work based on comments on mailing list. No major functional changes.
* Checkpatch cleanup of a couple of typos
Changes in patch v4:
* Re-named the app as eventdev_pipeline_sw, as it's aimed at showing the
functionality of the software PMD.
Changes in patch v5:
* Fixed 'make doc': renamed eventdev_pipeline to eventdev_pipeline_sw
* Fixed some typos in the eventdev programmers guide
Changes in patch v6:
* made name of dirs and app consistent - eventdev_pipeline_sw_pmd
* Added new files and dirs to MAINTAINERS
* Updated libeventdev docs based on Jerin's feedback
* Added some cleanup to eventdev_pipeline sw pmd sample app
Changes in patch v7:
* Moved remaining global vars into structs
* Changed mac address swap so that packet is not changed
* Cleaned up report on exit if some stats not supported by PMD
* Added .rst file dropped in v6
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters like number of stages, number of
worker cores, which cores are assigned to specific tasks, and work-
cycles per-stage in the pipeline can be configured.
Documentation for eventdev is added for the programmers guide and
sample app user guide, providing sample commands to run the app with,
and expected output.
[1/3] examples/eventdev_pipeline: added sample app
[2/3] doc: add sw eventdev pipeline to sample app ug
[3/3] doc: add eventdev library to programmers guide
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v7 1/3] examples/eventdev_pipeline: added sample app
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 0/3] next-eventdev: eventdev pipeline " David Hunt
@ 2017-07-05 12:52 ` David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: eventdev pipeline " David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 1 reply; 64+ messages in thread
From: David Hunt @ 2017-07-05 12:52 UTC (permalink / raw)
To: dev
Cc: jerin.jacob, harry.van.haaren, Gage Eads, Bruce Richardson, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure numbers of stages,
cycles of work at each stage, type of scheduling, number of
worker cores, queue depths etc. For a full explanation,
please refer to the documentation.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
MAINTAINERS | 1 +
examples/Makefile | 2 +
examples/eventdev_pipeline_sw_pmd/Makefile | 49 ++
examples/eventdev_pipeline_sw_pmd/main.c | 1016 ++++++++++++++++++++++++++++
4 files changed, 1068 insertions(+)
create mode 100644 examples/eventdev_pipeline_sw_pmd/Makefile
create mode 100644 examples/eventdev_pipeline_sw_pmd/main.c
diff --git a/MAINTAINERS b/MAINTAINERS
index d9dbf8f..860621a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -579,6 +579,7 @@ M: Harry van Haaren <harry.van.haaren@intel.com>
F: drivers/event/sw/
F: test/test/test_eventdev_sw.c
F: doc/guides/eventdevs/sw.rst
+F: examples/eventdev_pipeline_sw_pmd/
NXP DPAA2 Eventdev PMD
M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/examples/Makefile b/examples/Makefile
index c0e9c3b..97f12ad 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
endif
endif
+DIRS-y += eventdev_pipeline_sw_pmd
+
include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/eventdev_pipeline_sw_pmd/Makefile b/examples/eventdev_pipeline_sw_pmd/Makefile
new file mode 100644
index 0000000..de4e22c
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline_sw_pmd
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
new file mode 100644
index 0000000..91dddb1
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -0,0 +1,1016 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned int num_nic_ports;
+} __rte_cache_aligned;
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+struct fastpath_data {
+ volatile int done;
+ uint32_t rx_lock;
+ uint32_t tx_lock;
+ uint32_t sched_lock;
+ bool rx_single;
+ bool tx_single;
+ bool sched_single;
+ unsigned int rx_core[MAX_NUM_CORE];
+ unsigned int tx_core[MAX_NUM_CORE];
+ unsigned int sched_core[MAX_NUM_CORE];
+ unsigned int worker_core[MAX_NUM_CORE];
+ struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+};
+
+static struct fastpath_data *fdata;
+
+struct config_data {
+ unsigned int active_cores;
+ unsigned int num_workers;
+ long num_packets;
+ unsigned int num_fids;
+ int queue_type;
+ int worker_cycles;
+ int enable_queue_priorities;
+ int quiet;
+ int dump_dev;
+ int dump_dev_signal;
+ unsigned int num_stages;
+ unsigned int worker_cq_depth;
+ int16_t next_qid[MAX_NUM_STAGES+2];
+ int16_t qid[MAX_NUM_STAGES];
+};
+
+static struct config_data cdata = {
+ .num_packets = (1L << 25), /* do ~32M packets */
+ .num_fids = 512,
+ .queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .next_qid = {-1},
+ .qid = {-1},
+ .num_stages = 1,
+ .worker_cq_depth = 16
+};
+
+static bool
+core_in_use(unsigned int lcore_id) {
+ return (fdata->rx_core[lcore_id] || fdata->sched_core[lcore_id] ||
+ fdata->tx_core[lcore_id] || fdata->worker_core[lcore_id]);
+}
+
+
+static void
+eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned int _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t received;
+ static uint64_t last_pkts;
+ static uint64_t last_time;
+ static uint64_t start_time;
+ unsigned int i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, fdata->tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ last_time = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ /* Print out mpps every 1<<22 packets */
+ if (!cdata.quiet && received >= last_pkts + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t total_ms = (now - start_time) / freq_khz;
+ const uint64_t delta_ms = (now - last_time) / freq_khz;
+ uint64_t delta_pkts = received - last_pkts;
+
+ printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
+ "avg %.3f mpps [current %.3f mpps]\n",
+ received,
+ total_ms,
+ received / (total_ms * 1000.0),
+ delta_pkts / (delta_ms * 1000.0));
+ last_pkts = received;
+ last_time = now;
+ }
+
+ cdata.num_packets -= n;
+ if (cdata.num_packets <= 0)
+ fdata->done = 1;
+
+ return 0;
+}
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE+2];
+ struct rte_event ev[BATCH_SIZE+2];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = cdata.queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+ return 0;
+}
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+{
+ if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
+ rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
+ }
+
+ if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
+ rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (cdata.dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ cdata.dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
+ }
+
+ if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
+ rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->tx_lock));
+ }
+}
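schedule_devices() above lets whichever core wins a compare-and-set on a shared lock word run the Rx, scheduler, or Tx role, and skips the lock entirely when the role is pinned to a single core (the rx_single/sched_single/tx_single fast path). A hedged standalone sketch of that claim/release pattern, using C11 atomics in place of rte_atomic32_cmpset (all names here are illustrative, not from the app):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* One lock word per role; 0 = free, 1 = claimed. */
static atomic_int role_lock;

/* Try to claim the role; returns true if this caller may run it.
 * When 'single' is set the role is pinned to one core, so no CAS
 * is needed -- mirroring the *_single fast path above.
 */
static bool
try_claim(bool single)
{
	int expected = 0;

	if (single)
		return true;
	return atomic_compare_exchange_strong(&role_lock, &expected, 1);
}

/* Release the role so another core may claim it next iteration. */
static void
release(bool single)
{
	if (!single)
		atomic_store(&role_lock, 0);
}
```

This keeps exactly one core in each role at a time while still letting the roles float across the configured core masks.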
+
+static inline void
+work(struct rte_mbuf *m)
+{
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ /*
+ * FIXME Swap mac address properly and also handle the
+ * case for both odd and even number of stages that the
+ * addresses end up the same at the end of the pipeline
+ */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(ð->d_addr, &addr);
+ ether_addr_copy(&addr, ð->d_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + cdata.worker_cycles)
+ rte_pause();
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned int lcore_id = rte_lcore_id();
+
+ while (!fdata->done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!fdata->worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ const uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == cdata.qid[0])
+ events[i].flow_id = events[i].mbuf->hash.rss
+ % cdata.num_fids;
+
+ events[i].queue_id = cdata.next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+ events[i].sched_type = cdata.queue_type;
+
+ work(events[i].mbuf);
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !fdata->done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!cdata.quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
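The enqueue loop at the end of the worker above handles partial acceptance: rte_event_enqueue_burst() may take fewer events than offered, so the unsent tail is re-offered until everything is accepted (the app additionally bails out when shutdown is signalled). A minimal standalone sketch of that retry pattern, with a stub burst function standing in for the eventdev call (the stub and its 2-event cap are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Stub standing in for rte_event_enqueue_burst(): accepts at most
 * two events per call so the retry loop below actually loops.
 */
static uint16_t
enqueue_stub(const int *ev, uint16_t n)
{
	(void)ev;
	return n < 2 ? n : 2;
}

/* Sketch of the worker's partial-enqueue retry loop: keep offering
 * the unsent tail until every event of the burst is accepted.
 */
static uint16_t
enqueue_all(const int *ev, uint16_t nb_rx)
{
	uint16_t nb_tx = enqueue_stub(ev, nb_rx);

	while (nb_tx < nb_rx)
		nb_tx += enqueue_stub(ev + nb_tx, nb_rx - nb_tx);
	return nb_tx;
}
```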
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned int count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove all blank characters ahead and after .
+ * Remove 0x/0X if exists.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
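Since parse_coremask() depends only on libc, its behaviour can be sketched and sanity-checked in isolation. The condensed stand-in below keeps the same digit-by-digit, least-significant-first mask build (optional 0x/0X prefix, reject non-hex characters), but returns 0 on error instead of (uint64_t)-1 to keep the sketch short:

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>
#include <string.h>

/* Condensed stand-in for parse_coremask(): hex string -> bitmask.
 * Returns 0 on invalid input (the app signals error differently).
 */
static uint64_t
coremask_sketch(const char *s)
{
	uint64_t mask = 0;
	int i, idx = 0;

	if (s == NULL)
		return 0;
	if (s[0] == '0' && (s[1] == 'x' || s[1] == 'X'))
		s += 2;
	/* Walk from the least significant hex digit upward,
	 * four mask bits per digit.
	 */
	for (i = (int)strlen(s) - 1; i >= 0 && idx < 64; i--, idx += 4) {
		if (!isxdigit((unsigned char)s[i]))
			return 0;
		int val = isdigit((unsigned char)s[i]) ? s[i] - '0' :
			tolower((unsigned char)s[i]) - 'a' + 10;
		mask |= (uint64_t)val << idx;
	}
	return mask;
}
```

For example, the documented worker mask "FF00" parses to bits 8 through 15 set.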
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_demo [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
+ " -w, --worker-mask=core mask Run worker on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W --work-cycles=N Worker cycles (default 0)\n"
+ " -P --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse cli options*/
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ cdata.num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ cdata.num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ cdata.num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ cdata.worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ cdata.worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ cdata.enable_queue_priorities = 1;
+ break;
+ case 'o':
+ cdata.queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ cdata.queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ cdata.quiet = 1;
+ break;
+ case 'D':
+ cdata.dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ fdata->rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ fdata->tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ fdata->sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("Core part of pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (cdata.num_stages == 0 || cdata.num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ fdata->rx_core[i] = !!(rx_lcore_mask & (1UL << i));
+ fdata->tx_core[i] = !!(tx_lcore_mask & (1UL << i));
+ fdata->sched_core[i] = !!(sched_lcore_mask & (1UL << i));
+ fdata->worker_core[i] = !!(worker_lcore_mask & (1UL << i));
+
+ if (fdata->worker_core[i])
+ cdata.num_workers++;
+ if (core_in_use(i))
+ cdata.active_cores++;
+ }
+}
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned int)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned int num_ports)
+{
+ uint8_t portid;
+ unsigned int i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ fdata->tx_buf[i] =
+ rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (fdata->tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(fdata->tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(fdata->tx_buf[i],
+ eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* +1 queue is for a SINGLE_LINK TX stage */
+ const uint8_t nb_queues = cdata.num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = cdata.num_workers + 2;
+ struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = cdata.worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = cdata.queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned int i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ if (dev_info.max_event_port_dequeue_depth <
+ config.nb_event_port_dequeue_depth)
+ config.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+ if (dev_info.max_event_port_enqueue_depth <
+ config.nb_event_port_enqueue_depth)
+ config.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0) {
+ printf("%d: Error configuring device\n", __LINE__);
+ return -1;
+ }
+
+ /* Q creation - one load balanced queue per pipeline stage */
+ printf(" Stages:\n");
+ for (i = 0; i < cdata.num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ cdata.qid[i] = i;
+ cdata.next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (cdata.enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < cdata.num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < cdata.num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+
+ if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+
+ if (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ rx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (rx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ rx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = cdata.qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
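The prio_delta computation inside setup_eventdev() spaces the worker-stage priorities evenly below RTE_EVENT_DEV_PRIORITY_LOWEST, leaving headroom for the higher-priority SINGLE_LINK TX queue; stages closer to TX get numerically lower (i.e. higher) priority. A hedged standalone sketch of that arithmetic, with the DPDK constant inlined as a plain number (255 is the documented LOWEST value):

```c
#include <assert.h>
#include <stdint.h>

#define PRIO_LOWEST 255 /* RTE_EVENT_DEV_PRIORITY_LOWEST */

/* Priority of worker stage 'i' given 'nb_queues' total queues
 * (stages + 1 TX queue), mirroring the prio_delta logic above.
 */
static uint8_t
stage_priority(unsigned int i, unsigned int nb_queues)
{
	const uint32_t prio_delta = (PRIO_LOWEST - 1) / nb_queues;

	return (uint8_t)(PRIO_LOWEST - prio_delta * i);
}
```

With 4 stages (5 queues total), prio_delta is 50, so stage 0 runs at priority 255 and stage 3 at 105, still leaving room above for the TX queue at HIGHEST.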
+
+static void
+signal_handler(int signum)
+{
+ if (fdata->done)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ fdata->done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+static inline uint64_t
+port_stat(int dev_id, int32_t p)
+{
+ char statname[64];
+ snprintf(statname, sizeof(statname), "port_%u_rx", p);
+ return rte_event_dev_xstats_by_name_get(dev_id, statname, NULL);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned int num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ fdata = rte_malloc(NULL, sizeof(struct fastpath_data), 0);
+ if (fdata == NULL)
+ rte_panic("Out of memory\n");
+
+ /* Parse cli options*/
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned int cores_needed = cdata.active_cores;
+
+ if (!cdata.quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", cdata.num_workers);
+ printf("\tpackets: %lu\n", cdata.num_packets);
+ printf("\tQueue-prio: %u\n", cdata.enable_queue_priorities);
+ if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned int ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No eventdev devices found. Pass in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0\n");
+
+ worker_data = rte_calloc(0, cdata.num_workers,
+ sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!fdata->rx_core[lcore_id] &&
+ !fdata->worker_core[lcore_id] &&
+ !fdata->tx_core[lcore_id] &&
+ !fdata->sched_core[lcore_id])
+ continue;
+
+ if (fdata->rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (fdata->tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (fdata->sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (fdata->worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err) {
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ continue;
+ }
+ if (fdata->worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (cdata.dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!cdata.quiet && (port_stat(dev_id, worker_data[0].port_id) !=
+ (uint64_t)-ENOTSUP)) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < cdata.num_workers; i++) {
+ pkts_per_wkr[i] =
+ port_stat(dev_id, worker_data[i].port_id);
+ tot_pkts += pkts_per_wkr[i];
+ }
+ for (i = 0; i < cdata.num_workers; i++) {
+ float pc = pkts_per_wkr[i] * 100 /
+ ((float)tot_pkts);
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
* [dpdk-dev] [PATCH v7 2/3] doc: add sw eventdev pipeline to sample app ug
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-07-05 12:52 ` David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 3/3] doc: add eventdev library to programmers guide David Hunt
2 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-07-05 12:52 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline_sw.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
MAINTAINERS | 1 +
.../sample_app_ug/eventdev_pipeline_sw_pmd.rst | 190 +++++++++++++++++++++
doc/guides/sample_app_ug/index.rst | 1 +
3 files changed, 192 insertions(+)
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 860621a..4479906 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -580,6 +580,7 @@ F: drivers/event/sw/
F: test/test/test_eventdev_sw.c
F: doc/guides/eventdevs/sw.rst
F: examples/eventdev_pipeline_sw_pmd/
+F: doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
NXP DPAA2 Eventdev PMD
M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst b/doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
new file mode 100644
index 0000000..b1b18dd
--- /dev/null
+++ b/doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
@@ -0,0 +1,190 @@
+
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Eventdev Pipeline SW PMD Sample Application
+===========================================
+
+The eventdev pipeline sample application demonstrates the usage of the
+eventdev API using the software PMD. It shows how an
+application can configure a pipeline and assign a set of worker cores to
+perform the processing required.
+
+The application has a range of command line arguments allowing it to be
+configured for various numbers of worker cores, stages, queue depths and cycles per
+stage of work. This is useful for performance testing as well as quickly testing
+a particular pipeline configuration.
+
+
+Compiling the Application
+-------------------------
+
+To compile the application:
+
+#. Go to the sample application directory:
+
+ .. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}/examples/eventdev_pipeline_sw_pmd
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application has a number of command line options. These allow specification
+of the eventdev PMD to use, and of several attributes of the processing
+pipeline.
+
+An example eventdev pipeline running with the software eventdev PMD using
+these settings is shown below:
+
+ * ``-r1``: core mask 0x1 for RX
+ * ``-t1``: core mask 0x1 for TX
+ * ``-e4``: core mask 0x4 for the software scheduler
+ * ``-w FF00``: core mask for worker cores, 8 cores (cores 8 to 15)
+ * ``-s4``: 4 atomic stages
+ * ``-n0``: process infinite packets (run forever)
+ * ``-c32``: worker dequeue depth of 32
+ * ``-W1000``: do 1000 cycles of work per packet in each stage
+ * ``-D``: dump statistics on exit
+
+.. code-block:: console
+
+ ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+
+The application has some sanity checking built-in, so if there is a function
+(e.g. the RX core) which doesn't have a CPU core mask assigned, the application
+will print an error message:
+
+.. code-block:: console
+
+ Core part of pipeline was not assigned any cores. This will stall the
+ pipeline, please check core masks (use -h for details on setting core masks):
+ rx: 0
+ tx: 1
+
+Configuration of the eventdev is covered in detail in the programmers guide;
+see the Event Device Library section.
+
+
+Observing the Application
+-------------------------
+
+At runtime the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit the
+worker statistics are printed, along with a full dump of the PMD statistics if
+required. The following sections show sample output for each of the output
+types.
+
+Configuration
+~~~~~~~~~~~~~
+
+This provides an overview of the pipeline, the scheduling type at each stage,
+and the values of options such as how many flows to use and which eventdev
+PMD is in use. See the following sample output
+for details:
+
+.. code-block:: console
+
+ Config:
+ ports: 2
+ workers: 8
+ packets: 0
+ priorities: 1
+ Queue-prio: 0
+ qid0 type: atomic
+ Cores available: 44
+ Cores used: 10
+ Eventdev 0: event_sw
+ Stages:
+ Stage 0, Type Atomic Priority = 128
+ Stage 1, Type Atomic Priority = 128
+ Stage 2, Type Atomic Priority = 128
+ Stage 3, Type Atomic Priority = 128
+
+Runtime
+~~~~~~~
+
+At runtime, the statistics of the consumer are printed, stating the number of
+packets received, runtime in milliseconds, average mpps, and current mpps.
+
+.. code-block:: console
+
+ # consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
+
+Shutdown
+~~~~~~~~
+
+At shutdown, the application prints the number of packets received and
+transmitted, and an overview of the distribution of work across worker cores.
+
+.. code-block:: console
+
+ Signal 2 received, preparing to exit...
+ worker 12 thread done. RX=4966581 TX=4966581
+ worker 13 thread done. RX=4963329 TX=4963329
+ worker 14 thread done. RX=4953614 TX=4953614
+ worker 0 thread done. RX=0 TX=0
+ worker 11 thread done. RX=4970549 TX=4970549
+ worker 10 thread done. RX=4986391 TX=4986391
+ worker 9 thread done. RX=4970528 TX=4970528
+ worker 15 thread done. RX=4974087 TX=4974087
+ worker 8 thread done. RX=4979908 TX=4979908
+ worker 2 thread done. RX=0 TX=0
+
+ Port Workload distribution:
+ worker 0 : 12.5 % (4979876 pkts)
+ worker 1 : 12.5 % (4970497 pkts)
+ worker 2 : 12.5 % (4986359 pkts)
+ worker 3 : 12.5 % (4970517 pkts)
+ worker 4 : 12.5 % (4966566 pkts)
+ worker 5 : 12.5 % (4963297 pkts)
+ worker 6 : 12.5 % (4953598 pkts)
+ worker 7 : 12.5 % (4974055 pkts)
+
+To get a full dump of the state of the eventdev PMD, pass the ``-D`` flag to
+this application. When the app is terminated using ``Ctrl+C``, the
+``rte_event_dev_dump()`` function is called, resulting in a dump of the
+statistics that the PMD provides. The statistics provided depend on the PMD
+used, see the Event Device Drivers section for a list of eventdev PMDs.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..f9239e3 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline_sw_pmd
dist_app
vm_power_management
tep_termination
--
2.7.4
* [dpdk-dev] [PATCH v7 3/3] doc: add eventdev library to programmers guide
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 1/3] examples/eventdev_pipeline: added " David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
@ 2017-07-05 12:52 ` David Hunt
2 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-07-05 12:52 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walkthrough of a simple two-stage atomic pipeline
provides the reader with a step by step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
doc/guides/prog_guide/eventdev.rst | 394 +++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1389 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..908d123
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,394 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader.
+
+The eventdev framework introduces the event driven programming model. In a
+polling model, lcores poll ethdev ports and associated Rx queues directly
+to look for a packet. By contrast, in an event driven model, lcores call a
+scheduler that selects packets for them based on programmer-specified
+criteria. The event driven model offers applications automatic multicore
+scaling, dynamic load balancing, pipelining, packet ingress order
+maintenance and synchronization services, which together simplify
+application packet processing.
+
+By introducing an event driven programming model, DPDK can support both polling
+and event driven programming models for packet processing, and applications are
+free to choose whatever model (or combination of the two) best suits their
+needs.
+
+A step-by-step walk-through of the eventdev design is available in the `API
+Walk-through`_ section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct that contains a
+payload and the metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16-byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, e.g. ``RTE_EVENT_TYPE_ETHDEV``
+  or ``RTE_EVENT_TYPE_CPU``.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+  the same event_type (see above).
+* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
+  eventdev about the status of the event - valid values are NEW, FORWARD or
+  RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+  on this event - valid values are RTE_SCHED_TYPE_ORDERED, ATOMIC and
+  PARALLEL.
+* ``queue_id`` - The identifier of the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see RTE_EVENT_DEV_PRIORITY.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for payload, allowing flexibility in what
+the actual event being scheduled is. The payload is a union of the following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three items in a union occupy the same 64 bits at the end of the rte_event
+structure. The application can utilize the 64 bits directly by accessing the
+u64 variable, while the event_ptr and mbuf are provided as convenience
+variables. For example, the mbuf pointer in the union can be used to schedule
+a DPDK packet.
+
+Queues
+~~~~~~
+
+An event queue is a queue containing events that are scheduled by the event
+device. An event queue contains events of different flows associated with
+scheduling types, such as atomic, ordered, or parallel.
+
+Queue All Types Capable
+^^^^^^^^^^^^^^^^^^^^^^^
+
+If the RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability bit is set for the event
+device, then events of any type may be sent to any queue. Otherwise, each
+queue only supports events of the type that it was created with.
+
+Queue All Types Incapable
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In this case, each stage has a specified scheduling type. The application
+configures each queue for a specific type of scheduling, and just enqueues all
+events to the eventdev. An example of a PMD of this type is the eventdev
+software PMD.
+
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must restore the packets to their ingress order before
+they are enqueued to the downstream queue. Parallel allows sending all flows
+to all CPU cores, without any re-ordering guarantees.
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can only be
+linked to a single port (see below for port and queue linking details).
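As a sketch of how this flag is used (flag and field names as in the 17.05-era eventdev API; ``dev_id`` and the queue id are assumed from the walk-through later in this document), a single-link queue could be configured like this:

```c
/* Sketch: configure queue 2 as a single-link (FIFO-like) queue,
 * which may only be linked to one port. */
struct rte_event_queue_conf single_link_conf = {
    .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
    .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
};
int err = rte_event_queue_setup(dev_id, 2 /* queue_id */, &single_link_conf);
```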
+
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walk-through
+----------------
+
+This section will introduce the reader to the eventdev API, showing how to
+create and configure an eventdev and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walk-through:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high level overview of the setup steps is:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues and 6
+ports. The 3 queues consist of 2 Atomic queues and 1 Single-Link queue, while
+the 6 ports consist of 4 worker ports, 1 RX port and 1 TX port.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walk-through assumes that dev_id is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walk-through assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
+
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once queues are set up successfully, create the ports as required. Each port
+should be set up with its corresponding port_conf type: worker_conf for
+worker cores, rx_conf and tx_conf for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ /* Pass rx_conf, worker_conf or tx_conf here, matching the core
+ * that will use the port being set up. */
+ int err = rte_event_port_setup(dev_id, port_id, &rx_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
+
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes
+input from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t dev_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for (int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing
+events. Note that all queues must be linked before the instance is started:
+if any queue is left unlinked, enqueuing to that queue will cause
+back-pressure, and the application will eventually stall due to no space
+being available in the eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received from a ``rte_eth_rx_burst()``
+call as normal. The following code shows how those packets can be enqueued
+into the eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which
+should indicate to the worker what workload needs to be performed on the
+event. Once done, the worker can update ``event.queue_id`` to a new value,
+to send the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, worker_port_id, events, nb_rx);
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when the packet is ready for egress or needs to be dropped, we need
+to inform the eventdev that the packet is no longer being handled by the
+application. This is done implicitly by the next call to dequeue() or
+dequeue_burst(), which indicates that the previous burst of packets is no
+longer in use by the application.
+
+An event driven worker thread has the following typical workflow on the
+fastpath:
+
+.. code-block:: c
+
+ while (1) {
+ rte_event_dequeue_burst(...);
+ (event processing)
+ rte_event_enqueue_burst(...);
+ }
+
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, using either a run-to-completion or a pipeline processing model.
+The queues and ports abstract the logical functionality of an eventdev,
+providing the application with a generic method to schedule events. With the
+flexible PMD infrastructure, applications benefit from improvements to
+existing eventdevs and from the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-05 11:15 ` Hunt, David
@ 2017-07-06 3:31 ` Jerin Jacob
2017-07-06 10:04 ` Hunt, David
0 siblings, 1 reply; 64+ messages in thread
From: Jerin Jacob @ 2017-07-06 3:31 UTC (permalink / raw)
To: Hunt, David; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
-----Original Message-----
> Date: Wed, 5 Jul 2017 12:15:51 +0100
> From: "Hunt, David" <david.hunt@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: dev@dpdk.org, harry.van.haaren@intel.com, Gage Eads
> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
> Subject: Re: [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> Hi Jerin,
>
>
> On 5/7/2017 6:30 AM, Jerin Jacob wrote:
> > -----Original Message-----
> > > Date: Tue, 4 Jul 2017 08:55:25 +0100
> > > From: "Hunt, David" <david.hunt@intel.com>
> > > To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > CC: dev@dpdk.org, harry.van.haaren@intel.com, Gage Eads
> > > <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
> > > Subject: Re: [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
> > > User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> > > Thunderbird/45.8.0
> > >
> > > Hi Jerin,
> > Hi David,
> >
> > I have checked the v6. Some comments below.
> >
> > >
> > > On 3/7/2017 4:57 AM, Jerin Jacob wrote:
> > > > -----Original Message-----
> > > >
> --snip--
> > > > > +#
> > > > > +# Redistribution and use in source and binary forms, with or without
> > > > > +# modification, are permitted provided that the following conditions
> > > > > +# are met:
> > > > > +#
> > > > > +
> > > > > +static unsigned int active_cores;
> > > > > +static unsigned int num_workers;
> > > > > +static long num_packets = (1L << 25); /* do ~32M packets */
> > > > > +static unsigned int num_fids = 512;
> > > > > +static unsigned int num_stages = 1;
> > > > > +static unsigned int worker_cq_depth = 16;
> > Any reason not move this to struct config_data?
>
> Sure. All vars now moved to either config_data or fastpath_data.
>
> > > > > +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
> > > > > +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
> > > > > +static int16_t qid[MAX_NUM_STAGES] = {-1};
> > > > > +static int worker_cycles;
> > used in fastpath move to struct fastpath_data.
>
> Performance drops by ~20% when I put these in fastpath_data. No performance
> drop when in config_data.
> I think we need to leave them in config_data for now.
I checked v7 and it looks OK to merge. Can you fix the following minor issues with
checkpatch and check-git-log.sh?
check-git-log.sh
-----------------
Wrong headline lowercase:
doc: add sw eventdev pipeline to sample app ug
### examples/eventdev_pipeline: added sample app
Note:
Change application to new name.
checkpatch.sh
-----------------
WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
using 'consumer', this function's name, in a string
#294: FILE: examples/eventdev_pipeline_sw_pmd/main.c:178:
+ printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
using 'worker', this function's name, in a string
#453: FILE: examples/eventdev_pipeline_sw_pmd/main.c:337:
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
total: 0 errors, 2 warnings, 1078 lines checked
I will give a pull request to Thomas on Friday morning. I will include this change set
in the pull request.
Regarding the performance drop, can you add __rte_cache_aligned on those
variables which create the regression when moving to the rte_malloc area? The
cache line could be shared. If not fixed now, that's fine; we will look into it later.
Also, can you share your command to test for this regression? I will also try
on an x86 box if I get time.
>
> > > > > +static int enable_queue_priorities;
> > struct config_data ?
>
> Done.
>
> > > > > +
> > > > > +struct prod_data {
> > > > > + uint8_t dev_id;
> > > > > + uint8_t port_id;
> > > > > + int32_t qid;
> > > > > + unsigned int num_nic_ports;
> > > > > +} __rte_cache_aligned;
> > > > > +
> > > > > +struct cons_data {
> > > > > + uint8_t dev_id;
> > > > > + uint8_t port_id;
> > > > > +} __rte_cache_aligned;
> > > > > +
> > > > > +static struct prod_data prod_data;
> > > > > +static struct cons_data cons_data;
> > > > > +
> > > > > +struct worker_data {
> > > > > + uint8_t dev_id;
> > > > > + uint8_t port_id;
> > > > > +} __rte_cache_aligned;
> > > > > +
> > > > > +static unsigned int *enqueue_cnt;
> > > > > +static unsigned int *dequeue_cnt;
> > > > Not been consumed. Remove it.
> > > Done.
> > > > > +
> > > > > + return 0;
> > > > > +}
> > > > > +
> > > > > +
> > > > > +static inline void
> > > > > +work(struct rte_mbuf *m)
> > > > > +{
> > > > > + struct ether_hdr *eth;
> > > > > + struct ether_addr addr;
> > > > > +
> > > > > + /* change mac addresses on packet (to use mbuf data) */
> > > > > + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
> > > > > + ether_addr_copy(ð->d_addr, &addr);
> > > > > + ether_addr_copy(ð->s_addr, ð->d_addr);
> > > > > + ether_addr_copy(&addr, ð->s_addr);
> > > > If it is even number of stages(say 2), Will mac swap be negated? as we are
> > > > swapping on each stage NOT in consumer?
> > > The mac swap is just to touch the mbuf. It does not matter if it is negated.
> > Are you sure or I am missing something here? for example,If I add following piece
> > of code, irrespective number of stages it should send the same packet if
> > source packet is same. Right ?
> > But not happening as expected(Check the src and dest mac address)
> >
> > stage == 1
> > 00000000: 00 0F B7 11 27 2B 00 0F B7 11 27 2C 08 00 45 00 |....'+....',..E.
> > 00000010: 00 2E 00 00 00 00 04 11 D5 B1 C0 A8 12 02 0E 01 |................
> > 00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
> > 00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
> >
> > stage == 2
> > 00000000: 00 0F B7 11 27 2C 00 0F B7 11 27 2B 08 00 45 00 |....',....'+..E.
> > 00000010: 00 2E 00 00 00 00 04 11 D5 B0 C0 A8 12 03 0E 01 |................
> > 00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
> > 00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
> >
> >
> > diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
> > index c62cba2..a7aaf37 100644
> > --- a/examples/eventdev_pipeline_sw_pmd/main.c
> > +++ b/examples/eventdev_pipeline_sw_pmd/main.c
> > @@ -147,8 +147,9 @@ consumer(void)
> > if (start_time == 0)
> > @@ -157,6 +158,7 @@ consumer(void)
> > received += n;
> > for (i = 0; i < n; i++) {
> > uint8_t outport = packets[i].mbuf->port;
> > + rte_pktmbuf_dump(stdout, packets[i].mbuf, 64);
> > rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
> > packets[i].mbuf);
> >
> > Either fix the mac swap properly or remove it.
>
> Temporary fix added. Now reading in addr and writing it back without
> swapping. Not ideal,
> will probably need more work in the future. Added a FIXME in the code with
> agreement from Jerin.
On the agreement that, before moving to a generic application, it has to be
fixed in the SW PMD driver, or if it's specific behavior of the SW PMD then it has
to be abstracted in a proper way in the fastpath.
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-06 3:31 ` Jerin Jacob
@ 2017-07-06 10:04 ` Hunt, David
2017-07-06 10:39 ` Hunt, David
2017-07-06 13:26 ` Hunt, David
0 siblings, 2 replies; 64+ messages in thread
From: Hunt, David @ 2017-07-06 10:04 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
On 6/7/2017 4:31 AM, Jerin Jacob wrote:
> -----Original Message-----
>> Date: Wed, 5 Jul 2017 12:15:51 +0100
>> From: "Hunt, David" <david.hunt@intel.com>
>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>> CC: dev@dpdk.org, harry.van.haaren@intel.com, Gage Eads
>> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
>> Subject: Re: [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
>> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>> Thunderbird/45.8.0
>>
>> Hi Jerin,
>>
>>
>> On 5/7/2017 6:30 AM, Jerin Jacob wrote:
>>> -----Original Message-----
>>>> Date: Tue, 4 Jul 2017 08:55:25 +0100
>>>> From: "Hunt, David" <david.hunt@intel.com>
>>>> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>> CC: dev@dpdk.org, harry.van.haaren@intel.com, Gage Eads
>>>> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
>>>> Subject: Re: [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
>>>> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
>>>> Thunderbird/45.8.0
>>>>
>>>> Hi Jerin,
>>> Hi David,
>>>
>>> I have checked the v6. Some comments below.
>>>
>>>> On 3/7/2017 4:57 AM, Jerin Jacob wrote:
>>>>> -----Original Message-----
>>>>>
>> --snip--
>>>>>> +#
>>>>>> +# Redistribution and use in source and binary forms, with or without
>>>>>> +# modification, are permitted provided that the following conditions
>>>>>> +# are met:
>>>>>> +#
>>>>>> +
>>>>>> +static unsigned int active_cores;
>>>>>> +static unsigned int num_workers;
>>>>>> +static long num_packets = (1L << 25); /* do ~32M packets */
>>>>>> +static unsigned int num_fids = 512;
>>>>>> +static unsigned int num_stages = 1;
>>>>>> +static unsigned int worker_cq_depth = 16;
>>> Any reason not move this to struct config_data?
>> Sure. All vars now moved to either config_data or fastpath_data.
>>
>>>>>> +static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
>>>>>> +static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};
>>>>>> +static int16_t qid[MAX_NUM_STAGES] = {-1};
>>>>>> +static int worker_cycles;
>>> used in fastpath move to struct fastpath_data.
>> Performance drops by ~20% when I put these in fastpath_data. No performance
>> drop when in config_data.
>> I think we need to leave them in config_data for now.
> I checked v7 and it looks OK to merge. Can you fix the following minor issues with
> checkpatch and check-git-log.sh?
>
> check-git-log.sh
> -----------------
> Wrong headline lowercase:
> doc: add sw eventdev pipeline to sample app ug
Will do. Change "sw" to "SW".
> ### examples/eventdev_pipeline: added sample app
Will do. Add "_sw_pmd".
> Note:
> Change application to new name.
>
> checkpatch.sh
> -----------------
>
> WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
> using 'consumer', this function's name, in a string
> #294: FILE: examples/eventdev_pipeline_sw_pmd/main.c:178:
> + printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
>
> WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
> using 'worker', this function's name, in a string
> #453: FILE: examples/eventdev_pipeline_sw_pmd/main.c:337:
> + printf(" worker %u thread done. RX=%zu TX=%zu\n",
>
> total: 0 errors, 2 warnings, 1078 lines checked
These are false positives. The text in the messages is not meant to be
the function name.
If anything, I would prefer to change the function names to have a
"_thread" suffix?
> I will give a pull request to Thomas on Friday morning. I will include this change set
> in the pull request.
>
> Regarding the performance drop, can you add __rte_cache_aligned on those
> variables which create the regression when moving to the rte_malloc area? The
> cache line could be shared. If not fixed now, that's fine; we will look into it later.
I will investigate and post a new patch in a few hours.
> Also, can you share your command to test for this regression? I will also try
> on an x86 box if I get time.
Sure. I'm using this:
./examples/eventdev_pipeline_sw_pmd/build/app/eventdev_pipeline_sw_pmd
-c ff0 -w 0000:18:00.0 -w 0000:18:00.1 -w 0000:18:00.2 -w 0000:18:00.3
--vdev=event_sw0 -- -s 4 -r 10 -t 10 -e 20 -w f00
>
>>>>>> +static int enable_queue_priorities;
>>> struct config_data ?
>> Done.
>>
>>>>>> +
>>>>>> +struct prod_data {
>>>>>> + uint8_t dev_id;
>>>>>> + uint8_t port_id;
>>>>>> + int32_t qid;
>>>>>> + unsigned int num_nic_ports;
>>>>>> +} __rte_cache_aligned;
>>>>>> +
>>>>>> +struct cons_data {
>>>>>> + uint8_t dev_id;
>>>>>> + uint8_t port_id;
>>>>>> +} __rte_cache_aligned;
>>>>>> +
>>>>>> +static struct prod_data prod_data;
>>>>>> +static struct cons_data cons_data;
>>>>>> +
>>>>>> +struct worker_data {
>>>>>> + uint8_t dev_id;
>>>>>> + uint8_t port_id;
>>>>>> +} __rte_cache_aligned;
>>>>>> +
>>>>>> +static unsigned int *enqueue_cnt;
>>>>>> +static unsigned int *dequeue_cnt;
>>>>> Not been consumed. Remove it.
>>>> Done.
>>>>>> +
>>>>>> + return 0;
>>>>>> +}
>>>>>> +
>>>>>> +
>>>>>> +static inline void
>>>>>> +work(struct rte_mbuf *m)
>>>>>> +{
>>>>>> + struct ether_hdr *eth;
>>>>>> + struct ether_addr addr;
>>>>>> +
>>>>>> + /* change mac addresses on packet (to use mbuf data) */
>>>>>> + eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
>>>>>> + ether_addr_copy(ð->d_addr, &addr);
>>>>>> + ether_addr_copy(ð->s_addr, ð->d_addr);
>>>>>> + ether_addr_copy(&addr, ð->s_addr);
>>>>> If it is even number of stages(say 2), Will mac swap be negated? as we are
>>>>> swapping on each stage NOT in consumer?
>>>> The mac swap is just to touch the mbuf. It does not matter if it is negated.
>>> Are you sure or I am missing something here? for example,If I add following piece
>>> of code, irrespective number of stages it should send the same packet if
>>> source packet is same. Right ?
>>> But not happening as expected(Check the src and dest mac address)
>>>
>>> stage == 1
>>> 00000000: 00 0F B7 11 27 2B 00 0F B7 11 27 2C 08 00 45 00 |....'+....',..E.
>>> 00000010: 00 2E 00 00 00 00 04 11 D5 B1 C0 A8 12 02 0E 01 |................
>>> 00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
>>> 00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
>>>
>>> stage == 2
>>> 00000000: 00 0F B7 11 27 2C 00 0F B7 11 27 2B 08 00 45 00 |....',....'+..E.
>>> 00000010: 00 2E 00 00 00 00 04 11 D5 B0 C0 A8 12 03 0E 01 |................
>>> 00000020: 00 63 10 00 10 01 00 1A 00 00 61 62 63 64 65 66 |.c........abcdef
>>> 00000030: 67 68 69 6A 6B 6C 6D 6E 6F 70 71 72 | | | | | ghijklmnopqr
>>>
>>>
>>> diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
>>> index c62cba2..a7aaf37 100644
>>> --- a/examples/eventdev_pipeline_sw_pmd/main.c
>>> +++ b/examples/eventdev_pipeline_sw_pmd/main.c
>>> @@ -147,8 +147,9 @@ consumer(void)
>>> if (start_time == 0)
>>> @@ -157,6 +158,7 @@ consumer(void)
>>> received += n;
>>> for (i = 0; i < n; i++) {
>>> uint8_t outport = packets[i].mbuf->port;
>>> + rte_pktmbuf_dump(stdout, packets[i].mbuf, 64);
>>> rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
>>> packets[i].mbuf);
>>>
>>> Either fix the mac swap properly or remove it.
>> Temporary fix added. Now reading in addr and writing it back without
>> swapping. Not ideal,
>> will probably need more work in the future. Added a FIXME in the code with
>> agreement from Jerin.
> On the agreement that, before moving to a generic application, it has to be
> fixed in the SW PMD driver, or if it's specific behavior of the SW PMD then it has
> to be abstracted in a proper way in the fastpath.
>
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-06 10:04 ` Hunt, David
@ 2017-07-06 10:39 ` Hunt, David
2017-07-06 13:26 ` Hunt, David
1 sibling, 0 replies; 64+ messages in thread
From: Hunt, David @ 2017-07-06 10:39 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
On 6/7/2017 11:04 AM, Hunt, David wrote:
>
>
> On 6/7/2017 4:31 AM, Jerin Jacob wrote:
>
>> Note:
>> Change application to new name.
>>
>> checkpatch.sh
>> -----------------
>>
>> WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
>> using 'consumer', this function's name, in a string
>> #294: FILE: examples/eventdev_pipeline_sw_pmd/main.c:178:
>> + printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
>>
>> WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
>> using 'worker', this function's name, in a string
>> #453: FILE: examples/eventdev_pipeline_sw_pmd/main.c:337:
>> + printf(" worker %u thread done. RX=%zu TX=%zu\n",
>>
>> total: 0 errors, 2 warnings, 1078 lines checked
>
> These are false positives. The text in the messages is not meant to
> be the function name.
> If anything, I would prefer to change the function names to have a
> "_thread" suffix?
>
Or perhaps, better still, change the function names to verbs, i.e.
produce(), consume(), do_work().
Regards,
Dave.
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-06 10:04 ` Hunt, David
2017-07-06 10:39 ` Hunt, David
@ 2017-07-06 13:26 ` Hunt, David
2017-07-06 13:38 ` Jerin Jacob
1 sibling, 1 reply; 64+ messages in thread
From: Hunt, David @ 2017-07-06 13:26 UTC (permalink / raw)
To: Jerin Jacob; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
Hi Jerin,
On 6/7/2017 11:04 AM, Hunt, David wrote:
>
> On 6/7/2017 4:31 AM, Jerin Jacob wrote:
>> -----Original Message-----
>>>
--snip--
>> I checked v7 it looks to OK to merge. Can you fix following minor
>> issue with
>> check patch and check-git-log.sh
>>
>> check-git-log.sh
>> -----------------
>> Wrong headline lowercase:
>> doc: add sw eventdev pipeline to sample app ug
>
> Will Do. Change sw to SW
>
>> ### examples/eventdev_pipeline: added sample app
>
> Will Do. Add _sw_pmd
>
Both of these will be in next patch.
>> Note:
>> Change application to new name.
>>
>> checkpatch.sh
>> -----------------
>>
>> WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
>> using 'consumer', this function's name, in a string
>> #294: FILE: examples/eventdev_pipeline_sw_pmd/main.c:178:
>> + printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
>>
>> WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to
>> using 'worker', this function's name, in a string
>> #453: FILE: examples/eventdev_pipeline_sw_pmd/main.c:337:
>> + printf(" worker %u thread done. RX=%zu TX=%zu\n",
>>
>> total: 0 errors, 2 warnings, 1078 lines checked
>
> These are false positives. The text in the messages is not meant to
> be the function name.
> If anything, I would prefer to change the function names to have a
> "_thread" suffix?
>
Having looked at this a bit more, I am unable to reproduce the warnings with
my original kernel version of checkpatch; the patchwork version does not show
them, and neither does the 4.11.9 stable kernel version. I suggest we mark
these down as false positives, as the strings are not intended to show the
function names.
>> I will give pull request Thomas on Friday morning. I will include
>> this change set
>> in the pull request.
>>
>> Regarding the performance drop, can you add __rte_cache_aligned on those
>> variables which create the regression when moving to the rte_malloc area? The
>> cache line could be shared. If not fixed now, that's fine; we will look
>> into it later.
>
> I will investigate and post a new patch in a few hours.
>
Of the 4 variables I am attempting to move into the fastpath structure, no
matter whether I move them one at a time or all at once, with
__rte_cache_aligned or not, I still see a significant performance
degradation. I suggest looking into this later.
I will push a patch in the next couple of hours with the first two
changes mentioned above. OK with you?
Regards,
Dave.
* Re: [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
2017-07-06 13:26 ` Hunt, David
@ 2017-07-06 13:38 ` Jerin Jacob
0 siblings, 0 replies; 64+ messages in thread
From: Jerin Jacob @ 2017-07-06 13:38 UTC (permalink / raw)
To: Hunt, David; +Cc: dev, harry.van.haaren, Gage Eads, Bruce Richardson
-----Original Message-----
> Date: Thu, 6 Jul 2017 14:26:47 +0100
> From: "Hunt, David" <david.hunt@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: dev@dpdk.org, harry.van.haaren@intel.com, Gage Eads
> <gage.eads@intel.com>, Bruce Richardson <bruce.richardson@intel.com>
> Subject: Re: [PATCH v5 1/3] examples/eventdev_pipeline: added sample app
> User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101
> Thunderbird/45.8.0
>
> Hi Jerin,
>
> On 6/7/2017 11:04 AM, Hunt, David wrote:
> >
> > On 6/7/2017 4:31 AM, Jerin Jacob wrote:
>
> Having looked at this a bit more, I am unable to reproduce the warnings with my original
> kernel version of checkpatch; the patchwork version does not show them, and neither does
> the 4.11.9 stable kernel version. I suggest we mark these down as
> false positives, as the strings are not intended to show the function names.
>
>
> > > I will give a pull request to Thomas on Friday morning. I will include
> > > this change set
> > > in the pull request.
> > >
> > > Regarding the performance drop, can you add __rte_cache_aligned on those
> > > variables which create the regression when moving to the rte_malloc area? The
> > > cache line could be shared. If not fixed now, that's fine; we will look
> > > into it later.
> >
> > I will investigate and post a new patch in a few hours.
> >
>
> Of the 4 variables I am attempting to move into the fastpath structure, no
> matter whether I move them one at a time or all at once, with
> __rte_cache_aligned or not, I still see a significant performance degradation.
> I suggest looking into this later.
>
> I will push a patch in the next couple of hours with the first two changes
> mentioned above. OK with you?
OK.
>
> Regards,
> Dave.
* [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline sample app
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 1/3] examples/eventdev_pipeline: added " David Hunt
@ 2017-07-06 14:35 ` David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 1/3] examples/eventdev_pipeline_sw_pmd: add " David Hunt
` (3 more replies)
0 siblings, 4 replies; 64+ messages in thread
From: David Hunt @ 2017-07-06 14:35 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren
This patchset introduces a sample application that demonstrates
a pipeline model for packet processing. Running this sample app
with 17.05-rc2 or later is recommended.
Changes in patch v2:
* None, incorrect patch upload
Changes in patch v3:
* Re-work based on comments on mailing list. No major functional changes.
* Checkpatch cleanup of a couple of typos
Changes in patch v4:
* Re-named the app as eventdev_pipeline_sw, as it's aimed at showing the
functionality of the software PMD.
Changes in patch v5:
* Fixed make doc. eventdev_pipeline to eventdev_pipeline_sw
* Fixed some typos in the eventdev programmers guide
Changes in patch v6:
* made name of dirs and app consistent - eventdev_pipeline_sw_pmd
* Added new files and dirs to MAINTAINERS
* Updated libeventdev docs based on Jerin's feedback
* Added some cleanup to eventdev_pipeline sw pmd sample app
Changes in patch v7:
* Cleaned global vars into structs
* Changed mac address swap so that packet is not changed
* Cleaned up report on exit if some stats not supported
* Added .rst file dropped in v6
Changes in patch v8:
* Changed 'sw' to 'SW' in git log msg
* Changed 'eventdev_pipeline' to 'eventdev_pipeline_sw_pmd' in git log msg
The sample app itself allows configuration of various pipelines using
command line arguments. Parameters like number of stages, number of
worker cores, which cores are assigned to specific tasks, and work-
cycles per-stage in the pipeline can be configured.
Documentation for eventdev is added for the programmers guide and
sample app user guide, providing sample commands to run the app with,
and expected output.
[1/3] examples/eventdev_pipeline_sw_pmd: add sample app
[2/3] doc: add SW eventdev pipeline to sample app ug
[3/3] doc: add eventdev library to programmers guide
* [dpdk-dev] [PATCH v8 1/3] examples/eventdev_pipeline_sw_pmd: add sample app
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: eventdev pipeline " David Hunt
@ 2017-07-06 14:35 ` David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 2/3] doc: add SW eventdev pipeline to sample app ug David Hunt
` (2 subsequent siblings)
3 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-07-06 14:35 UTC (permalink / raw)
To: dev
Cc: jerin.jacob, harry.van.haaren, Gage Eads, Bruce Richardson, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds a sample app for the eventdev library.
The app has been tested with DPDK 17.05-rc2, hence this
release (or later) is recommended.
The sample app showcases a pipeline processing use-case,
with event scheduling and processing defined per stage.
The application receives traffic as normal, with each
packet traversing the pipeline. Once the packet has
been processed by each of the pipeline stages, it is
transmitted again.
The app provides a framework to utilize cores for a single
role or multiple roles. Examples of roles are the RX core,
TX core, Scheduling core (in the case of the event/sw PMD),
and worker cores.
Various flags are available to configure numbers of stages,
cycles of work at each stage, type of scheduling, number of
worker cores, queue depths etc. For a full explanation,
please refer to the documentation.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
MAINTAINERS | 1 +
examples/Makefile | 2 +
examples/eventdev_pipeline_sw_pmd/Makefile | 49 ++
examples/eventdev_pipeline_sw_pmd/main.c | 1016 ++++++++++++++++++++++++++++
4 files changed, 1068 insertions(+)
create mode 100644 examples/eventdev_pipeline_sw_pmd/Makefile
create mode 100644 examples/eventdev_pipeline_sw_pmd/main.c
diff --git a/MAINTAINERS b/MAINTAINERS
index d9dbf8f..860621a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -579,6 +579,7 @@ M: Harry van Haaren <harry.van.haaren@intel.com>
F: drivers/event/sw/
F: test/test/test_eventdev_sw.c
F: doc/guides/eventdevs/sw.rst
+F: examples/eventdev_pipeline_sw_pmd/
NXP DPAA2 Eventdev PMD
M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/examples/Makefile b/examples/Makefile
index c0e9c3b..97f12ad 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)
endif
endif
+DIRS-y += eventdev_pipeline_sw_pmd
+
include $(RTE_SDK)/mk/rte.extsubdir.mk
diff --git a/examples/eventdev_pipeline_sw_pmd/Makefile b/examples/eventdev_pipeline_sw_pmd/Makefile
new file mode 100644
index 0000000..de4e22c
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/Makefile
@@ -0,0 +1,49 @@
+# BSD LICENSE
+#
+# Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# binary name
+APP = eventdev_pipeline_sw_pmd
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
new file mode 100644
index 0000000..91dddb1
--- /dev/null
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -0,0 +1,1016 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2016-2017 Intel Corporation. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <getopt.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <signal.h>
+#include <sched.h>
+#include <stdbool.h>
+
+#include <rte_eal.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_launch.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#define MAX_NUM_STAGES 8
+#define BATCH_SIZE 16
+#define MAX_NUM_CORE 64
+
+struct prod_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+ int32_t qid;
+ unsigned int num_nic_ports;
+} __rte_cache_aligned;
+
+struct cons_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+static struct prod_data prod_data;
+static struct cons_data cons_data;
+
+struct worker_data {
+ uint8_t dev_id;
+ uint8_t port_id;
+} __rte_cache_aligned;
+
+struct fastpath_data {
+ volatile int done;
+ uint32_t rx_lock;
+ uint32_t tx_lock;
+ uint32_t sched_lock;
+ bool rx_single;
+ bool tx_single;
+ bool sched_single;
+ unsigned int rx_core[MAX_NUM_CORE];
+ unsigned int tx_core[MAX_NUM_CORE];
+ unsigned int sched_core[MAX_NUM_CORE];
+ unsigned int worker_core[MAX_NUM_CORE];
+ struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
+};
+
+static struct fastpath_data *fdata;
+
+struct config_data {
+ unsigned int active_cores;
+ unsigned int num_workers;
+ long num_packets;
+ unsigned int num_fids;
+ int queue_type;
+ int worker_cycles;
+ int enable_queue_priorities;
+ int quiet;
+ int dump_dev;
+ int dump_dev_signal;
+ unsigned int num_stages;
+ unsigned int worker_cq_depth;
+ int16_t next_qid[MAX_NUM_STAGES+2];
+ int16_t qid[MAX_NUM_STAGES];
+};
+
+static struct config_data cdata = {
+ .num_packets = (1L << 25), /* do ~32M packets */
+ .num_fids = 512,
+ .queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .next_qid = {-1},
+ .qid = {-1},
+ .num_stages = 1,
+ .worker_cq_depth = 16
+};
+
+static bool
+core_in_use(unsigned int lcore_id) {
+ return (fdata->rx_core[lcore_id] || fdata->sched_core[lcore_id] ||
+ fdata->tx_core[lcore_id] || fdata->worker_core[lcore_id]);
+}
+
+
+static void
+eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
+ void *userdata)
+{
+ int port_id = (uintptr_t) userdata;
+ unsigned int _sent = 0;
+
+ do {
+ /* Note: hard-coded TX queue */
+ _sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
+ unsent - _sent);
+ } while (_sent != unsent);
+}
+
+static int
+consumer(void)
+{
+ const uint64_t freq_khz = rte_get_timer_hz() / 1000;
+ struct rte_event packets[BATCH_SIZE];
+
+ static uint64_t received;
+ static uint64_t last_pkts;
+ static uint64_t last_time;
+ static uint64_t start_time;
+ unsigned int i, j;
+ uint8_t dev_id = cons_data.dev_id;
+ uint8_t port_id = cons_data.port_id;
+
+ uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
+ packets, RTE_DIM(packets), 0);
+
+ if (n == 0) {
+ for (j = 0; j < rte_eth_dev_count(); j++)
+ rte_eth_tx_buffer_flush(j, 0, fdata->tx_buf[j]);
+ return 0;
+ }
+ if (start_time == 0)
+ last_time = start_time = rte_get_timer_cycles();
+
+ received += n;
+ for (i = 0; i < n; i++) {
+ uint8_t outport = packets[i].mbuf->port;
+ rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
+ packets[i].mbuf);
+ }
+
+ /* Print out mpps every 1<<22 packets */
+ if (!cdata.quiet && received >= last_pkts + (1<<22)) {
+ const uint64_t now = rte_get_timer_cycles();
+ const uint64_t total_ms = (now - start_time) / freq_khz;
+ const uint64_t delta_ms = (now - last_time) / freq_khz;
+ uint64_t delta_pkts = received - last_pkts;
+
+ printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
+ "avg %.3f mpps [current %.3f mpps]\n",
+ received,
+ total_ms,
+ received / (total_ms * 1000.0),
+ delta_pkts / (delta_ms * 1000.0));
+ last_pkts = received;
+ last_time = now;
+ }
+
+ cdata.num_packets -= n;
+ if (cdata.num_packets <= 0)
+ fdata->done = 1;
+
+ return 0;
+}
+
+static int
+producer(void)
+{
+ static uint8_t eth_port;
+ struct rte_mbuf *mbufs[BATCH_SIZE+2];
+ struct rte_event ev[BATCH_SIZE+2];
+ uint32_t i, num_ports = prod_data.num_nic_ports;
+ int32_t qid = prod_data.qid;
+ uint8_t dev_id = prod_data.dev_id;
+ uint8_t port_id = prod_data.port_id;
+ uint32_t prio_idx = 0;
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+ if (++eth_port == num_ports)
+ eth_port = 0;
+ if (nb_rx == 0) {
+ rte_pause();
+ return 0;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = cdata.queue_type;
+ ev[i].queue_id = qid;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ RTE_SET_USED(prio_idx);
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+ return 0;
+}
+
+static inline void
+schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+{
+ if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
+ rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
+ producer();
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
+ }
+
+ if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
+ rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
+ rte_event_schedule(dev_id);
+ if (cdata.dump_dev_signal) {
+ rte_event_dev_dump(0, stdout);
+ cdata.dump_dev_signal = 0;
+ }
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
+ }
+
+ if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
+ rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
+ consumer();
+ rte_atomic32_clear((rte_atomic32_t *)&(fdata->tx_lock));
+ }
+}
+
+static inline void
+work(struct rte_mbuf *m)
+{
+ struct ether_hdr *eth;
+ struct ether_addr addr;
+
+ /* change mac addresses on packet (to use mbuf data) */
+ /*
+ * FIXME Swap mac address properly and also handle the
+ * case for both odd and even number of stages that the
+ * addresses end up the same at the end of the pipeline
+ */
+ eth = rte_pktmbuf_mtod(m, struct ether_hdr *);
+ ether_addr_copy(ð->d_addr, &addr);
+ ether_addr_copy(&addr, ð->d_addr);
+
+ /* do a number of cycles of work per packet */
+ volatile uint64_t start_tsc = rte_rdtsc();
+ while (rte_rdtsc() < start_tsc + cdata.worker_cycles)
+ rte_pause();
+}
+
+static int
+worker(void *arg)
+{
+ struct rte_event events[BATCH_SIZE];
+
+ struct worker_data *data = (struct worker_data *)arg;
+ uint8_t dev_id = data->dev_id;
+ uint8_t port_id = data->port_id;
+ size_t sent = 0, received = 0;
+ unsigned int lcore_id = rte_lcore_id();
+
+ while (!fdata->done) {
+ uint16_t i;
+
+ schedule_devices(dev_id, lcore_id);
+
+ if (!fdata->worker_core[lcore_id]) {
+ rte_pause();
+ continue;
+ }
+
+ const uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,
+ events, RTE_DIM(events), 0);
+
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+ received += nb_rx;
+
+ for (i = 0; i < nb_rx; i++) {
+
+ /* The first worker stage does classification */
+ if (events[i].queue_id == cdata.qid[0])
+ events[i].flow_id = events[i].mbuf->hash.rss
+ % cdata.num_fids;
+
+ events[i].queue_id = cdata.next_qid[events[i].queue_id];
+ events[i].op = RTE_EVENT_OP_FORWARD;
+ events[i].sched_type = cdata.queue_type;
+
+ work(events[i].mbuf);
+ }
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,
+ events, nb_rx);
+ while (nb_tx < nb_rx && !fdata->done)
+ nb_tx += rte_event_enqueue_burst(dev_id, port_id,
+ events + nb_tx,
+ nb_rx - nb_tx);
+ sent += nb_tx;
+ }
+
+ if (!cdata.quiet)
+ printf(" worker %u thread done. RX=%zu TX=%zu\n",
+ rte_lcore_id(), received, sent);
+
+ return 0;
+}
+
+/*
+ * Parse the coremask given as argument (hexadecimal string) and fill
+ * the global configuration (core role and core count) with the parsed
+ * value.
+ */
+static int xdigit2val(unsigned char c)
+{
+ int val;
+
+ if (isdigit(c))
+ val = c - '0';
+ else if (isupper(c))
+ val = c - 'A' + 10;
+ else
+ val = c - 'a' + 10;
+ return val;
+}
+
+static uint64_t
+parse_coremask(const char *coremask)
+{
+ int i, j, idx = 0;
+ unsigned int count = 0;
+ char c;
+ int val;
+ uint64_t mask = 0;
+ const int32_t BITS_HEX = 4;
+
+ if (coremask == NULL)
+ return -1;
+ /* Remove leading and trailing blank characters.
+ * Remove the 0x/0X prefix if it exists.
+ */
+ while (isblank(*coremask))
+ coremask++;
+ if (coremask[0] == '0' && ((coremask[1] == 'x')
+ || (coremask[1] == 'X')))
+ coremask += 2;
+ i = strlen(coremask);
+ while ((i > 0) && isblank(coremask[i - 1]))
+ i--;
+ if (i == 0)
+ return -1;
+
+ for (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {
+ c = coremask[i];
+ if (isxdigit(c) == 0) {
+ /* invalid characters */
+ return -1;
+ }
+ val = xdigit2val(c);
+ for (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {
+ if ((1 << j) & val) {
+ mask |= (1UL << idx);
+ count++;
+ }
+ }
+ }
+ for (; i >= 0; i--)
+ if (coremask[i] != '0')
+ return -1;
+ if (count == 0)
+ return -1;
+ return mask;
+}
+
+static struct option long_options[] = {
+ {"workers", required_argument, 0, 'w'},
+ {"packets", required_argument, 0, 'n'},
+ {"atomic-flows", required_argument, 0, 'f'},
+ {"num_stages", required_argument, 0, 's'},
+ {"rx-mask", required_argument, 0, 'r'},
+ {"tx-mask", required_argument, 0, 't'},
+ {"sched-mask", required_argument, 0, 'e'},
+ {"cq-depth", required_argument, 0, 'c'},
+ {"work-cycles", required_argument, 0, 'W'},
+ {"queue-priority", no_argument, 0, 'P'},
+ {"parallel", no_argument, 0, 'p'},
+ {"ordered", no_argument, 0, 'o'},
+ {"quiet", no_argument, 0, 'q'},
+ {"dump", no_argument, 0, 'D'},
+ {0, 0, 0, 0}
+};
+
+static void
+usage(void)
+{
+ const char *usage_str =
+ " Usage: eventdev_demo [options]\n"
+ " Options:\n"
+ " -n, --packets=N Send N packets (default ~32M), 0 implies no limit\n"
+ " -f, --atomic-flows=N Use N random flows from 1 to N (default 16)\n"
+ " -s, --num_stages=N Use N atomic stages (default 1)\n"
+ " -r, --rx-mask=core mask Run NIC rx on CPUs in core mask\n"
" -w, --workers=core mask Run workers on CPUs in core mask\n"
+ " -t, --tx-mask=core mask Run NIC tx on CPUs in core mask\n"
+ " -e --sched-mask=core mask Run scheduler on CPUs in core mask\n"
+ " -c --cq-depth=N Worker CQ depth (default 16)\n"
+ " -W --work-cycles=N Worker cycles (default 0)\n"
+ " -P --queue-priority Enable scheduler queue prioritization\n"
+ " -o, --ordered Use ordered scheduling\n"
+ " -p, --parallel Use parallel scheduling\n"
+ " -q, --quiet Minimize printed output\n"
+ " -D, --dump Print detailed statistics before exit"
+ "\n";
+ fprintf(stderr, "%s", usage_str);
+ exit(1);
+}
+
+static void
+parse_app_args(int argc, char **argv)
+{
+ /* Parse cli options*/
+ int option_index;
+ int c;
+ opterr = 0;
+ uint64_t rx_lcore_mask = 0;
+ uint64_t tx_lcore_mask = 0;
+ uint64_t sched_lcore_mask = 0;
+ uint64_t worker_lcore_mask = 0;
+ int i;
+
+ for (;;) {
+ c = getopt_long(argc, argv, "r:t:e:c:w:n:f:s:poPqDW:",
+ long_options, &option_index);
+ if (c == -1)
+ break;
+
+ int popcnt = 0;
+ switch (c) {
+ case 'n':
+ cdata.num_packets = (unsigned long)atol(optarg);
+ break;
+ case 'f':
+ cdata.num_fids = (unsigned int)atoi(optarg);
+ break;
+ case 's':
+ cdata.num_stages = (unsigned int)atoi(optarg);
+ break;
+ case 'c':
+ cdata.worker_cq_depth = (unsigned int)atoi(optarg);
+ break;
+ case 'W':
+ cdata.worker_cycles = (unsigned int)atoi(optarg);
+ break;
+ case 'P':
+ cdata.enable_queue_priorities = 1;
+ break;
+ case 'o':
+ cdata.queue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;
+ break;
+ case 'p':
+ cdata.queue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;
+ break;
+ case 'q':
+ cdata.quiet = 1;
+ break;
+ case 'D':
+ cdata.dump_dev = 1;
+ break;
+ case 'w':
+ worker_lcore_mask = parse_coremask(optarg);
+ break;
+ case 'r':
+ rx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(rx_lcore_mask);
+ fdata->rx_single = (popcnt == 1);
+ break;
+ case 't':
+ tx_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(tx_lcore_mask);
+ fdata->tx_single = (popcnt == 1);
+ break;
+ case 'e':
+ sched_lcore_mask = parse_coremask(optarg);
+ popcnt = __builtin_popcountll(sched_lcore_mask);
+ fdata->sched_single = (popcnt == 1);
+ break;
+ default:
+ usage();
+ }
+ }
+
+ if (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||
+ sched_lcore_mask == 0 || tx_lcore_mask == 0) {
+ printf("Core part of pipeline was not assigned any cores. "
+ "This will stall the pipeline, please check core masks "
+ "(use -h for details on setting core masks):\n"
+ "\trx: %"PRIu64"\n\ttx: %"PRIu64"\n\tsched: %"PRIu64
+ "\n\tworkers: %"PRIu64"\n",
+ rx_lcore_mask, tx_lcore_mask, sched_lcore_mask,
+ worker_lcore_mask);
+ rte_exit(-1, "Fix core masks\n");
+ }
+ if (cdata.num_stages == 0 || cdata.num_stages > MAX_NUM_STAGES)
+ usage();
+
+ for (i = 0; i < MAX_NUM_CORE; i++) {
+ fdata->rx_core[i] = !!(rx_lcore_mask & (1UL << i));
+ fdata->tx_core[i] = !!(tx_lcore_mask & (1UL << i));
+ fdata->sched_core[i] = !!(sched_lcore_mask & (1UL << i));
+ fdata->worker_core[i] = !!(worker_lcore_mask & (1UL << i));
+
+ if (fdata->worker_core[i])
+ cdata.num_workers++;
+ if (core_in_use(i))
+ cdata.active_cores++;
+ }
+}
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+ static const struct rte_eth_conf port_conf_default = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = ETHER_MAX_LEN
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_hf = ETH_RSS_IP |
+ ETH_RSS_TCP |
+ ETH_RSS_UDP,
+ }
+ }
+ };
+ const uint16_t rx_rings = 1, tx_rings = 1;
+ const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+ struct rte_eth_conf port_conf = port_conf_default;
+ int retval;
+ uint16_t q;
+
+ if (port >= rte_eth_dev_count())
+ return -1;
+
+ /* Configure the Ethernet device. */
+ retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+ if (retval != 0)
+ return retval;
+
+ /* Allocate and set up 1 RX queue per Ethernet port. */
+ for (q = 0; q < rx_rings; q++) {
+ retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+ rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Allocate and set up 1 TX queue per Ethernet port. */
+ for (q = 0; q < tx_rings; q++) {
+ retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+ rte_eth_dev_socket_id(port), NULL);
+ if (retval < 0)
+ return retval;
+ }
+
+ /* Start the Ethernet port. */
+ retval = rte_eth_dev_start(port);
+ if (retval < 0)
+ return retval;
+
+ /* Display the port MAC address. */
+ struct ether_addr addr;
+ rte_eth_macaddr_get(port, &addr);
+ printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+ " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+ (unsigned int)port,
+ addr.addr_bytes[0], addr.addr_bytes[1],
+ addr.addr_bytes[2], addr.addr_bytes[3],
+ addr.addr_bytes[4], addr.addr_bytes[5]);
+
+ /* Enable RX in promiscuous mode for the Ethernet device. */
+ rte_eth_promiscuous_enable(port);
+
+ return 0;
+}
+
+static int
+init_ports(unsigned int num_ports)
+{
+ uint8_t portid;
+ unsigned int i;
+
+ struct rte_mempool *mp = rte_pktmbuf_pool_create("packet_pool",
+ /* mbufs */ 16384 * num_ports,
+ /* cache_size */ 512,
+ /* priv_size*/ 0,
+ /* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+
+ for (portid = 0; portid < num_ports; portid++)
+ if (port_init(portid, mp) != 0)
+ rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+ portid);
+
+ for (i = 0; i < num_ports; i++) {
+ void *userdata = (void *)(uintptr_t) i;
+ fdata->tx_buf[i] =
+ rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
+ if (fdata->tx_buf[i] == NULL)
+ rte_panic("Out of memory\n");
+ rte_eth_tx_buffer_init(fdata->tx_buf[i], 32);
+ rte_eth_tx_buffer_set_err_callback(fdata->tx_buf[i],
+ eth_tx_buffer_retry,
+ userdata);
+ }
+
+ return 0;
+}
+
+struct port_link {
+ uint8_t queue_id;
+ uint8_t priority;
+};
+
+static int
+setup_eventdev(struct prod_data *prod_data,
+ struct cons_data *cons_data,
+ struct worker_data *worker_data)
+{
+ const uint8_t dev_id = 0;
+ /* +1 stages is for a SINGLE_LINK TX stage */
+ const uint8_t nb_queues = cdata.num_stages + 1;
+ /* + 2 is one port for producer and one for consumer */
+ const uint8_t nb_ports = cdata.num_workers + 2;
+ struct rte_event_dev_config config = {
+ .nb_event_queues = nb_queues,
+ .nb_event_ports = nb_ports,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ struct rte_event_port_conf wkr_p_conf = {
+ .dequeue_depth = cdata.worker_cq_depth,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_queue_conf wkr_q_conf = {
+ .event_queue_cfg = cdata.queue_type,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ struct rte_event_port_conf tx_p_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ const struct rte_event_queue_conf tx_q_conf = {
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+ .event_queue_cfg =
+ RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |
+ RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+
+ struct port_link worker_queues[MAX_NUM_STAGES];
+ struct port_link tx_queue;
+ unsigned int i;
+
+ int ret, ndev = rte_event_dev_count();
+ if (ndev < 1) {
+ printf("%d: No Eventdev Devices Found\n", __LINE__);
+ return -1;
+ }
+
+ struct rte_event_dev_info dev_info;
+ ret = rte_event_dev_info_get(dev_id, &dev_info);
+ printf("\tEventdev %d: %s\n", dev_id, dev_info.driver_name);
+
+ if (dev_info.max_event_port_dequeue_depth <
+ config.nb_event_port_dequeue_depth)
+ config.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+ if (dev_info.max_event_port_enqueue_depth <
+ config.nb_event_port_enqueue_depth)
+ config.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ ret = rte_event_dev_configure(dev_id, &config);
+ if (ret < 0) {
+ printf("%d: Error configuring device\n", __LINE__);
+ return -1;
+ }
+
+ /* Q creation - one load balanced per pipeline stage*/
+ printf(" Stages:\n");
+ for (i = 0; i < cdata.num_stages; i++) {
+ if (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ cdata.qid[i] = i;
+ cdata.next_qid[i] = i+1;
+ worker_queues[i].queue_id = i;
+ if (cdata.enable_queue_priorities) {
+ /* calculate priority stepping for each stage, leaving
+ * headroom of 1 for the SINGLE_LINK TX below
+ */
+ const uint32_t prio_delta =
+ (RTE_EVENT_DEV_PRIORITY_LOWEST-1) / nb_queues;
+
+ /* higher priority for queues closer to tx */
+ wkr_q_conf.priority =
+ RTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;
+ }
+
+ const char *type_str = "Atomic";
+ switch (wkr_q_conf.event_queue_cfg) {
+ case RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:
+ type_str = "Ordered";
+ break;
+ case RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:
+ type_str = "Parallel";
+ break;
+ }
+ printf("\tStage %d, Type %s\tPriority = %d\n", i, type_str,
+ wkr_q_conf.priority);
+ }
+ printf("\n");
+
+ /* final queue for sending to TX core */
+ if (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {
+ printf("%d: error creating qid %d\n", __LINE__, i);
+ return -1;
+ }
+ tx_queue.queue_id = i;
+ tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+
+ if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ wkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* set up one port per worker, linking to all stage queues */
+ for (i = 0; i < cdata.num_workers; i++) {
+ struct worker_data *w = &worker_data[i];
+ w->dev_id = dev_id;
+ if (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ uint32_t s;
+ for (s = 0; s < cdata.num_stages; s++) {
+ if (rte_event_port_link(dev_id, i,
+ &worker_queues[s].queue_id,
+ &worker_queues[s].priority,
+ 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ }
+ w->port_id = i;
+ }
+
+ if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ /* port for consumer, linked to TX queue */
+ if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+ if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
+ &tx_queue.priority, 1) != 1) {
+ printf("%d: error creating link for port %d\n",
+ __LINE__, i);
+ return -1;
+ }
+ /* port for producer, no links */
+ struct rte_event_port_conf rx_p_conf = {
+ .dequeue_depth = 8,
+ .enqueue_depth = 8,
+ .new_event_threshold = 1200,
+ };
+
+ if (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
+ rx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
+ if (rx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
+ rx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
+
+ if (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {
+ printf("Error setting up port %d\n", i);
+ return -1;
+ }
+
+ *prod_data = (struct prod_data){.dev_id = dev_id,
+ .port_id = i + 1,
+ .qid = cdata.qid[0] };
+ *cons_data = (struct cons_data){.dev_id = dev_id,
+ .port_id = i };
+
+ if (rte_event_dev_start(dev_id) < 0) {
+ printf("Error starting eventdev\n");
+ return -1;
+ }
+
+ return dev_id;
+}
+
+static void
+signal_handler(int signum)
+{
+ if (fdata->done)
+ rte_exit(1, "Exiting on signal %d\n", signum);
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ fdata->done = 1;
+ }
+ if (signum == SIGTSTP)
+ rte_event_dev_dump(0, stdout);
+}
+
+static inline uint64_t
+port_stat(int dev_id, int32_t p)
+{
+ char statname[64];
+ snprintf(statname, sizeof(statname), "port_%u_rx", p);
+ return rte_event_dev_xstats_by_name_get(dev_id, statname, NULL);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct worker_data *worker_data;
+ unsigned int num_ports;
+ int lcore_id;
+ int err;
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+ signal(SIGTSTP, signal_handler);
+
+ err = rte_eal_init(argc, argv);
+ if (err < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= err;
+ argv += err;
+
+ fdata = rte_malloc(NULL, sizeof(struct fastpath_data), 0);
+ if (fdata == NULL)
+ rte_panic("Out of memory\n");
+
+ /* Parse cli options*/
+ parse_app_args(argc, argv);
+
+ num_ports = rte_eth_dev_count();
+ if (num_ports == 0)
+ rte_panic("No ethernet ports found\n");
+
+ const unsigned int cores_needed = cdata.active_cores;
+
+ if (!cdata.quiet) {
+ printf(" Config:\n");
+ printf("\tports: %u\n", num_ports);
+ printf("\tworkers: %u\n", cdata.num_workers);
+ printf("\tpackets: %lu\n", cdata.num_packets);
+ printf("\tQueue-prio: %u\n", cdata.enable_queue_priorities);
+ if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+ printf("\tqid0 type: ordered\n");
+ if (cdata.queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+ printf("\tqid0 type: atomic\n");
+ printf("\tCores available: %u\n", rte_lcore_count());
+ printf("\tCores used: %u\n", cores_needed);
+ }
+
+ if (rte_lcore_count() < cores_needed)
+ rte_panic("Too few cores (%d < %d)\n", rte_lcore_count(),
+ cores_needed);
+
+ const unsigned int ndevs = rte_event_dev_count();
+ if (ndevs == 0)
+ rte_panic("No eventdev devices found. Pass in a --vdev eventdev.\n");
+ if (ndevs > 1)
+ fprintf(stderr, "Warning: More than one eventdev, using idx 0\n");
+
+ worker_data = rte_calloc(0, cdata.num_workers,
+ sizeof(worker_data[0]), 0);
+ if (worker_data == NULL)
+ rte_panic("rte_calloc failed\n");
+
+ int dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);
+ if (dev_id < 0)
+ rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
+
+ prod_data.num_nic_ports = num_ports;
+ init_ports(num_ports);
+
+ int worker_idx = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (lcore_id >= MAX_NUM_CORE)
+ break;
+
+ if (!fdata->rx_core[lcore_id] &&
+ !fdata->worker_core[lcore_id] &&
+ !fdata->tx_core[lcore_id] &&
+ !fdata->sched_core[lcore_id])
+ continue;
+
+ if (fdata->rx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Rx, and using eventdev port %u\n",
+ __func__, lcore_id, prod_data.port_id);
+
+ if (fdata->tx_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
+ __func__, lcore_id, cons_data.port_id);
+
+ if (fdata->sched_core[lcore_id])
+ printf("[%s()] lcore %d executing scheduler\n",
+ __func__, lcore_id);
+
+ if (fdata->worker_core[lcore_id])
+ printf(
+ "[%s()] lcore %d executing worker, using eventdev port %u\n",
+ __func__, lcore_id,
+ worker_data[worker_idx].port_id);
+
+ err = rte_eal_remote_launch(worker, &worker_data[worker_idx],
+ lcore_id);
+ if (err)
+ rte_panic("Failed to launch worker on core %d\n",
+ lcore_id);
+ if (fdata->worker_core[lcore_id])
+ worker_idx++;
+ }
+
+ lcore_id = rte_lcore_id();
+
+ if (core_in_use(lcore_id))
+ worker(&worker_data[worker_idx++]);
+
+ rte_eal_mp_wait_lcore();
+
+ if (cdata.dump_dev)
+ rte_event_dev_dump(dev_id, stdout);
+
+ if (!cdata.quiet && (port_stat(dev_id, worker_data[0].port_id) !=
+ (uint64_t)-ENOTSUP)) {
+ printf("\nPort Workload distribution:\n");
+ uint32_t i;
+ uint64_t tot_pkts = 0;
+ uint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};
+ for (i = 0; i < cdata.num_workers; i++) {
+ pkts_per_wkr[i] =
+ port_stat(dev_id, worker_data[i].port_id);
+ tot_pkts += pkts_per_wkr[i];
+ }
+ for (i = 0; i < cdata.num_workers; i++) {
+ float pc = pkts_per_wkr[i] * 100 /
+ ((float)tot_pkts);
+ printf("worker %i :\t%.1f %% (%"PRIu64" pkts)\n",
+ i, pc, pkts_per_wkr[i]);
+ }
+
+ }
+
+ return 0;
+}
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v8 2/3] doc: add SW eventdev pipeline to sample app ug
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 1/3] examples/eventdev_pipeline_sw_pmd: add " David Hunt
@ 2017-07-06 14:35 ` David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 3/3] doc: add eventdev library to programmers guide David Hunt
2017-07-07 4:50 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline sample app Jerin Jacob
3 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-07-06 14:35 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
Add a new entry in the sample app user-guides,
which details the working of the eventdev_pipeline_sw.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
MAINTAINERS | 1 +
.../sample_app_ug/eventdev_pipeline_sw_pmd.rst | 190 +++++++++++++++++++++
doc/guides/sample_app_ug/index.rst | 1 +
3 files changed, 192 insertions(+)
create mode 100644 doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 860621a..4479906 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -580,6 +580,7 @@ F: drivers/event/sw/
F: test/test/test_eventdev_sw.c
F: doc/guides/eventdevs/sw.rst
F: examples/eventdev_pipeline_sw_pmd/
+F: doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
NXP DPAA2 Eventdev PMD
M: Hemant Agrawal <hemant.agrawal@nxp.com>
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst b/doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
new file mode 100644
index 0000000..b1b18dd
--- /dev/null
+++ b/doc/guides/sample_app_ug/eventdev_pipeline_sw_pmd.rst
@@ -0,0 +1,190 @@
+
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Eventdev Pipeline SW PMD Sample Application
+===========================================
+
+The eventdev pipeline sample application demonstrates usage of the
+eventdev API using the software PMD. It shows how an application can
+configure a pipeline and assign a set of worker cores to perform the
+processing required.
+
+The application has a range of command line arguments allowing it to be
+configured for various numbers of worker cores, stages, queue depths and
+cycles per stage of work. This is useful for performance testing, as well
+as for quickly testing a particular pipeline configuration.
+
+
+Compiling the Application
+-------------------------
+
+To compile the application:
+
+#. Go to the sample application directory:
+
+ .. code-block:: console
+
+ export RTE_SDK=/path/to/rte_sdk
+ cd ${RTE_SDK}/examples/eventdev_pipeline_sw_pmd
+
+#. Set the target (a default target is used if not specified). For example:
+
+ .. code-block:: console
+
+ export RTE_TARGET=x86_64-native-linuxapp-gcc
+
+ See the *DPDK Getting Started Guide* for possible RTE_TARGET values.
+
+#. Build the application:
+
+ .. code-block:: console
+
+ make
+
+Running the Application
+-----------------------
+
+The application has a number of command line options, allowing
+specification of the eventdev PMD to use and of various attributes of the
+processing pipeline.
+
+An example eventdev pipeline running with the software eventdev PMD using
+these settings is shown below:
+
+ * ``-r1``: core mask 0x1 for RX
+ * ``-t1``: core mask 0x1 for TX
+ * ``-e4``: core mask 0x4 for the software scheduler
+ * ``-w FF00``: core mask for worker cores, 8 cores from 8th to 16th
+ * ``-s4``: 4 atomic stages
+ * ``-n0``: process infinite packets (run forever)
+ * ``-c32``: worker dequeue depth of 32
+ * ``-W1000``: do 1000 cycles of work per packet in each stage
+ * ``-D``: dump statistics on exit
+
+.. code-block:: console
+
+ ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
+
+The application has some built-in sanity checking, so if a role in the
+pipeline (e.g. the RX core) does not have a CPU core mask assigned, the
+application will print an error message:
+
+.. code-block:: console
+
+ Core part of pipeline was not assigned any cores. This will stall the
+ pipeline, please check core masks (use -h for details on setting core masks):
+ rx: 0
+ tx: 1
+
+Configuration of the eventdev is covered in detail in the programmer's
+guide, see the Event Device Library section.
+
+
+Observing the Application
+-------------------------
+
+At runtime the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit the
+worker statistics are printed, along with a full dump of the PMD statistics if
+required. The following sections show sample output for each of the output
+types.
+
+Configuration
+~~~~~~~~~~~~~
+
+This provides an overview of the pipeline,
+scheduling type at each stage, and parameters to options such as how many
+flows to use and what eventdev PMD is in use. See the following sample output
+for details:
+
+.. code-block:: console
+
+ Config:
+ ports: 2
+ workers: 8
+ packets: 0
+ priorities: 1
+ Queue-prio: 0
+ qid0 type: atomic
+ Cores available: 44
+ Cores used: 10
+ Eventdev 0: event_sw
+ Stages:
+ Stage 0, Type Atomic Priority = 128
+ Stage 1, Type Atomic Priority = 128
+ Stage 2, Type Atomic Priority = 128
+ Stage 3, Type Atomic Priority = 128
+
+Runtime
+~~~~~~~
+
+At runtime, the statistics of the consumer are printed, stating the number of
+packets received, runtime in milliseconds, average mpps, and current mpps.
+
+.. code-block:: console
+
+ # consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
+
+Shutdown
+~~~~~~~~
+
+At shutdown, the application prints the number of packets received and
+transmitted, and an overview of the distribution of work across worker cores.
+
+.. code-block:: console
+
+ Signal 2 received, preparing to exit...
+ worker 12 thread done. RX=4966581 TX=4966581
+ worker 13 thread done. RX=4963329 TX=4963329
+ worker 14 thread done. RX=4953614 TX=4953614
+ worker 0 thread done. RX=0 TX=0
+ worker 11 thread done. RX=4970549 TX=4970549
+ worker 10 thread done. RX=4986391 TX=4986391
+ worker 9 thread done. RX=4970528 TX=4970528
+ worker 15 thread done. RX=4974087 TX=4974087
+ worker 8 thread done. RX=4979908 TX=4979908
+ worker 2 thread done. RX=0 TX=0
+
+ Port Workload distribution:
+ worker 0 : 12.5 % (4979876 pkts)
+ worker 1 : 12.5 % (4970497 pkts)
+ worker 2 : 12.5 % (4986359 pkts)
+ worker 3 : 12.5 % (4970517 pkts)
+ worker 4 : 12.5 % (4966566 pkts)
+ worker 5 : 12.5 % (4963297 pkts)
+ worker 6 : 12.5 % (4953598 pkts)
+ worker 7 : 12.5 % (4974055 pkts)
+
+To get a full dump of the state of the eventdev PMD, pass the ``-D`` flag to
+this application. When the app is terminated using ``Ctrl+C``, the
+``rte_event_dev_dump()`` function is called, resulting in a dump of the
+statistics that the PMD provides. The statistics provided depend on the PMD
+used, see the Event Device Drivers section for a list of eventdev PMDs.
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index 02611ef..f9239e3 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -69,6 +69,7 @@ Sample Applications User Guides
netmap_compatibility
ip_pipeline
test_pipeline
+ eventdev_pipeline_sw_pmd
dist_app
vm_power_management
tep_termination
--
2.7.4
^ permalink raw reply [flat|nested] 64+ messages in thread
* [dpdk-dev] [PATCH v8 3/3] doc: add eventdev library to programmers guide
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 1/3] examples/eventdev_pipeline_sw_pmd: add " David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 2/3] doc: add SW eventdev pipeline to sample app ug David Hunt
@ 2017-07-06 14:35 ` David Hunt
2017-07-07 4:50 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline sample app Jerin Jacob
3 siblings, 0 replies; 64+ messages in thread
From: David Hunt @ 2017-07-06 14:35 UTC (permalink / raw)
To: dev; +Cc: jerin.jacob, harry.van.haaren, David Hunt
From: Harry van Haaren <harry.van.haaren@intel.com>
This commit adds an entry in the programmers guide
explaining the eventdev library.
The rte_event struct, queues and ports are explained.
An API walk-through of a simple two-stage atomic pipeline
provides the reader with a step-by-step overview of the
expected usage of the Eventdev API.
Signed-off-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: David Hunt <david.hunt@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
doc/guides/prog_guide/eventdev.rst | 394 +++++++++++
doc/guides/prog_guide/img/eventdev_usage.svg | 994 +++++++++++++++++++++++++++
doc/guides/prog_guide/index.rst | 1 +
3 files changed, 1389 insertions(+)
create mode 100644 doc/guides/prog_guide/eventdev.rst
create mode 100644 doc/guides/prog_guide/img/eventdev_usage.svg
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
new file mode 100644
index 0000000..908d123
--- /dev/null
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -0,0 +1,394 @@
+.. BSD LICENSE
+ Copyright(c) 2017 Intel Corporation. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+Event Device Library
+====================
+
+The DPDK Event device library is an abstraction that provides the application
+with features to schedule events. This is achieved using the PMD architecture
+similar to the ethdev or cryptodev APIs, which may already be familiar to the
+reader.
+
+The eventdev framework introduces the event driven programming model. In a
+polling model, lcores poll ethdev ports and associated Rx queues directly
+to look for a packet. By contrast, in an event driven model, lcores call the
+scheduler that selects packets for them based on programmer-specified criteria.
+The Eventdev library adds support for an event driven programming model, which
+offers applications automatic multicore scaling, dynamic load balancing,
+pipelining, packet ingress order maintenance and synchronization services to
+simplify application packet processing.
+
+By introducing an event driven programming model, DPDK can support both polling
+and event driven programming models for packet processing, and applications are
+free to choose whatever model (or combination of the two) best suits their
+needs.
+
+A step-by-step walk-through of the eventdev design is available in the `API
+Walk-through`_ section later in this document.
+
+Event struct
+------------
+
+The eventdev API represents each event with a generic struct, which contains a
+payload and metadata required for scheduling by an eventdev. The
+``rte_event`` struct is a 16 byte C structure, defined in
+``lib/librte_eventdev/rte_eventdev.h``.
+
+Event Metadata
+~~~~~~~~~~~~~~
+
+The rte_event structure contains the following metadata fields, which the
+application fills in to have the event scheduled as required:
+
+* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
+* ``event_type`` - The source of this event, e.g. RTE_EVENT_TYPE_ETHDEV or CPU.
+* ``sub_event_type`` - Distinguishes events inside the application that have
+ the same event_type (see above).
+* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
+ eventdev about the status of the event - valid values are NEW, FORWARD or
+ RELEASE.
+* ``sched_type`` - Represents the type of scheduling that should be performed
+ on this event; valid values are RTE_SCHED_TYPE_ORDERED, ATOMIC and
+ PARALLEL.
+* ``queue_id`` - The identifier for the event queue that the event is sent to.
+* ``priority`` - The priority of this event, see RTE_EVENT_DEV_PRIORITY.
+
+Event Payload
+~~~~~~~~~~~~~
+
+The rte_event struct contains a union for payload, allowing flexibility in what
+the actual event being scheduled is. The payload is a union of the following:
+
+* ``uint64_t u64``
+* ``void *event_ptr``
+* ``struct rte_mbuf *mbuf``
+
+These three items in a union occupy the same 64 bits at the end of the rte_event
+structure. The application can use the 64 bits directly by accessing the
+u64 variable, while the event_ptr and mbuf are provided as convenience
+variables. For example the mbuf pointer in the union can be used to schedule a
+DPDK packet.
+
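+As an illustrative sketch (the variable names here are hypothetical), an
+application might fill the union with an mbuf on enqueue and read it back
+after dequeue:
+
+.. code-block:: c
+
+ struct rte_event ev = { 0 };
+ ev.mbuf = rx_mbuf; /* schedule a received DPDK packet */
+
+ /* ... after the event is dequeued elsewhere ... */
+ struct rte_mbuf *m = ev.mbuf; /* read the payload back as an mbuf */
+ uint64_t raw = ev.u64; /* or treat the same 64 bits as raw data */
+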
+Queues
+~~~~~~
+
+An event queue is a queue containing events that are scheduled by the event
+device. An event queue contains events of different flows associated with
+scheduling types, such as atomic, ordered, or parallel.
+
+Queue All Types Capable
+^^^^^^^^^^^^^^^^^^^^^^^
+
+If the RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability bit is set in the event
+device, then events of any type may be sent to any queue. Otherwise, each
+queue only supports events of the type that it was created with.
+
+Queue All Types Incapable
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In this case, each stage has a specified scheduling type. The application
+configures each queue for a specific type of scheduling, and just enqueues all
+events to the eventdev. An example of a PMD of this type is the eventdev
+software PMD.
+
+The Eventdev API supports the following scheduling types per queue:
+
+* Atomic
+* Ordered
+* Parallel
+
+Atomic, Ordered and Parallel are load-balanced scheduling types: the output
+of the queue can be spread out over multiple CPU cores.
+
+Atomic scheduling on a queue ensures that a single flow is not present on two
+different CPU cores at the same time. Ordered allows sending all flows to any
+core, but the scheduler must ensure that on egress the packets are returned to
+ingress order on downstream queue enqueue. Parallel allows sending all flows
+to all CPU cores, without any re-ordering guarantees.
+
+Single Link Flag
+^^^^^^^^^^^^^^^^
+
+There is a SINGLE_LINK flag which allows an application to indicate that only
+one port will be connected to a queue. Queues configured with the single-link
+flag follow a FIFO-like structure, maintaining ordering, but can only be
+linked to a single port (see below for port and queue linking details).
+
+
+Ports
+~~~~~
+
+Ports are the points of contact between worker cores and the eventdev. The
+general use-case will see one CPU core using one port to enqueue and dequeue
+events from an eventdev. Ports are linked to queues in order to retrieve events
+from those queues (more details in `Linking Queues and Ports`_ below).
+
+
+API Walk-through
+----------------
+
+This section will introduce the reader to the eventdev API, showing how to
+create and configure an eventdev and use it for a two-stage atomic pipeline
+with a single core for TX. The diagram below shows the final state of the
+application after this walk-through:
+
+.. _figure_eventdev-usage1:
+
+.. figure:: img/eventdev_usage.*
+
+ Sample eventdev usage, with RX, two atomic stages and a single-link to TX.
+
+
+A high level overview of the setup steps is:
+
+* rte_event_dev_configure()
+* rte_event_queue_setup()
+* rte_event_port_setup()
+* rte_event_port_link()
+* rte_event_dev_start()
+
+
+Init and Config
+~~~~~~~~~~~~~~~
+
+The eventdev library uses vdev options to add devices to the DPDK application.
+The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
+application, using the name of the eventdev PMD as an argument.
+
+For example, to create an instance of the software eventdev scheduler, the
+following vdev arguments should be provided to the application EAL command line:
+
+.. code-block:: console
+
+ ./dpdk_application --vdev="event_sw0"
+
+In the following code, we configure an eventdev instance with 3 queues
+and 6 ports: the 3 queues consist of 2 Atomic queues and 1 Single-Link
+queue, while the 6 ports consist of 4 worker ports, 1 RX port and 1 TX port.
+
+.. code-block:: c
+
+ const struct rte_event_dev_config config = {
+ .nb_event_queues = 3,
+ .nb_event_ports = 6,
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128,
+ };
+ int err = rte_event_dev_configure(dev_id, &config);
+
+The remainder of this walk-through assumes that dev_id is 0.
+
+Setting up Queues
+~~~~~~~~~~~~~~~~~
+
+Once the eventdev itself is configured, the next step is to configure queues.
+This is done by setting the appropriate values in a queue_conf structure, and
+calling the setup function. Repeat this step for each queue, starting from
+0 and ending at ``nb_event_queues - 1`` from the event_dev config above.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf atomic_conf = {
+ .event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ };
+ int dev_id = 0;
+ int queue_id = 0;
+ int err = rte_event_queue_setup(dev_id, queue_id, &atomic_conf);
+
+The remainder of this walk-through assumes that the queues are configured as
+follows:
+
+ * id 0, atomic queue #1
+ * id 1, atomic queue #2
+ * id 2, single-link queue
+
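+As a sketch, the full queue setup for this layout can be written as a loop,
+using the ``atomic_conf`` from above plus a single-link configuration derived
+from it (the error handling here is illustrative):
+
+.. code-block:: c
+
+ struct rte_event_queue_conf single_link_conf = atomic_conf;
+ single_link_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+
+ for (int qid = 0; qid < 3; qid++) {
+ /* queues 0 and 1 are atomic, queue 2 is the single-link queue */
+ const struct rte_event_queue_conf *conf =
+ (qid == 2) ? &single_link_conf : &atomic_conf;
+ if (rte_event_queue_setup(dev_id, qid, conf) < 0)
+ rte_panic("queue %d setup failed\n", qid);
+ }
+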
+Setting up Ports
+~~~~~~~~~~~~~~~~
+
+Once queues are set up successfully, create the ports as required. Each port
+should be set up with its corresponding port_conf type: worker for worker
+cores, and rx and tx for the RX and TX cores:
+
+.. code-block:: c
+
+ struct rte_event_port_conf rx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 1024,
+ };
+ struct rte_event_port_conf worker_conf = {
+ .dequeue_depth = 16,
+ .enqueue_depth = 64,
+ .new_event_threshold = 4096,
+ };
+ struct rte_event_port_conf tx_conf = {
+ .dequeue_depth = 128,
+ .enqueue_depth = 128,
+ .new_event_threshold = 4096,
+ };
+ int dev_id = 0;
+ int port_id = 0;
+ int err = rte_event_port_setup(dev_id, port_id, &CORE_FUNCTION_conf);
+
+It is now assumed that:
+
+ * port 0: RX core
+ * ports 1,2,3,4: Workers
+ * port 5: TX core
+
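+The ``CORE_FUNCTION_conf`` placeholder above can be expanded, as a sketch,
+into a loop over the six ports (the error handling here is illustrative):
+
+.. code-block:: c
+
+ for (int port_id = 0; port_id < 6; port_id++) {
+ /* port 0 is RX, ports 1-4 are workers, port 5 is TX */
+ const struct rte_event_port_conf *conf = &worker_conf;
+ if (port_id == 0)
+ conf = &rx_conf;
+ else if (port_id == 5)
+ conf = &tx_conf;
+ if (rte_event_port_setup(dev_id, port_id, conf) < 0)
+ rte_panic("port %d setup failed\n", port_id);
+ }
+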
+Linking Queues and Ports
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The final step is to "wire up" the ports to the queues. After this, the
+eventdev is capable of scheduling events, and when cores request work to do,
+the correct events are provided to that core. Note that the RX core takes its
+input from e.g. a NIC, so it is not linked to any eventdev queues.
+
+Linking all workers to atomic queues, and the TX core to the single-link queue
+can be achieved like this:
+
+.. code-block:: c
+
+ uint8_t port_id = 0;
+ uint8_t atomic_qs[] = {0, 1};
+ uint8_t single_link_q = 2;
+ uint8_t tx_port_id = 5;
+ uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+ for (int i = 0; i < 4; i++) {
+ int worker_port = i + 1;
+ int links_made = rte_event_port_link(dev_id, worker_port, atomic_qs, NULL, 2);
+ }
+ int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);
+
+Starting the EventDev
+~~~~~~~~~~~~~~~~~~~~~
+
+A single function call tells the eventdev instance to start processing
+events. Note that every queue must be linked to a port before the instance is
+started: if any queue is left unlinked, events enqueued to it are never
+drained, so the application will back-pressure and eventually stall due to
+no space in the eventdev.
+
+.. code-block:: c
+
+ int err = rte_event_dev_start(dev_id);
+
+Ingress of New Events
+~~~~~~~~~~~~~~~~~~~~~
+
+Now that the eventdev is set up, and ready to receive events, the RX core must
+enqueue some events into the system for it to schedule. The events to be
+scheduled are ordinary DPDK packets, received from rte_eth_rx_burst() as normal.
+The following code shows how those packets can be enqueued into the eventdev:
+
+.. code-block:: c
+
+ const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);
+
+ for (i = 0; i < nb_rx; i++) {
+ ev[i].flow_id = mbufs[i]->hash.rss;
+ ev[i].op = RTE_EVENT_OP_NEW;
+ ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+ ev[i].queue_id = 0;
+ ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
+ ev[i].sub_event_type = 0;
+ ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+ ev[i].mbuf = mbufs[i];
+ }
+
+ const int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);
+ if (nb_tx != nb_rx) {
+ for (i = nb_tx; i < nb_rx; i++)
+ rte_pktmbuf_free(mbufs[i]);
+ }
+
+Forwarding of Events
+~~~~~~~~~~~~~~~~~~~~
+
+Now that the RX core has injected events, there is work to be done by the
+workers. Note that each worker will dequeue as many events as it can in a burst,
+process each one individually, and then burst the packets back into the
+eventdev.
+
+The worker can look up the event's source from ``event.queue_id``, which should
+indicate to the worker what workload needs to be performed on the event.
+Once done, the worker can update ``event.queue_id`` to a new value, to send
+the event to the next stage in the pipeline.
+
+.. code-block:: c
+
+ int timeout = 0;
+ struct rte_event events[BATCH_SIZE];
+ uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);
+
+ for (i = 0; i < nb_rx; i++) {
+ /* process mbuf using events[i].queue_id as pipeline stage */
+ struct rte_mbuf *mbuf = events[i].mbuf;
+ /* Send event to next stage in pipeline */
+ events[i].queue_id++;
+ }
+
+ uint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id, events, nb_rx);
+
+
+Egress of Events
+~~~~~~~~~~~~~~~~
+
+Finally, when the packet is ready for egress or needs to be dropped, we need
+to inform the eventdev that the packet is no longer being handled by the
+application. This can be done by calling dequeue() or dequeue_burst(), which
+indicates that the previous burst of packets is no longer in use by the
+application.
+
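+For the TX core in this walk-through, the dequeue-and-transmit cycle can be
+sketched as follows, assuming a configured ethdev port ``eth_tx_port`` and an
+initialized ``tx_buffer`` (both hypothetical); the next dequeue call
+implicitly releases this burst back to the eventdev:
+
+.. code-block:: c
+
+ struct rte_event events[BATCH_SIZE];
+ uint8_t tx_port_id = 5; /* the single-link TX port set up earlier */
+ uint16_t n = rte_event_dequeue_burst(dev_id, tx_port_id, events,
+ BATCH_SIZE, 0);
+ for (i = 0; i < n; i++)
+ rte_eth_tx_buffer(eth_tx_port, 0, tx_buffer, events[i].mbuf);
+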
+An event driven worker thread has the following typical workflow on the fast path:
+
+.. code-block:: c
+
+ while (1) {
+ rte_event_dequeue_burst(...);
+ (event processing)
+ rte_event_enqueue_burst(...);
+ }
+
+
+Summary
+-------
+
+The eventdev library allows an application to easily schedule events as it
+requires, either using a run-to-completion or pipeline processing model. The
+queues and ports abstract the logical functionality of an eventdev, providing
+the application with a generic method to schedule events. Thanks to the
+flexible PMD infrastructure, applications benefit from improvements to
+existing eventdevs, and from the addition of new ones, without modification.
diff --git a/doc/guides/prog_guide/img/eventdev_usage.svg b/doc/guides/prog_guide/img/eventdev_usage.svg
new file mode 100644
index 0000000..7765649
--- /dev/null
+++ b/doc/guides/prog_guide/img/eventdev_usage.svg
@@ -0,0 +1,994 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+
+<svg
+ xmlns:v="http://schemas.microsoft.com/visio/2003/SVGExtensions/"
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:xlink="http://www.w3.org/1999/xlink"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="683.12061"
+ height="184.672"
+ viewBox="0 0 546.49648 147.7376"
+ xml:space="preserve"
+ color-interpolation-filters="sRGB"
+ class="st9"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="eventdev_usage.svg"
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"><metadata
+ id="metadata214"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview212"
+ showgrid="false"
+ fit-margin-top="2"
+ fit-margin-left="2"
+ fit-margin-right="2"
+ fit-margin-bottom="2"
+ inkscape:zoom="1.2339869"
+ inkscape:cx="501.15554"
+ inkscape:cy="164.17693"
+ inkscape:window-x="-8"
+ inkscape:window-y="406"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="g17" />
+ <v:documentProperties
+ v:langID="1033"
+ v:viewMarkup="false">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvSubprocessMaster"
+ v:prompt=""
+ v:val="VT4(Rectangle)" />
+ <v:ud
+ v:nameU="msvNoAutoConnect"
+ v:val="VT0(1):26" />
+ </v:userDefs>
+ </v:documentProperties>
+
+ <style
+ type="text/css"
+ id="style4">
+
+ .st1 {visibility:visible}
+ .st2 {fill:#5b9bd5;fill-opacity:0.22;filter:url(#filter_2);stroke:#5b9bd5;stroke-opacity:0.22}
+ .st3 {fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25}
+ .st4 {fill:#feffff;font-family:Calibri;font-size:0.833336em}
+ .st5 {font-size:1em}
+ .st6 {fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25}
+ .st7 {marker-end:url(#mrkr4-33);stroke:#5b9bd5;stroke-linecap:round;stroke-linejoin:round;stroke-width:1}
+ .st8 {fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-opacity:1;stroke-width:0.28409090909091}
+ .st9 {fill:none;fill-rule:evenodd;font-size:12px;overflow:visible;stroke-linecap:square;stroke-miterlimit:3}
+
+ </style>
+
+ <defs
+ id="Markers">
+ <g
+ id="lend4">
+ <path
+ d="M 2,1 0,0 2,-1 2,1"
+ style="stroke:none"
+ id="path8"
+ inkscape:connector-curvature="0" />
+ </g>
+ <marker
+ id="mrkr4-33"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible">
+ <use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11"
+ x="0"
+ y="0"
+ width="3"
+ height="3" />
+ </marker>
+ <filter
+ id="filter_2-7"
+ color-interpolation-filters="sRGB"><feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15-1" /></filter><marker
+ id="mrkr4-33-2"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-3"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker><marker
+ id="mrkr4-33-6"
+ class="st8"
+ v:arrowType="4"
+ v:arrowSize="2"
+ v:setback="7.04"
+ refX="-7.04"
+ orient="auto"
+ markerUnits="strokeWidth"
+ overflow="visible"
+ style="fill:#5b9bd5;fill-opacity:1;stroke:#5b9bd5;stroke-width:0.28409091;stroke-opacity:1;overflow:visible"><use
+ xlink:href="#lend4"
+ transform="scale(-3.52,-3.52)"
+ id="use11-8"
+ x="0"
+ y="0"
+ width="3"
+ height="3" /></marker></defs>
+ <defs
+ id="Filters">
+ <filter
+ id="filter_2"
+ color-interpolation-filters="sRGB">
+ <feGaussianBlur
+ stdDeviation="2"
+ id="feGaussianBlur15" />
+ </filter>
+ </defs>
+ <g
+ v:mID="0"
+ v:index="1"
+ v:groupContext="foregroundPage"
+ id="g17"
+ transform="translate(-47.323579,-90.784072)">
+ <v:userDefs>
+ <v:ud
+ v:nameU="msvThemeOrder"
+ v:val="VT0(0):26" />
+ </v:userDefs>
+ <title
+ id="title19">Page-1</title>
+ <v:pageProperties
+ v:drawingScale="1"
+ v:pageScale="1"
+ v:drawingUnits="0"
+ v:shadowOffsetX="9"
+ v:shadowOffsetY="-9" />
+ <v:layer
+ v:name="Connector"
+ v:index="0" />
+ <g
+ id="shape1-1"
+ v:mID="1"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,128.62352,-288.18843)">
+ <title
+ id="title22">Square</title>
+ <desc
+ id="desc24">Atomic Queue #1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow1-2"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect27"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect29"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape3-8"
+ v:mID="3"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,297.37175,-288.18843)">
+ <title
+ id="title36">Square.3</title>
+ <desc
+ id="desc38">Atomic Queue #2</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow3-9"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect41"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect43"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape4-15"
+ v:mID="4"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,466.1192,-288.18843)">
+ <title
+ id="title50">Square.4</title>
+ <desc
+ id="desc52">Single Link Queue # 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="30.75"
+ cy="581.25"
+ width="61.5"
+ height="61.5" />
+ <g
+ id="shadow4-16"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st2"
+ id="rect55"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <rect
+ x="0"
+ y="550.5"
+ width="61.5"
+ height="61.5"
+ class="st3"
+ id="rect57"
+ style="fill:#5b9bd5;stroke:#c7c8c8;stroke-width:0.25" />
+
+ </g>
+ <g
+ id="shape5-22"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,52.208527,-296.14701)">
+ <title
+ id="title64">Circle</title>
+ <desc
+ id="desc66">RX</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow5-23"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="15.19"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text73"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />RX</text>
+
+ </g>
+ <g
+ id="shape6-28"
+ v:mID="6"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,84.042834,-305.07614)">
+ <title
+ id="title76">Dynamic connector</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path78"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape7-34"
+ v:mID="7"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-296.14701)">
+ <title
+ id="title81">Circle.7</title>
+ <desc
+ id="desc83">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow7-35"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path86"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path88"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text90"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape9-40"
+ v:mID="9"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-243.34865)">
+ <title
+ id="title93">Circle.9</title>
+ <desc
+ id="desc95">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow9-41"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path98"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path100"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text102"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape10-46"
+ v:mID="10"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,220.95621,-348.94537)">
+ <title
+ id="title105">Circle.10</title>
+ <desc
+ id="desc107">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow10-47"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path110"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path112"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text114"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape11-52"
+ v:mID="11"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,195.91581,-312.06416)">
+ <title
+ id="title117">Dynamic connector.11</title>
+ <path
+ d="m 0,612 0,-68 25.21,0"
+ class="st7"
+ id="path119"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape12-57"
+ v:mID="12"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-305.07614)">
+ <title
+ id="title122">Dynamic connector.12</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path124"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape13-62"
+ v:mID="13"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,176.37498,-312.06416)">
+ <title
+ id="title127">Dynamic connector.13</title>
+ <path
+ d="m 0,612 25.17,0 0,68 25.21,0"
+ class="st7"
+ id="path129"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape14-67"
+ v:mID="14"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-259.2658)">
+ <title
+ id="title132">Dynamic connector.14</title>
+ <path
+ d="m 0,612 26.88,0 0,-68 23.5,0"
+ class="st7"
+ id="path134"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape15-72"
+ v:mID="15"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-305.07614)">
+ <title
+ id="title137">Dynamic connector.15</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path139"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape19-77"
+ v:mID="19"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-296.14701)">
+ <title
+ id="title142">Circle.19</title>
+ <desc
+ id="desc144">W ..</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow19-78"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path147"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path149"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.4"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text151"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W ..</text>
+
+ </g>
+ <g
+ id="shape20-83"
+ v:mID="20"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-243.34865)">
+ <title
+ id="title154">Circle.20</title>
+ <desc
+ id="desc156">W N</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow20-84"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path159"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path161"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="11.69"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text163"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W N</text>
+
+ </g>
+ <g
+ id="shape21-89"
+ v:mID="21"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,389.70366,-348.94537)">
+ <title
+ id="title166">Circle.21</title>
+ <desc
+ id="desc168">W 1</desc>
+ <v:userDefs>
+ <v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" />
+ </v:userDefs>
+ <v:textBlock
+ v:margins="rect(4,4,4,4)" />
+ <v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" />
+ <g
+ id="shadow21-90"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible">
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path171"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2)" />
+ </g>
+ <path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path173"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" />
+ <text
+ x="12.39"
+ y="594.5"
+ class="st4"
+ v:langID="1033"
+ id="text175"
+ style="fill:#feffff;font-family:Calibri"><v:paragraph
+ v:horizAlign="1" /><v:tabList />W 1</text>
+
+ </g>
+ <g
+ id="shape28-95"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-305.07614)">
+ <title
+ id="title178">Dynamic connector.28</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape29-100"
+ v:mID="29"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title183">Dynamic connector.29</title>
+ <path
+ d="m 0,612 28.33,0 0,-68 22.05,0"
+ class="st7"
+ id="path185"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape30-105"
+ v:mID="30"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,345.12321,-312.06416)">
+ <title
+ id="title188">Dynamic connector.30</title>
+ <path
+ d="m 0,612 28.33,0 0,68 22.05,0"
+ class="st7"
+ id="path190"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape31-110"
+ v:mID="31"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-259.2658)">
+ <title
+ id="title193">Dynamic connector.31</title>
+ <path
+ d="m 0,612 24.42,0 0,-68 25.96,0"
+ class="st7"
+ id="path195"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape32-115"
+ v:mID="32"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-305.07614)">
+ <title
+ id="title198">Dynamic connector.32</title>
+ <path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path200"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape33-120"
+ v:mID="33"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,421.53797,-364.86253)">
+ <title
+ id="title203">Dynamic connector.33</title>
+ <path
+ d="m 0,612 24.42,0 0,68 25.96,0"
+ class="st7"
+ id="path205"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <g
+ id="shape34-125"
+ v:mID="34"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,252.79052,-364.86253)">
+ <title
+ id="title208">Dynamic connector.34</title>
+ <path
+ d="m 0,612 26.88,0 0,68 23.5,0"
+ class="st7"
+ id="path210"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" />
+ </g>
+ <text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans"
+ x="153.38116"
+ y="165.90149"
+ id="text3106"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="153.38116"
+ y="165.90149"
+ id="tspan3110"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #1</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="322.12939"
+ y="165.90149"
+ id="text3106-1"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="322.12939"
+ y="165.90149"
+ id="tspan3110-4"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Atomic #2</tspan></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.82089"
+ y="172.79289"
+ id="text3106-0"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.82089"
+ y="172.79289"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans"
+ id="tspan3923" /></text>
+<text
+ xml:space="preserve"
+ style="font-size:24.84628868px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;overflow:visible;font-family:Sans"
+ x="491.02899"
+ y="165.03951"
+ id="text3106-8-5"
+ sodipodi:linespacing="125%"><tspan
+ sodipodi:role="line"
+ x="491.02899"
+ y="165.03951"
+ id="tspan3110-2-1"
+ style="font-size:8.69620132px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;font-family:Sans;-inkscape-font-specification:Sans">Single Link</tspan></text>
+<g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape5-22-1"
+ v:mID="5"
+ v:groupContext="shape"
+ transform="matrix(0.77644652,0,0,0.77644652,556.00223,-296.89447)"><title
+ id="title64-5">Circle</title><desc
+ id="desc66-2">RX</desc><v:userDefs><v:ud
+ v:nameU="visVersion"
+ v:val="VT0(15):26" /></v:userDefs><v:textBlock
+ v:margins="rect(4,4,4,4)" /><v:textRect
+ cx="20.5"
+ cy="591.5"
+ width="35.88"
+ height="30.75" /><g
+ id="shadow5-23-7"
+ v:groupContext="shadow"
+ v:shadowOffsetX="0.345598"
+ v:shadowOffsetY="-1.97279"
+ v:shadowType="1"
+ transform="translate(0.345598,1.97279)"
+ class="st1"
+ style="visibility:visible"><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st2"
+ id="path69-6"
+ inkscape:connector-curvature="0"
+ style="fill:#5b9bd5;fill-opacity:0.22000002;stroke:#5b9bd5;stroke-opacity:0.22000002;filter:url(#filter_2-7)" /></g><path
+ d="m 0,591.5 a 20.5,20.5 0 0 1 41,0 20.5,20.5 0 1 1 -41,0 z"
+ class="st6"
+ id="path71-1"
+ inkscape:connector-curvature="0"
+ style="fill:#ffd965;stroke:#c7c8c8;stroke-width:0.25" /><text
+ x="11.06866"
+ y="596.56067"
+ class="st4"
+ v:langID="1033"
+ id="text73-4"
+ style="fill:#feffff;font-family:Calibri"> TX</text>
+</g><g
+ style="font-size:12px;fill:none;stroke-linecap:square;stroke-miterlimit:3;overflow:visible"
+ id="shape28-95-5"
+ v:mID="28"
+ v:groupContext="shape"
+ v:layerMember="0"
+ transform="matrix(0.77644652,0,0,0.77644652,512.00213,-305.42637)"><title
+ id="title178-7">Dynamic connector.28</title><path
+ d="m 0,603 50.38,0"
+ class="st7"
+ id="path180-6"
+ inkscape:connector-curvature="0"
+ style="stroke:#5b9bd5;stroke-width:1;stroke-linecap:round;stroke-linejoin:round;marker-end:url(#mrkr4-33)" /></g></g>
+</svg>
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ef5a02a..7578395 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -57,6 +57,7 @@ Programmer's Guide
multi_proc_support
kernel_nic_interface
thread_safety_dpdk_functions
+ eventdev
qos_framework
power_man
packet_classif_access_ctrl
--
2.7.4
* Re: [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline sample app
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline " David Hunt
` (2 preceding siblings ...)
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 3/3] doc: add eventdev library to programmers guide David Hunt
@ 2017-07-07 4:50 ` Jerin Jacob
3 siblings, 0 replies; 64+ messages in thread
From: Jerin Jacob @ 2017-07-07 4:50 UTC (permalink / raw)
To: David Hunt; +Cc: dev, harry.van.haaren
-----Original Message-----
> Date: Thu, 6 Jul 2017 15:35:13 +0100
> From: David Hunt <david.hunt@intel.com>
> To: dev@dpdk.org
> CC: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com
> Subject: [PATCH v8 0/3] next-eventdev: evendev pipeline sample app
> X-Mailer: git-send-email 2.7.4
>
> This patchset introduces a sample application that demonstrates
> a pipeline model for packet processing. Running this sample app
> with 17.05-rc2 or later is recommended.
>
> Changes in patch v2:
> * None, incorrect patch upload
>
> Changes in patch v3:
> * Re-work based on comments on mailing list. No major functional changes.
> * Checkpatch cleanup of a couple of typos
>
> Changes in patch v4:
> * Re-named the app as eventdev_pipeline_sw, as it's aimed at showing the
> functionality of the software PMD.
>
> Changes in patch v5:
> * Fixed make doc. eventdev_pipeline to eventdev_pipeline_sw
> * Fixed some typos in the eventdev programmers guide
>
> Changes in patch v6:
> * made name of dirs and app consistent - eventdev_pipeline_sw_pmd
> * Added new files and dirs to MAINTAINERS
> * Updated libeventdev docs based on Jerin's feedback
> * Added some cleanup to eventdev_pipeline sw pmd sample app
>
> Changes in patch v7:
> * Cleaned global vars into structs
> * Changed mac address swap so that packet is not changed
> * Cleaned up report on exit if some stats not supported
> * Added .rst file dropped in v6
>
> Changes in patch v8:
> * Changed 'sw' to 'SW' in git log msg
> * Changed 'eventdev_pipeline' to 'eventdev_pipeline_sw_pmd' in git log msg
Series applied to dpdk-next-eventdev/master. Thanks.
>
> The sample app itself allows configuration of various pipelines using
> command line arguments. Parameters like number of stages, number of
> worker cores, which cores are assigned to specific tasks, and work-
> cycles per-stage in the pipeline can be configured.
>
> Documentation for eventdev is added for the programmers guide and
> sample app user guide, providing sample commands to run the app with,
> and expected output.
>
> [1/3] examples/eventdev_pipeline_sw_pmd: add sample app
> [2/3] doc: add SW eventdev pipeline to sample app ug
> [3/3] doc: add eventdev library to programmers guide
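The pipeline parameters the cover letter describes (stage count, worker cores, per-task core assignment, work-cycles per stage) map onto command-line flags of the sample app. A hypothetical invocation is sketched below; the exact flag letters and the `event_sw0` vdev name are assumptions based on the SW PMD sample app documentation, not confirmed by this thread, and the core masks must be adapted to the local machine:

```shell
# Run the SW eventdev pipeline sample app (flag names assumed, verify
# against the sample app user guide for the DPDK release in use):
#   --vdev event_sw0  instantiate the software eventdev PMD
#   -r1 / -t1         core masks for the RX and TX cores
#   -e4               core mask for the scheduler core
#   -w FF00           core mask for the worker cores
#   -s4               number of pipeline stages
#   -W1000            work cycles spent per stage per packet
#   -D                dump statistics on exit
./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- \
    -r1 -t1 -e4 -w FF00 -s4 -W1000 -D
```

Being a DPDK application, this requires bound NICs and hugepage setup before it will start; it is shown here only to make the configuration surface of the app concrete.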
Thread overview: 64+ messages
2017-04-21 9:51 [dpdk-dev] [PATCH 0/3] next-eventdev: RFC evendev pipeline sample app Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 1/3] examples/eventdev_pipeline: added " Harry van Haaren
2017-05-10 14:12 ` Jerin Jacob
2017-05-10 16:40 ` Eads, Gage
2017-05-10 20:16 ` Eads, Gage
2017-06-26 14:46 ` Hunt, David
2017-06-27 9:35 ` Jerin Jacob
2017-06-27 13:12 ` Hunt, David
2017-06-29 7:17 ` Jerin Jacob
2017-06-29 12:51 ` Hunt, David
2017-05-17 18:03 ` Jerin Jacob
2017-05-18 10:13 ` Bruce Richardson
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 1/3] examples/eventdev_pipeline: added " David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 1/3] examples/eventdev_pipeline: added " David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 1/3] examples/eventdev_pipeline: added " David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 0/3] next-eventdev: evendev pipeline " David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 1/3] examples/eventdev_pipeline: added " David Hunt
2017-07-03 3:57 ` Jerin Jacob
2017-07-04 7:55 ` Hunt, David
2017-07-05 5:30 ` Jerin Jacob
2017-07-05 11:15 ` Hunt, David
2017-07-06 3:31 ` Jerin Jacob
2017-07-06 10:04 ` Hunt, David
2017-07-06 10:39 ` Hunt, David
2017-07-06 13:26 ` Hunt, David
2017-07-06 13:38 ` Jerin Jacob
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 1/3] examples/eventdev_pipeline: added " David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 1/3] examples/eventdev_pipeline: added " David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline " David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 1/3] examples/eventdev_pipeline_sw_pmd: add " David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 2/3] doc: add SW eventdev pipeline to sample app ug David Hunt
2017-07-06 14:35 ` [dpdk-dev] [PATCH v8 3/3] doc: add eventdev library to programmers guide David Hunt
2017-07-07 4:50 ` [dpdk-dev] [PATCH v8 0/3] next-eventdev: evendev pipeline sample app Jerin Jacob
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-07-05 12:52 ` [dpdk-dev] [PATCH v7 3/3] doc: add eventdev library to programmers guide David Hunt
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-07-05 4:30 ` Jerin Jacob
2017-07-04 8:14 ` [dpdk-dev] [PATCH v6 3/3] doc: add eventdev library to programmers guide David Hunt
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-06-30 14:37 ` Mcnamara, John
2017-07-03 5:37 ` Jerin Jacob
2017-07-03 9:25 ` Hunt, David
2017-07-03 9:32 ` Jerin Jacob
2017-07-04 8:20 ` Hunt, David
2017-06-30 13:51 ` [dpdk-dev] [PATCH v5 3/3] doc: add eventdev library to programmers guide David Hunt
2017-06-30 14:38 ` Mcnamara, John
2017-07-02 12:08 ` Jerin Jacob
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 2/3] doc: add sw eventdev pipeline to sample app ug David Hunt
2017-06-30 12:25 ` Mcnamara, John
2017-06-29 15:49 ` [dpdk-dev] [PATCH v4 3/3] doc: add eventdev library to programmers guide David Hunt
2017-06-30 12:26 ` Mcnamara, John
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 2/3] doc: add eventdev pipeline to sample app ug David Hunt
2017-06-27 12:54 ` [dpdk-dev] [PATCH v3 3/3] doc: add eventdev library to programmers guide David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 2/3] doc: add eventdev pipeline to sample app ug David Hunt
2017-06-26 14:41 ` [dpdk-dev] [PATCH v2 3/3] doc: add eventdev library to programmers guide David Hunt
2017-04-21 9:51 ` [dpdk-dev] [PATCH 2/3] doc: add eventdev pipeline to sample app ug Harry van Haaren
2017-04-21 9:51 ` [dpdk-dev] [PATCH 3/3] doc: add eventdev library to programmers guide Harry van Haaren
2017-04-21 11:14 ` Bruce Richardson
2017-04-21 14:00 ` Jerin Jacob