* [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example
@ 2019-09-19 9:25 pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
` (8 more replies)
0 siblings, 9 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:25 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal
Cc: dev, Pavan Nikhilesh, Sunil Kumar Kori
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds a new application to demonstrate the usage of the event
device (eventdev) mode. Poll mode is also available to ease the transition.
The following new command-line parameters are added (an example invocation
is shown after the list):
--mode: Selects the mode of operation, either poll or eventdev.
--eventq-sync: Selects the event queue synchronization mode, either atomic
or ordered.
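For illustration, a possible invocation combining these options might look
as follows; the EAL core list, the port mask and the build path are
placeholders and not prescribed by this series:

	./build/l2fwd-event -l 0-3 -- -p 0x3 --mode=eventdev --eventq-sync=atomic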
Based on the event device capabilities, the configuration is done as follows:
- A single event device is enabled.
- The number of event ports is equal to the number of worker
cores enabled in the core mask. Additional event ports might
be configured based on Rx/Tx adapter capability.
- The number of event queues is equal to the number of Ethernet
ports. If the Tx adapter does not have internal port capability, an
additional single-link event queue is used to enqueue events
to the Tx adapter.
- Each event port is linked to all existing event queues (see the sketch
after this list).
- Dedicated Rx/Tx adapters are used for each Ethernet port.
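A minimal sketch of the port-to-queue linking described above, assuming an
already configured event device; the identifiers, the helper name
link_worker_ports() and the error handling are illustrative and not taken
from this series:

	#include <stdio.h>
	#include <rte_common.h>
	#include <rte_eventdev.h>

	/* Link every worker event port to all event queues. */
	static int
	link_worker_ports(uint8_t event_dev_id, uint8_t nb_worker_ports)
	{
		uint8_t evq_ids[] = {0, 1}; /* e.g. one event queue per Ethernet port */
		uint8_t p;

		for (p = 0; p < nb_worker_ports; p++) {
			/* NULL priorities => normal priority for every link. */
			int ret = rte_event_port_link(event_dev_id, p, evq_ids,
						      NULL, RTE_DIM(evq_ids));
			if (ret != (int)RTE_DIM(evq_ids)) {
				printf("Failed to link event port %u\n", p);
				return -1;
			}
		}
		return 0;
	}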
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Pavan Nikhilesh (5):
examples/l2fwd-event: add infra to split eventdev framework
examples/l2fwd-event: add eventdev queue and port setup
examples/l2fwd-event: add event Rx/Tx adapter setup
examples/l2fwd-event: add eventdev main loop
examples/l2fwd-event: add graceful teardown
Sunil Kumar Kori (5):
examples/l2fwd-event: add default poll mode routines
examples/l2fwd-event: add infra for eventdev
examples/l2fwd-event: add eth port setup for eventdev
examples/l2fwd-event: add service core setup
doc: add application usage guide for l2fwd-event
MAINTAINERS | 5 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
.../l2_forward_event_real_virtual.rst | 799 +++++++++++++++++
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 60 ++
examples/l2fwd-event/l2fwd_common.h | 26 +
examples/l2fwd-event/l2fwd_eventdev.c | 536 ++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 118 +++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 349 ++++++++
.../l2fwd_eventdev_internal_port.c | 281 ++++++
examples/l2fwd-event/main.c | 804 ++++++++++++++++++
examples/l2fwd-event/meson.build | 15 +
examples/l2fwd/main.c | 10 +-
14 files changed, 3005 insertions(+), 5 deletions(-)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_internal_port.c
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
@ 2019-09-19 9:25 ` pbhagavatula
2019-09-19 9:43 ` Sunil Kumar Kori
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
` (7 subsequent siblings)
8 siblings, 2 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:25 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add the default l2fwd poll mode routines similar to examples/l2fwd.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 57 +++
examples/l2fwd-event/l2fwd_common.h | 26 +
examples/l2fwd-event/main.c | 737 ++++++++++++++++++++++++++++
examples/l2fwd-event/meson.build | 12 +
examples/l2fwd/main.c | 10 +-
6 files changed, 838 insertions(+), 5 deletions(-)
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
diff --git a/examples/Makefile b/examples/Makefile
index de11dd487..d18504bd2 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -34,6 +34,7 @@ endif
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += ipv4_multicast
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += kni
DIRS-y += l2fwd
+DIRS-y += l2fwd-event
ifneq ($(PQOS_INSTALL_PATH),)
DIRS-y += l2fwd-cat
endif
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
new file mode 100644
index 000000000..a156c4162
--- /dev/null
+++ b/examples/l2fwd-event/Makefile
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# binary name
+APP = l2fwd-event
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+# Build using pkg-config variables if possible
+ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
+
+all: shared
+.PHONY: shared static
+shared: build/$(APP)-shared
+ ln -sf $(APP)-shared build/$(APP)
+static: build/$(APP)-static
+ ln -sf $(APP)-static build/$(APP)
+
+PKGCONF=pkg-config --define-prefix
+
+PC_FILE := $(shell $(PKGCONF) --path libdpdk)
+CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)
+LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
+LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
+
+build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
+
+build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
+
+build:
+ @mkdir -p $@
+
+.PHONY: clean
+clean:
+ rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
+ test -d build && rmdir -p build || true
+
+else # Build using legacy build system
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, detect a build directory, by looking for a path with a .config
+RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
+endif
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
new file mode 100644
index 000000000..b0ef49144
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_COMMON_H__
+#define __L2FWD_COMMON_H__
+
+#define MAX_PKT_BURST 32
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+ uint64_t dropped;
+ uint64_t tx;
+ uint64_t rx;
+} __rte_cache_aligned;
+
+void print_stats(void);
+
+#endif /* __L2FWD_COMMON_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
new file mode 100644
index 000000000..cc47fa203
--- /dev/null
+++ b/examples/l2fwd-event/main.c
@@ -0,0 +1,737 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+#include <signal.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+static volatile bool force_quit;
+
+/* MAC updating enabled by default */
+static int mac_updating = 1;
+
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+#define MEMPOOL_CACHE_SIZE 256
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct rte_ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint32_t l2fwd_enabled_port_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+static unsigned int l2fwd_rx_queue_per_lcore = 1;
+
+struct lcore_queue_conf {
+ uint32_t rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ uint32_t n_rx_port;
+} __rte_cache_aligned;
+
+static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .split_hdr_size = 0,
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+};
+
+static struct rte_mempool *l2fwd_pktmbuf_pool;
+
+static struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+/* A tsc-based timer responsible for triggering statistics printout */
+static uint64_t timer_period = 10; /* default period is 10 seconds */
+
+/* Print out statistics on packets dropped */
+void print_stats(void)
+{
+ uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+ uint32_t portid;
+
+ total_packets_dropped = 0;
+ total_packets_tx = 0;
+ total_packets_rx = 0;
+
+ const char clr[] = {27, '[', '2', 'J', '\0' };
+ const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' };
+
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+
+ printf("\nPort statistics ====================================");
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+ /* skip disabled ports */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("\nStatistics for port %u ------------------------------"
+ "\nPackets sent: %24"PRIu64
+ "\nPackets received: %20"PRIu64
+ "\nPackets dropped: %21"PRIu64,
+ portid,
+ port_statistics[portid].tx,
+ port_statistics[portid].rx,
+ port_statistics[portid].dropped);
+
+ total_packets_dropped += port_statistics[portid].dropped;
+ total_packets_tx += port_statistics[portid].tx;
+ total_packets_rx += port_statistics[portid].rx;
+ }
+ printf("\nAggregate statistics ==============================="
+ "\nTotal packets sent: %18"PRIu64
+ "\nTotal packets received: %14"PRIu64
+ "\nTotal packets dropped: %15"PRIu64,
+ total_packets_tx,
+ total_packets_rx,
+ total_packets_dropped);
+ printf("\n====================================================\n");
+}
+
+static void
+l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_portid)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = &eth->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+{
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+}
+
+/* main processing loop */
+static void l2fwd_main_loop(void)
+{
+ uint64_t prev_tsc, diff_tsc, cur_tsc, timer_tsc, drain_tsc;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_eth_dev_tx_buffer *buffer;
+ struct lcore_queue_conf *qconf;
+ uint32_t i, j, portid, nb_rx;
+ struct rte_mbuf *m;
+ uint32_t lcore_id;
+ int32_t sent;
+
+ drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S *
+ BURST_TX_DRAIN_US;
+ prev_tsc = 0;
+ timer_tsc = 0;
+
+ lcore_id = rte_lcore_id();
+ qconf = &lcore_queue_conf[lcore_id];
+
+ if (qconf->n_rx_port == 0) {
+ RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+ return;
+ }
+
+ RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ portid = qconf->rx_port_list[i];
+ RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+ portid);
+
+ }
+
+ while (!force_quit) {
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid =
+ l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id ==
+ rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+ /*
+ * Read packet from RX queues
+ */
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst(portid, 0,
+ pkts_burst, MAX_PKT_BURST);
+
+ port_statistics[portid].rx += nb_rx;
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+ }
+}
+
+static int
+l2fwd_launch_one_lcore(void *args)
+{
+ RTE_SET_USED(args);
+ l2fwd_main_loop();
+
+ return 0;
+}
+
+/* display usage */
+static void
+l2fwd_usage(const char *prgname)
+{
+ printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
+ " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+ " -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+ " -T PERIOD: statistics will be refreshed each PERIOD seconds "
+ " (0 to disable, 10 default, 86400 maximum)\n"
+ " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
+ " When enabled:\n"
+ " - The source MAC address is replaced by the TX port MAC address\n"
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ prgname);
+}
+
+static int
+l2fwd_parse_portmask(const char *portmask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(portmask, &end, 16);
+ if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (pm == 0)
+ return -1;
+
+ return pm;
+}
+
+static unsigned int
+l2fwd_parse_nqueue(const char *q_arg)
+{
+ char *end = NULL;
+ unsigned long n;
+
+ /* parse hexadecimal string */
+ n = strtoul(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+ if (n == 0)
+ return 0;
+ if (n >= MAX_RX_QUEUE_PER_LCORE)
+ return 0;
+
+ return n;
+}
+
+static int
+l2fwd_parse_timer_period(const char *q_arg)
+{
+ char *end = NULL;
+ int n;
+
+ /* parse number string */
+ n = strtol(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+ if (n >= MAX_TIMER_PERIOD)
+ return -1;
+
+ return n;
+}
+
+static const char short_options[] =
+ "p:" /* portmask */
+ "q:" /* number of queues */
+ "T:" /* timer period */
+ ;
+
+#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
+#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+
+enum {
+ /* long options mapped to a short option */
+
+ /* first long only option value must be >= 256, so that we won't
+ * conflict with short options
+ */
+ CMD_LINE_OPT_MIN_NUM = 256,
+};
+
+static const struct option lgopts[] = {
+ { CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
+ { CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ {NULL, 0, 0, 0}
+};
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_parse_args(int argc, char **argv)
+{
+ int opt, ret, timer_secs;
+ char *prgname = argv[0];
+ char **argvopt;
+ int option_index;
+
+ argvopt = argv;
+
+ while ((opt = getopt_long(argc, argvopt, short_options,
+ lgopts, &option_index)) != EOF) {
+
+ switch (opt) {
+ /* portmask */
+ case 'p':
+ l2fwd_enabled_port_mask = l2fwd_parse_portmask(optarg);
+ if (l2fwd_enabled_port_mask == 0) {
+ printf("invalid portmask\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* nqueue */
+ case 'q':
+ l2fwd_rx_queue_per_lcore = l2fwd_parse_nqueue(optarg);
+ if (l2fwd_rx_queue_per_lcore == 0) {
+ printf("invalid queue number\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* timer period */
+ case 'T':
+ timer_secs = l2fwd_parse_timer_period(optarg);
+ if (timer_secs < 0) {
+ printf("invalid timer period\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ timer_period = timer_secs;
+ break;
+
+ /* long options */
+ case 0:
+ break;
+
+ default:
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ }
+
+ if (optind >= 0)
+ argv[optind-1] = prgname;
+
+ ret = optind-1;
+ optind = 1; /* reset getopt lib */
+ return ret;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status...");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ if (force_quit)
+ return;
+ all_ports_up = 1;
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (force_quit)
+ return;
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(portid, &link);
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status)
+ printf(
+ "Port%d Link Up. Speed %u Mbps - %s\n",
+ portid, link.link_speed,
+ (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ ("full-duplex") : ("half-duplex\n"));
+ else
+ printf("Port %d Link Down\n", portid);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+ print_flag = 1;
+ printf("done\n");
+ }
+ }
+}
+
+static void
+signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ force_quit = true;
+ }
+}
+
+int
+main(int argc, char **argv)
+{
+ uint16_t nb_ports_available = 0;
+ struct lcore_queue_conf *qconf;
+ uint32_t nb_ports_in_mask = 0;
+ uint16_t portid, last_port;
+ uint32_t nb_lcores = 0;
+ uint32_t rx_lcore_id;
+ uint32_t nb_mbufs;
+ uint16_t nb_ports;
+ int ret;
+
+ /* init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+ argc -= ret;
+ argv += ret;
+
+ force_quit = false;
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+
+ /* parse application arguments (after the EAL ones) */
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+
+ printf("MAC updating %s\n", mac_updating ? "enabled" : "disabled");
+
+ /* convert to number of cycles */
+ timer_period *= rte_get_timer_hz();
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports == 0)
+ rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");
+
+ /* check port mask to possible port mask */
+ if (l2fwd_enabled_port_mask & ~((1 << nb_ports) - 1))
+ rte_exit(EXIT_FAILURE, "Invalid portmask; possible (0x%x)\n",
+ (1 << nb_ports) - 1);
+
+ /* reset l2fwd_dst_ports */
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ } else {
+ last_port = portid;
+ }
+
+ nb_ports_in_mask++;
+ }
+ if (nb_ports_in_mask % 2) {
+ printf("Notice: odd number of ports in portmask.\n");
+ l2fwd_dst_ports[last_port] = last_port;
+ }
+
+
+ rx_lcore_id = 0;
+ qconf = NULL;
+
+ nb_mbufs = RTE_MAX(nb_ports * (nb_rxd + nb_txd + MAX_PKT_BURST +
+ nb_lcores * MEMPOOL_CACHE_SIZE), 8192U);
+
+ /* create the mbuf pool */
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs,
+ MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+
+ /* Initialize the port/queue configuration of each logical core */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ /* get the lcore_id for this port */
+ while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+ lcore_queue_conf[rx_lcore_id].n_rx_port ==
+ l2fwd_rx_queue_per_lcore) {
+ rx_lcore_id++;
+ if (rx_lcore_id >= RTE_MAX_LCORE)
+ rte_exit(EXIT_FAILURE, "Not enough cores\n");
+ }
+
+ if (qconf != &lcore_queue_conf[rx_lcore_id]) {
+ /* Assigned a new logical core in the loop above. */
+ qconf = &lcore_queue_conf[rx_lcore_id];
+ nb_lcores++;
+ }
+
+ qconf->rx_port_list[qconf->n_rx_port] = portid;
+ qconf->n_rx_port++;
+ printf("Lcore %u: RX port %u\n", rx_lcore_id, portid);
+ }
+
+
+ /* Initialise each port */
+ RTE_ETH_FOREACH_DEV(portid) {
+ struct rte_eth_rxconf rxq_conf;
+ struct rte_eth_txconf txq_conf;
+ struct rte_eth_conf local_port_conf = port_conf;
+ struct rte_eth_dev_info dev_info;
+
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) {
+ printf("Skipping disabled port %u\n", portid);
+ continue;
+ }
+ nb_ports_available++;
+
+ /* init port */
+ printf("Initializing port %u... ", portid);
+ fflush(stdout);
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n",
+ ret, portid);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, port=%u\n",
+ ret, portid);
+
+ rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+ /* init one RX queue */
+ fflush(stdout);
+ rxq_conf = dev_info.default_rxconf;
+ rxq_conf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+ rte_eth_dev_socket_id(portid),
+ &rxq_conf,
+ l2fwd_pktmbuf_pool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
+ ret, portid);
+
+ /* init one TX queue on each port */
+ fflush(stdout);
+ txq_conf = dev_info.default_txconf;
+ txq_conf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid),
+ &txq_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, portid);
+
+ /* Initialize TX buffers */
+ tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
+ RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
+ rte_eth_dev_socket_id(portid));
+ if (tx_buffer[portid] == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot allocate buffer for tx on port %u\n",
+ portid);
+
+ rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
+
+ ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid],
+ rte_eth_tx_buffer_count_callback,
+ &port_statistics[portid].dropped);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot set error callback for tx buffer on port %u\n",
+ portid);
+
+ /* Start device */
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_dev_start:err=%d, port=%u\n",
+ ret, portid);
+
+ printf("done:\n");
+
+ rte_eth_promiscuous_enable(portid);
+
+ printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+ portid,
+ l2fwd_ports_eth_addr[portid].addr_bytes[0],
+ l2fwd_ports_eth_addr[portid].addr_bytes[1],
+ l2fwd_ports_eth_addr[portid].addr_bytes[2],
+ l2fwd_ports_eth_addr[portid].addr_bytes[3],
+ l2fwd_ports_eth_addr[portid].addr_bytes[4],
+ l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+ /* initialize port stats */
+ memset(&port_statistics, 0, sizeof(port_statistics));
+ }
+
+ if (!nb_ports_available) {
+ rte_exit(EXIT_FAILURE,
+ "All available ports are disabled. Please set portmask.\n");
+ }
+
+ check_all_ports_link_status(l2fwd_enabled_port_mask);
+
+ ret = 0;
+ /* launch per-lcore init on every lcore */
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL,
+ CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("Closing port %d...", portid);
+ rte_eth_dev_stop(portid);
+ rte_eth_dev_close(portid);
+ printf(" Done\n");
+ }
+ printf("Bye...\n");
+
+ return ret;
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
new file mode 100644
index 000000000..16eadb0b4
--- /dev/null
+++ b/examples/l2fwd-event/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# meson file, for building this example as part of a main DPDK build.
+#
+# To build this example as a standalone application with an already-installed
+# DPDK instance, use 'make'
+
+sources = files(
+ 'main.c'
+)
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 1e2b14297..f6d3d2cd7 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -294,11 +294,11 @@ l2fwd_usage(const char *prgname)
printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
" -p PORTMASK: hexadecimal bitmask of ports to configure\n"
" -q NQ: number of queue (=ports) per lcore (default is 1)\n"
- " -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default, 86400 maximum)\n"
- " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
- " When enabled:\n"
- " - The source MAC address is replaced by the TX port MAC address\n"
- " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ " -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default, 86400 maximum)\n"
+ " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
+ " When enabled:\n"
+ " - The source MAC address is replaced by the TX port MAC address\n"
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
prgname);
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 02/10] examples/l2fwd-event: add infra for eventdev
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-09-19 9:25 ` pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
` (6 subsequent siblings)
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:25 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add infra to select the event device as the packet-processing mode through
command-line arguments. Also, allow the user to select the schedule type,
either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
Usage:
`--mode="eventdev"` or `--mode="poll"`
`--eventq-sync="ordered"` or `--eventq-sync="atomic"`
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/Makefile | 1 +
examples/l2fwd-event/l2fwd_eventdev.c | 107 ++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 62 +++++++++++++++
examples/l2fwd-event/main.c | 22 +++++-
examples/l2fwd-event/meson.build | 3 +-
5 files changed, 192 insertions(+), 3 deletions(-)
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index a156c4162..bfe0058a2 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -7,6 +7,7 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
+SRCS-y += l2fwd_eventdev.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
new file mode 100644
index 000000000..19efb6d1e
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+static void
+parse_mode(const char *optarg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+
+ if (!strncmp(optarg, "poll", 4))
+ eventdev_rsrc->enabled = false;
+ else if (!strncmp(optarg, "eventdev", 8))
+ eventdev_rsrc->enabled = true;
+}
+
+static void
+parse_eventq_sync(const char *optarg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+
+ if (!strncmp(optarg, "ordered", 7))
+ eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ORDERED;
+ else if (!strncmp(optarg, "atomic", 6))
+ eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ATOMIC;
+}
+
+static int
+parse_eventdev_args(char **argv, int argc)
+{
+ const struct option eventdev_lgopts[] = {
+ {CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
+ {CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
+ {NULL, 0, 0, 0}
+ };
+ char **argvopt = argv;
+ int32_t option_index;
+ int32_t opt;
+
+ while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts,
+ &option_index)) != EOF) {
+ switch (opt) {
+ case CMD_LINE_OPT_MODE_NUM:
+ parse_mode(optarg);
+ break;
+
+ case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
+ parse_eventq_sync(optarg);
+ break;
+
+ case '?':
+ /* skip other parameters except eventdev specific */
+ break;
+
+ default:
+ printf("Invalid eventdev parameter\n");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+void
+eventdev_resource_setup(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t service_id;
+ int32_t ret;
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(eventdev_rsrc->args, eventdev_rsrc->nb_args);
+ if (ret < 0)
+ return;
+
+ if (!rte_event_dev_count())
+ rte_exit(EXIT_FAILURE, "No Eventdev found");
+ /* Start event device service */
+ ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->service_id = service_id;
+
+ /* Start event device */
+ ret = rte_event_dev_start(eventdev_rsrc->event_d_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+}
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
new file mode 100644
index 000000000..f823cf6e9
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_EVENTDEV_H__
+#define __L2FWD_EVENTDEV_H__
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sync"
+
+enum {
+ CMD_LINE_OPT_MODE_NUM = 265,
+ CMD_LINE_OPT_EVENTQ_SYNC_NUM,
+};
+
+struct eventdev_resources {
+ struct l2fwd_port_statistics *stats;
+ struct rte_mempool *pkt_pool;
+ uint64_t timer_period;
+ uint32_t *dst_ports;
+ uint32_t service_id;
+ uint32_t port_mask;
+ volatile bool *done;
+ uint8_t event_d_id;
+ uint8_t sync_mode;
+ uint8_t tx_mode_q;
+ uint8_t mac_updt;
+ uint8_t enabled;
+ uint8_t nb_args;
+ char **args;
+};
+
+static inline struct eventdev_resources *
+get_eventdev_rsrc(void)
+{
+ const char name[RTE_MEMZONE_NAMESIZE] = "l2fwd_event_rsrc";
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+
+ if (mz != NULL)
+ return mz->addr;
+
+ mz = rte_memzone_reserve(name, sizeof(struct eventdev_resources), 0, 0);
+ if (mz != NULL) {
+ memset(mz->addr, 0, sizeof(struct eventdev_resources));
+ return mz->addr;
+ }
+
+ rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
+
+ return NULL;
+}
+
+void eventdev_resource_setup(void);
+
+#endif /* __L2FWD_EVENTDEV_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index cc47fa203..661f0833f 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -42,6 +42,7 @@
#include <rte_spinlock.h>
#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
static volatile bool force_quit;
@@ -288,7 +289,12 @@ l2fwd_usage(const char *prgname)
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
" When enabled:\n"
" - The source MAC address is replaced by the TX port MAC address\n"
- " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n"
+ " --mode: Packet transfer mode for I/O, poll or eventdev\n"
+ " Default mode = eventdev\n"
+ " --eventq-sync:Event queue synchronization method,\n"
+ " ordered or atomic.\nDefault: atomic\n"
+ " Valid only if --mode=eventdev\n\n",
prgname);
}
@@ -503,6 +509,7 @@ signal_handler(int signum)
int
main(int argc, char **argv)
{
+ struct eventdev_resources *eventdev_rsrc;
uint16_t nb_ports_available = 0;
struct lcore_queue_conf *qconf;
uint32_t nb_ports_in_mask = 0;
@@ -524,6 +531,7 @@ main(int argc, char **argv)
signal(SIGINT, signal_handler);
signal(SIGTERM, signal_handler);
+ eventdev_rsrc = get_eventdev_rsrc();
/* parse application arguments (after the EAL ones) */
ret = l2fwd_parse_args(argc, argv);
if (ret < 0)
@@ -584,6 +592,17 @@ main(int argc, char **argv)
if (l2fwd_pktmbuf_pool == NULL)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+ eventdev_rsrc->port_mask = l2fwd_enabled_port_mask;
+ eventdev_rsrc->pkt_pool = l2fwd_pktmbuf_pool;
+ eventdev_rsrc->dst_ports = l2fwd_dst_ports;
+ eventdev_rsrc->timer_period = timer_period;
+ eventdev_rsrc->mac_updt = mac_updating;
+ eventdev_rsrc->stats = port_statistics;
+ eventdev_rsrc->done = &force_quit;
+
+ /* Configure eventdev parameters if user has requested */
+ eventdev_resource_setup();
+
/* Initialize the port/queue configuration of each logical core */
RTE_ETH_FOREACH_DEV(portid) {
/* skip ports that are not enabled */
@@ -610,7 +629,6 @@ main(int argc, char **argv)
printf("Lcore %u: RX port %u\n", rx_lcore_id, portid);
}
-
/* Initialise each port */
RTE_ETH_FOREACH_DEV(portid) {
struct rte_eth_rxconf rxq_conf;
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index 16eadb0b4..b1ad48cc5 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -8,5 +8,6 @@
# DPDK instance, use 'make'
sources = files(
- 'main.c'
+ 'main.c',
+ 'l2fwd_eventdev.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 03/10] examples/l2fwd-event: add infra to split eventdev framework
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-09-19 9:25 ` pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 04/10] examples/l2fwd-event: add eth port setup for eventdev pbhagavatula
` (5 subsequent siblings)
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:25 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add infra to split the eventdev framework based on the event Tx adapter
capability.
If the event Tx adapter has internal port capability, we use
`rte_event_eth_tx_adapter_enqueue` to transmit packets; otherwise,
we use a SINGLE_LINK event queue to enqueue packets to a service
core which is responsible for transmitting them.
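A minimal sketch of the capability check that drives this split; the helper
name decide_tx_path() and its return convention are illustrative, only the
capability API usage mirrors this patch:

	#include <rte_ethdev.h>
	#include <rte_event_eth_tx_adapter.h>

	/* Return 1 if any Ethernet port lacks the internal Tx port capability,
	 * i.e. the generic path (single-link queue + service core) is needed,
	 * 0 if every port can use rte_event_eth_tx_adapter_enqueue directly,
	 * and a negative value on failure to query the capabilities. */
	static int
	decide_tx_path(uint8_t event_dev_id)
	{
		int use_generic = 0;
		uint32_t caps;
		uint16_t port;

		RTE_ETH_FOREACH_DEV(port) {
			if (rte_event_eth_tx_adapter_caps_get(event_dev_id, port,
							      &caps) != 0)
				return -1;
			if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
				use_generic = 1;
		}
		return use_generic;
	}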
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 2 ++
examples/l2fwd-event/l2fwd_eventdev.c | 29 +++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 20 +++++++++++++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 24 +++++++++++++++
.../l2fwd_eventdev_internal_port.c | 24 +++++++++++++++
examples/l2fwd-event/meson.build | 4 ++-
6 files changed, 102 insertions(+), 1 deletion(-)
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_internal_port.c
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index bfe0058a2..c1f700a65 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -8,6 +8,8 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
SRCS-y += l2fwd_eventdev.c
+SRCS-y += l2fwd_eventdev_internal_port.c
+SRCS-y += l2fwd_eventdev_generic.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 19efb6d1e..df76f1c1f 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -76,6 +76,31 @@ parse_eventdev_args(char **argv, int argc)
return 0;
}
+static void
+eventdev_capability_setup(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t caps = 0;
+ uint16_t i;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(i) {
+ ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Invalid capability for Tx adptr port %d\n",
+ i);
+
+ eventdev_rsrc->tx_mode_q |= !(caps &
+ RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+ }
+
+ if (eventdev_rsrc->tx_mode_q)
+ eventdev_set_generic_ops(&eventdev_rsrc->ops);
+ else
+ eventdev_set_internal_port_ops(&eventdev_rsrc->ops);
+}
+
void
eventdev_resource_setup(void)
{
@@ -90,6 +115,10 @@ eventdev_resource_setup(void)
if (!rte_event_dev_count())
rte_exit(EXIT_FAILURE, "No Eventdev found");
+
+ /* Setup eventdev capability callbacks */
+ eventdev_capability_setup();
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index f823cf6e9..717f688ce 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -18,8 +18,26 @@ enum {
CMD_LINE_OPT_EVENTQ_SYNC_NUM,
};
+typedef void (*event_queue_setup_cb)(uint16_t ethdev_count,
+ uint32_t event_queue_cfg);
+typedef uint32_t (*eventdev_setup_cb)(uint16_t ethdev_count);
+typedef void (*adapter_setup_cb)(uint16_t ethdev_count);
+typedef void (*event_port_setup_cb)(void);
+typedef void (*service_setup_cb)(void);
+typedef void (*event_loop_cb)(void);
+
+struct eventdev_setup_ops {
+ event_queue_setup_cb event_queue_setup;
+ event_port_setup_cb event_port_setup;
+ eventdev_setup_cb eventdev_setup;
+ adapter_setup_cb adapter_setup;
+ service_setup_cb service_setup;
+ event_loop_cb l2fwd_event_loop;
+};
+
struct eventdev_resources {
struct l2fwd_port_statistics *stats;
+ struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
uint64_t timer_period;
uint32_t *dst_ports;
@@ -58,5 +76,7 @@ get_eventdev_rsrc(void)
}
void eventdev_resource_setup(void);
+void eventdev_set_generic_ops(struct eventdev_setup_ops *ops);
+void eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops);
#endif /* __L2FWD_EVENTDEV_H__ */
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
new file mode 100644
index 000000000..e3990f8b0
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+void
+eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
new file mode 100644
index 000000000..a0d2111f9
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+void
+eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index b1ad48cc5..38560840c 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -9,5 +9,7 @@
sources = files(
'main.c',
- 'l2fwd_eventdev.c'
+ 'l2fwd_eventdev.c',
+ 'l2fwd_eventdev_internal_port.c',
+ 'l2fwd_eventdev_generic.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 04/10] examples/l2fwd-event: add eth port setup for eventdev
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (2 preceding siblings ...)
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
@ 2019-09-19 9:25 ` pbhagavatula
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
` (4 subsequent siblings)
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:25 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add Ethernet port Rx/Tx queue setup for the event device; these queues are
later used when setting up the event eth Rx/Tx adapters.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 114 ++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 1 +
examples/l2fwd-event/main.c | 17 ++++
3 files changed, 132 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index df76f1c1f..0d0d3b8b9 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -18,6 +18,14 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
static void
parse_mode(const char *optarg)
{
@@ -76,6 +84,108 @@ parse_eventdev_args(char **argv, int argc)
return 0;
}
+static void
+eth_dev_port_setup(uint16_t ethdev_count __rte_unused)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ static struct rte_eth_conf port_config = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_key = NULL,
+ .rss_hf = ETH_RSS_IP,
+ }
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ }
+ };
+ struct rte_eth_conf local_port_conf;
+ struct rte_eth_dev_info dev_info;
+ struct rte_eth_txconf txconf;
+ struct rte_eth_rxconf rxconf;
+ uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+ uint16_t port_id;
+ int32_t ret;
+
+ /* initialize all ports */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ local_port_conf = port_config;
+ /* skip ports that are not enabled */
+ if ((eventdev_rsrc->port_mask & (1 << port_id)) == 0) {
+ printf("\nSkipping disabled port %d\n", port_id);
+ continue;
+ }
+
+ /* init port */
+ printf("Initializing port %d ... ", port_id);
+ fflush(stdout);
+ rte_eth_dev_info_get(port_id, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+ dev_info.flow_type_rss_offloads;
+ if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+ port_config.rx_adv_conf.rss_conf.rss_hf) {
+ printf("Port %u modified RSS hash function "
+ "based on hardware support,"
+ "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+ port_id,
+ port_config.rx_adv_conf.rss_conf.rss_hf,
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+ }
+
+ ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot configure device: err=%d, port=%d\n",
+ ret, port_id);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ rte_eth_macaddr_get(port_id,
+ &eventdev_rsrc->ports_eth_addr[port_id]);
+ print_ethaddr(" Address:",
+ &eventdev_rsrc->ports_eth_addr[port_id]);
+ printf("\n");
+
+
+ /* init one Rx queue per port */
+ rxconf = dev_info.default_rxconf;
+ rxconf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0, &rxconf,
+ eventdev_rsrc->pkt_pool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_rx_queue_setup: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ /* init one Tx queue per port */
+ txconf = dev_info.default_txconf;
+ txconf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_tx_queue_setup: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ rte_eth_promiscuous_enable(port_id);
+ }
+}
+
static void
eventdev_capability_setup(void)
{
@@ -105,6 +215,7 @@ void
eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint16_t ethdev_count = rte_eth_dev_count_avail();
uint32_t service_id;
int32_t ret;
@@ -119,6 +230,9 @@ eventdev_resource_setup(void)
/* Setup eventdev capability callbacks */
eventdev_capability_setup();
+ /* Ethernet device configuration */
+ eth_dev_port_setup(ethdev_count);
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 717f688ce..cc0bdd1ad 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -51,6 +51,7 @@ struct eventdev_resources {
uint8_t enabled;
uint8_t nb_args;
char **args;
+ struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
};
static inline struct eventdev_resources *
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 661f0833f..f24bdd4a4 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -602,6 +602,22 @@ main(int argc, char **argv)
/* Configure eventdev parameters if user has requested */
eventdev_resource_setup();
+ if (eventdev_rsrc->enabled) {
+ /* All settings are done. Now enable eth devices */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_dev_start:err=%d, port=%u\n",
+ ret, portid);
+ }
+
+ goto skip_port_config;
+ }
/* Initialize the port/queue configuration of each logical core */
RTE_ETH_FOREACH_DEV(portid) {
@@ -733,6 +749,7 @@ main(int argc, char **argv)
"All available ports are disabled. Please set portmask.\n");
}
+skip_port_config:
check_all_ports_link_status(l2fwd_enabled_port_mask);
ret = 0;
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (3 preceding siblings ...)
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 04/10] examples/l2fwd-event: add eth port setup for eventdev pbhagavatula
@ 2019-09-19 9:25 ` pbhagavatula
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
` (3 subsequent siblings)
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:25 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event device queue and port setup based on event eth Tx adapter
capabilities.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 10 +
examples/l2fwd-event/l2fwd_eventdev.h | 18 ++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 179 +++++++++++++++++-
.../l2fwd_eventdev_internal_port.c | 173 ++++++++++++++++-
4 files changed, 378 insertions(+), 2 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 0d0d3b8b9..7a3d077ae 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -216,6 +216,7 @@ eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
uint16_t ethdev_count = rte_eth_dev_count_avail();
+ uint32_t event_queue_cfg = 0;
uint32_t service_id;
int32_t ret;
@@ -233,6 +234,15 @@ eventdev_resource_setup(void)
/* Ethernet device configuration */
eth_dev_port_setup(ethdev_count);
+ /* Event device configuration */
+ event_queue_cfg = eventdev_rsrc->ops.eventdev_setup(ethdev_count);
+
+ /* Event queue configuration */
+ eventdev_rsrc->ops.event_queue_setup(ethdev_count, event_queue_cfg);
+
+ /* Event port configuration */
+ eventdev_rsrc->ops.event_port_setup();
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index cc0bdd1ad..7646ef29f 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -26,6 +26,17 @@ typedef void (*event_port_setup_cb)(void);
typedef void (*service_setup_cb)(void);
typedef void (*event_loop_cb)(void);
+struct eventdev_queues {
+ uint8_t *event_q_id;
+ uint8_t nb_queues;
+};
+
+struct eventdev_ports {
+ uint8_t *event_p_id;
+ uint8_t nb_ports;
+ rte_spinlock_t lock;
+};
+
struct eventdev_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
@@ -36,9 +47,14 @@ struct eventdev_setup_ops {
};
struct eventdev_resources {
+ struct rte_event_port_conf def_p_conf;
struct l2fwd_port_statistics *stats;
+ /* Default port config. */
+ uint8_t disable_implicit_release;
struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
+ struct eventdev_queues evq;
+ struct eventdev_ports evp;
uint64_t timer_period;
uint32_t *dst_ports;
uint32_t service_id;
@@ -47,6 +63,8 @@ struct eventdev_resources {
uint8_t event_d_id;
uint8_t sync_mode;
uint8_t tx_mode_q;
+ uint8_t deq_depth;
+ uint8_t has_burst;
uint8_t mac_updt;
uint8_t enabled;
uint8_t nb_args;
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index e3990f8b0..65166fded 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -17,8 +17,185 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static uint32_t
+eventdev_setup_generic(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t num_workers = 0;
+ int ret;
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+ eventdev_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ /* One queue for each ethdev port + one Tx adapter Single link queue. */
+ event_d_conf.nb_event_queues = ethdev_count + 1;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count() - rte_service_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ eventdev_rsrc->evp.nb_ports = num_workers;
+
+ eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ eventdev_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
+static void
+event_port_setup_generic(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ /* Service cores are not used to run worker thread */
+ eventdev_rsrc->evp.nb_ports = eventdev_rsrc->evp.nb_ports;
+ eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evp.nb_ports);
+ if (!eventdev_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE, "Failed to allocate memory for event ports");
+
+ memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ eventdev_rsrc->disable_implicit_release;
+ eventdev_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+ for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id,
+ eventdev_rsrc->evq.event_q_id,
+ NULL,
+ eventdev_rsrc->evq.nb_queues - 1);
+ if (ret != (eventdev_rsrc->evq.nb_queues - 1)) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+ }
+ /* init spinlock */
+ rte_spinlock_init(&eventdev_rsrc->evp.lock);
+
+ eventdev_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id;
+ int32_t ret;
+
+ event_q_conf.schedule_type = eventdev_rsrc->sync_mode;
+ eventdev_rsrc->evq.nb_queues = ethdev_count + 1;
+ eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evq.nb_queues);
+ if (!eventdev_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ for (event_q_id = 0; event_q_id < (eventdev_rsrc->evq.nb_queues - 1);
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+
+ event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+ event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+ ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue for Tx adapter");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
void
eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->eventdev_setup = eventdev_setup_generic;
+ ops->event_queue_setup = event_queue_setup_generic;
+ ops->event_port_setup = event_port_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index a0d2111f9..52cb07707 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -17,8 +17,179 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static uint32_t
+eventdev_setup_internal_port(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ uint8_t disable_implicit_release;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t num_workers = 0;
+ int ret;
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+
+ disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+ eventdev_rsrc->disable_implicit_release =
+ disable_implicit_release;
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ event_d_conf.nb_event_queues = ethdev_count;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ eventdev_rsrc->evp.nb_ports = num_workers;
+ eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ eventdev_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
+static void
+event_port_setup_internal_port(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evp.nb_ports);
+ if (!eventdev_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE,
+ "Failed to allocate memory for Event Ports");
+
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ eventdev_rsrc->disable_implicit_release;
+
+ for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+ /* init spinlock */
+ rte_spinlock_init(&eventdev_rsrc->evp.lock);
+ }
+
+ eventdev_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id = 0;
+ int32_t ret;
+
+ rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ if (def_q_conf.nb_atomic_order_sequences <
+ event_q_conf.nb_atomic_order_sequences)
+ event_q_conf.nb_atomic_order_sequences =
+ def_q_conf.nb_atomic_order_sequences;
+
+ event_q_conf.event_queue_cfg = event_queue_cfg;
+ event_q_conf.schedule_type = eventdev_rsrc->sync_mode;
+ eventdev_rsrc->evq.nb_queues = ethdev_count;
+ eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evq.nb_queues);
+ if (!eventdev_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->eventdev_setup = eventdev_setup_internal_port;
+ ops->event_queue_setup = event_queue_setup_internal_port;
+ ops->event_port_setup = event_port_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (4 preceding siblings ...)
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-09-19 9:26 ` pbhagavatula
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 07/10] examples/l2fwd-event: add service core setup pbhagavatula
` (2 subsequent siblings)
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:26 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.
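The choice between the two sets of ops is driven by the adapter capability
flags reported per ethdev; the actual probing lives in
eventdev_capability_setup() elsewhere in this series. A minimal sketch of
such a check, using the standard adapter capability API, could look like
the following (the helper name is illustrative only, not part of this
patch):

#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>

/* Illustrative helper: true only if every ethdev reports internal-port
 * capability for both the Rx and the Tx adapter.
 */
static int
all_ports_have_internal_port(uint8_t event_d_id)
{
	uint32_t caps;
	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		if (rte_event_eth_rx_adapter_caps_get(event_d_id, port_id,
						      &caps) ||
		    !(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
			return 0;
		if (rte_event_eth_tx_adapter_caps_get(event_d_id, port_id,
						      &caps) ||
		    !(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
			return 0;
	}
	return 1;
}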
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 3 +
examples/l2fwd-event/l2fwd_eventdev.h | 17 +++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 117 ++++++++++++++++++
.../l2fwd_eventdev_internal_port.c | 80 ++++++++++++
4 files changed, 217 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 7a3d077ae..f964c69d6 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -243,6 +243,9 @@ eventdev_resource_setup(void)
/* Event port configuration */
eventdev_rsrc->ops.event_port_setup();
+ /* Rx/Tx adapters configuration */
+ eventdev_rsrc->ops.adapter_setup(ethdev_count);
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 7646ef29f..95ff37294 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -6,6 +6,9 @@
#define __L2FWD_EVENTDEV_H__
#include <rte_common.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_mbuf.h>
#include <rte_spinlock.h>
#include "l2fwd_common.h"
@@ -37,6 +40,18 @@ struct eventdev_ports {
rte_spinlock_t lock;
};
+struct eventdev_rx_adptr {
+ uint32_t service_id;
+ uint8_t nb_rx_adptr;
+ uint8_t *rx_adptr;
+};
+
+struct eventdev_tx_adptr {
+ uint32_t service_id;
+ uint8_t nb_tx_adptr;
+ uint8_t *tx_adptr;
+};
+
struct eventdev_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
@@ -50,6 +65,8 @@ struct eventdev_resources {
struct rte_event_port_conf def_p_conf;
struct l2fwd_port_statistics *stats;
/* Default port config. */
+ struct eventdev_rx_adptr rx_adptr;
+ struct eventdev_tx_adptr tx_adptr;
uint8_t disable_implicit_release;
struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index 65166fded..68b63279a 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -192,10 +192,127 @@ event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg)
eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
}
+static void
+rx_tx_adapter_setup_generic(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ uint8_t rx_adptr_id = 0;
+ uint8_t tx_adptr_id = 0;
+ uint8_t tx_port_id = 0;
+ uint32_t service_id;
+ int32_t ret, i;
+
+ /* Rx adapter setup */
+ eventdev_rsrc->rx_adptr.nb_rx_adptr = 1;
+ eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->rx_adptr.nb_rx_adptr);
+ if (!eventdev_rsrc->rx_adptr.rx_adptr) {
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create rx adapter");
+
+ eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode;
+ for (i = 0; i < ethdev_count; i++) {
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc->evq.event_q_id[i];
+ ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, i, -1,
+ &eth_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+ }
+
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error getting the service ID for rx adptr\n");
+ }
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->rx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
+ rx_adptr_id);
+
+ eventdev_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+ /* Tx adapter setup */
+ eventdev_rsrc->tx_adptr.nb_tx_adptr = 1;
+ eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->tx_adptr.nb_tx_adptr);
+ if (!eventdev_rsrc->tx_adptr.tx_adptr) {
+ free(eventdev_rsrc->rx_adptr.rx_adptr);
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create tx adapter");
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->tx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_exit(EXIT_FAILURE,
+ "Unable to link Tx adapter port to Tx queue:err = %d",
+ ret);
+
+ ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed",
+ tx_adptr_id);
+
+ eventdev_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
void
eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
{
ops->eventdev_setup = eventdev_setup_generic;
ops->event_queue_setup = event_queue_setup_generic;
ops->event_port_setup = event_port_setup_generic;
+ ops->adapter_setup = rx_tx_adapter_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index 52cb07707..02663242f 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -186,10 +186,90 @@ event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg)
}
}
+static void
+rx_tx_adapter_setup_internal_port(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ int32_t ret, i;
+
+ eventdev_rsrc->rx_adptr.nb_rx_adptr = ethdev_count;
+ eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->rx_adptr.nb_rx_adptr);
+ if (!eventdev_rsrc->rx_adptr.rx_adptr) {
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create rx adapter[%d]", i);
+
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc->evq.event_q_id[i];
+ eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode;
+ ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &eth_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+
+ ret = rte_event_eth_rx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Rx adapter[%d] start failed", i);
+
+ eventdev_rsrc->rx_adptr.rx_adptr[i] = i;
+ }
+
+ eventdev_rsrc->tx_adptr.nb_tx_adptr = ethdev_count;
+ eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->tx_adptr.nb_tx_adptr);
+ if (!eventdev_rsrc->tx_adptr.tx_adptr) {
+ free(eventdev_rsrc->rx_adptr.rx_adptr);
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create tx adapter[%d]", i);
+
+ ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+
+ ret = rte_event_eth_tx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Tx adapter[%d] start failed", i);
+
+ eventdev_rsrc->tx_adptr.tx_adptr[i] = i;
+ }
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
ops->eventdev_setup = eventdev_setup_internal_port;
ops->event_queue_setup = event_queue_setup_internal_port;
ops->event_port_setup = event_port_setup_internal_port;
+ ops->adapter_setup = rx_tx_adapter_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 07/10] examples/l2fwd-event: add service core setup
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (5 preceding siblings ...)
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
@ 2019-09-19 9:26 ` pbhagavatula
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:26 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add service core setup when eventdev and Rx/Tx adapter don't have
internal port capability.
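The generic pipeline runs three software services (the eventdev scheduler
plus the Rx and Tx adapter services), hence the requirement for at least
three service cores in the code below. Each service is mapped and started
through the rte_service API; a rough, illustrative helper for one such
mapping (not part of this patch) is:

#include <rte_service.h>

/* Illustrative helper: pin one service to a service lcore and let it run. */
static void
run_service_on_lcore(uint32_t service_id, uint32_t lcore)
{
	rte_service_map_lcore_set(service_id, lcore, 1); /* enable the mapping */
	rte_service_runstate_set(service_id, 1);         /* allow the service to run */
	rte_service_lcore_start(lcore);                  /* start the service lcore loop */
}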
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev_generic.c | 31 +++++++++++++++++++
.../l2fwd_eventdev_internal_port.c | 6 ++++
examples/l2fwd-event/main.c | 2 ++
3 files changed, 39 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index 68b63279a..e1e603052 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -17,6 +17,36 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static void
+eventdev_service_setup_generic(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t lcore_id[RTE_MAX_LCORE] = {0};
+ int32_t req_service_cores = 3;
+ int32_t avail_service_cores;
+
+ avail_service_cores = rte_service_lcore_list(lcore_id, RTE_MAX_LCORE);
+ if (avail_service_cores < req_service_cores) {
+ rte_exit(EXIT_FAILURE, "Enough services cores are not present"
+ " Required = %d Available = %d",
+ req_service_cores, avail_service_cores);
+ }
+
+ /* Start eventdev scheduler service */
+ rte_service_map_lcore_set(eventdev_rsrc->service_id, lcore_id[0], 1);
+ rte_service_lcore_start(lcore_id[0]);
+
+ /* Start eventdev Rx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc->rx_adptr.service_id,
+ lcore_id[1], 1);
+ rte_service_lcore_start(lcore_id[1]);
+
+ /* Start eventdev Tx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc->tx_adptr.service_id,
+ lcore_id[2], 1);
+ rte_service_lcore_start(lcore_id[2]);
+}
+
static uint32_t
eventdev_setup_generic(uint16_t ethdev_count)
{
@@ -315,4 +345,5 @@ eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
ops->event_queue_setup = event_queue_setup_generic;
ops->event_port_setup = event_port_setup_generic;
ops->adapter_setup = rx_tx_adapter_setup_generic;
+ ops->service_setup = eventdev_service_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index 02663242f..39fcb4326 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -265,6 +265,11 @@ rx_tx_adapter_setup_internal_port(uint16_t ethdev_count)
}
}
+static void
+eventdev_service_setup_internal_port(void)
+{
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
@@ -272,4 +277,5 @@ eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
ops->event_queue_setup = event_queue_setup_internal_port;
ops->event_port_setup = event_port_setup_internal_port;
ops->adapter_setup = rx_tx_adapter_setup_internal_port;
+ ops->service_setup = eventdev_service_setup_internal_port;
}
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index f24bdd4a4..09c86d2cd 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -616,6 +616,8 @@ main(int argc, char **argv)
ret, portid);
}
+ /* Now start internal services */
+ eventdev_rsrc->ops.service_setup();
goto skip_port_config;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (6 preceding siblings ...)
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 07/10] examples/l2fwd-event: add service core setup pbhagavatula
@ 2019-09-19 9:26 ` pbhagavatula
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:26 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event dev main loop based on enabled l2fwd options and eventdev
capabilities.
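The selection between the eight specialized loops added below is a plain
three-bit lookup on [MAC update][Tx enqueue vs. direct][burst support].
Stripped of the FP() macro machinery, the dispatch idea is roughly the
following sketch (function and table names here are illustrative only):

#include <stdint.h>

typedef void (*loop_fn)(void);

/* Two of the eight variants, shown as stubs. */
static void loop_tx_d(void) { /* single dequeue, direct Tx adapter enqueue */ }
static void loop_tx_q_brst_mac(void) { /* burst dequeue, MAC update, Tx queue enqueue */ }

static loop_fn
pick_loop(uint8_t mac_updt, uint8_t tx_mode_q, uint8_t has_burst)
{
	/* Remaining entries stay NULL in this sketch. */
	static const loop_fn tbl[2][2][2] = {
		[0][0][0] = loop_tx_d,
		[1][1][1] = loop_tx_q_brst_mac,
	};

	return tbl[mac_updt][tx_mode_q][has_burst];
}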
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 273 ++++++++++++++++++++++++++
examples/l2fwd-event/main.c | 10 +-
2 files changed, 280 insertions(+), 3 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index f964c69d6..345d9d15b 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -18,6 +18,12 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+#define L2FWD_EVENT_SINGLE 0x1
+#define L2FWD_EVENT_BURST 0x2
+#define L2FWD_EVENT_TX_DIRECT 0x4
+#define L2FWD_EVENT_TX_ENQ 0x8
+#define L2FWD_EVENT_UPDT_MAC 0x10
+
static void
print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
{
@@ -211,10 +217,272 @@ eventdev_capability_setup(void)
eventdev_set_internal_port_ops(&eventdev_rsrc->ops);
}
+static __rte_noinline int
+get_free_event_port(struct eventdev_resources *eventdev_rsrc)
+{
+ static int index;
+ int port_id;
+
+ rte_spinlock_lock(&eventdev_rsrc->evp.lock);
+ if (index >= eventdev_rsrc->evp.nb_ports) {
+ printf("No free event port is available\n");
+ rte_spinlock_unlock(&eventdev_rsrc->evp.lock);
+ return -1;
+ }
+
+ port_id = eventdev_rsrc->evp.event_p_id[index];
+ index++;
+ rte_spinlock_unlock(&eventdev_rsrc->evp.lock);
+
+ return port_id;
+}
+
+static __rte_always_inline void
+l2fwd_event_updt_mac(struct rte_mbuf *m, const struct rte_ether_addr *dst_mac,
+ uint8_t dst_port)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = &eth->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(dst_mac, &eth->s_addr);
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_single(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ const uint64_t timer_period = eventdev_rsrc->timer_period;
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const int port_id = get_free_event_port(eventdev_rsrc);
+ const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ volatile bool *done = eventdev_rsrc->done;
+ struct rte_mbuf *mbuf;
+ uint16_t dst_port;
+ struct rte_event ev;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!*done) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
+ continue;
+
+
+ mbuf = ev.mbuf;
+ dst_port = eventdev_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_event_updt_mac(mbuf,
+ &eventdev_rsrc->ports_eth_addr[dst_port],
+ dst_port);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev.queue_id = tx_q_id;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ while (rte_event_enqueue_burst(event_d_id, port_id,
+ &ev, 1) && !*done)
+ ;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+ while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id,
+ &ev, 1) &&
+ !*done)
+ ;
+ }
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_burst(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ const uint64_t timer_period = eventdev_rsrc->timer_period;
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const int port_id = get_free_event_port(eventdev_rsrc);
+ const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ const uint8_t deq_len = eventdev_rsrc->deq_depth;
+ volatile bool *done = eventdev_rsrc->done;
+ struct rte_event ev[MAX_PKT_BURST];
+ struct rte_mbuf *mbuf;
+ uint16_t nb_rx, nb_tx;
+ uint16_t dst_port;
+ uint8_t i;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!*done) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, port_id, ev,
+ deq_len, 0);
+ if (nb_rx == 0)
+ continue;
+
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf = ev[i].mbuf;
+ dst_port = eventdev_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0) {
+ __atomic_fetch_add(
+ &eventdev_rsrc->stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+ __atomic_fetch_add(
+ &eventdev_rsrc->stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_event_updt_mac(mbuf,
+ &eventdev_rsrc->ports_eth_addr[
+ dst_port],
+ dst_port);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev[i].queue_id = tx_q_id;
+ ev[i].op = RTE_EVENT_OP_FORWARD;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT)
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+
+ }
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ nb_tx = rte_event_enqueue_burst(event_d_id, port_id,
+ ev, nb_rx);
+ while (nb_tx < nb_rx && !*done)
+ nb_tx += rte_event_enqueue_burst(event_d_id,
+ port_id, ev + nb_tx,
+ nb_rx - nb_tx);
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id, ev,
+ nb_rx);
+ while (nb_tx < nb_rx && !*done)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx);
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ if (flags & L2FWD_EVENT_SINGLE)
+ l2fwd_event_loop_single(eventdev_rsrc, flags);
+ if (flags & L2FWD_EVENT_BURST)
+ l2fwd_event_loop_burst(eventdev_rsrc, flags);
+}
+
+#define L2FWD_EVENT_MODE \
+FP(tx_d, 0, 0, 0, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE) \
+FP(tx_d_burst, 0, 0, 1, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST) \
+FP(tx_q, 0, 1, 0, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE) \
+FP(tx_q_burst, 0, 1, 1, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST) \
+FP(tx_d_mac, 1, 0, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \
+ L2FWD_EVENT_SINGLE) \
+FP(tx_d_brst_mac, 1, 0, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \
+ L2FWD_EVENT_BURST) \
+FP(tx_q_mac, 1, 1, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \
+ L2FWD_EVENT_SINGLE) \
+FP(tx_q_brst_mac, 1, 1, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \
+ L2FWD_EVENT_BURST)
+
+
+#define FP(_name, _f3, _f2, _f1, flags) \
+static void __rte_noinline \
+l2fwd_event_main_loop_ ## _name(void) \
+{ \
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); \
+ l2fwd_event_loop(eventdev_rsrc, flags); \
+}
+
+L2FWD_EVENT_MODE
+#undef FP
+
void
eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ /* [MAC_UPDT][TX_MODE][BURST] */
+ const event_loop_cb event_loop[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = l2fwd_event_main_loop_ ## _name,
+ L2FWD_EVENT_MODE
+#undef FP
+ };
uint16_t ethdev_count = rte_eth_dev_count_avail();
uint32_t event_queue_cfg = 0;
uint32_t service_id;
@@ -260,4 +528,9 @@ eventdev_resource_setup(void)
ret = rte_event_dev_start(eventdev_rsrc->event_d_id);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ eventdev_rsrc->ops.l2fwd_event_loop = event_loop
+ [eventdev_rsrc->mac_updt]
+ [eventdev_rsrc->tx_mode_q]
+ [eventdev_rsrc->has_burst];
}
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 09c86d2cd..56487809b 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -271,8 +271,12 @@ static void l2fwd_main_loop(void)
static int
l2fwd_launch_one_lcore(void *args)
{
- RTE_SET_USED(args);
- l2fwd_main_loop();
+ struct eventdev_resources *eventdev_rsrc = args;
+
+ if (eventdev_rsrc->enabled)
+ eventdev_rsrc->ops.l2fwd_event_loop();
+ else
+ l2fwd_main_loop();
return 0;
}
@@ -756,7 +760,7 @@ main(int argc, char **argv)
ret = 0;
/* launch per-lcore init on every lcore */
- rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL,
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc,
CALL_MASTER);
rte_eal_mp_wait_lcore();
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v2 09/10] examples/l2fwd-event: add graceful teardown
2019-09-19 9:25 [dpdk-dev] [PATCH v2 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (7 preceding siblings ...)
2019-09-19 9:26 ` [dpdk-dev] [PATCH v2 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-09-19 9:26 ` pbhagavatula
8 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 9:26 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add graceful teardown that addresses both event mode and poll mode.
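The event-mode path stops the producers first so that the worker lcores can
drain in-flight events before anything is closed. A compact, illustrative
helper showing the same ordering for a single port and a single adapter
pair (the name and the single-port signature are assumptions, not part of
this patch):

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_eventdev.h>

static void
event_mode_teardown(uint8_t event_d_id, uint8_t rx_adptr_id,
		    uint8_t tx_adptr_id, uint16_t portid)
{
	rte_event_eth_rx_adapter_stop(rx_adptr_id); /* stop injecting new events */
	rte_event_eth_tx_adapter_stop(tx_adptr_id);
	rte_eth_dev_stop(portid);                   /* stop Rx/Tx on the wire */
	rte_eal_mp_wait_lcore();                    /* let workers drain and exit */
	rte_eth_dev_close(portid);
	rte_event_dev_stop(event_d_id);
	rte_event_dev_close(event_d_id);
}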
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/main.c | 44 +++++++++++++++++++++++++++++--------
1 file changed, 35 insertions(+), 9 deletions(-)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 56487809b..057381b29 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -522,7 +522,7 @@ main(int argc, char **argv)
uint32_t rx_lcore_id;
uint32_t nb_mbufs;
uint16_t nb_ports;
- int ret;
+ int i, ret;
/* init EAL */
ret = rte_eal_init(argc, argv);
@@ -762,15 +762,41 @@ main(int argc, char **argv)
/* launch per-lcore init on every lcore */
rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc,
CALL_MASTER);
- rte_eal_mp_wait_lcore();
+ if (eventdev_rsrc->enabled) {
+ for (i = 0; i < eventdev_rsrc->rx_adptr.nb_rx_adptr; i++)
+ rte_event_eth_rx_adapter_stop(
+ eventdev_rsrc->rx_adptr.rx_adptr[i]);
+ for (i = 0; i < eventdev_rsrc->tx_adptr.nb_tx_adptr; i++)
+ rte_event_eth_tx_adapter_stop(
+ eventdev_rsrc->tx_adptr.tx_adptr[i]);
- RTE_ETH_FOREACH_DEV(portid) {
- if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
- continue;
- printf("Closing port %d...", portid);
- rte_eth_dev_stop(portid);
- rte_eth_dev_close(portid);
- printf(" Done\n");
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ rte_eth_dev_stop(portid);
+ }
+
+ rte_eal_mp_wait_lcore();
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ rte_eth_dev_close(portid);
+ }
+
+ rte_event_dev_stop(eventdev_rsrc->event_d_id);
+ rte_event_dev_close(eventdev_rsrc->event_d_id);
+
+ } else {
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("Closing port %d...", portid);
+ rte_eth_dev_stop(portid);
+ rte_eth_dev_close(portid);
+ printf(" Done\n");
+ }
}
printf("Bye...\n");
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-09-19 9:43 ` Sunil Kumar Kori
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
1 sibling, 0 replies; 107+ messages in thread
From: Sunil Kumar Kori @ 2019-09-19 9:43 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Pavan Nikhilesh Bhagavatula
Cc: dev
Regards
Sunil Kumar Kori
>-----Original Message-----
>From: pbhagavatula@marvell.com <pbhagavatula@marvell.com>
>Sent: Thursday, September 19, 2019 2:56 PM
>To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
>bruce.richardson@intel.com; akhil.goyal@nxp.com; Marko Kovacevic
><marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>; Radu
>Nicolau <radu.nicolau@intel.com>; Tomasz Kantecki
><tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>; Pavan
>Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
>Cc: dev@dpdk.org
>Subject: [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll
>mode routines
>
>From: Sunil Kumar Kori <skori@marvell.com>
>
>Add the default l2fwd poll mode routines similar to examples/l2fwd.
>
>Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
>---
> examples/Makefile | 1 +
> examples/l2fwd-event/Makefile | 57 +++
> examples/l2fwd-event/l2fwd_common.h | 26 +
> examples/l2fwd-event/main.c | 737 ++++++++++++++++++++++++++++
> examples/l2fwd-event/meson.build | 12 +
> examples/l2fwd/main.c | 10 +-
> 6 files changed, 838 insertions(+), 5 deletions(-)
> create mode 100644 examples/l2fwd-event/Makefile
> create mode 100644 examples/l2fwd-event/l2fwd_common.h
> create mode 100644 examples/l2fwd-event/main.c
> create mode 100644 examples/l2fwd-event/meson.build
>
>+++ b/examples/l2fwd-event/l2fwd_common.h
>@@ -0,0 +1,26 @@
>+/* SPDX-License-Identifier: BSD-3-Clause
>+ * Copyright(C) 2019 Marvell International Ltd.
>+ */
>+
>+#ifndef __L2FWD_COMMON_H__
>+#define __L2FWD_COMMON_H__
>+
>+#define MAX_PKT_BURST 32
>+#define MAX_RX_QUEUE_PER_LCORE 16
>+#define MAX_TX_QUEUE_PER_PORT 16
>+
>+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
>+
>+#define RTE_TEST_RX_DESC_DEFAULT 1024
>+#define RTE_TEST_TX_DESC_DEFAULT 1024
>+
>+/* Per-port statistics struct */
>+struct l2fwd_port_statistics {
>+ uint64_t dropped;
>+ uint64_t tx;
>+ uint64_t rx;
>+} __rte_cache_aligned;
>+
>+void print_stats(void);
>+
>+#endif /* __L2FWD_EVENTDEV_H__ */
Comment needs to be updated to /* __L2FWD_COMMON_H__ */
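In other words, the closing comment should match the guard macro defined at
the top of the header as added by this patch:

#ifndef __L2FWD_COMMON_H__
#define __L2FWD_COMMON_H__
/* ... */
#endif /* __L2FWD_COMMON_H__ */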
>diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c index
>1e2b14297..f6d3d2cd7 100644
>--- a/examples/l2fwd/main.c
>+++ b/examples/l2fwd/main.c
>@@ -294,11 +294,11 @@ l2fwd_usage(const char *prgname)
> printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
> " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
> " -q NQ: number of queue (=ports) per lcore (default is 1)\n"
>- " -T PERIOD: statistics will be refreshed each PERIOD
>seconds (0 to disable, 10 default, 86400 maximum)\n"
>- " --[no-]mac-updating: Enable or disable MAC addresses
>updating (enabled by default)\n"
>- " When enabled:\n"
>- " - The source MAC address is replaced by the TX port
>MAC address\n"
>- " - The destination MAC address is replaced by
>02:00:00:00:00:TX_PORT_ID\n",
>+ " -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to
>disable, 10 default, 86400 maximum)\n"
>+ " --[no-]mac-updating: Enable or disable MAC addresses updating
>(enabled by default)\n"
>+ " When enabled:\n"
>+ " - The source MAC address is replaced by the TX port MAC
>address\n"
>+ " - The destination MAC address is replaced by
>02:00:00:00:00:TX_PORT_ID\n",
> prgname);
> }
>
This change should not be part of the l2fwd-event patch set.
>--
>2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-09-19 9:25 ` [dpdk-dev] [PATCH v2 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-09-19 9:43 ` Sunil Kumar Kori
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
` (10 more replies)
1 sibling, 11 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds a new application to demonstrate the usage of event
mode. The poll mode is also available to help with the transition.
The following new command line parameters are added:
--mode: Dictates the mode of operation either poll or event.
--eventq_sync: Dictates event synchronization mode either atomic or
ordered.
Based on event device capability the configuration is done as follows:
- A single event device is enabled.
- The number of event ports is equal to the number of worker
cores enabled in the core mask. Additional event ports might
be configured based on Rx/Tx adapter capability.
- The number of event queues is equal to the number of ethernet
ports. If Tx adapter doesn't have internal port capability then
an additional single link event queue is used to enqueue events
to Tx adapter.
- Each event port is linked to all existing event queues.
- Dedicated Rx/Tx adapters for each Ethernet port.
v3 Changes:
- Remove unwanted change to example/l2fwd.
- Fix checkpatch issue reported at:
http://mails.dpdk.org/archives/test-report/2019-September/098053.html
v2 Changes:
- Remove global variables.
- Split patches to make reviews friendlier.
- Split datapath based on eventdev capability.
The checkpatch report for patch 08/10 is intentional:
http://mails.dpdk.org/archives/test-report/2019-September/098059.html
Pavan Nikhilesh (5):
examples/l2fwd-event: add infra to split eventdev framework
examples/l2fwd-event: add eventdev queue and port setup
examples/l2fwd-event: add event Rx/Tx adapter setup
examples/l2fwd-event: add eventdev main loop
examples/l2fwd-event: add graceful teardown
Sunil Kumar Kori (5):
examples/l2fwd-event: add default poll mode routines
examples/l2fwd-event: add infra for eventdev
examples/l2fwd-event: add eth port setup for eventdev
examples/l2fwd-event: add service core setup
doc: add application usage guide for l2fwd-event
MAINTAINERS | 5 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
.../l2_forward_event_real_virtual.rst | 799 +++++++++++++++++
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 60 ++
examples/l2fwd-event/l2fwd_common.h | 26 +
examples/l2fwd-event/l2fwd_eventdev.c | 536 ++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 118 +++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 349 ++++++++
.../l2fwd_eventdev_internal_port.c | 281 ++++++
examples/l2fwd-event/main.c | 804 ++++++++++++++++++
examples/l2fwd-event/meson.build | 15 +
13 files changed, 3000 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_internal_port.c
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
--
2.23.0
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v3 01/10] examples/l2fwd-event: add default poll mode routines
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
` (9 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add the default l2fwd poll mode routines similar to examples/l2fwd.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 57 +++
examples/l2fwd-event/l2fwd_common.h | 26 +
examples/l2fwd-event/main.c | 737 ++++++++++++++++++++++++++++
examples/l2fwd-event/meson.build | 12 +
5 files changed, 833 insertions(+)
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
diff --git a/examples/Makefile b/examples/Makefile
index de11dd487..d18504bd2 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -34,6 +34,7 @@ endif
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += ipv4_multicast
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += kni
DIRS-y += l2fwd
+DIRS-y += l2fwd-event
ifneq ($(PQOS_INSTALL_PATH),)
DIRS-y += l2fwd-cat
endif
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
new file mode 100644
index 000000000..a156c4162
--- /dev/null
+++ b/examples/l2fwd-event/Makefile
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# binary name
+APP = l2fwd-event
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+# Build using pkg-config variables if possible
+ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
+
+all: shared
+.PHONY: shared static
+shared: build/$(APP)-shared
+ ln -sf $(APP)-shared build/$(APP)
+static: build/$(APP)-static
+ ln -sf $(APP)-static build/$(APP)
+
+PKGCONF=pkg-config --define-prefix
+
+PC_FILE := $(shell $(PKGCONF) --path libdpdk)
+CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)
+LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
+LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
+
+build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
+
+build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
+
+build:
+ @mkdir -p $@
+
+.PHONY: clean
+clean:
+ rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
+ test -d build && rmdir -p build || true
+
+else # Build using legacy build system
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, detect a build directory, by looking for a path with a .config
+RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
+endif
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
new file mode 100644
index 000000000..b0ef49144
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_COMMON_H__
+#define __L2FWD_COMMON_H__
+
+#define MAX_PKT_BURST 32
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+ uint64_t dropped;
+ uint64_t tx;
+ uint64_t rx;
+} __rte_cache_aligned;
+
+void print_stats(void);
+
+#endif /* __L2FWD_EVENTDEV_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
new file mode 100644
index 000000000..cc47fa203
--- /dev/null
+++ b/examples/l2fwd-event/main.c
@@ -0,0 +1,737 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+#include <signal.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+static volatile bool force_quit;
+
+/* MAC updating enabled by default */
+static int mac_updating = 1;
+
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+#define MEMPOOL_CACHE_SIZE 256
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct rte_ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint32_t l2fwd_enabled_port_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+static unsigned int l2fwd_rx_queue_per_lcore = 1;
+
+struct lcore_queue_conf {
+ uint32_t rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ uint32_t n_rx_port;
+} __rte_cache_aligned;
+
+static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .split_hdr_size = 0,
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+};
+
+static struct rte_mempool *l2fwd_pktmbuf_pool;
+
+static struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+/* A tsc-based timer responsible for triggering statistics printout */
+static uint64_t timer_period = 10; /* default period is 10 seconds */
+
+/* Print out statistics on packets dropped */
+void print_stats(void)
+{
+ uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+ uint32_t portid;
+
+ total_packets_dropped = 0;
+ total_packets_tx = 0;
+ total_packets_rx = 0;
+
+ const char clr[] = {27, '[', '2', 'J', '\0' };
+ const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' };
+
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+
+ printf("\nPort statistics ====================================");
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+ /* skip disabled ports */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("\nStatistics for port %u ------------------------------"
+ "\nPackets sent: %24"PRIu64
+ "\nPackets received: %20"PRIu64
+ "\nPackets dropped: %21"PRIu64,
+ portid,
+ port_statistics[portid].tx,
+ port_statistics[portid].rx,
+ port_statistics[portid].dropped);
+
+ total_packets_dropped += port_statistics[portid].dropped;
+ total_packets_tx += port_statistics[portid].tx;
+ total_packets_rx += port_statistics[portid].rx;
+ }
+ printf("\nAggregate statistics ==============================="
+ "\nTotal packets sent: %18"PRIu64
+ "\nTotal packets received: %14"PRIu64
+ "\nTotal packets dropped: %15"PRIu64,
+ total_packets_tx,
+ total_packets_rx,
+ total_packets_dropped);
+ printf("\n====================================================\n");
+}
+
+static void
+l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_portid)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = &eth->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+{
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+}
+
+/* main processing loop */
+static void l2fwd_main_loop(void)
+{
+ uint64_t prev_tsc, diff_tsc, cur_tsc, timer_tsc, drain_tsc;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_eth_dev_tx_buffer *buffer;
+ struct lcore_queue_conf *qconf;
+ uint32_t i, j, portid, nb_rx;
+ struct rte_mbuf *m;
+ uint32_t lcore_id;
+ int32_t sent;
+
+ drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S *
+ BURST_TX_DRAIN_US;
+ prev_tsc = 0;
+ timer_tsc = 0;
+
+ lcore_id = rte_lcore_id();
+ qconf = &lcore_queue_conf[lcore_id];
+
+ if (qconf->n_rx_port == 0) {
+ RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+ return;
+ }
+
+ RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ portid = qconf->rx_port_list[i];
+ RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+ portid);
+
+ }
+
+ while (!force_quit) {
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid =
+ l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id ==
+ rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+ /*
+ * Read packet from RX queues
+ */
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst(portid, 0,
+ pkts_burst, MAX_PKT_BURST);
+
+ port_statistics[portid].rx += nb_rx;
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+ }
+}
+
+static int
+l2fwd_launch_one_lcore(void *args)
+{
+ RTE_SET_USED(args);
+ l2fwd_main_loop();
+
+ return 0;
+}
+
+/* display usage */
+static void
+l2fwd_usage(const char *prgname)
+{
+ printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
+ " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+ " -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+ " -T PERIOD: statistics will be refreshed each PERIOD seconds "
+ " (0 to disable, 10 default, 86400 maximum)\n"
+ " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
+ " When enabled:\n"
+ " - The source MAC address is replaced by the TX port MAC address\n"
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ prgname);
+}
+
+static int
+l2fwd_parse_portmask(const char *portmask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(portmask, &end, 16);
+ if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (pm == 0)
+ return -1;
+
+ return pm;
+}
+
+static unsigned int
+l2fwd_parse_nqueue(const char *q_arg)
+{
+ char *end = NULL;
+ unsigned long n;
+
+ /* parse hexadecimal string */
+ n = strtoul(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+ if (n == 0)
+ return 0;
+ if (n >= MAX_RX_QUEUE_PER_LCORE)
+ return 0;
+
+ return n;
+}
+
+static int
+l2fwd_parse_timer_period(const char *q_arg)
+{
+ char *end = NULL;
+ int n;
+
+ /* parse number string */
+ n = strtol(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+ if (n >= MAX_TIMER_PERIOD)
+ return -1;
+
+ return n;
+}
+
+static const char short_options[] =
+ "p:" /* portmask */
+ "q:" /* number of queues */
+ "T:" /* timer period */
+ ;
+
+#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
+#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+
+enum {
+ /* long options mapped to a short option */
+
+ /* first long only option value must be >= 256, so that we won't
+ * conflict with short options
+ */
+ CMD_LINE_OPT_MIN_NUM = 256,
+};
+
+static const struct option lgopts[] = {
+ { CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
+ { CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ {NULL, 0, 0, 0}
+};
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_parse_args(int argc, char **argv)
+{
+ int opt, ret, timer_secs;
+ char *prgname = argv[0];
+ char **argvopt;
+ int option_index;
+
+ argvopt = argv;
+
+ while ((opt = getopt_long(argc, argvopt, short_options,
+ lgopts, &option_index)) != EOF) {
+
+ switch (opt) {
+ /* portmask */
+ case 'p':
+ l2fwd_enabled_port_mask = l2fwd_parse_portmask(optarg);
+ if (l2fwd_enabled_port_mask == 0) {
+ printf("invalid portmask\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* nqueue */
+ case 'q':
+ l2fwd_rx_queue_per_lcore = l2fwd_parse_nqueue(optarg);
+ if (l2fwd_rx_queue_per_lcore == 0) {
+ printf("invalid queue number\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* timer period */
+ case 'T':
+ timer_secs = l2fwd_parse_timer_period(optarg);
+ if (timer_secs < 0) {
+ printf("invalid timer period\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ timer_period = timer_secs;
+ break;
+
+ /* long options */
+ case 0:
+ break;
+
+ default:
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ }
+
+ if (optind >= 0)
+ argv[optind-1] = prgname;
+
+ ret = optind-1;
+ optind = 1; /* reset getopt lib */
+ return ret;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status...");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ if (force_quit)
+ return;
+ all_ports_up = 1;
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (force_quit)
+ return;
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(portid, &link);
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status)
+ printf(
+ "Port%d Link Up. Speed %u Mbps - %s\n",
+ portid, link.link_speed,
+ (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ ("full-duplex") : ("half-duplex\n"));
+ else
+ printf("Port %d Link Down\n", portid);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+ print_flag = 1;
+ printf("done\n");
+ }
+ }
+}
+
+static void
+signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ force_quit = true;
+ }
+}
+
+int
+main(int argc, char **argv)
+{
+ uint16_t nb_ports_available = 0;
+ struct lcore_queue_conf *qconf;
+ uint32_t nb_ports_in_mask = 0;
+ uint16_t portid, last_port;
+ uint32_t nb_lcores = 0;
+ uint32_t rx_lcore_id;
+ uint32_t nb_mbufs;
+ uint16_t nb_ports;
+ int ret;
+
+ /* init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+ argc -= ret;
+ argv += ret;
+
+ force_quit = false;
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+
+ /* parse application arguments (after the EAL ones) */
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+
+ printf("MAC updating %s\n", mac_updating ? "enabled" : "disabled");
+
+ /* convert to number of cycles */
+ timer_period *= rte_get_timer_hz();
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports == 0)
+ rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");
+
+ /* check that the port mask only selects available ports */
+ if (l2fwd_enabled_port_mask & ~((1 << nb_ports) - 1))
+ rte_exit(EXIT_FAILURE, "Invalid portmask; possible (0x%x)\n",
+ (1 << nb_ports) - 1);
+
+ /* reset l2fwd_dst_ports */
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ } else {
+ last_port = portid;
+ }
+
+ nb_ports_in_mask++;
+ }
+ if (nb_ports_in_mask % 2) {
+ printf("Notice: odd number of ports in portmask.\n");
+ l2fwd_dst_ports[last_port] = last_port;
+ }
+
+
+ rx_lcore_id = 0;
+ qconf = NULL;
+
+ nb_mbufs = RTE_MAX(nb_ports * (nb_rxd + nb_txd + MAX_PKT_BURST +
+ nb_lcores * MEMPOOL_CACHE_SIZE), 8192U);
+
+ /* create the mbuf pool */
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs,
+ MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+
+ /* Initialize the port/queue configuration of each logical core */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ /* get the lcore_id for this port */
+ while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+ lcore_queue_conf[rx_lcore_id].n_rx_port ==
+ l2fwd_rx_queue_per_lcore) {
+ rx_lcore_id++;
+ if (rx_lcore_id >= RTE_MAX_LCORE)
+ rte_exit(EXIT_FAILURE, "Not enough cores\n");
+ }
+
+ if (qconf != &lcore_queue_conf[rx_lcore_id]) {
+ /* Assigned a new logical core in the loop above. */
+ qconf = &lcore_queue_conf[rx_lcore_id];
+ nb_lcores++;
+ }
+
+ qconf->rx_port_list[qconf->n_rx_port] = portid;
+ qconf->n_rx_port++;
+ printf("Lcore %u: RX port %u\n", rx_lcore_id, portid);
+ }
+
+
+ /* Initialise each port */
+ RTE_ETH_FOREACH_DEV(portid) {
+ struct rte_eth_rxconf rxq_conf;
+ struct rte_eth_txconf txq_conf;
+ struct rte_eth_conf local_port_conf = port_conf;
+ struct rte_eth_dev_info dev_info;
+
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) {
+ printf("Skipping disabled port %u\n", portid);
+ continue;
+ }
+ nb_ports_available++;
+
+ /* init port */
+ printf("Initializing port %u... ", portid);
+ fflush(stdout);
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n",
+ ret, portid);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, port=%u\n",
+ ret, portid);
+
+ rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+ /* init one RX queue */
+ fflush(stdout);
+ rxq_conf = dev_info.default_rxconf;
+ rxq_conf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+ rte_eth_dev_socket_id(portid),
+ &rxq_conf,
+ l2fwd_pktmbuf_pool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
+ ret, portid);
+
+ /* init one TX queue on each port */
+ fflush(stdout);
+ txq_conf = dev_info.default_txconf;
+ txq_conf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid),
+ &txq_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, portid);
+
+ /* Initialize TX buffers */
+ tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
+ RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
+ rte_eth_dev_socket_id(portid));
+ if (tx_buffer[portid] == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot allocate buffer for tx on port %u\n",
+ portid);
+
+ rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
+
+ ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid],
+ rte_eth_tx_buffer_count_callback,
+ &port_statistics[portid].dropped);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot set error callback for tx buffer on port %u\n",
+ portid);
+
+ /* Start device */
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_dev_start:err=%d, port=%u\n",
+ ret, portid);
+
+ printf("done:\n");
+
+ rte_eth_promiscuous_enable(portid);
+
+ printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+ portid,
+ l2fwd_ports_eth_addr[portid].addr_bytes[0],
+ l2fwd_ports_eth_addr[portid].addr_bytes[1],
+ l2fwd_ports_eth_addr[portid].addr_bytes[2],
+ l2fwd_ports_eth_addr[portid].addr_bytes[3],
+ l2fwd_ports_eth_addr[portid].addr_bytes[4],
+ l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+ /* initialize port stats */
+ memset(&port_statistics, 0, sizeof(port_statistics));
+ }
+
+ if (!nb_ports_available) {
+ rte_exit(EXIT_FAILURE,
+ "All available ports are disabled. Please set portmask.\n");
+ }
+
+ check_all_ports_link_status(l2fwd_enabled_port_mask);
+
+ ret = 0;
+ /* launch per-lcore init on every lcore */
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL,
+ CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("Closing port %d...", portid);
+ rte_eth_dev_stop(portid);
+ rte_eth_dev_close(portid);
+ printf(" Done\n");
+ }
+ printf("Bye...\n");
+
+ return ret;
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
new file mode 100644
index 000000000..16eadb0b4
--- /dev/null
+++ b/examples/l2fwd-event/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# meson file, for building this example as part of a main DPDK build.
+#
+# To build this example as a standalone application with an already-installed
+# DPDK instance, use 'make'
+
+sources = files(
+ 'main.c'
+)
--
2.17.1
* [dpdk-dev] [PATCH v3 02/10] examples/l2fwd-event: add infra for eventdev
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
` (8 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add infra to select the event device as the packet processing mode
through command-line arguments. Also allow the user to select the
schedule type as either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
Usage:
`--mode="eventdev"` or `--mode="poll"`
`--eventq-sync="ordered"` or `--eventq-sync="atomic"`
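For illustration, a possible full invocation selecting event mode with
atomic queue synchronization could look as follows (the EAL core list,
memory channels and port mask below are placeholders, not taken from
this patch):

    ./build/l2fwd-event -l 0-3 -n 4 -- -p 0x3 --mode=eventdev --eventq-sync=atomic

With `--mode="poll"` the application falls back to the classic l2fwd
poll loop and `--eventq-sync` is ignored.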
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/Makefile | 1 +
examples/l2fwd-event/l2fwd_eventdev.c | 107 ++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 62 +++++++++++++++
examples/l2fwd-event/main.c | 22 +++++-
examples/l2fwd-event/meson.build | 3 +-
5 files changed, 192 insertions(+), 3 deletions(-)
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index a156c4162..bfe0058a2 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -7,6 +7,7 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
+SRCS-y += l2fwd_eventdev.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
new file mode 100644
index 000000000..19efb6d1e
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+static void
+parse_mode(const char *optarg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+
+ if (!strncmp(optarg, "poll", 4))
+ eventdev_rsrc->enabled = false;
+ else if (!strncmp(optarg, "eventdev", 8))
+ eventdev_rsrc->enabled = true;
+}
+
+static void
+parse_eventq_sync(const char *optarg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+
+ if (!strncmp(optarg, "ordered", 7))
+ eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ORDERED;
+ else if (!strncmp(optarg, "atomic", 6))
+ eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ATOMIC;
+}
+
+static int
+parse_eventdev_args(char **argv, int argc)
+{
+ const struct option eventdev_lgopts[] = {
+ {CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
+ {CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
+ {NULL, 0, 0, 0}
+ };
+ char **argvopt = argv;
+ int32_t option_index;
+ int32_t opt;
+
+ while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts,
+ &option_index)) != EOF) {
+ switch (opt) {
+ case CMD_LINE_OPT_MODE_NUM:
+ parse_mode(optarg);
+ break;
+
+ case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
+ parse_eventq_sync(optarg);
+ break;
+
+ case '?':
+ /* skip other parameters except eventdev specific */
+ break;
+
+ default:
+ printf("Invalid eventdev parameter\n");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+void
+eventdev_resource_setup(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t service_id;
+ int32_t ret;
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(eventdev_rsrc->args, eventdev_rsrc->nb_args);
+ if (ret < 0)
+ return;
+
+ if (!rte_event_dev_count())
+ rte_exit(EXIT_FAILURE, "No Eventdev found");
+ /* Start event device service */
+ ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->service_id = service_id;
+
+ /* Start event device */
+ ret = rte_event_dev_start(eventdev_rsrc->event_d_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+}
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
new file mode 100644
index 000000000..2e8d95e67
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_EVENTDEV_H__
+#define __L2FWD_EVENTDEV_H__
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sync"
+
+enum {
+ CMD_LINE_OPT_MODE_NUM = 265,
+ CMD_LINE_OPT_EVENTQ_SYNC_NUM,
+};
+
+struct eventdev_resources {
+ struct l2fwd_port_statistics *stats;
+ struct rte_mempool *pkt_pool;
+ uint64_t timer_period;
+ uint32_t *dst_ports;
+ uint32_t service_id;
+ uint32_t port_mask;
+ volatile bool *done;
+ uint8_t event_d_id;
+ uint8_t sync_mode;
+ uint8_t tx_mode_q;
+ uint8_t mac_updt;
+ uint8_t enabled;
+ uint8_t nb_args;
+ char **args;
+};
+
+static inline struct eventdev_resources *
+get_eventdev_rsrc(void)
+{
+ static const char name[RTE_MEMZONE_NAMESIZE] = "l2fwd_event_rsrc";
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+
+ if (mz != NULL)
+ return mz->addr;
+
+ mz = rte_memzone_reserve(name, sizeof(struct eventdev_resources), 0, 0);
+ if (mz != NULL) {
+ memset(mz->addr, 0, sizeof(struct eventdev_resources));
+ return mz->addr;
+ }
+
+ rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
+
+ return NULL;
+}
+
+void eventdev_resource_setup(void);
+
+#endif /* __L2FWD_EVENTDEV_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index cc47fa203..661f0833f 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -42,6 +42,7 @@
#include <rte_spinlock.h>
#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
static volatile bool force_quit;
@@ -288,7 +289,12 @@ l2fwd_usage(const char *prgname)
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
" When enabled:\n"
" - The source MAC address is replaced by the TX port MAC address\n"
- " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n"
+ " --mode: Packet transfer mode for I/O, poll or eventdev\n"
+ " Default mode = eventdev\n"
+ " --eventq-sync:Event queue synchronization method,\n"
+ " ordered or atomic.\nDefault: atomic\n"
+ " Valid only if --mode=eventdev\n\n",
prgname);
}
@@ -503,6 +509,7 @@ signal_handler(int signum)
int
main(int argc, char **argv)
{
+ struct eventdev_resources *eventdev_rsrc;
uint16_t nb_ports_available = 0;
struct lcore_queue_conf *qconf;
uint32_t nb_ports_in_mask = 0;
@@ -524,6 +531,7 @@ main(int argc, char **argv)
signal(SIGINT, signal_handler);
signal(SIGTERM, signal_handler);
+ eventdev_rsrc = get_eventdev_rsrc();
/* parse application arguments (after the EAL ones) */
ret = l2fwd_parse_args(argc, argv);
if (ret < 0)
@@ -584,6 +592,17 @@ main(int argc, char **argv)
if (l2fwd_pktmbuf_pool == NULL)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+ eventdev_rsrc->port_mask = l2fwd_enabled_port_mask;
+ eventdev_rsrc->pkt_pool = l2fwd_pktmbuf_pool;
+ eventdev_rsrc->dst_ports = l2fwd_dst_ports;
+ eventdev_rsrc->timer_period = timer_period;
+ eventdev_rsrc->mac_updt = mac_updating;
+ eventdev_rsrc->stats = port_statistics;
+ eventdev_rsrc->done = &force_quit;
+
+ /* Configure eventdev parameters if user has requested */
+ eventdev_resource_setup();
+
/* Initialize the port/queue configuration of each logical core */
RTE_ETH_FOREACH_DEV(portid) {
/* skip ports that are not enabled */
@@ -610,7 +629,6 @@ main(int argc, char **argv)
printf("Lcore %u: RX port %u\n", rx_lcore_id, portid);
}
-
/* Initialise each port */
RTE_ETH_FOREACH_DEV(portid) {
struct rte_eth_rxconf rxq_conf;
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index 16eadb0b4..b1ad48cc5 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -8,5 +8,6 @@
# DPDK instance, use 'make'
sources = files(
- 'main.c'
+ 'main.c',
+ 'l2fwd_eventdev.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v3 03/10] examples/l2fwd-event: add infra to split eventdev framework
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 04/10] examples/l2fwd-event: add eth port setup for eventdev pbhagavatula
` (7 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add infra to split the eventdev framework based on the event Tx adapter
capability.
If the event Tx adapter has internal port capability, packets are
transmitted through `rte_event_eth_tx_adapter_enqueue`; otherwise a
SINGLE_LINK event queue is used to enqueue packets to a service core
which is responsible for transmitting them.
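As a rough sketch of that split (illustrative only, not part of this
patch; `event_dev_id` and `port_id` are placeholder variables), the
choice comes down to a per-port capability check:

    uint32_t caps = 0;

    rte_event_eth_tx_adapter_caps_get(event_dev_id, port_id, &caps);
    if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT) {
        /* HW has an internal port: workers hand mbuf events straight
         * to the Tx adapter enqueue API.
         */
    } else {
        /* Generic path: workers forward events to the SINGLE_LINK
         * queue; the Tx adapter service core dequeues them and
         * transmits via rte_eth_tx_burst().
         */
    }

If any port lacks the internal port capability, the generic path is
used for all ports, as the capability setup below shows.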
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 2 ++
examples/l2fwd-event/l2fwd_eventdev.c | 29 +++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 20 +++++++++++++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 24 +++++++++++++++
.../l2fwd_eventdev_internal_port.c | 24 +++++++++++++++
examples/l2fwd-event/meson.build | 4 ++-
6 files changed, 102 insertions(+), 1 deletion(-)
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_internal_port.c
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index bfe0058a2..c1f700a65 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -8,6 +8,8 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
SRCS-y += l2fwd_eventdev.c
+SRCS-y += l2fwd_eventdev_internal_port.c
+SRCS-y += l2fwd_eventdev_generic.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 19efb6d1e..df76f1c1f 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -76,6 +76,31 @@ parse_eventdev_args(char **argv, int argc)
return 0;
}
+static void
+eventdev_capability_setup(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t caps = 0;
+ uint16_t i;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(i) {
+ ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Invalid capability for Tx adptr port %d\n",
+ i);
+
+ eventdev_rsrc->tx_mode_q |= !(caps &
+ RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+ }
+
+ if (eventdev_rsrc->tx_mode_q)
+ eventdev_set_generic_ops(&eventdev_rsrc->ops);
+ else
+ eventdev_set_internal_port_ops(&eventdev_rsrc->ops);
+}
+
void
eventdev_resource_setup(void)
{
@@ -90,6 +115,10 @@ eventdev_resource_setup(void)
if (!rte_event_dev_count())
rte_exit(EXIT_FAILURE, "No Eventdev found");
+
+ /* Setup eventdev capability callbacks */
+ eventdev_capability_setup();
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 2e8d95e67..8b6606b4c 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -18,8 +18,26 @@ enum {
CMD_LINE_OPT_EVENTQ_SYNC_NUM,
};
+typedef void (*event_queue_setup_cb)(uint16_t ethdev_count,
+ uint32_t event_queue_cfg);
+typedef uint32_t (*eventdev_setup_cb)(uint16_t ethdev_count);
+typedef void (*adapter_setup_cb)(uint16_t ethdev_count);
+typedef void (*event_port_setup_cb)(void);
+typedef void (*service_setup_cb)(void);
+typedef void (*event_loop_cb)(void);
+
+struct eventdev_setup_ops {
+ event_queue_setup_cb event_queue_setup;
+ event_port_setup_cb event_port_setup;
+ eventdev_setup_cb eventdev_setup;
+ adapter_setup_cb adapter_setup;
+ service_setup_cb service_setup;
+ event_loop_cb l2fwd_event_loop;
+};
+
struct eventdev_resources {
struct l2fwd_port_statistics *stats;
+ struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
uint64_t timer_period;
uint32_t *dst_ports;
@@ -58,5 +76,7 @@ get_eventdev_rsrc(void)
}
void eventdev_resource_setup(void);
+void eventdev_set_generic_ops(struct eventdev_setup_ops *ops);
+void eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops);
#endif /* __L2FWD_EVENTDEV_H__ */
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
new file mode 100644
index 000000000..e3990f8b0
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+void
+eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
new file mode 100644
index 000000000..a0d2111f9
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+void
+eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index b1ad48cc5..38560840c 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -9,5 +9,7 @@
sources = files(
'main.c',
- 'l2fwd_eventdev.c'
+ 'l2fwd_eventdev.c',
+ 'l2fwd_eventdev_internal_port.c',
+ 'l2fwd_eventdev_generic.c'
)
--
2.17.1
* [dpdk-dev] [PATCH v3 04/10] examples/l2fwd-event: add eth port setup for eventdev
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (2 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
` (6 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add Ethernet port Rx/Tx queue setup for the event device; these queues
are later used when setting up the event eth Rx/Tx adapters.
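At its core this is the usual per-port ethdev bring-up, roughly (a
sketch with placeholder variables; see the patch for the actual
configuration values):

    rte_eth_dev_configure(port_id, 1, 1, &local_port_conf); /* 1 Rx, 1 Tx queue */
    rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, &rxconf, pkt_pool);
    rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, &txconf);
    rte_eth_promiscuous_enable(port_id);

Unlike the poll-mode path, the ports are started later from main(),
only after the event device configuration completes.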
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 114 ++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 1 +
examples/l2fwd-event/main.c | 17 ++++
3 files changed, 132 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index df76f1c1f..0d0d3b8b9 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -18,6 +18,14 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
static void
parse_mode(const char *optarg)
{
@@ -76,6 +84,108 @@ parse_eventdev_args(char **argv, int argc)
return 0;
}
+static void
+eth_dev_port_setup(uint16_t ethdev_count __rte_unused)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ static struct rte_eth_conf port_config = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_key = NULL,
+ .rss_hf = ETH_RSS_IP,
+ }
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ }
+ };
+ struct rte_eth_conf local_port_conf;
+ struct rte_eth_dev_info dev_info;
+ struct rte_eth_txconf txconf;
+ struct rte_eth_rxconf rxconf;
+ uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+ uint16_t port_id;
+ int32_t ret;
+
+ /* initialize all ports */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ local_port_conf = port_config;
+ /* skip ports that are not enabled */
+ if ((eventdev_rsrc->port_mask & (1 << port_id)) == 0) {
+ printf("\nSkipping disabled port %d\n", port_id);
+ continue;
+ }
+
+ /* init port */
+ printf("Initializing port %d ... ", port_id);
+ fflush(stdout);
+ rte_eth_dev_info_get(port_id, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+ dev_info.flow_type_rss_offloads;
+ if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+ port_config.rx_adv_conf.rss_conf.rss_hf) {
+ printf("Port %u modified RSS hash function "
+ "based on hardware support,"
+ "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+ port_id,
+ port_config.rx_adv_conf.rss_conf.rss_hf,
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+ }
+
+ ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot configure device: err=%d, port=%d\n",
+ ret, port_id);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ rte_eth_macaddr_get(port_id,
+ &eventdev_rsrc->ports_eth_addr[port_id]);
+ print_ethaddr(" Address:",
+ &eventdev_rsrc->ports_eth_addr[port_id]);
+ printf("\n");
+
+
+ /* init one Rx queue per port */
+ rxconf = dev_info.default_rxconf;
+ rxconf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0, &rxconf,
+ eventdev_rsrc->pkt_pool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_rx_queue_setup: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ /* init one Tx queue per port */
+ txconf = dev_info.default_txconf;
+ txconf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_tx_queue_setup: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ rte_eth_promiscuous_enable(port_id);
+ }
+}
+
static void
eventdev_capability_setup(void)
{
@@ -105,6 +215,7 @@ void
eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint16_t ethdev_count = rte_eth_dev_count_avail();
uint32_t service_id;
int32_t ret;
@@ -119,6 +230,9 @@ eventdev_resource_setup(void)
/* Setup eventdev capability callbacks */
eventdev_capability_setup();
+ /* Ethernet device configuration */
+ eth_dev_port_setup(ethdev_count);
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 8b6606b4c..d380faff5 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -51,6 +51,7 @@ struct eventdev_resources {
uint8_t enabled;
uint8_t nb_args;
char **args;
+ struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
};
static inline struct eventdev_resources *
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 661f0833f..f24bdd4a4 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -602,6 +602,22 @@ main(int argc, char **argv)
/* Configure eventdev parameters if user has requested */
eventdev_resource_setup();
+ if (eventdev_rsrc->enabled) {
+ /* All settings are done. Now enable eth devices */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_dev_start:err=%d, port=%u\n",
+ ret, portid);
+ }
+
+ goto skip_port_config;
+ }
/* Initialize the port/queue configuration of each logical core */
RTE_ETH_FOREACH_DEV(portid) {
@@ -733,6 +749,7 @@ main(int argc, char **argv)
"All available ports are disabled. Please set portmask.\n");
}
+skip_port_config:
check_all_ports_link_status(l2fwd_enabled_port_mask);
ret = 0;
--
2.17.1
* [dpdk-dev] [PATCH v3 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (3 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 04/10] examples/l2fwd-event: add eth port setup for eventdev pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:35 ` Sunil Kumar Kori
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
` (5 subsequent siblings)
10 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event device queue and port setup based on event eth Tx adapter
capabilities.
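In short (a simplified sketch, not literal patch code; the flag below
stands for the Tx adapter capability check done earlier):

    uint16_t nb_event_queues;

    if (tx_adapter_has_internal_port)
        /* internal port: one event queue per ethdev port is enough */
        nb_event_queues = ethdev_count;
    else
        /* generic: one queue per ethdev port plus one SINGLE_LINK
         * queue that feeds the Tx adapter
         */
        nb_event_queues = ethdev_count + 1;

Each worker event port is then linked to the per-ethdev queues; the
extra SINGLE_LINK queue, when present, is linked only to the Tx
adapter port.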
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 10 +
examples/l2fwd-event/l2fwd_eventdev.h | 18 ++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 179 +++++++++++++++++-
.../l2fwd_eventdev_internal_port.c | 173 ++++++++++++++++-
4 files changed, 378 insertions(+), 2 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 0d0d3b8b9..7a3d077ae 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -216,6 +216,7 @@ eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
uint16_t ethdev_count = rte_eth_dev_count_avail();
+ uint32_t event_queue_cfg = 0;
uint32_t service_id;
int32_t ret;
@@ -233,6 +234,15 @@ eventdev_resource_setup(void)
/* Ethernet device configuration */
eth_dev_port_setup(ethdev_count);
+ /* Event device configuration */
+ event_queue_cfg = eventdev_rsrc->ops.eventdev_setup(ethdev_count);
+
+ /* Event queue configuration */
+ eventdev_rsrc->ops.event_queue_setup(ethdev_count, event_queue_cfg);
+
+ /* Event port configuration */
+ eventdev_rsrc->ops.event_port_setup();
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index d380faff5..1d43200e2 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -26,6 +26,17 @@ typedef void (*event_port_setup_cb)(void);
typedef void (*service_setup_cb)(void);
typedef void (*event_loop_cb)(void);
+struct eventdev_queues {
+ uint8_t *event_q_id;
+ uint8_t nb_queues;
+};
+
+struct eventdev_ports {
+ uint8_t *event_p_id;
+ uint8_t nb_ports;
+ rte_spinlock_t lock;
+};
+
struct eventdev_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
@@ -36,9 +47,14 @@ struct eventdev_setup_ops {
};
struct eventdev_resources {
+ struct rte_event_port_conf def_p_conf;
struct l2fwd_port_statistics *stats;
+ /* Default port config. */
+ uint8_t disable_implicit_release;
struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
+ struct eventdev_queues evq;
+ struct eventdev_ports evp;
uint64_t timer_period;
uint32_t *dst_ports;
uint32_t service_id;
@@ -47,6 +63,8 @@ struct eventdev_resources {
uint8_t event_d_id;
uint8_t sync_mode;
uint8_t tx_mode_q;
+ uint8_t deq_depth;
+ uint8_t has_burst;
uint8_t mac_updt;
uint8_t enabled;
uint8_t nb_args;
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index e3990f8b0..65166fded 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -17,8 +17,185 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static uint32_t
+eventdev_setup_generic(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t num_workers = 0;
+ int ret;
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+ eventdev_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ /* One queue for each ethdev port + one Tx adapter Single link queue. */
+ event_d_conf.nb_event_queues = ethdev_count + 1;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count() - rte_service_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ eventdev_rsrc->evp.nb_ports = num_workers;
+
+ eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ eventdev_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
+static void
+event_port_setup_generic(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ /* Service cores are not used to run worker thread */
+ eventdev_rsrc->evp.nb_ports = eventdev_rsrc->evp.nb_ports;
+ eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evp.nb_ports);
+ if (!eventdev_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE, " No space is available");
+
+ memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ eventdev_rsrc->disable_implicit_release;
+ eventdev_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+ for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id,
+ eventdev_rsrc->evq.event_q_id,
+ NULL,
+ eventdev_rsrc->evq.nb_queues - 1);
+ if (ret != (eventdev_rsrc->evq.nb_queues - 1)) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+ }
+ /* init spinlock */
+ rte_spinlock_init(&eventdev_rsrc->evp.lock);
+
+ eventdev_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id;
+ int32_t ret;
+
+ event_q_conf.schedule_type = eventdev_rsrc->sync_mode;
+ eventdev_rsrc->evq.nb_queues = ethdev_count + 1;
+ eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evq.nb_queues);
+ if (!eventdev_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ for (event_q_id = 0; event_q_id < (eventdev_rsrc->evq.nb_queues - 1);
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+
+ event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+ event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+ ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue for Tx adapter");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
void
eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->eventdev_setup = eventdev_setup_generic;
+ ops->event_queue_setup = event_queue_setup_generic;
+ ops->event_port_setup = event_port_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index a0d2111f9..52cb07707 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -17,8 +17,179 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static uint32_t
+eventdev_setup_internal_port(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ uint8_t disable_implicit_release;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t num_workers = 0;
+ int ret;
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+
+ disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+ eventdev_rsrc->disable_implicit_release =
+ disable_implicit_release;
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ event_d_conf.nb_event_queues = ethdev_count;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ eventdev_rsrc->evp.nb_ports = num_workers;
+ eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ eventdev_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
+static void
+event_port_setup_internal_port(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evp.nb_ports);
+ if (!eventdev_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE,
+ "Failed to allocate memory for Event Ports");
+
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ eventdev_rsrc->disable_implicit_release;
+
+ for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+ /* init spinlock */
+ rte_spinlock_init(&eventdev_rsrc->evp.lock);
+ }
+
+ eventdev_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id = 0;
+ int32_t ret;
+
+ rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ if (def_q_conf.nb_atomic_order_sequences <
+ event_q_conf.nb_atomic_order_sequences)
+ event_q_conf.nb_atomic_order_sequences =
+ def_q_conf.nb_atomic_order_sequences;
+
+ event_q_conf.event_queue_cfg = event_queue_cfg;
+ event_q_conf.schedule_type = eventdev_rsrc->sync_mode;
+ eventdev_rsrc->evq.nb_queues = ethdev_count;
+ eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evq.nb_queues);
+ if (!eventdev_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->eventdev_setup = eventdev_setup_internal_port;
+ ops->event_queue_setup = event_queue_setup_internal_port;
+ ops->event_port_setup = event_port_setup_internal_port;
}
--
2.17.1
* [dpdk-dev] [PATCH v3 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (4 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 07/10] examples/l2fwd-event: add service core setup pbhagavatula
` (4 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.
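A condensed sketch of the generic Rx side (variable names are
placeholders and error handling is omitted; see the full patch below):

    struct rte_event_eth_rx_adapter_queue_conf qconf = {
        .ev = {
            .queue_id = ev_queue_id,    /* per-port event queue */
            .sched_type = sync_mode,    /* atomic or ordered */
            .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
        },
    };

    rte_event_eth_rx_adapter_create(rx_adptr_id, event_dev_id, &port_conf);
    /* -1 adds every Rx queue of this ethernet port to the adapter */
    rte_event_eth_rx_adapter_queue_add(rx_adptr_id, eth_port_id, -1, &qconf);
    rte_event_eth_rx_adapter_start(rx_adptr_id);

The internal-port variant creates one Rx and one Tx adapter per ethdev
port, while the generic variant shares a single adapter of each kind
across all ports and drives them through service cores.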
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 3 +
examples/l2fwd-event/l2fwd_eventdev.h | 17 +++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 117 ++++++++++++++++++
.../l2fwd_eventdev_internal_port.c | 80 ++++++++++++
4 files changed, 217 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 7a3d077ae..f964c69d6 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -243,6 +243,9 @@ eventdev_resource_setup(void)
/* Event port configuration */
eventdev_rsrc->ops.event_port_setup();
+ /* Rx/Tx adapters configuration */
+ eventdev_rsrc->ops.adapter_setup(ethdev_count);
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 1d43200e2..532672f7d 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -6,6 +6,9 @@
#define __L2FWD_EVENTDEV_H__
#include <rte_common.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_mbuf.h>
#include <rte_spinlock.h>
#include "l2fwd_common.h"
@@ -37,6 +40,18 @@ struct eventdev_ports {
rte_spinlock_t lock;
};
+struct eventdev_rx_adptr {
+ uint32_t service_id;
+ uint8_t nb_rx_adptr;
+ uint8_t *rx_adptr;
+};
+
+struct eventdev_tx_adptr {
+ uint32_t service_id;
+ uint8_t nb_tx_adptr;
+ uint8_t *tx_adptr;
+};
+
struct eventdev_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
@@ -50,6 +65,8 @@ struct eventdev_resources {
struct rte_event_port_conf def_p_conf;
struct l2fwd_port_statistics *stats;
/* Default port config. */
+ struct eventdev_rx_adptr rx_adptr;
+ struct eventdev_tx_adptr tx_adptr;
uint8_t disable_implicit_release;
struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index 65166fded..68b63279a 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -192,10 +192,127 @@ event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg)
eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
}
+static void
+rx_tx_adapter_setup_generic(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ uint8_t rx_adptr_id = 0;
+ uint8_t tx_adptr_id = 0;
+ uint8_t tx_port_id = 0;
+ uint32_t service_id;
+ int32_t ret, i;
+
+ /* Rx adapter setup */
+ eventdev_rsrc->rx_adptr.nb_rx_adptr = 1;
+ eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->rx_adptr.nb_rx_adptr);
+ if (!eventdev_rsrc->rx_adptr.rx_adptr) {
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create rx adapter");
+
+ eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode;
+ for (i = 0; i < ethdev_count; i++) {
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc->evq.event_q_id[i];
+ ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, i, -1,
+ ð_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+ }
+
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error getting the service ID for rx adptr\n");
+ }
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->rx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
+ rx_adptr_id);
+
+ eventdev_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+ /* Tx adapter setup */
+ eventdev_rsrc->tx_adptr.nb_tx_adptr = 1;
+ eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->tx_adptr.nb_tx_adptr);
+ if (!eventdev_rsrc->tx_adptr.tx_adptr) {
+ free(eventdev_rsrc->rx_adptr.rx_adptr);
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create tx adapter");
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->tx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_exit(EXIT_FAILURE,
+ "Unable to link Tx adapter port to Tx queue:err = %d",
+ ret);
+
+ ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed",
+ tx_adptr_id);
+
+ eventdev_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
void
eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
{
ops->eventdev_setup = eventdev_setup_generic;
ops->event_queue_setup = event_queue_setup_generic;
ops->event_port_setup = event_port_setup_generic;
+ ops->adapter_setup = rx_tx_adapter_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index 52cb07707..02663242f 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -186,10 +186,90 @@ event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg)
}
}
+static void
+rx_tx_adapter_setup_internal_port(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ int32_t ret, i;
+
+ eventdev_rsrc->rx_adptr.nb_rx_adptr = ethdev_count;
+ eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->rx_adptr.nb_rx_adptr);
+ if (!eventdev_rsrc->rx_adptr.rx_adptr) {
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create rx adapter[%d]", i);
+
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc->evq.event_q_id[i];
+ eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode;
+ ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, ð_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+
+ ret = rte_event_eth_rx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Rx adapter[%d] start failed", i);
+
+ eventdev_rsrc->rx_adptr.rx_adptr[i] = i;
+ }
+
+ eventdev_rsrc->tx_adptr.nb_tx_adptr = ethdev_count;
+ eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->tx_adptr.nb_tx_adptr);
+ if (!eventdev_rsrc->tx_adptr.tx_adptr) {
+ free(eventdev_rsrc->rx_adptr.rx_adptr);
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create tx adapter[%d]", i);
+
+ ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+
+ ret = rte_event_eth_tx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Tx adapter[%d] start failed", i);
+
+ eventdev_rsrc->tx_adptr.tx_adptr[i] = i;
+ }
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
ops->eventdev_setup = eventdev_setup_internal_port;
ops->event_queue_setup = event_queue_setup_internal_port;
ops->event_port_setup = event_port_setup_internal_port;
+ ops->adapter_setup = rx_tx_adapter_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v3 07/10] examples/l2fwd-event: add service core setup
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (5 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
` (3 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add service core setup when eventdev and Rx/Tx adapter don't have
internal port capability.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev_generic.c | 31 +++++++++++++++++++
.../l2fwd_eventdev_internal_port.c | 6 ++++
examples/l2fwd-event/main.c | 2 ++
3 files changed, 39 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index 68b63279a..e1e603052 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -17,6 +17,36 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static void
+eventdev_service_setup_generic(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t lcore_id[RTE_MAX_LCORE] = {0};
+ int32_t req_service_cores = 3;
+ int32_t avail_service_cores;
+
+ avail_service_cores = rte_service_lcore_list(lcore_id, RTE_MAX_LCORE);
+ if (avail_service_cores < req_service_cores) {
+ rte_exit(EXIT_FAILURE, "Enough services cores are not present"
+ " Required = %d Available = %d",
+ req_service_cores, avail_service_cores);
+ }
+
+ /* Start eventdev scheduler service */
+ rte_service_map_lcore_set(eventdev_rsrc->service_id, lcore_id[0], 1);
+ rte_service_lcore_start(lcore_id[0]);
+
+ /* Start eventdev Rx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc->rx_adptr.service_id,
+ lcore_id[1], 1);
+ rte_service_lcore_start(lcore_id[1]);
+
+ /* Start eventdev Tx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc->tx_adptr.service_id,
+ lcore_id[2], 1);
+ rte_service_lcore_start(lcore_id[2]);
+}
+
static uint32_t
eventdev_setup_generic(uint16_t ethdev_count)
{
@@ -315,4 +345,5 @@ eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
ops->event_queue_setup = event_queue_setup_generic;
ops->event_port_setup = event_port_setup_generic;
ops->adapter_setup = rx_tx_adapter_setup_generic;
+ ops->service_setup = eventdev_service_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index 02663242f..39fcb4326 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -265,6 +265,11 @@ rx_tx_adapter_setup_internal_port(uint16_t ethdev_count)
}
}
+static void
+eventdev_service_setup_internal_port(void)
+{
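+ /* Nothing to do here: when the event device and Rx/Tx adapters have
+ * internal port capability, no service cores are required.
+ */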
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
@@ -272,4 +277,5 @@ eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
ops->event_queue_setup = event_queue_setup_internal_port;
ops->event_port_setup = event_port_setup_internal_port;
ops->adapter_setup = rx_tx_adapter_setup_internal_port;
+ ops->service_setup = eventdev_service_setup_internal_port;
}
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index f24bdd4a4..09c86d2cd 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -616,6 +616,8 @@ main(int argc, char **argv)
ret, portid);
}
+ /* Now start internal services */
+ eventdev_rsrc->ops.service_setup();
goto skip_port_config;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v3 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (6 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 07/10] examples/l2fwd-event: add service core setup pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
` (2 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event dev main loop based on enabled l2fwd options and eventdev
capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 273 ++++++++++++++++++++++++++
examples/l2fwd-event/main.c | 10 +-
2 files changed, 280 insertions(+), 3 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index f964c69d6..345d9d15b 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -18,6 +18,12 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+#define L2FWD_EVENT_SINGLE 0x1
+#define L2FWD_EVENT_BURST 0x2
+#define L2FWD_EVENT_TX_DIRECT 0x4
+#define L2FWD_EVENT_TX_ENQ 0x8
+#define L2FWD_EVENT_UPDT_MAC 0x10
+
static void
print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
{
@@ -211,10 +217,272 @@ eventdev_capability_setup(void)
eventdev_set_internal_port_ops(&eventdev_rsrc->ops);
}
+static __rte_noinline int
+get_free_event_port(struct eventdev_resources *eventdev_rsrc)
+{
+ static int index;
+ int port_id;
+
+ rte_spinlock_lock(&eventdev_rsrc->evp.lock);
+ if (index >= eventdev_rsrc->evp.nb_ports) {
+ printf("No free event port is available\n");
+ return -1;
+ }
+
+ port_id = eventdev_rsrc->evp.event_p_id[index];
+ index++;
+ rte_spinlock_unlock(&eventdev_rsrc->evp.lock);
+
+ return port_id;
+}
+
+static __rte_always_inline void
+l2fwd_event_updt_mac(struct rte_mbuf *m, const struct rte_ether_addr *dst_mac,
+ uint8_t dst_port)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = ð->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(dst_mac, ð->s_addr);
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_single(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ const uint64_t timer_period = eventdev_rsrc->timer_period;
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const int port_id = get_free_event_port(eventdev_rsrc);
+ const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ volatile bool *done = eventdev_rsrc->done;
+ struct rte_mbuf *mbuf;
+ uint16_t dst_port;
+ struct rte_event ev;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!*done) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
+ continue;
+
+
+ mbuf = ev.mbuf;
+ dst_port = eventdev_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_event_updt_mac(mbuf,
+ &eventdev_rsrc->ports_eth_addr[dst_port],
+ dst_port);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev.queue_id = tx_q_id;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ while (rte_event_enqueue_burst(event_d_id, port_id,
+ &ev, 1) && !*done)
+ ;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+ while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id,
+ &ev, 1) &&
+ !*done)
+ ;
+ }
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_burst(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ const uint64_t timer_period = eventdev_rsrc->timer_period;
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const int port_id = get_free_event_port(eventdev_rsrc);
+ const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ const uint8_t deq_len = eventdev_rsrc->deq_depth;
+ volatile bool *done = eventdev_rsrc->done;
+ struct rte_event ev[MAX_PKT_BURST];
+ struct rte_mbuf *mbuf;
+ uint16_t nb_rx, nb_tx;
+ uint16_t dst_port;
+ uint8_t i;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!*done) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, port_id, ev,
+ deq_len, 0);
+ if (nb_rx == 0)
+ continue;
+
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf = ev[i].mbuf;
+ dst_port = eventdev_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0) {
+ __atomic_fetch_add(
+ &eventdev_rsrc->stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+ __atomic_fetch_add(
+ &eventdev_rsrc->stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_event_updt_mac(mbuf,
+ &eventdev_rsrc->ports_eth_addr[
+ dst_port],
+ dst_port);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev[i].queue_id = tx_q_id;
+ ev[i].op = RTE_EVENT_OP_FORWARD;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT)
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+
+ }
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ nb_tx = rte_event_enqueue_burst(event_d_id, port_id,
+ ev, nb_rx);
+ while (nb_tx < nb_rx && !*done)
+ nb_tx += rte_event_enqueue_burst(event_d_id,
+ port_id, ev + nb_tx,
+ nb_rx - nb_tx);
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id, ev,
+ nb_rx);
+ while (nb_tx < nb_rx && !*done)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx);
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ if (flags & L2FWD_EVENT_SINGLE)
+ l2fwd_event_loop_single(eventdev_rsrc, flags);
+ if (flags & L2FWD_EVENT_BURST)
+ l2fwd_event_loop_burst(eventdev_rsrc, flags);
+}
+
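+/*
+ * Generate a specialized main loop for every combination of
+ * [MAC update][Tx mode][burst] so that the flag checks are resolved at
+ * compile time; eventdev_resource_setup() selects the matching loop at
+ * run time through the event_loop[][][] lookup table below.
+ */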
+#define L2FWD_EVENT_MODE \
+FP(tx_d, 0, 0, 0, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE) \
+FP(tx_d_burst, 0, 0, 1, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST) \
+FP(tx_q, 0, 1, 0, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE) \
+FP(tx_q_burst, 0, 1, 1, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST) \
+FP(tx_d_mac, 1, 0, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \
+ L2FWD_EVENT_SINGLE) \
+FP(tx_d_brst_mac, 1, 0, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \
+ L2FWD_EVENT_BURST) \
+FP(tx_q_mac, 1, 1, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \
+ L2FWD_EVENT_SINGLE) \
+FP(tx_q_brst_mac, 1, 1, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \
+ L2FWD_EVENT_BURST)
+
+
+#define FP(_name, _f3, _f2, _f1, flags) \
+static void __rte_noinline \
+l2fwd_event_main_loop_ ## _name(void) \
+{ \
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); \
+ l2fwd_event_loop(eventdev_rsrc, flags); \
+}
+
+L2FWD_EVENT_MODE
+#undef FP
+
void
eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ /* [MAC_UPDT][TX_MODE][BURST] */
+ const event_loop_cb event_loop[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = l2fwd_event_main_loop_ ## _name,
+ L2FWD_EVENT_MODE
+#undef FP
+ };
uint16_t ethdev_count = rte_eth_dev_count_avail();
uint32_t event_queue_cfg = 0;
uint32_t service_id;
@@ -260,4 +528,9 @@ eventdev_resource_setup(void)
ret = rte_event_dev_start(eventdev_rsrc->event_d_id);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ eventdev_rsrc->ops.l2fwd_event_loop = event_loop
+ [eventdev_rsrc->mac_updt]
+ [eventdev_rsrc->tx_mode_q]
+ [eventdev_rsrc->has_burst];
}
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 09c86d2cd..56487809b 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -271,8 +271,12 @@ static void l2fwd_main_loop(void)
static int
l2fwd_launch_one_lcore(void *args)
{
- RTE_SET_USED(args);
- l2fwd_main_loop();
+ struct eventdev_resources *eventdev_rsrc = args;
+
+ if (eventdev_rsrc->enabled)
+ eventdev_rsrc->ops.l2fwd_event_loop();
+ else
+ l2fwd_main_loop();
return 0;
}
@@ -756,7 +760,7 @@ main(int argc, char **argv)
ret = 0;
/* launch per-lcore init on every lcore */
- rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL,
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc,
CALL_MASTER);
rte_eal_mp_wait_lcore();
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v3 09/10] examples/l2fwd-event: add graceful teardown
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (7 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add graceful teardown that addresses both event mode and poll mode.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/main.c | 44 +++++++++++++++++++++++++++++--------
1 file changed, 35 insertions(+), 9 deletions(-)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 56487809b..057381b29 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -522,7 +522,7 @@ main(int argc, char **argv)
uint32_t rx_lcore_id;
uint32_t nb_mbufs;
uint16_t nb_ports;
- int ret;
+ int i, ret;
/* init EAL */
ret = rte_eal_init(argc, argv);
@@ -762,15 +762,41 @@ main(int argc, char **argv)
/* launch per-lcore init on every lcore */
rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc,
CALL_MASTER);
- rte_eal_mp_wait_lcore();
+ if (eventdev_rsrc->enabled) {
+ for (i = 0; i < eventdev_rsrc->rx_adptr.nb_rx_adptr; i++)
+ rte_event_eth_rx_adapter_stop(
+ eventdev_rsrc->rx_adptr.rx_adptr[i]);
+ for (i = 0; i < eventdev_rsrc->tx_adptr.nb_tx_adptr; i++)
+ rte_event_eth_tx_adapter_stop(
+ eventdev_rsrc->tx_adptr.tx_adptr[i]);
- RTE_ETH_FOREACH_DEV(portid) {
- if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
- continue;
- printf("Closing port %d...", portid);
- rte_eth_dev_stop(portid);
- rte_eth_dev_close(portid);
- printf(" Done\n");
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ rte_eth_dev_stop(portid);
+ }
+
+ rte_eal_mp_wait_lcore();
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ rte_eth_dev_close(portid);
+ }
+
+ rte_event_dev_stop(eventdev_rsrc->event_d_id);
+ rte_event_dev_close(eventdev_rsrc->event_d_id);
+
+ } else {
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("Closing port %d...", portid);
+ rte_eth_dev_stop(portid);
+ rte_eth_dev_close(portid);
+ printf(" Done\n");
+ }
}
printf("Bye...\n");
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v3 10/10] doc: add application usage guide for l2fwd-event
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (8 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
@ 2019-09-19 10:13 ` pbhagavatula
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-19 10:13 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add documentation for l2fwd-event example.
Update MAINTAINERS file claiming responsibility of l2fwd-event.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
MAINTAINERS | 5 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
.../l2_forward_event_real_virtual.rst | 799 ++++++++++++++++++
4 files changed, 810 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index b3d9aaddd..d8e1fa84d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1458,6 +1458,11 @@ M: Tomasz Kantecki <tomasz.kantecki@intel.com>
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
+M: Sunil Kumar Kori <skori@marvell.com>
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+F: examples/l2fwd-event/
+F: doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
+
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index f23f8f59e..83a4f8d5c 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -26,6 +26,7 @@ Sample Applications User Guides
l2_forward_crypto
l2_forward_job_stats
l2_forward_real_virtual
+ l2_forward_event_real_virtual
l2_forward_cat
l3_forward
l3_forward_power_man
diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
index 90704194a..b33904ed1 100644
--- a/doc/guides/sample_app_ug/intro.rst
+++ b/doc/guides/sample_app_ug/intro.rst
@@ -87,6 +87,11 @@ examples are highlighted below.
forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
addresses like a simple switch.
+* :doc:`Network Layer 2 forwarding<l2_forward_event_real_virtual>`: The Network Layer 2
+ forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
+ addresses like a simple switch. It demonstrates the usage of both poll and event mode
+ Rx/Tx mechanisms.
+
* :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
forwarding, or ``l3fwd`` application does forwarding based on Internet
Protocol, IPv4 or IPv6 like a simple router.
diff --git a/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
new file mode 100644
index 000000000..7cea8efaf
--- /dev/null
+++ b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
@@ -0,0 +1,799 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2014 Intel Corporation.
+
+.. _l2_fwd_event_app_real_and_virtual:
+
+L2 Forwarding Eventdev Sample Application (in Real and Virtualized Environments)
+================================================================================
+
+The L2 Forwarding eventdev sample application is a simple example of packet
+processing using the Data Plane Development Kit (DPDK). It demonstrates the usage
+of the poll and event mode packet I/O mechanisms and can also take advantage of
+Single Root I/O Virtualization (SR-IOV) features in a virtualized environment.
+
+Overview
+--------
+
+The L2 Forwarding eventdev sample application, which can operate in real and
+virtualized environments, performs L2 forwarding for each packet that is
+received on an RX_PORT. The destination port is the adjacent port from the
+enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
+ports 1 and 2 forward into each other, and ports 3 and 4 forward into each
+other. Also, if MAC addresses updating is enabled, the MAC addresses are
+affected as follows:
+
+* The source MAC address is replaced by the TX_PORT MAC address
+
+* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
+
+The application receives packets from RX_PORT using one of the following methods:
+
+* Poll mode
+
+* Eventdev mode (default)
+
+This application can be used to benchmark performance using a traffic-generator,
+as shown in the :numref:`figure_l2_fwd_benchmark_setup`, or in a virtualized
+environment as shown in :numref:`figure_l2_fwd_virtenv_benchmark_setup`.
+
+.. _figure_l2_fwd_benchmark_setup:
+
+.. figure:: img/l2_fwd_benchmark_setup.*
+
+ Performance Benchmark Setup (Basic Environment)
+
+.. _figure_l2_fwd_virtenv_benchmark_setup:
+
+.. figure:: img/l2_fwd_virtenv_benchmark_setup.*
+
+ Performance Benchmark Setup (Virtualized Environment)
+
+This application may be used for basic VM to VM communication as shown
+in :numref:`figure_l2_fwd_vm2vm`, when MAC addresses updating is disabled.
+
+.. _figure_l2_fwd_vm2vm:
+
+.. figure:: img/l2_fwd_vm2vm.*
+
+ Virtual Machine to Virtual Machine communication.
+
+.. _l2_fwd_event_vf_setup:
+
+Virtual Function Setup Instructions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The application can use the virtual functions available in the system and can
+therefore be used in a virtual machine without passing the whole Network
+Device through to the guest machine in a virtualized scenario. The virtual
+functions can be enabled on the host machine or the hypervisor with the
+respective physical function driver.
+
+For example, on a Linux* host machine, it is possible to enable a virtual
+function using the following command:
+
+.. code-block:: console
+
+ modprobe ixgbe max_vfs=2,2
+
+This command enables two Virtual Functions on each of Physical Function of the
+NIC, with two physical ports in the PCI configuration space.
+
+It is important to note that enabled Virtual Function 0 and 2 would belong to
+Physical Function 0 and Virtual Function 1 and 3 would belong to Physical
+Function 1, in this case enabling a total of four Virtual Functions.
+
+Compiling the Application
+-------------------------
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``l2fwd-event`` sub-directory.
+
+Running the Application
+-----------------------
+
+The application requires a number of command line options:
+
+.. code-block:: console
+
+ ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sync=SYNC_MODE
+
+where,
+
+* p PORTMASK: A hexadecimal bitmask of the ports to configure
+
+* q NQ: A number of queues (=ports) per lcore (default is 1)
+
+* --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default).
+
+* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.
+
+* --eventq-sync=SYNC_MODE: Event queue synchronization method, Ordered or Atomic. Atomic by default.
+
+Sample commands to run the application in the different modes are given below:
+
+To run in poll mode on a Linux environment with 4 lcores, 16 ports, 8 RX queues
+per lcore and MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll
+
+To run in eventdev mode on a Linux environment with 4 lcores, 16 ports, the
+ordered event queue synchronization method and MAC address updating enabled,
+issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sync=ordered
+
+or
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
+
+To run the application with the S/W scheduler, the following DPDK services are used:
+
+* Software scheduler
+* Rx adapter service function
+* Tx adapter service function
+
+The application needs service cores to run the above mentioned services. Service
+cores must be provided as EAL parameters along with ``--vdev=event_sw0`` to
+enable the S/W scheduler. A sample command is:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Explanation
+-----------
+
+The following sections provide some explanation of the code.
+
+.. _l2_fwd_event_app_cmd_arguments:
+
+Command Line Arguments
+~~~~~~~~~~~~~~~~~~~~~~
+
+The L2 Forwarding eventdev sample application takes specific parameters,
+in addition to Environment Abstraction Layer (EAL) arguments.
+The preferred way to parse parameters is to use the getopt() function,
+since it is part of a well-defined and portable library.
+
+The parsing of arguments is done in the **l2fwd_parse_args()** function for
+non-eventdev parameters and in **parse_eventdev_args()** for eventdev parameters.
+The method of argument parsing is not described here. Refer to the
+*glibc getopt(3)* man page for details.
+
+EAL arguments are parsed first, then application-specific arguments.
+This is done at the beginning of the main() function and eventdev parameters
+are parsed in eventdev_resource_setup() function during eventdev setup:
+
+.. code-block:: c
+
+ /* init EAL */
+
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+
+ argc -= ret;
+ argv += ret;
+
+ /* parse application arguments (after the EAL ones) */
+
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+ .
+ .
+ .
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(argc, argv);
+ if (ret < 0)
+ return ret;
+
+.. _l2_fwd_event_app_mbuf_init:
+
+Mbuf Pool Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the arguments are parsed, the mbuf pool is created.
+The mbuf pool contains a set of mbuf objects that will be used by the driver
+and the application to store network packet data:
+
+.. code-block:: c
+
+ /* create the mbuf pool */
+
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_panic("Cannot init mbuf pool\n");
+
+The rte_mempool is a generic structure used to handle pools of objects.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of 32 mbufs is kept.
+The memory is allocated in NUMA socket 0,
+but it is possible to extend this code to allocate one mbuf pool per socket.
+
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
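+
+As a sketch of the per-socket extension mentioned above (illustrative only, not
+part of the application), a pool could be created lazily for each NUMA socket
+in use and selected at run time:
+
+.. code-block:: c
+
+    static struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES];
+
+    static struct rte_mempool *
+    pktmbuf_pool_get(int socket_id)
+    {
+        char name[RTE_MEMPOOL_NAMESIZE];
+
+        if (pktmbuf_pool[socket_id] == NULL) {
+            snprintf(name, sizeof(name), "mbuf_pool_%d", socket_id);
+            /* Same parameters as the single pool created above */
+            pktmbuf_pool[socket_id] = rte_pktmbuf_pool_create(name,
+                    NB_MBUF, MEMPOOL_CACHE_SIZE, 0,
+                    RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
+        }
+        return pktmbuf_pool[socket_id];
+    }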
+
+.. _l2_fwd_event_app_dvr_init:
+
+Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~
+
+The main part of the code in the main() function relates to the initialization
+of the driver. To fully understand this code, it is recommended to study the
+chapters that related to the Poll Mode and Event mode Driver in the
+*DPDK Programmer's Guide* - Rel 1.4 EAR and the *DPDK API Reference*.
+
+.. code-block:: c
+
+ if (rte_pci_probe() < 0)
+ rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");
+
+ /* reset l2fwd_dst_ports */
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ }
+ else
+ last_port = portid;
+
+ nb_ports_in_mask++;
+
+ rte_eth_dev_info_get((uint8_t) portid, &dev_info);
+ }
+
+Observe that:
+
+* rte_igb_pmd_init_all() simultaneously registers the driver as a PCI driver
+ and as an Ethernet Poll Mode Driver.
+
+* rte_pci_probe() parses the devices on the PCI bus and initializes recognized
+ devices.
+
+The next step is to configure the RX and TX queues. For each port, there is only
+one RX queue (only one lcore is able to poll a given port). The number of TX
+queues depends on the number of available lcores. The rte_eth_dev_configure()
+function is used to configure the number of queues for a port:
+
+.. code-block:: c
+
+ ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Cannot configure device: "
+ "err=%d, port=%u\n",
+ ret, portid);
+
+.. _l2_fwd_event_app_rx_init:
+
+RX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The application uses one lcore to poll one or several ports, depending on the -q
+option, which specifies the number of queues per lcore.
+
+For example, if the user specifies -q 4, the application is able to poll four
+ports with one lcore. If there are 16 ports on the target (and if the portmask
+argument is -p ffff ), the application will need four lcores to poll all the
+ports.
+
+.. code-block:: c
+
+ ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
+ &rx_conf, l2fwd_pktmbuf_pool);
+ if (ret < 0)
+
+ rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: "
+ "err=%d, port=%u\n",
+ ret, portid);
+
+The list of queues that must be polled for a given lcore is stored in a private
+structure called struct lcore_queue_conf.
+
+.. code-block:: c
+
+ struct lcore_queue_conf {
+ unsigned n_rx_port;
+ unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
+ } rte_cache_aligned;
+
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+The values n_rx_port and rx_port_list[] are used in the main packet processing
+loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).
+
+.. _l2_fwd_event_app_tx_init:
+
+TX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Each lcore should be able to transmit on any port. For every port, a single TX
+queue is initialized.
+
+.. code-block:: c
+
+ /* init one TX queue on each port */
+
+ fflush(stdout);
+
+ ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid), &tx_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, (unsigned) portid);
+
+The global configuration for TX queues is stored in a static structure:
+
+.. code-block:: c
+
+ static const struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
+ };
+
+To configure eventdev support, the application sets up the following components:
+
+* Event dev
+* Event queue
+* Event Port
+* Rx/Tx adapters
+* Ethernet ports
+
+.. _l2_fwd_event_app_event_dev_init:
+
+Event dev Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+The application can use either a H/W or S/W based event device scheduler
+implementation and supports a single event device instance. It configures the
+event device as per the below configuration:
+
+.. code-block:: c
+
+ struct rte_event_dev_config event_d_conf = {
+ .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
+ .nb_event_ports = num_workers, /* Dedicated to each lcore */
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+In case of the S/W scheduler, the application runs the eventdev scheduler service
+on a service core. The application retrieves the service id and later starts it
+on a given lcore.
+
+.. code-block:: c
+
+ /* Start event device service */
+ ret = rte_event_dev_service_id_get(eventdev_rsrc.event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.service_id = service_id;
+
+ /* Start eventdev scheduler service */
+ rte_service_map_lcore_set(eventdev_rsrc.service_id, lcore_id[0], 1);
+ rte_service_lcore_start(lcore_id[0]);
+
+.. _l2_fwd_app_event_queue_init:
+
+Event queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each Ethernet device is assigned a dedicated event queue which will be linked
+to all available event ports i.e. each lcore can dequeue packets from any of the
+Ethernet ports.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = 0,
+ .schedule_type = RTE_SCHED_TYPE_ATOMIC,
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
+ };
+
+ /* User requested sync mode */
+ event_q_conf.schedule_type = eventq_sync_mode;
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ }
+
+In case of the S/W scheduler, an extra event queue is created which is used by
+the Tx adapter service function for enqueue operations.
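+
+A minimal sketch of how such a single link Tx event queue could be set up is
+given below (the queue index and error handling are illustrative):
+
+.. code-block:: c
+
+    struct rte_event_queue_conf tx_q_conf = {
+        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+        .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+    };
+
+    /* The last event queue is reserved for the Tx adapter service */
+    ret = rte_event_queue_setup(event_d_id, ethdev_count, &tx_q_conf);
+    if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Error in configuring Tx event queue");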
+
+.. _l2_fwd_app_event_port_init:
+
+Event port Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Each worker thread is assigned a dedicated event port for enq/deq operations
+to/from an event device. All event ports are linked with all available event
+queues.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ }
+
+In case of the S/W scheduler, an extra event port is created by the DPDK library;
+it is retrieved by the application and used by the Tx adapter service.
+
+.. code-block:: c
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &eventdev_rsrc.evq.event_q_id[
+ eventdev_rsrc.evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_exit(EXIT_FAILURE,
+ "Unable to link Tx adapter port to Tx queue:err = %d",
+ ret);
+
+.. _l2_fwd_event_app_adapter_init:
+
+Rx/Tx adapter Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx adapter.
+Each Ethernet port's Rx queues are connected to its respective event queue at
+priority 0 via the Rx adapter configuration, and the Ethernet port's Tx queues
+are connected via the Tx adapter.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+ &event_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create rx adapter[%d]", i);
+
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc.evq.event_q_id[i];
+ eth_q_conf.ev.sched_type = eventq_sync_mode;
+ ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, ð_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+
+ ret = rte_event_eth_rx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Rx adapter[%d] start failed", i);
+
+ eventdev_rsrc.rx_adptr.rx_adptr[i] = i;
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+ &event_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create tx adapter[%d]", i);
+
+ ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+
+ ret = rte_event_eth_tx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Tx adapter[%d] start failed", i);
+
+ eventdev_rsrc.tx_adptr.tx_adptr[i] = i;
+ }
+
+For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
+configured and shared among all the Ethernet ports. The DPDK library also needs
+service cores to run the internal services for the Rx/Tx adapters. The
+application gets the service ids for the Rx/Tx adapters and, after successful
+setup, runs the services on dedicated service cores.
+
+.. code-block:: c
+
+ /* retrieving service Id for Rx adapter */
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error getting the service ID for rx adptr\n");
+ }
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.rx_adptr.service_id = service_id;
+
+ /* Start eventdev Rx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc.rx_adptr.service_id,
+ lcore_id[1], 1);
+ rte_service_lcore_start(lcore_id[1]);
+
+ /* retrieving service Id for Tx adapter */
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.tx_adptr.service_id = service_id;
+
+ /* Start eventdev Tx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc.tx_adptr.service_id,
+ lcore_id[2], 1);
+ rte_service_lcore_start(lcore_id[2]);
+
+.. _l2_fwd_event_app_rx_tx_packets:
+
+Receive, Process and Transmit Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
+the RX queues. This is done using the following code:
+
+.. code-block:: c
+
+ /*
+ * Read packet from RX queues
+ */
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
+ MAX_PKT_BURST);
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+
+Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
+function writes the mbuf pointers in a local table and returns the number of
+available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_simple_forward()
+function. The processing is very simple: the TX port is derived from the RX
+port, then the source and destination MAC addresses are replaced if MAC address
+updating is enabled.
+
+.. note::
+
+ In the following code, one line for getting the output port requires some
+ explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd, packets from the last port are
+forwarded to the first port, i.e. if portmask=0x07, forwarding takes place as
+p0--->p1, p1--->p2, p2--->p0.
+
+Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers incoming
+mbufs up to MAX_PKT_BURST. Once the limit is reached, all packets are transmitted
+to the destination ports.
+
+.. code-block:: c
+
+ static void
+ l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+ {
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+ }
+
+For this test application, the processing is exactly the same for all packets
+arriving on the same RX port. Therefore, it would have been possible to call
+the rte_eth_tx_buffer() function directly from the main loop to send all the
+received packets on the same TX port, using the burst-oriented send function,
+which is more efficient.
+
+However, in real-life applications (such as, L3 routing),
+packet N is not necessarily forwarded on the same port as packet N-1.
+The application is implemented to illustrate that, so the same approach can be
+reused in a more complex application.
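+
+The per-port ``tx_buffer`` used by ``rte_eth_tx_buffer()`` above is typically
+allocated and initialized during port setup; the following is a minimal sketch
+of that step (error handling shortened):
+
+.. code-block:: c
+
+    /* Allocate and initialize the TX buffer for this port */
+    tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
+            RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
+            rte_eth_dev_socket_id(portid));
+    if (tx_buffer[portid] == NULL)
+        rte_exit(EXIT_FAILURE, "Cannot allocate buffer for tx on port %u\n",
+                 portid);
+
+    rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);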
+
+To ensure that no packets remain in the tables, each lcore does a draining of TX
+queue in its main loop. This technique introduces some latency when there are
+not many packets to send, however it improves performance:
+
+.. code-block:: c
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id == rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+In the **l2fwd_main_loop_eventdev()** function, the main task is to read ingress
+packets from the event ports. This is done using the following code:
+
+.. code-block:: c
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
+ events, deq_len, 0);
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf[i] = events[i].mbuf;
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
+ }
+
+
+Before reading packets, deq_len is fetched so that the dequeue length allowed by
+the eventdev is not exceeded.
+The rte_event_dequeue_burst() function writes the mbuf pointers in a local table
+and returns the number of available mbufs in the table.
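+
+A minimal sketch of how deq_len could be derived from the event port attributes
+is shown below (the attribute query is illustrative; the application may cache
+this value differently):
+
+.. code-block:: c
+
+    uint32_t deq_len = MAX_PKT_BURST;
+    uint32_t port_depth;
+
+    /* Limit the burst size to the event port's dequeue depth */
+    if (rte_event_port_attr_get(event_d_id, event_p_id,
+                                RTE_EVENT_PORT_ATTR_DEQ_DEPTH,
+                                &port_depth) == 0 &&
+        port_depth < MAX_PKT_BURST)
+        deq_len = port_depth;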
+
+Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
+function. The processing is very simple: the TX port is derived from the RX
+port, then the source and destination MAC addresses are replaced if MAC address
+updating is enabled.
+
+.. note::
+
+ In the following code, one line for getting the output port requires some
+ explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd, packets from the last port are
+forwarded to the first port, i.e. if portmask=0x07, forwarding takes place as
+p0--->p1, p1--->p2, p2--->p0.
+
+l2fwd_eventdev_forward() does not buffer incoming mbufs. Packets are forwarded
+to the destination ports via the Tx adapter or the generic eventdev enqueue API,
+depending on whether a H/W or S/W scheduler is used.
+
+.. code-block:: c
+
+ static inline void
+ l2fwd_eventdev_forward(struct rte_mbuf *m[], uint32_t portid,
+ uint16_t nb_rx, uint16_t event_p_id)
+ {
+ uint32_t dst_port, i;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ for (i = 0; i < nb_rx; i++) {
+ if (mac_updating)
+ l2fwd_mac_updating(m[i], dst_port);
+
+ m[i]->port = dst_port;
+ }
+
+ if (timer_period > 0) {
+ rte_spinlock_lock(&port_stats_lock);
+ port_statistics[dst_port].tx += nb_rx;
+ rte_spinlock_unlock(&port_stats_lock);
+ }
+ /* Registered callback is invoked for Tx */
+ eventdev_rsrc.send_burst_eventdev(m, nb_rx, event_p_id);
+ }
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v3 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-09-19 10:35 ` Sunil Kumar Kori
0 siblings, 0 replies; 107+ messages in thread
From: Sunil Kumar Kori @ 2019-09-19 10:35 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Pavan Nikhilesh Bhagavatula
Cc: dev
>+static void
>+event_port_setup_generic(void)
>+{
>+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
>+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
>+ struct rte_event_port_conf event_p_conf = {
>+ .dequeue_depth = 32,
>+ .enqueue_depth = 32,
>+ .new_event_threshold = 4096
>+ };
>+ struct rte_event_port_conf def_p_conf;
>+ uint8_t event_p_id;
>+ int32_t ret;
>+
>+ /* Service cores are not used to run worker thread */
>+ eventdev_rsrc->evp.nb_ports = eventdev_rsrc->evp.nb_ports;
Is this line needed ?
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (9 preceding siblings ...)
2019-09-19 10:13 ` [dpdk-dev] [PATCH v3 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-09-24 9:41 ` pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
` (10 more replies)
10 siblings, 11 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:41 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds a new application to demonstrate the usage of event
mode. The poll mode is also available to help with the transition.
The following new command line parameters are added:
--mode: Dictates the mode of operation either poll or event.
--eventq_sync: Dictates event synchronization mode either atomic or
ordered.
Based on event device capability the configuration is done as follows:
- A single event device is enabled.
- The number of event ports is equal to the number of worker
cores enabled in the core mask. Additional event ports might
be configured based on Rx/Tx adapter capability.
- The number of event queues is equal to the number of ethernet
ports. If Tx adapter doesn't have internal port capability then
an additional single link event queue is used to enqueue events
to Tx adapter.
- Each event port is linked to all existing event queues.
- Dedicated Rx/Tx adapters for each Ethernet port.
v4 Changes:
- Fix missing eventdev args parsing.
v3 Changes:
- Remove unwanted change to example/l2fwd.
- Fix checkpatch issue
http://mails.dpdk.org/archives/test-report/2019-September/098053.html
v2 Changes:
- Remove global variables.
- Split patches to make reviews friendlier.
- Split datapath based on eventdev capability.
checkpatch reports for v08/10 is intended
http://mails.dpdk.org/archives/test-report/2019-September/098059.html
Pavan Nikhilesh (5):
examples/l2fwd-event: add infra to split eventdev framework
examples/l2fwd-event: add eventdev queue and port setup
examples/l2fwd-event: add event Rx/Tx adapter setup
examples/l2fwd-event: add eventdev main loop
examples/l2fwd-event: add graceful teardown
Sunil Kumar Kori (5):
examples/l2fwd-event: add default poll mode routines
examples/l2fwd-event: add infra for eventdev
examples/l2fwd-event: add eth port setup for eventdev
examples/l2fwd-event: add service core setup
doc: add application usage guide for l2fwd-event
MAINTAINERS | 5 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
.../l2_forward_event_real_virtual.rst | 799 +++++++++++++++++
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 60 ++
examples/l2fwd-event/l2fwd_common.h | 26 +
examples/l2fwd-event/l2fwd_eventdev.c | 536 ++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 118 +++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 349 ++++++++
.../l2fwd_eventdev_internal_port.c | 281 ++++++
examples/l2fwd-event/main.c | 821 ++++++++++++++++++
examples/l2fwd-event/meson.build | 15 +
13 files changed, 3017 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_internal_port.c
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 01/10] examples/l2fwd-event: add default poll mode routines
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-26 17:28 ` Jerin Jacob
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
` (9 subsequent siblings)
10 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add the default l2fwd poll mode routines similar to examples/l2fwd.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 57 +++
examples/l2fwd-event/l2fwd_common.h | 26 +
examples/l2fwd-event/main.c | 737 ++++++++++++++++++++++++++++
examples/l2fwd-event/meson.build | 12 +
5 files changed, 833 insertions(+)
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
diff --git a/examples/Makefile b/examples/Makefile
index de11dd487..d18504bd2 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -34,6 +34,7 @@ endif
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += ipv4_multicast
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += kni
DIRS-y += l2fwd
+DIRS-y += l2fwd-event
ifneq ($(PQOS_INSTALL_PATH),)
DIRS-y += l2fwd-cat
endif
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
new file mode 100644
index 000000000..a156c4162
--- /dev/null
+++ b/examples/l2fwd-event/Makefile
@@ -0,0 +1,57 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# binary name
+APP = l2fwd-event
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+
+# Build using pkg-config variables if possible
+ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
+
+all: shared
+.PHONY: shared static
+shared: build/$(APP)-shared
+ ln -sf $(APP)-shared build/$(APP)
+static: build/$(APP)-static
+ ln -sf $(APP)-static build/$(APP)
+
+PKGCONF=pkg-config --define-prefix
+
+PC_FILE := $(shell $(PKGCONF) --path libdpdk)
+CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)
+LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
+LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
+
+build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
+
+build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
+
+build:
+ @mkdir -p $@
+
+.PHONY: clean
+clean:
+ rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
+ test -d build && rmdir -p build || true
+
+else # Build using legacy build system
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, detect a build directory, by looking for a path with a .config
+RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
+endif
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
new file mode 100644
index 000000000..b1d42183e
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_COMMON_H__
+#define __L2FWD_COMMON_H__
+
+#define MAX_PKT_BURST 32
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
+
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+ uint64_t dropped;
+ uint64_t tx;
+ uint64_t rx;
+} __rte_cache_aligned;
+
+void print_stats(void);
+
+#endif /* __L2FWD_COMMON_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
new file mode 100644
index 000000000..cc47fa203
--- /dev/null
+++ b/examples/l2fwd-event/main.c
@@ -0,0 +1,737 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+#include <signal.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+static volatile bool force_quit;
+
+/* MAC updating enabled by default */
+static int mac_updating = 1;
+
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+#define MEMPOOL_CACHE_SIZE 256
+
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+/* ethernet addresses of ports */
+static struct rte_ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
+
+/* mask of enabled ports */
+static uint32_t l2fwd_enabled_port_mask;
+
+/* list of enabled ports */
+static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
+
+static unsigned int l2fwd_rx_queue_per_lcore = 1;
+
+struct lcore_queue_conf {
+ uint32_t rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ uint32_t n_rx_port;
+} __rte_cache_aligned;
+
+static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .split_hdr_size = 0,
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+};
+
+static struct rte_mempool *l2fwd_pktmbuf_pool;
+
+static struct l2fwd_port_statistics port_statistics[RTE_MAX_ETHPORTS];
+
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+/* A tsc-based timer responsible for triggering statistics printout */
+static uint64_t timer_period = 10; /* default period is 10 seconds */
+
+/* Print out statistics on packets dropped */
+void print_stats(void)
+{
+ uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+ uint32_t portid;
+
+ total_packets_dropped = 0;
+ total_packets_tx = 0;
+ total_packets_rx = 0;
+
+ const char clr[] = {27, '[', '2', 'J', '\0' };
+ const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' };
+
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+
+ printf("\nPort statistics ====================================");
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+ /* skip disabled ports */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("\nStatistics for port %u ------------------------------"
+ "\nPackets sent: %24"PRIu64
+ "\nPackets received: %20"PRIu64
+ "\nPackets dropped: %21"PRIu64,
+ portid,
+ port_statistics[portid].tx,
+ port_statistics[portid].rx,
+ port_statistics[portid].dropped);
+
+ total_packets_dropped += port_statistics[portid].dropped;
+ total_packets_tx += port_statistics[portid].tx;
+ total_packets_rx += port_statistics[portid].rx;
+ }
+ printf("\nAggregate statistics ==============================="
+ "\nTotal packets sent: %18"PRIu64
+ "\nTotal packets received: %14"PRIu64
+ "\nTotal packets dropped: %15"PRIu64,
+ total_packets_tx,
+ total_packets_rx,
+ total_packets_dropped);
+ printf("\n====================================================\n");
+}
+
+static void
+l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_portid)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = ð->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], ð->s_addr);
+}
+
+static void
+l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+{
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+}
+
+/* main processing loop */
+static void l2fwd_main_loop(void)
+{
+ uint64_t prev_tsc, diff_tsc, cur_tsc, timer_tsc, drain_tsc;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_eth_dev_tx_buffer *buffer;
+ struct lcore_queue_conf *qconf;
+ uint32_t i, j, portid, nb_rx;
+ struct rte_mbuf *m;
+ uint32_t lcore_id;
+ int32_t sent;
+
+ drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S *
+ BURST_TX_DRAIN_US;
+ prev_tsc = 0;
+ timer_tsc = 0;
+
+ lcore_id = rte_lcore_id();
+ qconf = &lcore_queue_conf[lcore_id];
+
+ if (qconf->n_rx_port == 0) {
+ RTE_LOG(INFO, L2FWD, "lcore %u has nothing to do\n", lcore_id);
+ return;
+ }
+
+ RTE_LOG(INFO, L2FWD, "entering main loop on lcore %u\n", lcore_id);
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ portid = qconf->rx_port_list[i];
+ RTE_LOG(INFO, L2FWD, " -- lcoreid=%u portid=%u\n", lcore_id,
+ portid);
+
+ }
+
+ while (!force_quit) {
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid =
+ l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id ==
+ rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+ /*
+ * Read packet from RX queues
+ */
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst(portid, 0,
+ pkts_burst, MAX_PKT_BURST);
+
+ port_statistics[portid].rx += nb_rx;
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+ }
+}
+
+static int
+l2fwd_launch_one_lcore(void *args)
+{
+ RTE_SET_USED(args);
+ l2fwd_main_loop();
+
+ return 0;
+}
+
+/* display usage */
+static void
+l2fwd_usage(const char *prgname)
+{
+ printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
+ " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+ " -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+ " -T PERIOD: statistics will be refreshed each PERIOD seconds "
+ " (0 to disable, 10 default, 86400 maximum)\n"
+ " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
+ " When enabled:\n"
+ " - The source MAC address is replaced by the TX port MAC address\n"
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ prgname);
+}
+
+static int
+l2fwd_parse_portmask(const char *portmask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(portmask, &end, 16);
+ if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (pm == 0)
+ return -1;
+
+ return pm;
+}
+
+static unsigned int
+l2fwd_parse_nqueue(const char *q_arg)
+{
+ char *end = NULL;
+ unsigned long n;
+
+ /* parse hexadecimal string */
+ n = strtoul(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+ if (n == 0)
+ return 0;
+ if (n >= MAX_RX_QUEUE_PER_LCORE)
+ return 0;
+
+ return n;
+}
+
+static int
+l2fwd_parse_timer_period(const char *q_arg)
+{
+ char *end = NULL;
+ int n;
+
+ /* parse number string */
+ n = strtol(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+ if (n >= MAX_TIMER_PERIOD)
+ return -1;
+
+ return n;
+}
+
+static const char short_options[] =
+ "p:" /* portmask */
+ "q:" /* number of queues */
+ "T:" /* timer period */
+ ;
+
+#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
+#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+
+enum {
+ /* long options mapped to a short option */
+
+ /* first long only option value must be >= 256, so that we won't
+ * conflict with short options
+ */
+ CMD_LINE_OPT_MIN_NUM = 256,
+};
+
+static const struct option lgopts[] = {
+ { CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
+ { CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ {NULL, 0, 0, 0}
+};
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_parse_args(int argc, char **argv)
+{
+ int opt, ret, timer_secs;
+ char *prgname = argv[0];
+ char **argvopt;
+ int option_index;
+
+ argvopt = argv;
+
+ while ((opt = getopt_long(argc, argvopt, short_options,
+ lgopts, &option_index)) != EOF) {
+
+ switch (opt) {
+ /* portmask */
+ case 'p':
+ l2fwd_enabled_port_mask = l2fwd_parse_portmask(optarg);
+ if (l2fwd_enabled_port_mask == 0) {
+ printf("invalid portmask\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* nqueue */
+ case 'q':
+ l2fwd_rx_queue_per_lcore = l2fwd_parse_nqueue(optarg);
+ if (l2fwd_rx_queue_per_lcore == 0) {
+ printf("invalid queue number\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* timer period */
+ case 'T':
+ timer_secs = l2fwd_parse_timer_period(optarg);
+ if (timer_secs < 0) {
+ printf("invalid timer period\n");
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ timer_period = timer_secs;
+ break;
+
+ /* long options */
+ case 0:
+ break;
+
+ default:
+ l2fwd_usage(prgname);
+ return -1;
+ }
+ }
+
+ if (optind >= 0)
+ argv[optind-1] = prgname;
+
+ ret = optind-1;
+ optind = 1; /* reset getopt lib */
+ return ret;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status...");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ if (force_quit)
+ return;
+ all_ports_up = 1;
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (force_quit)
+ return;
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(portid, &link);
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status)
+ printf(
+ "Port%d Link Up. Speed %u Mbps - %s\n",
+ portid, link.link_speed,
+ (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ ("full-duplex") : ("half-duplex\n"));
+ else
+ printf("Port %d Link Down\n", portid);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+ print_flag = 1;
+ printf("done\n");
+ }
+ }
+}
+
+static void
+signal_handler(int signum)
+{
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ force_quit = true;
+ }
+}
+
+int
+main(int argc, char **argv)
+{
+ uint16_t nb_ports_available = 0;
+ struct lcore_queue_conf *qconf;
+ uint32_t nb_ports_in_mask = 0;
+ uint16_t portid, last_port;
+ uint32_t nb_lcores = 0;
+ uint32_t rx_lcore_id;
+ uint32_t nb_mbufs;
+ uint16_t nb_ports;
+ int ret;
+
+ /* init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+ argc -= ret;
+ argv += ret;
+
+ force_quit = false;
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+
+ /* parse application arguments (after the EAL ones) */
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+
+ printf("MAC updating %s\n", mac_updating ? "enabled" : "disabled");
+
+ /* convert to number of cycles */
+ timer_period *= rte_get_timer_hz();
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports == 0)
+ rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");
+
+ /* check port mask to possible port mask */
+ if (l2fwd_enabled_port_mask & ~((1 << nb_ports) - 1))
+ rte_exit(EXIT_FAILURE, "Invalid portmask; possible (0x%x)\n",
+ (1 << nb_ports) - 1);
+
+ /* reset l2fwd_dst_ports */
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ } else {
+ last_port = portid;
+ }
+
+ nb_ports_in_mask++;
+ }
+ if (nb_ports_in_mask % 2) {
+ printf("Notice: odd number of ports in portmask.\n");
+ l2fwd_dst_ports[last_port] = last_port;
+ }
+
+
+ rx_lcore_id = 0;
+ qconf = NULL;
+
+ nb_mbufs = RTE_MAX(nb_ports * (nb_rxd + nb_txd + MAX_PKT_BURST +
+ nb_lcores * MEMPOOL_CACHE_SIZE), 8192U);
+
+ /* create the mbuf pool */
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", nb_mbufs,
+ MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+
+ /* Initialize the port/queue configuration of each logical core */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ /* get the lcore_id for this port */
+ while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+ lcore_queue_conf[rx_lcore_id].n_rx_port ==
+ l2fwd_rx_queue_per_lcore) {
+ rx_lcore_id++;
+ if (rx_lcore_id >= RTE_MAX_LCORE)
+ rte_exit(EXIT_FAILURE, "Not enough cores\n");
+ }
+
+ if (qconf != &lcore_queue_conf[rx_lcore_id]) {
+ /* Assigned a new logical core in the loop above. */
+ qconf = &lcore_queue_conf[rx_lcore_id];
+ nb_lcores++;
+ }
+
+ qconf->rx_port_list[qconf->n_rx_port] = portid;
+ qconf->n_rx_port++;
+ printf("Lcore %u: RX port %u\n", rx_lcore_id, portid);
+ }
+
+
+ /* Initialise each port */
+ RTE_ETH_FOREACH_DEV(portid) {
+ struct rte_eth_rxconf rxq_conf;
+ struct rte_eth_txconf txq_conf;
+ struct rte_eth_conf local_port_conf = port_conf;
+ struct rte_eth_dev_info dev_info;
+
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0) {
+ printf("Skipping disabled port %u\n", portid);
+ continue;
+ }
+ nb_ports_available++;
+
+ /* init port */
+ printf("Initializing port %u... ", portid);
+ fflush(stdout);
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Cannot configure device: err=%d, port=%u\n",
+ ret, portid);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, port=%u\n",
+ ret, portid);
+
+ rte_eth_macaddr_get(portid, &l2fwd_ports_eth_addr[portid]);
+
+ /* init one RX queue */
+ fflush(stdout);
+ rxq_conf = dev_info.default_rxconf;
+ rxq_conf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+ rte_eth_dev_socket_id(portid),
+ &rxq_conf,
+ l2fwd_pktmbuf_pool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
+ ret, portid);
+
+ /* init one TX queue on each port */
+ fflush(stdout);
+ txq_conf = dev_info.default_txconf;
+ txq_conf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid),
+ &txq_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, portid);
+
+ /* Initialize TX buffers */
+ tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
+ RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
+ rte_eth_dev_socket_id(portid));
+ if (tx_buffer[portid] == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot allocate buffer for tx on port %u\n",
+ portid);
+
+ rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
+
+ ret = rte_eth_tx_buffer_set_err_callback(tx_buffer[portid],
+ rte_eth_tx_buffer_count_callback,
+ &port_statistics[portid].dropped);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot set error callback for tx buffer on port %u\n",
+ portid);
+
+ /* Start device */
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_dev_start:err=%d, port=%u\n",
+ ret, portid);
+
+ printf("done:\n");
+
+ rte_eth_promiscuous_enable(portid);
+
+ printf("Port %u, MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+ portid,
+ l2fwd_ports_eth_addr[portid].addr_bytes[0],
+ l2fwd_ports_eth_addr[portid].addr_bytes[1],
+ l2fwd_ports_eth_addr[portid].addr_bytes[2],
+ l2fwd_ports_eth_addr[portid].addr_bytes[3],
+ l2fwd_ports_eth_addr[portid].addr_bytes[4],
+ l2fwd_ports_eth_addr[portid].addr_bytes[5]);
+
+ /* initialize port stats */
+ memset(&port_statistics, 0, sizeof(port_statistics));
+ }
+
+ if (!nb_ports_available) {
+ rte_exit(EXIT_FAILURE,
+ "All available ports are disabled. Please set portmask.\n");
+ }
+
+ check_all_ports_link_status(l2fwd_enabled_port_mask);
+
+ ret = 0;
+ /* launch per-lcore init on every lcore */
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL,
+ CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("Closing port %d...", portid);
+ rte_eth_dev_stop(portid);
+ rte_eth_dev_close(portid);
+ printf(" Done\n");
+ }
+ printf("Bye...\n");
+
+ return ret;
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
new file mode 100644
index 000000000..16eadb0b4
--- /dev/null
+++ b/examples/l2fwd-event/meson.build
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# meson file, for building this example as part of a main DPDK build.
+#
+# To build this example as a standalone application with an already-installed
+# DPDK instance, use 'make'
+
+sources = files(
+ 'main.c'
+)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for eventdev
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-26 17:33 ` Jerin Jacob
2019-09-27 13:08 ` Nipun Gupta
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
` (8 subsequent siblings)
10 siblings, 2 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add infra to select the event device as the packet I/O mode through
command line arguments. Also allow the user to select the schedule type,
either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
Usage:
`--mode="eventdev"` or `--mode="poll"`
`--eventq-sync="ordered"` or `--eventq-sync="atomic"`
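For reference, a condensed sketch of how the two option strings end up in
the shared configuration (struct and macro names are the ones introduced by
this patch; set_event_options() is only an illustrative helper, the real
code parses the strings via getopt_long() as shown in the diff below):

#include <string.h>
#include <rte_eventdev.h>
#include "l2fwd_eventdev.h"

/* Map the option strings onto the fields consumed by the setup code. */
static void
set_event_options(struct eventdev_resources *rsrc,
		  const char *mode, const char *eventq_sync)
{
	rsrc->enabled = !strcmp(mode, "eventdev");
	rsrc->sync_mode = !strcmp(eventq_sync, "ordered") ?
			RTE_SCHED_TYPE_ORDERED : RTE_SCHED_TYPE_ATOMIC;
}

An illustrative invocation (core list and port mask are placeholders):
./build/l2fwd-event -l 0-3 -- -p 0x3 --mode=eventdev --eventq-sync=ordered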
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/Makefile | 1 +
examples/l2fwd-event/l2fwd_eventdev.c | 107 ++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 62 +++++++++++++++
examples/l2fwd-event/main.c | 39 +++++++++-
examples/l2fwd-event/meson.build | 3 +-
5 files changed, 209 insertions(+), 3 deletions(-)
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index a156c4162..bfe0058a2 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -7,6 +7,7 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
+SRCS-y += l2fwd_eventdev.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
new file mode 100644
index 000000000..19efb6d1e
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+static void
+parse_mode(const char *optarg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+
+ if (!strncmp(optarg, "poll", 4))
+ eventdev_rsrc->enabled = false;
+ else if (!strncmp(optarg, "eventdev", 8))
+ eventdev_rsrc->enabled = true;
+}
+
+static void
+parse_eventq_sync(const char *optarg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+
+ if (!strncmp(optarg, "ordered", 7))
+ eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ORDERED;
+ else if (!strncmp(optarg, "atomic", 6))
+ eventdev_rsrc->sync_mode = RTE_SCHED_TYPE_ATOMIC;
+}
+
+static int
+parse_eventdev_args(char **argv, int argc)
+{
+ const struct option eventdev_lgopts[] = {
+ {CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
+ {CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
+ {NULL, 0, 0, 0}
+ };
+ char **argvopt = argv;
+ int32_t option_index;
+ int32_t opt;
+
+ while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts,
+ &option_index)) != EOF) {
+ switch (opt) {
+ case CMD_LINE_OPT_MODE_NUM:
+ parse_mode(optarg);
+ break;
+
+ case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
+ parse_eventq_sync(optarg);
+ break;
+
+ case '?':
+ /* skip other parameters except eventdev specific */
+ break;
+
+ default:
+ printf("Invalid eventdev parameter\n");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+void
+eventdev_resource_setup(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t service_id;
+ int32_t ret;
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(eventdev_rsrc->args, eventdev_rsrc->nb_args);
+ if (ret < 0)
+ return;
+
+ if (!rte_event_dev_count())
+ rte_exit(EXIT_FAILURE, "No Eventdev found");
+ /* Start event device service */
+ ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->service_id = service_id;
+
+ /* Start event device */
+ ret = rte_event_dev_start(eventdev_rsrc->event_d_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+}
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
new file mode 100644
index 000000000..2e8d95e67
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_EVENTDEV_H__
+#define __L2FWD_EVENTDEV_H__
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sync"
+
+enum {
+ CMD_LINE_OPT_MODE_NUM = 265,
+ CMD_LINE_OPT_EVENTQ_SYNC_NUM,
+};
+
+struct eventdev_resources {
+ struct l2fwd_port_statistics *stats;
+ struct rte_mempool *pkt_pool;
+ uint64_t timer_period;
+ uint32_t *dst_ports;
+ uint32_t service_id;
+ uint32_t port_mask;
+ volatile bool *done;
+ uint8_t event_d_id;
+ uint8_t sync_mode;
+ uint8_t tx_mode_q;
+ uint8_t mac_updt;
+ uint8_t enabled;
+ uint8_t nb_args;
+ char **args;
+};
+
+static inline struct eventdev_resources *
+get_eventdev_rsrc(void)
+{
+ static const char name[RTE_MEMZONE_NAMESIZE] = "l2fwd_event_rsrc";
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+
+ if (mz != NULL)
+ return mz->addr;
+
+ mz = rte_memzone_reserve(name, sizeof(struct eventdev_resources), 0, 0);
+ if (mz != NULL) {
+ memset(mz->addr, 0, sizeof(struct eventdev_resources));
+ return mz->addr;
+ }
+
+ rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
+
+ return NULL;
+}
+
+void eventdev_resource_setup(void);
+
+#endif /* __L2FWD_EVENTDEV_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index cc47fa203..087e84588 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -42,6 +42,7 @@
#include <rte_spinlock.h>
#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
static volatile bool force_quit;
@@ -288,7 +289,12 @@ l2fwd_usage(const char *prgname)
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
" When enabled:\n"
" - The source MAC address is replaced by the TX port MAC address\n"
- " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n"
+ " --mode: Packet transfer mode for I/O, poll or eventdev\n"
+ " Default mode = eventdev\n"
+ " --eventq-sync:Event queue synchronization method,\n"
+ " ordered or atomic.\nDefault: atomic\n"
+ " Valid only if --mode=eventdev\n\n",
prgname);
}
@@ -371,11 +377,19 @@ static const struct option lgopts[] = {
static int
l2fwd_parse_args(int argc, char **argv)
{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
int opt, ret, timer_secs;
char *prgname = argv[0];
char **argvopt;
int option_index;
+ eventdev_rsrc->args = rte_zmalloc("l2fwd_event_args", sizeof(char *),
+ 0);
+ if (eventdev_rsrc->args == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Unable to allocate memory for eventdev arg");
+ eventdev_rsrc->args[0] = argv[0];
+ eventdev_rsrc->nb_args++;
argvopt = argv;
while ((opt = getopt_long(argc, argvopt, short_options,
@@ -413,6 +427,15 @@ l2fwd_parse_args(int argc, char **argv)
timer_period = timer_secs;
break;
+ case '?':
+ /* Eventdev options are encountered; skip them for
+ * now, they are processed later.
+ */
+ eventdev_rsrc->args[eventdev_rsrc->nb_args] =
+ argv[optind - 1];
+ eventdev_rsrc->nb_args++;
+ break;
+
/* long options */
case 0:
break;
@@ -503,6 +526,7 @@ signal_handler(int signum)
int
main(int argc, char **argv)
{
+ struct eventdev_resources *eventdev_rsrc;
uint16_t nb_ports_available = 0;
struct lcore_queue_conf *qconf;
uint32_t nb_ports_in_mask = 0;
@@ -524,6 +548,7 @@ main(int argc, char **argv)
signal(SIGINT, signal_handler);
signal(SIGTERM, signal_handler);
+ eventdev_rsrc = get_eventdev_rsrc();
/* parse application arguments (after the EAL ones) */
ret = l2fwd_parse_args(argc, argv);
if (ret < 0)
@@ -584,6 +609,17 @@ main(int argc, char **argv)
if (l2fwd_pktmbuf_pool == NULL)
rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+ eventdev_rsrc->port_mask = l2fwd_enabled_port_mask;
+ eventdev_rsrc->pkt_pool = l2fwd_pktmbuf_pool;
+ eventdev_rsrc->dst_ports = l2fwd_dst_ports;
+ eventdev_rsrc->timer_period = timer_period;
+ eventdev_rsrc->mac_updt = mac_updating;
+ eventdev_rsrc->stats = port_statistics;
+ eventdev_rsrc->done = &force_quit;
+
+ /* Configure eventdev parameters if user has requested */
+ eventdev_resource_setup();
+
/* Initialize the port/queue configuration of each logical core */
RTE_ETH_FOREACH_DEV(portid) {
/* skip ports that are not enabled */
@@ -610,7 +646,6 @@ main(int argc, char **argv)
printf("Lcore %u: RX port %u\n", rx_lcore_id, portid);
}
-
/* Initialise each port */
RTE_ETH_FOREACH_DEV(portid) {
struct rte_eth_rxconf rxq_conf;
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index 16eadb0b4..b1ad48cc5 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -8,5 +8,6 @@
# DPDK instance, use 'make'
sources = files(
- 'main.c'
+ 'main.c',
+ 'l2fwd_eventdev.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 03/10] examples/l2fwd-event: add infra to split eventdev framework
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth port setup for eventdev pbhagavatula
` (7 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add infra to split the eventdev framework based on event Tx adapter
capability.
If the event Tx adapter has internal port capability then
`rte_event_eth_tx_adapter_enqueue` is used to transmit packets;
otherwise a SINGLE_LINK event queue is used to enqueue packets to a
service core which is responsible for transmitting them.
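A condensed sketch of the capability check that drives the choice
(function and field names are the ones added in this patch; error handling
is trimmed and select_tx_path() itself is illustrative):

#include <rte_ethdev.h>
#include <rte_event_eth_tx_adapter.h>
#include "l2fwd_eventdev.h"

/* Use the generic (service core) path if any port lacks the Tx adapter
 * internal port capability on event device 0.
 */
static void
select_tx_path(struct eventdev_resources *rsrc)
{
	uint32_t caps = 0;
	uint16_t port;

	RTE_ETH_FOREACH_DEV(port) {
		if (rte_event_eth_tx_adapter_caps_get(0, port, &caps) == 0)
			rsrc->tx_mode_q |= !(caps &
				RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
	}

	if (rsrc->tx_mode_q)
		eventdev_set_generic_ops(&rsrc->ops);
	else
		eventdev_set_internal_port_ops(&rsrc->ops);
}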
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 2 ++
examples/l2fwd-event/l2fwd_eventdev.c | 29 +++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 20 +++++++++++++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 24 +++++++++++++++
.../l2fwd_eventdev_internal_port.c | 24 +++++++++++++++
examples/l2fwd-event/meson.build | 4 ++-
6 files changed, 102 insertions(+), 1 deletion(-)
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_eventdev_internal_port.c
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index bfe0058a2..c1f700a65 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -8,6 +8,8 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
SRCS-y += l2fwd_eventdev.c
+SRCS-y += l2fwd_eventdev_internal_port.c
+SRCS-y += l2fwd_eventdev_generic.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 19efb6d1e..df76f1c1f 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -76,6 +76,31 @@ parse_eventdev_args(char **argv, int argc)
return 0;
}
+static void
+eventdev_capability_setup(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t caps = 0;
+ uint16_t i;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(i) {
+ ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Invalid capability for Tx adptr port %d\n",
+ i);
+
+ eventdev_rsrc->tx_mode_q |= !(caps &
+ RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+ }
+
+ if (eventdev_rsrc->tx_mode_q)
+ eventdev_set_generic_ops(&eventdev_rsrc->ops);
+ else
+ eventdev_set_internal_port_ops(&eventdev_rsrc->ops);
+}
+
void
eventdev_resource_setup(void)
{
@@ -90,6 +115,10 @@ eventdev_resource_setup(void)
if (!rte_event_dev_count())
rte_exit(EXIT_FAILURE, "No Eventdev found");
+
+ /* Setup eventdev capability callbacks */
+ eventdev_capability_setup();
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 2e8d95e67..8b6606b4c 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -18,8 +18,26 @@ enum {
CMD_LINE_OPT_EVENTQ_SYNC_NUM,
};
+typedef void (*event_queue_setup_cb)(uint16_t ethdev_count,
+ uint32_t event_queue_cfg);
+typedef uint32_t (*eventdev_setup_cb)(uint16_t ethdev_count);
+typedef void (*adapter_setup_cb)(uint16_t ethdev_count);
+typedef void (*event_port_setup_cb)(void);
+typedef void (*service_setup_cb)(void);
+typedef void (*event_loop_cb)(void);
+
+struct eventdev_setup_ops {
+ event_queue_setup_cb event_queue_setup;
+ event_port_setup_cb event_port_setup;
+ eventdev_setup_cb eventdev_setup;
+ adapter_setup_cb adapter_setup;
+ service_setup_cb service_setup;
+ event_loop_cb l2fwd_event_loop;
+};
+
struct eventdev_resources {
struct l2fwd_port_statistics *stats;
+ struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
uint64_t timer_period;
uint32_t *dst_ports;
@@ -58,5 +76,7 @@ get_eventdev_rsrc(void)
}
void eventdev_resource_setup(void);
+void eventdev_set_generic_ops(struct eventdev_setup_ops *ops);
+void eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops);
#endif /* __L2FWD_EVENTDEV_H__ */
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
new file mode 100644
index 000000000..e3990f8b0
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+void
+eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
new file mode 100644
index 000000000..a0d2111f9
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_eventdev.h"
+
+void
+eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index b1ad48cc5..38560840c 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -9,5 +9,7 @@
sources = files(
'main.c',
- 'l2fwd_eventdev.c'
+ 'l2fwd_eventdev.c',
+ 'l2fwd_eventdev_internal_port.c',
+ 'l2fwd_eventdev_generic.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth port setup for eventdev
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (2 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-27 13:15 ` Nipun Gupta
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
` (6 subsequent siblings)
10 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add ethernet port Rx/Tx queue setup for the event device; these queues
are later used when setting up the event eth Rx/Tx adapters.
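The per-port bring-up added here follows the usual ethdev sequence; a
minimal sketch with one Rx/Tx queue pair (eth_port_minimal_setup() is an
illustrative helper; the patch additionally tunes RSS, Rx checksum and
mbuf fast-free offloads and checks every return value):

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

static void
eth_port_minimal_setup(uint16_t port, struct rte_mempool *pool)
{
	struct rte_eth_conf conf;
	uint16_t nb_rxd = 1024, nb_txd = 1024;

	memset(&conf, 0, sizeof(conf));
	rte_eth_dev_configure(port, 1, 1, &conf);
	rte_eth_dev_adjust_nb_rx_tx_desc(port, &nb_rxd, &nb_txd);
	rte_eth_rx_queue_setup(port, 0, nb_rxd,
			       rte_eth_dev_socket_id(port), NULL, pool);
	rte_eth_tx_queue_setup(port, 0, nb_txd,
			       rte_eth_dev_socket_id(port), NULL);
	rte_eth_promiscuous_enable(port);
	/* rte_eth_dev_start() is deferred to main() once the event device
	 * configuration is complete.
	 */
}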
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 114 ++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_eventdev.h | 1 +
examples/l2fwd-event/main.c | 17 ++++
3 files changed, 132 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index df76f1c1f..0d0d3b8b9 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -18,6 +18,14 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
static void
parse_mode(const char *optarg)
{
@@ -76,6 +84,108 @@ parse_eventdev_args(char **argv, int argc)
return 0;
}
+static void
+eth_dev_port_setup(uint16_t ethdev_count __rte_unused)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ static struct rte_eth_conf port_config = {
+ .rxmode = {
+ .mq_mode = ETH_MQ_RX_RSS,
+ .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ .offloads = DEV_RX_OFFLOAD_CHECKSUM
+ },
+ .rx_adv_conf = {
+ .rss_conf = {
+ .rss_key = NULL,
+ .rss_hf = ETH_RSS_IP,
+ }
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ }
+ };
+ struct rte_eth_conf local_port_conf;
+ struct rte_eth_dev_info dev_info;
+ struct rte_eth_txconf txconf;
+ struct rte_eth_rxconf rxconf;
+ uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+ uint16_t port_id;
+ int32_t ret;
+
+ /* initialize all ports */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ local_port_conf = port_config;
+ /* skip ports that are not enabled */
+ if ((eventdev_rsrc->port_mask & (1 << port_id)) == 0) {
+ printf("\nSkipping disabled port %d\n", port_id);
+ continue;
+ }
+
+ /* init port */
+ printf("Initializing port %d ... ", port_id);
+ fflush(stdout);
+ rte_eth_dev_info_get(port_id, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+ dev_info.flow_type_rss_offloads;
+ if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+ port_config.rx_adv_conf.rss_conf.rss_hf) {
+ printf("Port %u modified RSS hash function "
+ "based on hardware support,"
+ "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+ port_id,
+ port_config.rx_adv_conf.rss_conf.rss_hf,
+ local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+ }
+
+ ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot configure device: err=%d, port=%d\n",
+ ret, port_id);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ rte_eth_macaddr_get(port_id,
+ &eventdev_rsrc->ports_eth_addr[port_id]);
+ print_ethaddr(" Address:",
+ &eventdev_rsrc->ports_eth_addr[port_id]);
+ printf("\n");
+
+
+ /* init one Rx queue per port */
+ rxconf = dev_info.default_rxconf;
+ rxconf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0, &rxconf,
+ eventdev_rsrc->pkt_pool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_rx_queue_setup: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ /* init one Tx queue per port */
+ txconf = dev_info.default_txconf;
+ txconf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_tx_queue_setup: err=%d, "
+ "port=%d\n", ret, port_id);
+
+ rte_eth_promiscuous_enable(port_id);
+ }
+}
+
static void
eventdev_capability_setup(void)
{
@@ -105,6 +215,7 @@ void
eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint16_t ethdev_count = rte_eth_dev_count_avail();
uint32_t service_id;
int32_t ret;
@@ -119,6 +230,9 @@ eventdev_resource_setup(void)
/* Setup eventdev capability callbacks */
eventdev_capability_setup();
+ /* Ethernet device configuration */
+ eth_dev_port_setup(ethdev_count);
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 8b6606b4c..d380faff5 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -51,6 +51,7 @@ struct eventdev_resources {
uint8_t enabled;
uint8_t nb_args;
char **args;
+ struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
};
static inline struct eventdev_resources *
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 087e84588..3f72d0579 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -619,6 +619,22 @@ main(int argc, char **argv)
/* Configure eventdev parameters if user has requested */
eventdev_resource_setup();
+ if (eventdev_rsrc->enabled) {
+ /* All settings are done. Now enable eth devices */
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_dev_start:err=%d, port=%u\n",
+ ret, portid);
+ }
+
+ goto skip_port_config;
+ }
/* Initialize the port/queue configuration of each logical core */
RTE_ETH_FOREACH_DEV(portid) {
@@ -750,6 +766,7 @@ main(int argc, char **argv)
"All available ports are disabled. Please set portmask.\n");
}
+skip_port_config:
check_all_ports_link_status(l2fwd_enabled_port_mask);
ret = 0;
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (3 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth port setup for eventdev pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-27 13:22 ` Nipun Gupta
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
` (5 subsequent siblings)
10 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event device queue and port setup based on event eth Tx adapter
capabilities.
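The setup below follows the standard eventdev bring-up order; a minimal
sketch with fixed sizes (eventdev_minimal_setup() is illustrative only; the
patch clamps every value against rte_event_dev_info_get() limits and, on
the generic path, adds one extra SINGLE_LINK queue for the Tx adapter):

#include <rte_eventdev.h>

static void
eventdev_minimal_setup(uint8_t dev_id, uint8_t nb_queues, uint8_t nb_ports)
{
	struct rte_event_dev_config dev_conf = {
		.nb_event_queues = nb_queues,
		.nb_event_ports = nb_ports,
		.nb_events_limit = 4096,
		.nb_event_queue_flows = 1024,
		.nb_event_port_dequeue_depth = 32,
		.nb_event_port_enqueue_depth = 32,
	};
	struct rte_event_queue_conf q_conf = {
		.nb_atomic_flows = 1024,
		.nb_atomic_order_sequences = 1024,
		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};
	uint8_t q, p;

	rte_event_dev_configure(dev_id, &dev_conf);
	for (q = 0; q < nb_queues; q++)
		rte_event_queue_setup(dev_id, q, &q_conf);
	for (p = 0; p < nb_ports; p++) {
		/* NULL port config selects the device defaults. */
		rte_event_port_setup(dev_id, p, NULL);
		/* NULL queue list links the port to all queues. */
		rte_event_port_link(dev_id, p, NULL, NULL, 0);
	}
	/* rte_event_dev_start() happens later in the common setup path. */
}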
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 10 +
examples/l2fwd-event/l2fwd_eventdev.h | 18 ++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 179 +++++++++++++++++-
.../l2fwd_eventdev_internal_port.c | 173 ++++++++++++++++-
4 files changed, 378 insertions(+), 2 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 0d0d3b8b9..7a3d077ae 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -216,6 +216,7 @@ eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
uint16_t ethdev_count = rte_eth_dev_count_avail();
+ uint32_t event_queue_cfg = 0;
uint32_t service_id;
int32_t ret;
@@ -233,6 +234,15 @@ eventdev_resource_setup(void)
/* Ethernet device configuration */
eth_dev_port_setup(ethdev_count);
+ /* Event device configuration */
+ event_queue_cfg = eventdev_rsrc->ops.eventdev_setup(ethdev_count);
+
+ /* Event queue configuration */
+ eventdev_rsrc->ops.event_queue_setup(ethdev_count, event_queue_cfg);
+
+ /* Event port configuration */
+ eventdev_rsrc->ops.event_port_setup();
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index d380faff5..1d43200e2 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -26,6 +26,17 @@ typedef void (*event_port_setup_cb)(void);
typedef void (*service_setup_cb)(void);
typedef void (*event_loop_cb)(void);
+struct eventdev_queues {
+ uint8_t *event_q_id;
+ uint8_t nb_queues;
+};
+
+struct eventdev_ports {
+ uint8_t *event_p_id;
+ uint8_t nb_ports;
+ rte_spinlock_t lock;
+};
+
struct eventdev_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
@@ -36,9 +47,14 @@ struct eventdev_setup_ops {
};
struct eventdev_resources {
+ struct rte_event_port_conf def_p_conf;
struct l2fwd_port_statistics *stats;
+ /* Default port config. */
+ uint8_t disable_implicit_release;
struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
+ struct eventdev_queues evq;
+ struct eventdev_ports evp;
uint64_t timer_period;
uint32_t *dst_ports;
uint32_t service_id;
@@ -47,6 +63,8 @@ struct eventdev_resources {
uint8_t event_d_id;
uint8_t sync_mode;
uint8_t tx_mode_q;
+ uint8_t deq_depth;
+ uint8_t has_burst;
uint8_t mac_updt;
uint8_t enabled;
uint8_t nb_args;
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index e3990f8b0..65166fded 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -17,8 +17,185 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static uint32_t
+eventdev_setup_generic(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t num_workers = 0;
+ int ret;
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+ eventdev_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ /* One queue for each ethdev port + one Tx adapter Single link queue. */
+ event_d_conf.nb_event_queues = ethdev_count + 1;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count() - rte_service_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ eventdev_rsrc->evp.nb_ports = num_workers;
+
+ eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ eventdev_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
+static void
+event_port_setup_generic(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ /* Service cores are not used to run worker thread */
+ eventdev_rsrc->evp.nb_ports = eventdev_rsrc->evp.nb_ports;
+ eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evp.nb_ports);
+ if (!eventdev_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE, " No space is available");
+
+ memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ eventdev_rsrc->disable_implicit_release;
+ eventdev_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+ for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id,
+ eventdev_rsrc->evq.event_q_id,
+ NULL,
+ eventdev_rsrc->evq.nb_queues - 1);
+ if (ret != (eventdev_rsrc->evq.nb_queues - 1)) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+ }
+ /* init spinlock */
+ rte_spinlock_init(&eventdev_rsrc->evp.lock);
+
+ eventdev_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id;
+ int32_t ret;
+
+ event_q_conf.schedule_type = eventdev_rsrc->sync_mode;
+ eventdev_rsrc->evq.nb_queues = ethdev_count + 1;
+ eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evq.nb_queues);
+ if (!eventdev_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ for (event_q_id = 0; event_q_id < (eventdev_rsrc->evq.nb_queues - 1);
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+
+ event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+ event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+ ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue for Tx adapter");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
void
eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->eventdev_setup = eventdev_setup_generic;
+ ops->event_queue_setup = event_queue_setup_generic;
+ ops->event_port_setup = event_port_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index a0d2111f9..52cb07707 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -17,8 +17,179 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static uint32_t
+eventdev_setup_internal_port(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ uint8_t disable_implicit_release;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t num_workers = 0;
+ int ret;
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+
+ disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+ eventdev_rsrc->disable_implicit_release =
+ disable_implicit_release;
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ event_d_conf.nb_event_queues = ethdev_count;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ eventdev_rsrc->evp.nb_ports = num_workers;
+ eventdev_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ eventdev_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
+static void
+event_port_setup_internal_port(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ eventdev_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evp.nb_ports);
+ if (!eventdev_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE,
+ "Failed to allocate memory for Event Ports");
+
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ eventdev_rsrc->disable_implicit_release;
+
+ for (event_p_id = 0; event_p_id < eventdev_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ eventdev_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+ /* init spinlock */
+ rte_spinlock_init(&eventdev_rsrc->evp.lock);
+ }
+
+ eventdev_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id = 0;
+ int32_t ret;
+
+ rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ if (def_q_conf.nb_atomic_order_sequences <
+ event_q_conf.nb_atomic_order_sequences)
+ event_q_conf.nb_atomic_order_sequences =
+ def_q_conf.nb_atomic_order_sequences;
+
+ event_q_conf.event_queue_cfg = event_queue_cfg;
+ event_q_conf.schedule_type = eventdev_rsrc->sync_mode;
+ eventdev_rsrc->evq.nb_queues = ethdev_count;
+ eventdev_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->evq.nb_queues);
+ if (!eventdev_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->eventdev_setup = eventdev_setup_internal_port;
+ ops->event_queue_setup = event_queue_setup_internal_port;
+ ops->event_port_setup = event_port_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (4 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 07/10] examples/l2fwd-event: add service core setup pbhagavatula
` (4 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 3 +
examples/l2fwd-event/l2fwd_eventdev.h | 17 +++
examples/l2fwd-event/l2fwd_eventdev_generic.c | 117 ++++++++++++++++++
.../l2fwd_eventdev_internal_port.c | 80 ++++++++++++
4 files changed, 217 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index 7a3d077ae..f964c69d6 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -243,6 +243,9 @@ eventdev_resource_setup(void)
/* Event port configuration */
eventdev_rsrc->ops.event_port_setup();
+ /* Rx/Tx adapters configuration */
+ eventdev_rsrc->ops.adapter_setup(ethdev_count);
+
/* Start event device service */
ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
&service_id);
diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-event/l2fwd_eventdev.h
index 1d43200e2..532672f7d 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.h
+++ b/examples/l2fwd-event/l2fwd_eventdev.h
@@ -6,6 +6,9 @@
#define __L2FWD_EVENTDEV_H__
#include <rte_common.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_mbuf.h>
#include <rte_spinlock.h>
#include "l2fwd_common.h"
@@ -37,6 +40,18 @@ struct eventdev_ports {
rte_spinlock_t lock;
};
+struct eventdev_rx_adptr {
+ uint32_t service_id;
+ uint8_t nb_rx_adptr;
+ uint8_t *rx_adptr;
+};
+
+struct eventdev_tx_adptr {
+ uint32_t service_id;
+ uint8_t nb_tx_adptr;
+ uint8_t *tx_adptr;
+};
+
struct eventdev_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
@@ -50,6 +65,8 @@ struct eventdev_resources {
struct rte_event_port_conf def_p_conf;
struct l2fwd_port_statistics *stats;
/* Default port config. */
+ struct eventdev_rx_adptr rx_adptr;
+ struct eventdev_tx_adptr tx_adptr;
uint8_t disable_implicit_release;
struct eventdev_setup_ops ops;
struct rte_mempool *pkt_pool;
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index 65166fded..68b63279a 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -192,10 +192,127 @@ event_queue_setup_generic(uint16_t ethdev_count, uint32_t event_queue_cfg)
eventdev_rsrc->evq.event_q_id[event_q_id] = event_q_id;
}
+static void
+rx_tx_adapter_setup_generic(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ uint8_t rx_adptr_id = 0;
+ uint8_t tx_adptr_id = 0;
+ uint8_t tx_port_id = 0;
+ uint32_t service_id;
+ int32_t ret, i;
+
+ /* Rx adapter setup */
+ eventdev_rsrc->rx_adptr.nb_rx_adptr = 1;
+ eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->rx_adptr.nb_rx_adptr);
+ if (!eventdev_rsrc->rx_adptr.rx_adptr) {
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create rx adapter");
+
+ eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode;
+ for (i = 0; i < ethdev_count; i++) {
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc->evq.event_q_id[i];
+ ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, i, -1,
+ &eth_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+ }
+
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error getting the service ID for rx adptr\n");
+ }
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->rx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
+ rx_adptr_id);
+
+ eventdev_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+ /* Tx adapter setup */
+ eventdev_rsrc->tx_adptr.nb_tx_adptr = 1;
+ eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->tx_adptr.nb_tx_adptr);
+ if (!eventdev_rsrc->tx_adptr.tx_adptr) {
+ free(eventdev_rsrc->rx_adptr.rx_adptr);
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create tx adapter");
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc->tx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_exit(EXIT_FAILURE,
+ "Unable to link Tx adapter port to Tx queue:err = %d",
+ ret);
+
+ ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed",
+ tx_adptr_id);
+
+ eventdev_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
void
eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
{
ops->eventdev_setup = eventdev_setup_generic;
ops->event_queue_setup = event_queue_setup_generic;
ops->event_port_setup = event_port_setup_generic;
+ ops->adapter_setup = rx_tx_adapter_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index 52cb07707..02663242f 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -186,10 +186,90 @@ event_queue_setup_internal_port(uint16_t ethdev_count, uint32_t event_queue_cfg)
}
}
+static void
+rx_tx_adapter_setup_internal_port(uint16_t ethdev_count)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ int32_t ret, i;
+
+ eventdev_rsrc->rx_adptr.nb_rx_adptr = ethdev_count;
+ eventdev_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->rx_adptr.nb_rx_adptr);
+ if (!eventdev_rsrc->rx_adptr.rx_adptr) {
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create rx adapter[%d]", i);
+
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc->evq.event_q_id[i];
+ eth_q_conf.ev.sched_type = eventdev_rsrc->sync_mode;
+ ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &eth_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+
+ ret = rte_event_eth_rx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Rx adapter[%d] start failed", i);
+
+ eventdev_rsrc->rx_adptr.rx_adptr[i] = i;
+ }
+
+ eventdev_rsrc->tx_adptr.nb_tx_adptr = ethdev_count;
+ eventdev_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ eventdev_rsrc->tx_adptr.nb_tx_adptr);
+ if (!eventdev_rsrc->tx_adptr.tx_adptr) {
+ free(eventdev_rsrc->rx_adptr.rx_adptr);
+ free(eventdev_rsrc->evp.event_p_id);
+ free(eventdev_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+ &eventdev_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create tx adapter[%d]", i);
+
+ ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+
+ ret = rte_event_eth_tx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Tx adapter[%d] start failed", i);
+
+ eventdev_rsrc->tx_adptr.tx_adptr[i] = i;
+ }
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
ops->eventdev_setup = eventdev_setup_internal_port;
ops->event_queue_setup = event_queue_setup_internal_port;
ops->event_port_setup = event_port_setup_internal_port;
+ ops->adapter_setup = rx_tx_adapter_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 07/10] examples/l2fwd-event: add service core setup
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (5 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
` (3 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add service core setup when eventdev and Rx/Tx adapter don't have
internal port capability.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev_generic.c | 31 +++++++++++++++++++
.../l2fwd_eventdev_internal_port.c | 6 ++++
examples/l2fwd-event/main.c | 2 ++
3 files changed, 39 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c b/examples/l2fwd-event/l2fwd_eventdev_generic.c
index 68b63279a..e1e603052 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
@@ -17,6 +17,36 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+static void
+eventdev_service_setup_generic(void)
+{
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ uint32_t lcore_id[RTE_MAX_LCORE] = {0};
+ int32_t req_service_cores = 3;
+ int32_t avail_service_cores;
+
+ avail_service_cores = rte_service_lcore_list(lcore_id, RTE_MAX_LCORE);
+ if (avail_service_cores < req_service_cores) {
+ rte_exit(EXIT_FAILURE, "Enough services cores are not present"
+ " Required = %d Available = %d",
+ req_service_cores, avail_service_cores);
+ }
+
+ /* Start eventdev scheduler service */
+ rte_service_map_lcore_set(eventdev_rsrc->service_id, lcore_id[0], 1);
+ rte_service_lcore_start(lcore_id[0]);
+
+ /* Start eventdev Rx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc->rx_adptr.service_id,
+ lcore_id[1], 1);
+ rte_service_lcore_start(lcore_id[1]);
+
+ /* Start eventdev Tx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc->tx_adptr.service_id,
+ lcore_id[2], 1);
+ rte_service_lcore_start(lcore_id[2]);
+}
+
static uint32_t
eventdev_setup_generic(uint16_t ethdev_count)
{
@@ -315,4 +345,5 @@ eventdev_set_generic_ops(struct eventdev_setup_ops *ops)
ops->event_queue_setup = event_queue_setup_generic;
ops->event_port_setup = event_port_setup_generic;
ops->adapter_setup = rx_tx_adapter_setup_generic;
+ ops->service_setup = eventdev_service_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
index 02663242f..39fcb4326 100644
--- a/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_eventdev_internal_port.c
@@ -265,6 +265,11 @@ rx_tx_adapter_setup_internal_port(uint16_t ethdev_count)
}
}
+static void
+eventdev_service_setup_internal_port(void)
+{
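+	/* Nothing to do - Rx/Tx adapters with internal port capability need no service cores */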
+}
+
void
eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
{
@@ -272,4 +277,5 @@ eventdev_set_internal_port_ops(struct eventdev_setup_ops *ops)
ops->event_queue_setup = event_queue_setup_internal_port;
ops->event_port_setup = event_port_setup_internal_port;
ops->adapter_setup = rx_tx_adapter_setup_internal_port;
+ ops->service_setup = eventdev_service_setup_internal_port;
}
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 3f72d0579..60882da52 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -633,6 +633,8 @@ main(int argc, char **argv)
ret, portid);
}
+ /* Now start internal services */
+ eventdev_rsrc->ops.service_setup();
goto skip_port_config;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (6 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 07/10] examples/l2fwd-event: add service core setup pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-27 13:28 ` Nipun Gupta
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
` (2 subsequent siblings)
10 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event dev main loop based on enabled l2fwd options and eventdev
capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_eventdev.c | 273 ++++++++++++++++++++++++++
examples/l2fwd-event/main.c | 10 +-
2 files changed, 280 insertions(+), 3 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_eventdev.c b/examples/l2fwd-event/l2fwd_eventdev.c
index f964c69d6..345d9d15b 100644
--- a/examples/l2fwd-event/l2fwd_eventdev.c
+++ b/examples/l2fwd-event/l2fwd_eventdev.c
@@ -18,6 +18,12 @@
#include "l2fwd_common.h"
#include "l2fwd_eventdev.h"
+#define L2FWD_EVENT_SINGLE 0x1
+#define L2FWD_EVENT_BURST 0x2
+#define L2FWD_EVENT_TX_DIRECT 0x4
+#define L2FWD_EVENT_TX_ENQ 0x8
+#define L2FWD_EVENT_UPDT_MAC 0x10
+
static void
print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
{
@@ -211,10 +217,272 @@ eventdev_capability_setup(void)
eventdev_set_internal_port_ops(&eventdev_rsrc->ops);
}
+static __rte_noinline int
+get_free_event_port(struct eventdev_resources *eventdev_rsrc)
+{
+ static int index;
+ int port_id;
+
+ rte_spinlock_lock(&eventdev_rsrc->evp.lock);
+ if (index >= eventdev_rsrc->evp.nb_ports) {
+ printf("No free event port is available\n");
+ rte_spinlock_unlock(&eventdev_rsrc->evp.lock);
+ return -1;
+ }
+
+ port_id = eventdev_rsrc->evp.event_p_id[index];
+ index++;
+ rte_spinlock_unlock(&eventdev_rsrc->evp.lock);
+
+ return port_id;
+}
+
+static __rte_always_inline void
+l2fwd_event_updt_mac(struct rte_mbuf *m, const struct rte_ether_addr *dst_mac,
+ uint8_t dst_port)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = &eth->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dst_port << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(dst_mac, &eth->s_addr);
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_single(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ const uint64_t timer_period = eventdev_rsrc->timer_period;
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const int port_id = get_free_event_port(eventdev_rsrc);
+ const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ volatile bool *done = eventdev_rsrc->done;
+ struct rte_mbuf *mbuf;
+ uint16_t dst_port;
+ struct rte_event ev;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!*done) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
+ continue;
+
+
+ mbuf = ev.mbuf;
+ dst_port = eventdev_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_event_updt_mac(mbuf,
+ &eventdev_rsrc->ports_eth_addr[dst_port],
+ dst_port);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev.queue_id = tx_q_id;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ while (rte_event_enqueue_burst(event_d_id, port_id,
+ &ev, 1) && !*done)
+ ;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+ while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id,
+ &ev, 1) &&
+ !*done)
+ ;
+ }
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&eventdev_rsrc->stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_burst(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ const uint64_t timer_period = eventdev_rsrc->timer_period;
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const int port_id = get_free_event_port(eventdev_rsrc);
+ const uint8_t tx_q_id = eventdev_rsrc->evq.event_q_id[
+ eventdev_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = eventdev_rsrc->event_d_id;
+ const uint8_t deq_len = eventdev_rsrc->deq_depth;
+ volatile bool *done = eventdev_rsrc->done;
+ struct rte_event ev[MAX_PKT_BURST];
+ struct rte_mbuf *mbuf;
+ uint16_t nb_rx, nb_tx;
+ uint16_t dst_port;
+ uint8_t i;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!*done) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, port_id, ev,
+ deq_len, 0);
+ if (nb_rx == 0)
+ continue;
+
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf = ev[i].mbuf;
+ dst_port = eventdev_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0) {
+ __atomic_fetch_add(
+ &eventdev_rsrc->stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+ __atomic_fetch_add(
+ &eventdev_rsrc->stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_event_updt_mac(mbuf,
+ &eventdev_rsrc->ports_eth_addr[
+ dst_port],
+ dst_port);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev[i].queue_id = tx_q_id;
+ ev[i].op = RTE_EVENT_OP_FORWARD;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT)
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+
+ }
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ nb_tx = rte_event_enqueue_burst(event_d_id, port_id,
+ ev, nb_rx);
+ while (nb_tx < nb_rx && !*done)
+ nb_tx += rte_event_enqueue_burst(event_d_id,
+ port_id, ev + nb_tx,
+ nb_rx - nb_tx);
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id, ev,
+ nb_rx);
+ while (nb_tx < nb_rx && !*done)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx);
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop(struct eventdev_resources *eventdev_rsrc,
+ const uint32_t flags)
+{
+ if (flags & L2FWD_EVENT_SINGLE)
+ l2fwd_event_loop_single(eventdev_rsrc, flags);
+ if (flags & L2FWD_EVENT_BURST)
+ l2fwd_event_loop_burst(eventdev_rsrc, flags);
+}
+
+#define L2FWD_EVENT_MODE \
+FP(tx_d, 0, 0, 0, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE) \
+FP(tx_d_burst, 0, 0, 1, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST) \
+FP(tx_q, 0, 1, 0, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE) \
+FP(tx_q_burst, 0, 1, 1, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST) \
+FP(tx_d_mac, 1, 0, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \
+ L2FWD_EVENT_SINGLE) \
+FP(tx_d_brst_mac, 1, 0, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_DIRECT | \
+ L2FWD_EVENT_BURST) \
+FP(tx_q_mac, 1, 1, 0, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \
+ L2FWD_EVENT_SINGLE) \
+FP(tx_q_brst_mac, 1, 1, 1, L2FWD_EVENT_UPDT_MAC | L2FWD_EVENT_TX_ENQ | \
+ L2FWD_EVENT_BURST)
+
+
+#define FP(_name, _f3, _f2, _f1, flags) \
+static void __rte_noinline \
+l2fwd_event_main_loop_ ## _name(void) \
+{ \
+ struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc(); \
+ l2fwd_event_loop(eventdev_rsrc, flags); \
+}
+
+L2FWD_EVENT_MODE
+#undef FP
+
void
eventdev_resource_setup(void)
{
struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
+ /* [MAC_UPDT][TX_MODE][BURST] */
+ const event_loop_cb event_loop[2][2][2] = {
+#define FP(_name, _f3, _f2, _f1, flags) \
+ [_f3][_f2][_f1] = l2fwd_event_main_loop_ ## _name,
+ L2FWD_EVENT_MODE
+#undef FP
+ };
uint16_t ethdev_count = rte_eth_dev_count_avail();
uint32_t event_queue_cfg = 0;
uint32_t service_id;
@@ -260,4 +528,9 @@ eventdev_resource_setup(void)
ret = rte_event_dev_start(eventdev_rsrc->event_d_id);
if (ret < 0)
rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ eventdev_rsrc->ops.l2fwd_event_loop = event_loop
+ [eventdev_rsrc->mac_updt]
+ [eventdev_rsrc->tx_mode_q]
+ [eventdev_rsrc->has_burst];
}
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 60882da52..43f0b114c 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -271,8 +271,12 @@ static void l2fwd_main_loop(void)
static int
l2fwd_launch_one_lcore(void *args)
{
- RTE_SET_USED(args);
- l2fwd_main_loop();
+ struct eventdev_resources *eventdev_rsrc = args;
+
+ if (eventdev_rsrc->enabled)
+ eventdev_rsrc->ops.l2fwd_event_loop();
+ else
+ l2fwd_main_loop();
return 0;
}
@@ -773,7 +777,7 @@ main(int argc, char **argv)
ret = 0;
/* launch per-lcore init on every lcore */
- rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, NULL,
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc,
CALL_MASTER);
rte_eal_mp_wait_lcore();
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 09/10] examples/l2fwd-event: add graceful teardown
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (7 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add graceful teardown that addresses both event mode and poll mode.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/main.c | 44 +++++++++++++++++++++++++++++--------
1 file changed, 35 insertions(+), 9 deletions(-)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 43f0b114c..5a8173aff 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -539,7 +539,7 @@ main(int argc, char **argv)
uint32_t rx_lcore_id;
uint32_t nb_mbufs;
uint16_t nb_ports;
- int ret;
+ int i, ret;
/* init EAL */
ret = rte_eal_init(argc, argv);
@@ -779,15 +779,41 @@ main(int argc, char **argv)
/* launch per-lcore init on every lcore */
rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, eventdev_rsrc,
CALL_MASTER);
- rte_eal_mp_wait_lcore();
+ if (eventdev_rsrc->enabled) {
+ for (i = 0; i < eventdev_rsrc->rx_adptr.nb_rx_adptr; i++)
+ rte_event_eth_rx_adapter_stop(
+ eventdev_rsrc->rx_adptr.rx_adptr[i]);
+ for (i = 0; i < eventdev_rsrc->tx_adptr.nb_tx_adptr; i++)
+ rte_event_eth_tx_adapter_stop(
+ eventdev_rsrc->tx_adptr.tx_adptr[i]);
- RTE_ETH_FOREACH_DEV(portid) {
- if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
- continue;
- printf("Closing port %d...", portid);
- rte_eth_dev_stop(portid);
- rte_eth_dev_close(portid);
- printf(" Done\n");
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ rte_eth_dev_stop(portid);
+ }
+
+ rte_eal_mp_wait_lcore();
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ rte_eth_dev_close(portid);
+ }
+
+ rte_event_dev_stop(eventdev_rsrc->event_d_id);
+ rte_event_dev_close(eventdev_rsrc->event_d_id);
+
+ } else {
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+ printf("Closing port %d...", portid);
+ rte_eth_dev_stop(portid);
+ rte_eth_dev_close(portid);
+ printf(" Done\n");
+ }
}
printf("Bye...\n");
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v4 10/10] doc: add application usage guide for l2fwd-event
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (8 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
@ 2019-09-24 9:42 ` pbhagavatula
2019-09-26 17:42 ` Jerin Jacob
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
10 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-09-24 9:42 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add documentation for l2fwd-event example.
Update MAINTAINERS file claiming responsibility of l2fwd-event.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
MAINTAINERS | 5 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
.../l2_forward_event_real_virtual.rst | 799 ++++++++++++++++++
4 files changed, 810 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index b3d9aaddd..d8e1fa84d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1458,6 +1458,11 @@ M: Tomasz Kantecki <tomasz.kantecki@intel.com>
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
+M: Sunil Kumar Kori <skori@marvell.com>
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+F: examples/l2fwd-event/
+F: doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
+
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index f23f8f59e..83a4f8d5c 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -26,6 +26,7 @@ Sample Applications User Guides
l2_forward_crypto
l2_forward_job_stats
l2_forward_real_virtual
+ l2_forward_event_real_virtual
l2_forward_cat
l3_forward
l3_forward_power_man
diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
index 90704194a..b33904ed1 100644
--- a/doc/guides/sample_app_ug/intro.rst
+++ b/doc/guides/sample_app_ug/intro.rst
@@ -87,6 +87,11 @@ examples are highlighted below.
forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
addresses like a simple switch.
+* :doc:`Network Layer 2 forwarding<l2_forward_event_real_virtual>`: The Network Layer 2
+ forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
+ addresses like a simple switch. It demonstrates the usage of the poll and event mode
+ Rx/Tx mechanisms.
+
* :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
forwarding, or ``l3fwd`` application does forwarding based on Internet
Protocol, IPv4 or IPv6 like a simple router.
diff --git a/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
new file mode 100644
index 000000000..7cea8efaf
--- /dev/null
+++ b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
@@ -0,0 +1,799 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2014 Intel Corporation.
+
+.. _l2_fwd_event_app_real_and_virtual:
+
+L2 Forwarding Eventdev Sample Application (in Real and Virtualized Environments)
+================================================================================
+
+The L2 Forwarding eventdev sample application is a simple example of packet
+processing using the Data Plane Development Kit (DPDK). It demonstrates the usage
+of the poll and event mode packet I/O mechanisms and also takes advantage of Single
+Root I/O Virtualization (SR-IOV) features in a virtualized environment.
+
+Overview
+--------
+
+The L2 Forwarding eventdev sample application, which can operate in real and
+virtualized environments, performs L2 forwarding for each packet that is
+received on an RX_PORT. The destination port is the adjacent port from the
+enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
+ports 1 and 2 forward into each other, and ports 3 and 4 forward into each
+other. Also, if MAC addresses updating is enabled, the MAC addresses are
+affected as follows:
+
+* The source MAC address is replaced by the TX_PORT MAC address
+
+* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID (see the sketch below)
+
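+The following is a minimal sketch of the address rewrite described above. The
+helper name ``l2fwd_mac_update_sketch`` and its arguments are illustrative only
+(the application's own routine may differ); it assumes the destination port id
+fits into a single byte:
+
+.. code-block:: c
+
+    static void
+    l2fwd_mac_update_sketch(struct rte_mbuf *m, uint16_t dst_port,
+                            const struct rte_ether_addr *tx_port_mac)
+    {
+        struct rte_ether_hdr *eth;
+
+        eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+        /* Destination MAC becomes 02:00:00:00:00:TX_PORT_ID */
+        memset(&eth->d_addr, 0, sizeof(eth->d_addr));
+        eth->d_addr.addr_bytes[0] = 0x02;
+        eth->d_addr.addr_bytes[5] = (uint8_t)dst_port;
+
+        /* Source MAC becomes the TX_PORT MAC address */
+        rte_ether_addr_copy(tx_port_mac, &eth->s_addr);
+    }
+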
+The application receives packets from RX_PORT using one of the below mentioned methods:
+
+* Poll mode
+
+* Eventdev mode (default)
+
+This application can be used to benchmark performance using a traffic-generator,
+as shown in the :numref:`figure_l2_fwd_benchmark_setup`, or in a virtualized
+environment as shown in :numref:`figure_l2_fwd_virtenv_benchmark_setup`.
+
+.. _figure_l2_fwd_benchmark_setup:
+
+.. figure:: img/l2_fwd_benchmark_setup.*
+
+ Performance Benchmark Setup (Basic Environment)
+
+.. _figure_l2_fwd_virtenv_benchmark_setup:
+
+.. figure:: img/l2_fwd_virtenv_benchmark_setup.*
+
+ Performance Benchmark Setup (Virtualized Environment)
+
+This application may be used for basic VM to VM communication as shown
+in :numref:`figure_l2_fwd_vm2vm`, when MAC addresses updating is disabled.
+
+.. _figure_l2_fwd_vm2vm:
+
+.. figure:: img/l2_fwd_vm2vm.*
+
+ Virtual Machine to Virtual Machine communication.
+
+.. _l2_fwd_event_vf_setup:
+
+Virtual Function Setup Instructions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The application can use the virtual functions available in the system and can
+therefore be used in a virtual machine without passing the whole Network
+Device through to the guest machine in a virtualized scenario. The virtual
+functions can be enabled on the host machine or the hypervisor with the
+respective physical function driver.
+
+For example, on a Linux* host machine, it is possible to enable a virtual
+function using the following command:
+
+.. code-block:: console
+
+ modprobe ixgbe max_vfs=2,2
+
+This command enables two Virtual Functions on each Physical Function of the
+NIC, with two physical ports in the PCI configuration space.
+
+It is important to note that enabled Virtual Function 0 and 2 would belong to
+Physical Function 0 and Virtual Function 1 and 3 would belong to Physical
+Function 1, in this case enabling a total of four Virtual Functions.
+
+Compiling the Application
+-------------------------
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``l2fwd-event`` sub-directory.
+
+Running the Application
+-----------------------
+
+The application requires a number of command line options:
+
+.. code-block:: console
+
+ ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sync=SYNC_MODE
+
+where,
+
+* p PORTMASK: A hexadecimal bitmask of the ports to configure
+
+* q NQ: A number of queues (=ports) per lcore (default is 1)
+
+* --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default).
+
+* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.
+
+* --eventq-sync=SYNC_MODE: Event queue synchronization method, Ordered or Atomic. Atomic by default.
+
+Sample commands to run the application in the different modes are given below:
+
+To run in poll mode on a Linux environment with 4 lcores, 16 ports, 8 RX queues
+per lcore and MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll
+
+To run in eventdev mode on a Linux environment with 4 lcores, 16 ports, ordered
+sync method and MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sync=ordered
+
+or
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
+
+To run the application with the S/W scheduler, the following DPDK services are used:
+
+* Software scheduler
+* Rx adapter service function
+* Tx adapter service function
+
+The application needs service cores to run the above mentioned services. Service
+cores must be provided as EAL parameters along with ``--vdev=event_sw0`` to enable
+the S/W scheduler. The following is a sample command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Explanation
+-----------
+
+The following sections provide some explanation of the code.
+
+.. _l2_fwd_event_app_cmd_arguments:
+
+Command Line Arguments
+~~~~~~~~~~~~~~~~~~~~~~
+
+The L2 Forwarding eventdev sample application takes specific parameters,
+in addition to Environment Abstraction Layer (EAL) arguments.
+The preferred way to parse parameters is to use the getopt() function,
+since it is part of a well-defined and portable library.
+
+The parsing of arguments is done in the **l2fwd_parse_args()** function for
+non-eventdev parameters and in **parse_eventdev_args()** for eventdev parameters.
+The method of argument parsing is not described here. Refer to the
+*glibc getopt(3)* man page for details.
+
+EAL arguments are parsed first, then application-specific arguments.
+This is done at the beginning of the main() function and eventdev parameters
+are parsed in eventdev_resource_setup() function during eventdev setup:
+
+.. code-block:: c
+
+ /* init EAL */
+
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+
+ argc -= ret;
+ argv += ret;
+
+ /* parse application arguments (after the EAL ones) */
+
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+ .
+ .
+ .
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(argc, argv);
+ if (ret < 0)
+ return ret;
+
+
+
+
+.. _l2_fwd_event_app_mbuf_init:
+
+Mbuf Pool Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the arguments are parsed, the mbuf pool is created.
+The mbuf pool contains a set of mbuf objects that will be used by the driver
+and the application to store network packet data:
+
+.. code-block:: c
+
+ /* create the mbuf pool */
+
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_panic("Cannot init mbuf pool\n");
+
+The rte_mempool is a generic structure used to handle pools of objects.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of 32 mbufs is kept.
+The memory is allocated in NUMA socket 0,
+but it is possible to extend this code to allocate one mbuf pool per socket.
+
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
+
+.. _l2_fwd_event_app_dvr_init:
+
+Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~
+
+The main part of the code in the main() function relates to the initialization
+of the driver. To fully understand this code, it is recommended to study the
+chapters related to the Poll Mode Driver and the Event Device Driver in the
+*DPDK Programmer's Guide* and the *DPDK API Reference*.
+
+.. code-block:: c
+
+ if (rte_pci_probe() < 0)
+ rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");
+
+ /* reset l2fwd_dst_ports */
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ }
+ else
+ last_port = portid;
+
+ nb_ports_in_mask++;
+
+ rte_eth_dev_info_get((uint8_t) portid, &dev_info);
+ }
+
+Observe that:
+
+* rte_igb_pmd_init_all() simultaneously registers the driver as a PCI driver
+ and as an Ethernet Poll Mode Driver.
+
+* rte_pci_probe() parses the devices on the PCI bus and initializes recognized
+ devices.
+
+The next step is to configure the RX and TX queues. For each port, there is only
+one RX queue (only one lcore is able to poll a given port). The number of TX
+queues depends on the number of available lcores. The rte_eth_dev_configure()
+function is used to configure the number of queues for a port:
+
+.. code-block:: c
+
+ ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Cannot configure device: "
+ "err=%d, port=%u\n",
+ ret, portid);
+
+.. _l2_fwd_event_app_rx_init:
+
+RX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The application uses one lcore to poll one or several ports, depending on the -q
+option, which specifies the number of queues per lcore.
+
+For example, if the user specifies -q 4, the application is able to poll four
+ports with one lcore. If there are 16 ports on the target (and if the portmask
+argument is -p ffff ), the application will need four lcores to poll all the
+ports.
+
+.. code-block:: c
+
+ ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
+ &rx_conf, l2fwd_pktmbuf_pool);
+ if (ret < 0)
+
+ rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: "
+ "err=%d, port=%u\n",
+ ret, portid);
+
+The list of queues that must be polled for a given lcore is stored in a private
+structure called struct lcore_queue_conf.
+
+.. code-block:: c
+
+ struct lcore_queue_conf {
+ unsigned n_rx_port;
+ unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
+ } rte_cache_aligned;
+
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+The values n_rx_port and rx_port_list[] are used in the main packet processing
+loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).
+
+.. _l2_fwd_event_app_tx_init:
+
+TX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Each lcore should be able to transmit on any port. For every port, a single TX
+queue is initialized.
+
+.. code-block:: c
+
+ /* init one TX queue on each port */
+
+ fflush(stdout);
+
+ ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid), &tx_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, (unsigned) portid);
+
+The global configuration for TX queues is stored in a static structure:
+
+.. code-block:: c
+
+ static const struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
+ };
+
+To configure eventdev support, the application sets up the following components:
+
+* Event dev
+* Event queue
+* Event Port
+* Rx/Tx adapters
+* Ethernet ports
+
+.. _l2_fwd_event_app_event_dev_init:
+
+Event dev Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+The application can use either a H/W or a S/W based event device scheduler
+implementation and supports a single event device instance. It configures the
+event device as per the below configuration:
+
+.. code-block:: c
+
+ struct rte_event_dev_config event_d_conf = {
+ .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
+ .nb_event_ports = num_workers, /* Dedicated to each lcore */
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+In case of the S/W scheduler, the application runs the eventdev scheduler service
+on a service core. The application retrieves the service id and then starts the
+service on a given lcore.
+
+.. code-block:: c
+
+ /* Start event device service */
+ ret = rte_event_dev_service_id_get(eventdev_rsrc.event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.service_id = service_id;
+
+ /* Start eventdev scheduler service */
+ rte_service_map_lcore_set(eventdev_rsrc.service_id, lcore_id[0], 1);
+ rte_service_lcore_start(lcore_id[0]);
+
+.. _l2_fwd_app_event_queue_init:
+
+Event queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each Ethernet device is assigned a dedicated event queue which will be linked
+to all available event ports i.e. each lcore can dequeue packets from any of the
+Ethernet ports.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = 0,
+ .schedule_type = RTE_SCHED_TYPE_ATOMIC,
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
+ };
+
+ /* User requested sync mode */
+ event_q_conf.schedule_type = eventq_sync_mode;
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ }
+
+In case of the S/W scheduler, an extra event queue is created which is used by
+the Tx adapter service function for the enqueue operation, as sketched below.
+
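+The following is a minimal sketch of such a setup. It assumes the extra queue
+takes the id right after the per-port event queues (``ethdev_count``) and that
+the event device supports single link queues:
+
+.. code-block:: c
+
+    struct rte_event_queue_conf tx_q_conf = {
+        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+        .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+    };
+
+    /* Extra event queue feeding the Tx adapter service port */
+    ret = rte_event_queue_setup(event_d_id, ethdev_count, &tx_q_conf);
+    if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Error in configuring Tx event queue");
+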
+.. _l2_fwd_app_event_port_init:
+
+Event port Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Each worker thread is assigned a dedicated event port for enq/deq operations
+to/from an event device. All event ports are linked with all available event
+queues.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ }
+
+In case of the S/W scheduler, an extra event port is created by the DPDK library;
+it is retrieved by the application and used by the Tx adapter service.
+
+.. code-block:: c
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &eventdev_rsrc.evq.event_q_id[
+ eventdev_rsrc.evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_exit(EXIT_FAILURE,
+ "Unable to link Tx adapter port to Tx queue:err = %d",
+ ret);
+
+.. _l2_fwd_event_app_adapter_init:
+
+Rx/Tx adapter Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx adapter.
+Each Ethernet port's Rx queues are connected to its respective event queue at
+priority 0 via the Rx adapter configuration, and the Ethernet port's Tx queues
+are connected via the Tx adapter.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+ &event_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create rx adapter[%d]", i);
+
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc.evq.event_q_id[i];
+ eth_q_conf.ev.sched_type = eventq_sync_mode;
+ ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, &eth_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+
+ ret = rte_event_eth_rx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Rx adapter[%d] start failed", i);
+
+ eventdev_rsrc.rx_adptr.rx_adptr[i] = i;
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+ &event_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create tx adapter[%d]", i);
+
+ ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+
+ ret = rte_event_eth_tx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Tx adapter[%d] start failed", i);
+
+ eventdev_rsrc.tx_adptr.tx_adptr[i] = i;
+ }
+
+For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
+configured and shared among all the Ethernet ports. The DPDK library also needs
+service cores to run the internal services for the Rx/Tx adapters. The application
+gets the service ids for the Rx/Tx adapters and, after successful setup, runs the
+services on dedicated service cores.
+
+.. code-block:: c
+
+ /* retrieving service Id for Rx adapter */
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error getting the service ID for rx adptr\n");
+ }
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.rx_adptr.service_id = service_id;
+
+ /* Start eventdev Rx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc.rx_adptr.service_id,
+ lcore_id[1], 1);
+ rte_service_lcore_start(lcore_id[1]);
+
+ /* retrieving service Id for Tx adapter */
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.tx_adptr.service_id = service_id;
+
+ /* Start eventdev Tx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc.tx_adptr.service_id,
+ lcore_id[2], 1);
+ rte_service_lcore_start(lcore_id[2]);
+
+.. _l2_fwd_event_app_rx_tx_packets:
+
+Receive, Process and Transmit Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
+the RX queues. This is done using the following code:
+
+.. code-block:: c
+
+ /*
+ * Read packet from RX queues
+ */
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
+ MAX_PKT_BURST);
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+
+Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
+function writes the mbuf pointers in a local table and returns the number of
+available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_simple_forward()
+function. The processing is very simple: derive the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC addresses updating
+is enabled.
+
+.. note::
+
+ In the following code, one line for getting the output port requires some
+ explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of enabled ports in the portmask is odd then packets from the last
+port are forwarded to the first port, i.e. if portmask=0x07, then forwarding takes
+place as p0--->p1, p1--->p2, p2--->p0.
+
+Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers incoming
+mbufs up to MAX_PKT_BURST. Once the limit is reached, all packets are transmitted
+to the destination ports.
+
+.. code-block:: c
+
+ static void
+ l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+ {
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+ }
+
+For this test application, the processing is exactly the same for all packets
+arriving on the same RX port. Therefore, it would have been possible to call
+the rte_eth_tx_buffer() function directly from the main loop to send all the
+received packets on the same TX port, using the burst-oriented send function,
+which is more efficient.
+
+However, in real-life applications (such as, L3 routing),
+packet N is not necessarily forwarded on the same port as packet N-1.
+The application is implemented to illustrate that, so the same approach can be
+reused in a more complex application.
+
+To ensure that no packets remain in the tables, each lcore drains the TX queues
+in its main loop. This technique introduces some latency when there are not many
+packets to send, but it improves performance:
+
+.. code-block:: c
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id == rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+In the **l2fwd_main_loop_eventdev()** function, the main task is to read ingress
+packets from the event ports. This is done using the following code:
+
+.. code-block:: c
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
+ events, deq_len, 0);
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf[i] = events[i].mbuf;
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
+ }
+
+
+Before reading packets, deq_len is fetched to honour the dequeue depth allowed
+by the eventdev (a sketch of how it may be derived is shown below).
+The rte_event_dequeue_burst() function writes the mbuf pointers in a local table
+and returns the number of available mbufs in the table.
+
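+A minimal sketch of deriving deq_len follows; it assumes the default event port
+configuration has already been fetched into ``def_p_conf``:
+
+.. code-block:: c
+
+    uint8_t deq_len = MAX_PKT_BURST;
+
+    /* Never dequeue more than the event port depth allows */
+    if (def_p_conf.dequeue_depth < deq_len)
+        deq_len = def_p_conf.dequeue_depth;
+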
+Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
+function. The processing is very simple: derive the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC addresses updating
+is enabled.
+
+.. note::
+
+ In the following code, one line for getting the output port requires some
+ explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd, packets from the last port are
+forwarded to the first port, i.e. if portmask=0x07, then forwarding will take
+place as p0--->p1, p1--->p2, p2--->p0.
+
+Unlike the poll mode path, l2fwd_eventdev_forward() does not buffer incoming
+mbufs. Packets are forwarded to their destination ports either through the Tx
+adapter or through the generic eventdev enqueue API, depending on whether a
+H/W or S/W scheduler is used. A sketch of these two send paths follows the
+function below.
+
+.. code-block:: c
+
+    static inline void
+    l2fwd_eventdev_forward(struct rte_mbuf *m[], uint32_t portid,
+                           uint16_t nb_rx, uint16_t event_p_id)
+    {
+        uint32_t dst_port, i;
+
+        dst_port = l2fwd_dst_ports[portid];
+
+        for (i = 0; i < nb_rx; i++) {
+            if (mac_updating)
+                l2fwd_mac_updating(m[i], dst_port);
+
+            m[i]->port = dst_port;
+        }
+
+        if (timer_period > 0) {
+            rte_spinlock_lock(&port_stats_lock);
+            port_statistics[dst_port].tx += nb_rx;
+            rte_spinlock_unlock(&port_stats_lock);
+        }
+        /* Registered callback is invoked for Tx */
+        eventdev_rsrc.send_burst_eventdev(m, nb_rx, event_p_id);
+    }
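+
+A minimal, hypothetical sketch of the two send_burst_eventdev callbacks is
+shown below; event_d_id and the single-link Tx queue id (tx_adapter_queue_id)
+are assumed to be stored in the eventdev resources during setup, and the real
+implementations also handle details such as implicit release:
+
+.. code-block:: c
+
+    /* Tx adapter has internal port capability: enqueue via the Tx adapter */
+    static void
+    send_burst_tx_adapter(struct rte_mbuf *m[], uint16_t nb_rx,
+                          uint16_t event_p_id)
+    {
+        struct rte_event ev;
+        uint16_t i;
+
+        for (i = 0; i < nb_rx; i++) {
+            rte_event_eth_tx_adapter_txq_set(m[i], 0);
+            ev.mbuf = m[i];
+            while (!rte_event_eth_tx_adapter_enqueue(event_d_id, event_p_id,
+                                                     &ev, 1))
+                ;
+        }
+    }
+
+    /* Generic (S/W) case: forward events to the single-link Tx queue */
+    static void
+    send_burst_generic(struct rte_mbuf *m[], uint16_t nb_rx,
+                       uint16_t event_p_id)
+    {
+        struct rte_event ev;
+        uint16_t i;
+
+        for (i = 0; i < nb_rx; i++) {
+            ev.queue_id = tx_adapter_queue_id;
+            ev.op = RTE_EVENT_OP_FORWARD;
+            ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
+            ev.event_type = RTE_EVENT_TYPE_CPU;
+            ev.mbuf = m[i];
+            while (!rte_event_enqueue_burst(event_d_id, event_p_id, &ev, 1))
+                ;
+        }
+    }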
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 01/10] examples/l2fwd-event: add default poll mode routines
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-09-26 17:28 ` Jerin Jacob
0 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-09-26 17:28 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, akhil.goyal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
dpdk-dev
On Tue, Sep 24, 2019 at 3:12 PM <pbhagavatula@marvell.com> wrote:
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add the default l2fwd poll mode routines similar to examples/l2fwd.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
> +#ifndef __L2FWD_COMMON_H__
> +#define __L2FWD_COMMON_H__
> +
> +#define MAX_PKT_BURST 32
> +#define MAX_RX_QUEUE_PER_LCORE 16
> +#define MAX_TX_QUEUE_PER_PORT 16
> +
> +#define RTE_LOGTYPE_L2FWD RTE_LOGTYPE_USER1
IMO, the application does not need a log type; normal printf will be enough
as it is not a subsystem.
> +
> +#define RTE_TEST_RX_DESC_DEFAULT 1024
> +#define RTE_TEST_TX_DESC_DEFAULT 1024
> +
> +/* Per-port statistics struct */
> +struct l2fwd_port_statistics {
> + uint64_t dropped;
> + uint64_t tx;
> + uint64_t rx;
> +} __rte_cache_aligned;
> +
> +void print_stats(void);
> +
> +#endif /* __L2FWD_COMMON_H__ */
> diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
> new file mode 100644
> index 000000000..cc47fa203
> --- /dev/null
> +++ b/examples/l2fwd-event/main.c
> @@ -0,0 +1,737 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2019 Marvell International Ltd.
> + */
> +
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <stdint.h>
> +#include <inttypes.h>
> +#include <sys/types.h>
> +#include <sys/queue.h>
> +#include <netinet/in.h>
> +#include <setjmp.h>
> +#include <stdarg.h>
> +#include <ctype.h>
> +#include <errno.h>
> +#include <getopt.h>
> +#include <signal.h>
> +#include <stdbool.h>
> +
> +#include <rte_common.h>
> +#include <rte_log.h>
> +#include <rte_malloc.h>
> +#include <rte_memory.h>
> +#include <rte_memcpy.h>
> +#include <rte_eal.h>
> +#include <rte_launch.h>
> +#include <rte_atomic.h>
> +#include <rte_cycles.h>
> +#include <rte_prefetch.h>
> +#include <rte_lcore.h>
> +#include <rte_per_lcore.h>
> +#include <rte_branch_prediction.h>
> +#include <rte_interrupts.h>
> +#include <rte_random.h>
> +#include <rte_debug.h>
> +#include <rte_ether.h>
> +#include <rte_ethdev.h>
> +#include <rte_eventdev.h>
> +#include <rte_mempool.h>
> +#include <rte_mbuf.h>
> +#include <rte_spinlock.h>
Sort in alphabetical order.
> +
> +#include "l2fwd_common.h"
> +
> +static volatile bool force_quit;
> +
> +/* MAC updating enabled by default */
> +static int mac_updating = 1;
> +
> +#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
> +#define MEMPOOL_CACHE_SIZE 256
> +
> +/*
> + * Configurable number of RX/TX ring descriptors
> + */
> +static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
> +static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
> +
> +/* ethernet addresses of ports */
> +static struct rte_ether_addr l2fwd_ports_eth_addr[RTE_MAX_ETHPORTS];
> +
> +/* mask of enabled ports */
> +static uint32_t l2fwd_enabled_port_mask;
> +
> +/* list of enabled ports */
> +static uint32_t l2fwd_dst_ports[RTE_MAX_ETHPORTS];
> +
> +static unsigned int l2fwd_rx_queue_per_lcore = 1;
I think we should move these global variables into a structure.
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for eventdev
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-09-26 17:33 ` Jerin Jacob
2019-09-27 13:08 ` Nipun Gupta
1 sibling, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-09-26 17:33 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, akhil.goyal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
dpdk-dev
On Tue, Sep 24, 2019 at 3:12 PM <pbhagavatula@marvell.com> wrote:
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add infra to select event device as a mode to process packets through
> command line arguments. Also, allow the user to select the schedule type
> to be either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
>
> Usage:
>
> `--mode="eventdev"` or `--mode="poll"`
> `--eventq-sync="ordered"` or `--eventq-sync="atomic"`
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
> main(int argc, char **argv)
> {
> + struct eventdev_resources *eventdev_rsrc;
> uint16_t nb_ports_available = 0;
> struct lcore_queue_conf *qconf;
> uint32_t nb_ports_in_mask = 0;
> @@ -524,6 +548,7 @@ main(int argc, char **argv)
> signal(SIGINT, signal_handler);
> signal(SIGTERM, signal_handler);
>
> + eventdev_rsrc = get_eventdev_rsrc();
> /* parse application arguments (after the EAL ones) */
> ret = l2fwd_parse_args(argc, argv);
> if (ret < 0)
> @@ -584,6 +609,17 @@ main(int argc, char **argv)
> if (l2fwd_pktmbuf_pool == NULL)
> rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
>
> + eventdev_rsrc->port_mask = l2fwd_enabled_port_mask;
> + eventdev_rsrc->pkt_pool = l2fwd_pktmbuf_pool;
> + eventdev_rsrc->dst_ports = l2fwd_dst_ports;
> + eventdev_rsrc->timer_period = timer_period;
> + eventdev_rsrc->mac_updt = mac_updating;
> + eventdev_rsrc->stats = port_statistics;
> + eventdev_rsrc->done = &force_quit;
These resources are repeated for poll mode as well. IMO,
we can have a single "rsrc" object holding the variables required for this
application (i.e. both poll and event mode)
to avoid code duplication.
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 10/10] doc: add application usage guide for l2fwd-event
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-09-26 17:42 ` Jerin Jacob
0 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-09-26 17:42 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, akhil.goyal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori, dpdk-dev
On Tue, Sep 24, 2019 at 3:13 PM <pbhagavatula@marvell.com> wrote:
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add documentation for l2fwd-event example.
> Update MAINTAINERS file claiming responsibility of l2fwd-event.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
> MAINTAINERS | 5 +
> doc/guides/sample_app_ug/index.rst | 1 +
> doc/guides/sample_app_ug/intro.rst | 5 +
> .../l2_forward_event_real_virtual.rst | 799 ++++++++++++++++++
Missing release notes update.
> 4 files changed, 810 insertions(+)
> create mode 100644 doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index b3d9aaddd..d8e1fa84d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1458,6 +1458,11 @@ M: Tomasz Kantecki <tomasz.kantecki@intel.com>
> F: doc/guides/sample_app_ug/l2_forward_cat.rst
> F: examples/l2fwd-cat/
>
> +M: Sunil Kumar Kori <skori@marvell.com>
> +M: Pavan Nikhilesh <pbhagavatula@marvell.com>
> +F: examples/l2fwd-event/
I think, you can add
T: git://dpdk.org/next/dpdk-next-eventdev
and move this section to first patch.
> +F: doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
I think, file name can be l2_forward_event.rst. looks like
"real_virtual" is legacy stuff.
> +
> F: examples/l3fwd/
> F: doc/guides/sample_app_ug/l3_forward.rst
>
> diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
> index f23f8f59e..83a4f8d5c 100644
> --- a/doc/guides/sample_app_ug/index.rst
> +++ b/doc/guides/sample_app_ug/index.rst
> @@ -26,6 +26,7 @@ Sample Applications User Guides
> l2_forward_crypto
> l2_forward_job_stats
> l2_forward_real_virtual
> + l2_forward_event_real_virtual
l2_forward_event
> l2_forward_cat
> l3_forward
> l3_forward_power_man
> diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
> index 90704194a..b33904ed1 100644
> --- a/doc/guides/sample_app_ug/intro.rst
> +++ b/doc/guides/sample_app_ug/intro.rst
> @@ -87,6 +87,11 @@ examples are highlighted below.
> forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
> addresses like a simple switch.
>
> +* :doc:`Network Layer 2 forwarding<l2_forward_eventdev_real_virtual>`: The Network Layer 2
> + forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
> + addresses like a simple switch. It demonstrate usage of poll and event mode Rx/Tx
> + mechanism.
> +
> * :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
> forwarding, or ``l3fwd`` application does forwarding based on Internet
> Protocol, IPv4 or IPv6 like a simple router.
> diff --git a/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
> new file mode 100644
> index 000000000..7cea8efaf
> --- /dev/null
> +++ b/doc/guides/sample_app_ug/l2_forward_event_real_virtual.rst
> @@ -0,0 +1,799 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2010-2014 Intel Corporation.
> +
> +.. _l2_fwd_event_app_real_and_virtual:
> +
> +L2 Forwarding Eventdev Sample Application (in Real and Virtualized Environments)
> +================================================================================
> +
> +The L2 Forwarding eventdev sample application is a simple example of packet
> +processing using the Data Plane Development Kit (DPDK) to demonstrate usage of
> +poll and event mode packet I/O mechanism which also takes advantage of Single
> +Root I/O Virtualization (SR-IOV) features in a virtualized environment.
The SR-IOV section can be omitted; it is carried over from the legacy guide.
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for eventdev
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
2019-09-26 17:33 ` Jerin Jacob
@ 2019-09-27 13:08 ` Nipun Gupta
1 sibling, 0 replies; 107+ messages in thread
From: Nipun Gupta @ 2019-09-27 13:08 UTC (permalink / raw)
To: pbhagavatula, jerinj, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori
Cc: dev
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Tuesday, September 24, 2019 3:12 PM
> To: jerinj@marvell.com; bruce.richardson@intel.com; Akhil Goyal
> <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> <skori@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 02/10] examples/l2fwd-event: add infra for
> eventdev
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add infra to select event device as a mode to process packets through
> command line arguments. Also, allow the user to select the schedule type
> to be either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
>
> Usage:
>
> `--mode="eventdev"` or `--mode="poll"`
> `--eventq-sync="ordered"` or `--eventq-sync="atomic"`
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
> examples/l2fwd-event/Makefile | 1 +
> examples/l2fwd-event/l2fwd_eventdev.c | 107
> ++++++++++++++++++++++++++
> examples/l2fwd-event/l2fwd_eventdev.h | 62 +++++++++++++++
> examples/l2fwd-event/main.c | 39 +++++++++-
> examples/l2fwd-event/meson.build | 3 +-
> 5 files changed, 209 insertions(+), 3 deletions(-)
> create mode 100644 examples/l2fwd-event/l2fwd_eventdev.c
> create mode 100644 examples/l2fwd-event/l2fwd_eventdev.h
>
<snip>
>
> @@ -288,7 +289,12 @@ l2fwd_usage(const char *prgname)
> " --[no-]mac-updating: Enable or disable MAC addresses
> updating (enabled by default)\n"
> " When enabled:\n"
> " - The source MAC address is replaced by the TX port MAC
> address\n"
> - " - The destination MAC address is replaced by
> 02:00:00:00:00:TX_PORT_ID\n",
> + " - The destination MAC address is replaced by
> 02:00:00:00:00:TX_PORT_ID\n"
> + " --mode: Packet transfer mode for I/O, poll or eventdev\n"
> + " Default mode = eventdev\n"
> + " --eventq-sync:Event queue synchronization method,\n"
> + " ordered or atomic.\nDefault: atomic\n"
> + " Valid only if --mode=eventdev\n\n",
> prgname);
> }
'l2fwd-event' -- --help does not print this help.
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth port setup for eventdev
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth port setup for eventdev pbhagavatula
@ 2019-09-27 13:15 ` Nipun Gupta
2019-09-27 14:45 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Nipun Gupta @ 2019-09-27 13:15 UTC (permalink / raw)
To: pbhagavatula, jerinj, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori
Cc: dev
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Tuesday, September 24, 2019 3:12 PM
> To: jerinj@marvell.com; bruce.richardson@intel.com; Akhil Goyal
> <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> <skori@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth port
> setup for eventdev
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add ethernet port Rx/Tx queue setup for event device which are later
> used for setting up event eth Rx/Tx adapters.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
> examples/l2fwd-event/l2fwd_eventdev.c | 114
> ++++++++++++++++++++++++++
> examples/l2fwd-event/l2fwd_eventdev.h | 1 +
> examples/l2fwd-event/main.c | 17 ++++
> 3 files changed, 132 insertions(+)
>
<snip>
> +
> static void
> eventdev_capability_setup(void)
> {
> @@ -105,6 +215,7 @@ void
> eventdev_resource_setup(void)
> {
> struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
> + uint16_t ethdev_count = rte_eth_dev_count_avail();
Why do we need to use all the Ethernet devices available?
We should use the devices in the portmask instead.
> uint32_t service_id;
> int32_t ret;
>
> @@ -119,6 +230,9 @@ eventdev_resource_setup(void)
> /* Setup eventdev capability callbacks */
> eventdev_capability_setup();
>
> + /* Ethernet device configuration */
> + eth_dev_port_setup(ethdev_count);
> +
> /* Start event device service */
> ret = rte_event_dev_service_id_get(eventdev_rsrc->event_d_id,
> &service_id);
> diff --git a/examples/l2fwd-event/l2fwd_eventdev.h b/examples/l2fwd-
> event/l2fwd_eventdev.h
> index 8b6606b4c..d380faff5 100644
> --- a/examples/l2fwd-event/l2fwd_eventdev.h
> +++ b/examples/l2fwd-event/l2fwd_eventdev.h
> @@ -51,6 +51,7 @@ struct eventdev_resources {
> uint8_t enabled;
> uint8_t nb_args;
> char **args;
> + struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
> };
>
> static inline struct eventdev_resources *
> diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
> index 087e84588..3f72d0579 100644
> --- a/examples/l2fwd-event/main.c
> +++ b/examples/l2fwd-event/main.c
> @@ -619,6 +619,22 @@ main(int argc, char **argv)
>
> /* Configure eventdev parameters if user has requested */
> eventdev_resource_setup();
> + if (eventdev_rsrc->enabled) {
> + /* All settings are done. Now enable eth devices */
> + RTE_ETH_FOREACH_DEV(portid) {
> + /* skip ports that are not enabled */
> + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
> + continue;
> +
> + ret = rte_eth_dev_start(portid);
> + if (ret < 0)
> + rte_exit(EXIT_FAILURE,
> + "rte_eth_dev_start:err=%d,
> port=%u\n",
> + ret, portid);
> + }
> +
> + goto skip_port_config;
> + }
>
> /* Initialize the port/queue configuration of each logical core */
> RTE_ETH_FOREACH_DEV(portid) {
> @@ -750,6 +766,7 @@ main(int argc, char **argv)
> "All available ports are disabled. Please set
> portmask.\n");
> }
>
> +skip_port_config:
> check_all_ports_link_status(l2fwd_enabled_port_mask);
>
> ret = 0;
> --
> 2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-09-27 13:22 ` Nipun Gupta
2019-09-27 14:43 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Nipun Gupta @ 2019-09-27 13:22 UTC (permalink / raw)
To: pbhagavatula, jerinj, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori
Cc: dev
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Tuesday, September 24, 2019 3:12 PM
> To: jerinj@marvell.com; bruce.richardson@intel.com; Akhil Goyal
> <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> <skori@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add eventdev
> queue and port setup
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add event device queue and port setup based on event eth Tx adapter
> capabilities.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> examples/l2fwd-event/l2fwd_eventdev.c | 10 +
> examples/l2fwd-event/l2fwd_eventdev.h | 18 ++
> examples/l2fwd-event/l2fwd_eventdev_generic.c | 179
> +++++++++++++++++-
> .../l2fwd_eventdev_internal_port.c | 173 ++++++++++++++++-
> 4 files changed, 378 insertions(+), 2 deletions(-)
>
<snip>
> diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c
> b/examples/l2fwd-event/l2fwd_eventdev_generic.c
> index e3990f8b0..65166fded 100644
> --- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
> +++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
> @@ -17,8 +17,185 @@
> #include "l2fwd_common.h"
> #include "l2fwd_eventdev.h"
>
> +static uint32_t
> +eventdev_setup_generic(uint16_t ethdev_count)
> +{
> + struct eventdev_resources *eventdev_rsrc = get_eventdev_rsrc();
> + struct rte_event_dev_config event_d_conf = {
> + .nb_events_limit = 4096,
> + .nb_event_queue_flows = 1024,
> + .nb_event_port_dequeue_depth = 128,
> + .nb_event_port_enqueue_depth = 128
> + };
> + struct rte_event_dev_info dev_info;
> + const uint8_t event_d_id = 0; /* Always use first event device only */
> + uint32_t event_queue_cfg = 0;
> + uint16_t num_workers = 0;
> + int ret;
> +
> + /* Event device configurtion */
> + rte_event_dev_info_get(event_d_id, &dev_info);
> + eventdev_rsrc->disable_implicit_release
> = !!(dev_info.event_dev_cap &
> +
> RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
> +
> + if (dev_info.event_dev_cap &
> RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
> + event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
> +
> + /* One queue for each ethdev port + one Tx adapter Single link
> queue. */
> + event_d_conf.nb_event_queues = ethdev_count + 1;
It may not be true that max event queues are always more than ethdev_count.
Please use event_d_conf.nb_event_queues as 1, in case ' ethdev_count + 1' is more than
max event queues. This will also require change in 'event_queue_setup_generic' API where
this parameter is being used.
> + if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
> + event_d_conf.nb_event_queues =
> dev_info.max_event_queues;
> +
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-09-27 13:28 ` Nipun Gupta
2019-09-27 14:35 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Nipun Gupta @ 2019-09-27 13:28 UTC (permalink / raw)
To: pbhagavatula, jerinj, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori
Cc: dev
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Tuesday, September 24, 2019 3:12 PM
> To: jerinj@marvell.com; bruce.richardson@intel.com; Akhil Goyal
> <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> <skori@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev
> main loop
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add event dev main loop based on enabled l2fwd options and eventdev
> capabilities.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
<snip>
> + if (flags & L2FWD_EVENT_TX_DIRECT) {
> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> + while
> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> + port_id,
> + &ev, 1) &&
> + !*done)
> + ;
> + }
In the TX direct mode we can send packets directly to the ethernet device using ethdev
API's. This will save unnecessary indirections and event unfolds within the driver.
> +
> + if (timer_period > 0)
> + __atomic_fetch_add(&eventdev_rsrc->stats[mbuf-
> >port].tx,
> + 1, __ATOMIC_RELAXED);
> + }
> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-27 13:28 ` Nipun Gupta
@ 2019-09-27 14:35 ` Pavan Nikhilesh Bhagavatula
2019-09-30 5:38 ` Nipun Gupta
0 siblings, 1 reply; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-09-27 14:35 UTC (permalink / raw)
To: Nipun Gupta, Jerin Jacob Kollanukkaran, bruce.richardson,
Akhil Goyal, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori
Cc: dev
>>
>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Add event dev main loop based on enabled l2fwd options and
>eventdev
>> capabilities.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>
><snip>
>
>> + if (flags & L2FWD_EVENT_TX_DIRECT) {
>> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
>> + while
>> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
>> + port_id,
>> + &ev, 1)
>&&
>> + !*done)
>> + ;
>> + }
>
>In the TX direct mode we can send packets directly to the ethernet
>device using ethdev
>API's. This will save unnecessary indirections and event unfolds within
>the driver.
How would we guarantee atomicity of access to Tx queues? Between cores as we can only use one Tx queue.
Also, if SCHED_TYPE is ORDERED how would we guarantee flow ordering?
The capability of MT_LOCKFREE and flow ordering is abstracted through ` rte_event_eth_tx_adapter_enqueue `.
@see examples/eventdev_pipeline and app/test-eventdev/test_pipeline_*.
>
>> +
>> + if (timer_period > 0)
>> + __atomic_fetch_add(&eventdev_rsrc-
>>stats[mbuf-
>> >port].tx,
>> + 1, __ATOMIC_RELAXED);
>> + }
>> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-09-27 13:22 ` Nipun Gupta
@ 2019-09-27 14:43 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-09-27 14:43 UTC (permalink / raw)
To: Nipun Gupta, Jerin Jacob Kollanukkaran, bruce.richardson,
Akhil Goyal, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori
Cc: dev
>> Subject: [dpdk-dev] [PATCH v4 05/10] examples/l2fwd-event: add
>eventdev
>> queue and port setup
>>
>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Add event device queue and port setup based on event eth Tx
>adapter
>> capabilities.
>>
>> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>> examples/l2fwd-event/l2fwd_eventdev.c | 10 +
>> examples/l2fwd-event/l2fwd_eventdev.h | 18 ++
>> examples/l2fwd-event/l2fwd_eventdev_generic.c | 179
>> +++++++++++++++++-
>> .../l2fwd_eventdev_internal_port.c | 173 ++++++++++++++++-
>> 4 files changed, 378 insertions(+), 2 deletions(-)
>>
>
><snip>
>
>> diff --git a/examples/l2fwd-event/l2fwd_eventdev_generic.c
>> b/examples/l2fwd-event/l2fwd_eventdev_generic.c
>> index e3990f8b0..65166fded 100644
>> --- a/examples/l2fwd-event/l2fwd_eventdev_generic.c
>> +++ b/examples/l2fwd-event/l2fwd_eventdev_generic.c
>> @@ -17,8 +17,185 @@
>> #include "l2fwd_common.h"
>> #include "l2fwd_eventdev.h"
>>
>> +static uint32_t
>> +eventdev_setup_generic(uint16_t ethdev_count)
>> +{
>> + struct eventdev_resources *eventdev_rsrc =
>get_eventdev_rsrc();
>> + struct rte_event_dev_config event_d_conf = {
>> + .nb_events_limit = 4096,
>> + .nb_event_queue_flows = 1024,
>> + .nb_event_port_dequeue_depth = 128,
>> + .nb_event_port_enqueue_depth = 128
>> + };
>> + struct rte_event_dev_info dev_info;
>> + const uint8_t event_d_id = 0; /* Always use first event device
>only */
>> + uint32_t event_queue_cfg = 0;
>> + uint16_t num_workers = 0;
>> + int ret;
>> +
>> + /* Event device configurtion */
>> + rte_event_dev_info_get(event_d_id, &dev_info);
>> + eventdev_rsrc->disable_implicit_release
>> = !!(dev_info.event_dev_cap &
>> +
>> RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
>> +
>> + if (dev_info.event_dev_cap &
>> RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
>> + event_queue_cfg |=
>RTE_EVENT_QUEUE_CFG_ALL_TYPES;
>> +
>> + /* One queue for each ethdev port + one Tx adapter Single link
>> queue. */
>> + event_d_conf.nb_event_queues = ethdev_count + 1;
>
>It may not be true that max event queues are always more than
>ethdev_count.
>Please use event_d_conf.nb_event_queues as 1, in case '
>ethdev_count + 1' is more than
>max event queues.
See below.
>This will also require change in
>'event_queue_setup_generic' API where
>this parameter is being used.
I will fix this in v5.
>
>> + if (dev_info.max_event_queues <
>event_d_conf.nb_event_queues)
>> + event_d_conf.nb_event_queues =
>> dev_info.max_event_queues;
>> +
The above check would make .nb_event_queues to 1.
>
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth port setup for eventdev
2019-09-27 13:15 ` Nipun Gupta
@ 2019-09-27 14:45 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-09-27 14:45 UTC (permalink / raw)
To: Nipun Gupta, Jerin Jacob Kollanukkaran, bruce.richardson,
Akhil Goyal, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori
Cc: dev
>> Subject: [dpdk-dev] [PATCH v4 04/10] examples/l2fwd-event: add eth
>port
>> setup for eventdev
>>
>> From: Sunil Kumar Kori <skori@marvell.com>
>>
>> Add ethernet port Rx/Tx queue setup for event device which are later
>> used for setting up event eth Rx/Tx adapters.
>>
>> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
>> ---
>> examples/l2fwd-event/l2fwd_eventdev.c | 114
>> ++++++++++++++++++++++++++
>> examples/l2fwd-event/l2fwd_eventdev.h | 1 +
>> examples/l2fwd-event/main.c | 17 ++++
>> 3 files changed, 132 insertions(+)
>>
>
><snip>
>
>> +
>> static void
>> eventdev_capability_setup(void)
>> {
>> @@ -105,6 +215,7 @@ void
>> eventdev_resource_setup(void)
>> {
>> struct eventdev_resources *eventdev_rsrc =
>get_eventdev_rsrc();
>> + uint16_t ethdev_count = rte_eth_dev_count_avail();
>
>Why do we need to use all the Ethernet devices available?
>We should use the devices in the portmask instead.
Will fix in v5.
>
>> uint32_t service_id;
>> int32_t ret;
>>
>> @@ -119,6 +230,9 @@ eventdev_resource_setup(void)
>> /* Setup eventdev capability callbacks */
>> eventdev_capability_setup();
>>
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-27 14:35 ` Pavan Nikhilesh Bhagavatula
@ 2019-09-30 5:38 ` Nipun Gupta
2019-09-30 6:38 ` Jerin Jacob
0 siblings, 1 reply; 107+ messages in thread
From: Nipun Gupta @ 2019-09-30 5:38 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
bruce.richardson, Akhil Goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori
Cc: dev
> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Friday, September 27, 2019 8:05 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; bruce.richardson@intel.com; Akhil Goyal
> <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> <skori@marvell.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> eventdev main loop
>
> >>
> >> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >>
> >> Add event dev main loop based on enabled l2fwd options and
> >eventdev
> >> capabilities.
> >>
> >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> ---
> >
> ><snip>
> >
> >> + if (flags & L2FWD_EVENT_TX_DIRECT) {
> >> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> >> + while
> >> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> >> + port_id,
> >> + &ev, 1)
> >&&
> >> + !*done)
> >> + ;
> >> + }
> >
> >In the TX direct mode we can send packets directly to the ethernet
> >device using ethdev
> >API's. This will save unnecessary indirections and event unfolds within
> >the driver.
>
> How would we guarantee atomicity of access to Tx queues? Between cores
> as we can only use one Tx queue.
> Also, if SCHED_TYPE is ORDERED how would we guarantee flow ordering?
> The capability of MT_LOCKFREE and flow ordering is abstracted through `
> rte_event_eth_tx_adapter_enqueue `.
I understand your objective here. Probably in your case the DIRECT is equivalent
to giving the packet to the scheduler, which will pass on the packet to the destined device.
On NXP platform, DIRECT implies sending the packet directly to the device (eth/crypto),
and scheduler will internally pitch in.
Here we will need another option to send it directly to the device.
We can set up a call to discuss the same, or send patch regarding this to you to incorporate
the same in your series.
>
> @see examples/eventdev_pipeline and app/test-eventdev/test_pipeline_*.
Yes, we are aware of that, They are one way of representing, how to build a complete eventdev pipeline.
They don't work on NXP HW.
We plan to send patches for them to fix them for NXP HW soon.
Regards,
Nipun
>
> >
> >> +
> >> + if (timer_period > 0)
> >> + __atomic_fetch_add(&eventdev_rsrc-
> >>stats[mbuf-
> >> >port].tx,
> >> + 1, __ATOMIC_RELAXED);
> >> + }
> >> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-30 5:38 ` Nipun Gupta
@ 2019-09-30 6:38 ` Jerin Jacob
2019-09-30 7:46 ` Nipun Gupta
0 siblings, 1 reply; 107+ messages in thread
From: Jerin Jacob @ 2019-09-30 6:38 UTC (permalink / raw)
To: Nipun Gupta
Cc: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
bruce.richardson, Akhil Goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, dev
On Mon, Sep 30, 2019 at 11:08 AM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > Sent: Friday, September 27, 2019 8:05 PM
> > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > <jerinj@marvell.com>; bruce.richardson@intel.com; Akhil Goyal
> > <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> > Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> > <skori@marvell.com>
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> > eventdev main loop
> >
> > >>
> > >> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > >>
> > >> Add event dev main loop based on enabled l2fwd options and
> > >eventdev
> > >> capabilities.
> > >>
> > >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > >> ---
> > >
> > ><snip>
> > >
> > >> + if (flags & L2FWD_EVENT_TX_DIRECT) {
> > >> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> > >> + while
> > >> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> > >> + port_id,
> > >> + &ev, 1)
> > >&&
> > >> + !*done)
> > >> + ;
> > >> + }
> > >
> > >In the TX direct mode we can send packets directly to the ethernet
> > >device using ethdev
> > >API's. This will save unnecessary indirections and event unfolds within
> > >the driver.
> >
> > How would we guarantee atomicity of access to Tx queues? Between cores
> > as we can only use one Tx queue.
> > Also, if SCHED_TYPE is ORDERED how would we guarantee flow ordering?
> > The capability of MT_LOCKFREE and flow ordering is abstracted through `
> > rte_event_eth_tx_adapter_enqueue `.
>
> I understand your objective here. Probably in your case the DIRECT is equivalent
> to giving the packet to the scheduler, which will pass on the packet to the destined device.
> On NXP platform, DIRECT implies sending the packet directly to the device (eth/crypto),
> and scheduler will internally pitch in.
> Here we will need another option to send it directly to the device.
> We can set up a call to discuss the same, or send patch regarding this to you to incorporate
> the same in your series.
Yes. Sending the patch will make us understand better.
Currently, We have two different means for abstracting Tx adapter fast
path changes,
a) SINGLE LINK QUEUE
b) rte_event_eth_tx_adapter_enqueue()
Could you please share why any of the above schemes do not work for NXP HW?
If there is no additional functionality in
rte_event_eth_tx_adapter_enqueue(), you could
simply call direct ethdev tx burst function pointer to make
abstraction intact to avoid
one more code flow in the fast path.
If I guess it right since NXP HW supports MT_LOCKFREE and only atomic, due to
that, calling eth_dev_tx_burst will be sufficient. But abstracting
over rte_event_eth_tx_adapter_enqueue()
makes application life easy. You can call the low level DPPA2 Tx function in
rte_event_eth_tx_adapter_enqueue() to avoid any performance impact(We
are doing the same).
>
> >
> > @see examples/eventdev_pipeline and app/test-eventdev/test_pipeline_*.
>
> Yes, we are aware of that, They are one way of representing, how to build a complete eventdev pipeline.
> They don't work on NXP HW.
> We plan to send patches for them to fix them for NXP HW soon.
>
> Regards,
> Nipun
>
> >
> > >
> > >> +
> > >> + if (timer_period > 0)
> > >> + __atomic_fetch_add(&eventdev_rsrc-
> > >>stats[mbuf-
> > >> >port].tx,
> > >> + 1, __ATOMIC_RELAXED);
> > >> + }
> > >> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-30 6:38 ` Jerin Jacob
@ 2019-09-30 7:46 ` Nipun Gupta
2019-09-30 8:09 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Nipun Gupta @ 2019-09-30 7:46 UTC (permalink / raw)
To: Jerin Jacob
Cc: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
bruce.richardson, Akhil Goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, dev
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Monday, September 30, 2019 12:08 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; bruce.richardson@intel.com; Akhil
> Goyal <akhil.goyal@nxp.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>; Radu
> Nicolau <radu.nicolau@intel.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> eventdev main loop
>
> On Mon, Sep 30, 2019 at 11:08 AM Nipun Gupta <nipun.gupta@nxp.com>
> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > > Sent: Friday, September 27, 2019 8:05 PM
> > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > > <jerinj@marvell.com>; bruce.richardson@intel.com; Akhil Goyal
> > > <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> > > Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > > Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> > > <skori@marvell.com>
> > > Cc: dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> > > eventdev main loop
> > >
> > > >>
> > > >> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > >>
> > > >> Add event dev main loop based on enabled l2fwd options and
> > > >eventdev
> > > >> capabilities.
> > > >>
> > > >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > > >> ---
> > > >
> > > ><snip>
> > > >
> > > >> + if (flags & L2FWD_EVENT_TX_DIRECT) {
> > > >> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> > > >> + while
> > > >> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> > > >> + port_id,
> > > >> + &ev, 1)
> > > >&&
> > > >> + !*done)
> > > >> + ;
> > > >> + }
> > > >
> > > >In the TX direct mode we can send packets directly to the ethernet
> > > >device using ethdev
> > > >API's. This will save unnecessary indirections and event unfolds within
> > > >the driver.
> > >
> > > How would we guarantee atomicity of access to Tx queues? Between
> cores
> > > as we can only use one Tx queue.
> > > Also, if SCHED_TYPE is ORDERED how would we guarantee flow ordering?
> > > The capability of MT_LOCKFREE and flow ordering is abstracted through `
> > > rte_event_eth_tx_adapter_enqueue `.
> >
> > I understand your objective here. Probably in your case the DIRECT is
> equivalent
> > to giving the packet to the scheduler, which will pass on the packet to the
> destined device.
> > On NXP platform, DIRECT implies sending the packet directly to the device
> (eth/crypto),
> > and scheduler will internally pitch in.
> > Here we will need another option to send it directly to the device.
> > We can set up a call to discuss the same, or send patch regarding this to you
> to incorporate
> > the same in your series.
>
> Yes. Sending the patch will make us understand better.
>
> Currently, We have two different means for abstracting Tx adapter fast
> path changes,
> a) SINGLE LINK QUEUE
> b) rte_event_eth_tx_adapter_enqueue()
>
> Could you please share why any of the above schemes do not work for NXP
> HW?
> If there is no additional functionality in
> rte_event_eth_tx_adapter_enqueue(), you could
> simply call direct ethdev tx burst function pointer to make
> abstraction intact to avoid
> one more code flow in the fast path.
>
> If I guess it right since NXP HW supports MT_LOCKFREE and only atomic, due
> to
> that, calling eth_dev_tx_burst will be sufficient. But abstracting
> over rte_event_eth_tx_adapter_enqueue()
> makes application life easy. You can call the low level DPPA2 Tx function in
> rte_event_eth_tx_adapter_enqueue() to avoid any performance
> impact(We
> are doing the same).
Yes, that’s correct regarding our H/W capability.
Agree that the application will become complex by adding more code flow,
but calling Tx functions internally may lead to additional CPU cycles.
Give us a couple of days to analyze the performance impact, and as you also say, I too
don't think it would be much. We should be able to manage it in within our driver.
>
>
> >
> > >
> > > @see examples/eventdev_pipeline and app/test-
> eventdev/test_pipeline_*.
> >
> > Yes, we are aware of that, They are one way of representing, how to build
> a complete eventdev pipeline.
> > They don't work on NXP HW.
> > We plan to send patches for them to fix them for NXP HW soon.
> >
> > Regards,
> > Nipun
> >
> > >
> > > >
> > > >> +
> > > >> + if (timer_period > 0)
> > > >> + __atomic_fetch_add(&eventdev_rsrc-
> > > >>stats[mbuf-
> > > >> >port].tx,
> > > >> + 1, __ATOMIC_RELAXED);
> > > >> + }
> > > >> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-30 7:46 ` Nipun Gupta
@ 2019-09-30 8:09 ` Pavan Nikhilesh Bhagavatula
2019-09-30 17:50 ` Nipun Gupta
0 siblings, 1 reply; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-09-30 8:09 UTC (permalink / raw)
To: Nipun Gupta, Jerin Jacob
Cc: Jerin Jacob Kollanukkaran, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori, dev
>-----Original Message-----
>From: Nipun Gupta <nipun.gupta@nxp.com>
>Sent: Monday, September 30, 2019 1:17 PM
>To: Jerin Jacob <jerinjacobk@gmail.com>
>Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
>Jacob Kollanukkaran <jerinj@marvell.com>;
>bruce.richardson@intel.com; Akhil Goyal <akhil.goyal@nxp.com>;
>Marko Kovacevic <marko.kovacevic@intel.com>; Ori Kam
><orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
>Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
><skori@marvell.com>; dev@dpdk.org
>Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
>eventdev main loop
>
>
>
>> -----Original Message-----
>> From: Jerin Jacob <jerinjacobk@gmail.com>
>> Sent: Monday, September 30, 2019 12:08 PM
>> To: Nipun Gupta <nipun.gupta@nxp.com>
>> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
>Jacob
>> Kollanukkaran <jerinj@marvell.com>; bruce.richardson@intel.com;
>Akhil
>> Goyal <akhil.goyal@nxp.com>; Marko Kovacevic
>> <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>;
>Radu
>> Nicolau <radu.nicolau@intel.com>; Tomasz Kantecki
>> <tomasz.kantecki@intel.com>; Sunil Kumar Kori
><skori@marvell.com>;
>> dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
>> eventdev main loop
>>
>> On Mon, Sep 30, 2019 at 11:08 AM Nipun Gupta
><nipun.gupta@nxp.com>
>> wrote:
>> >
>> >
>> >
>> > > -----Original Message-----
>> > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
>> > > Sent: Friday, September 27, 2019 8:05 PM
>> > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob
>Kollanukkaran
>> > > <jerinj@marvell.com>; bruce.richardson@intel.com; Akhil Goyal
>> > > <akhil.goyal@nxp.com>; Marko Kovacevic
><marko.kovacevic@intel.com>;
>> > > Ori Kam <orika@mellanox.com>; Radu Nicolau
><radu.nicolau@intel.com>;
>> > > Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
>> > > <skori@marvell.com>
>> > > Cc: dev@dpdk.org
>> > > Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event:
>add
>> > > eventdev main loop
>> > >
>> > > >>
>> > > >> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> > > >>
>> > > >> Add event dev main loop based on enabled l2fwd options and
>> > > >eventdev
>> > > >> capabilities.
>> > > >>
>> > > >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> > > >> ---
>> > > >
>> > > ><snip>
>> > > >
>> > > >> + if (flags & L2FWD_EVENT_TX_DIRECT) {
>> > > >> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
>> > > >> + while
>> > > >> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
>> > > >> + port_id,
>> > > >> + &ev, 1)
>> > > >&&
>> > > >> + !*done)
>> > > >> + ;
>> > > >> + }
>> > > >
>> > > >In the TX direct mode we can send packets directly to the
>ethernet
>> > > >device using ethdev
>> > > >API's. This will save unnecessary indirections and event unfolds
>within
>> > > >the driver.
>> > >
>> > > How would we guarantee atomicity of access to Tx queues?
>Between
>> cores
>> > > as we can only use one Tx queue.
>> > > Also, if SCHED_TYPE is ORDERED how would we guarantee flow
>ordering?
>> > > The capability of MT_LOCKFREE and flow ordering is abstracted
>through `
>> > > rte_event_eth_tx_adapter_enqueue `.
>> >
>> > I understand your objective here. Probably in your case the DIRECT
>is
>> equivalent
>> > to giving the packet to the scheduler, which will pass on the packet
>to the
>> destined device.
>> > On NXP platform, DIRECT implies sending the packet directly to the
>device
>> (eth/crypto),
>> > and scheduler will internally pitch in.
>> > Here we will need another option to send it directly to the device.
>> > We can set up a call to discuss the same, or send patch regarding this
>to you
>> to incorporate
>> > the same in your series.
>>
>> Yes. Sending the patch will make us understand better.
>>
>> Currently, We have two different means for abstracting Tx adapter
>fast
>> path changes,
>> a) SINGLE LINK QUEUE
>> b) rte_event_eth_tx_adapter_enqueue()
>>
>> Could you please share why any of the above schemes do not work
>for NXP
>> HW?
>> If there is no additional functionality in
>> rte_event_eth_tx_adapter_enqueue(), you could
>> simply call direct ethdev tx burst function pointer to make
>> abstraction intact to avoid
>> one more code flow in the fast path.
>>
>> If I guess it right since NXP HW supports MT_LOCKFREE and only
>atomic, due
>> to
>> that, calling eth_dev_tx_burst will be sufficient. But abstracting
>> over rte_event_eth_tx_adapter_enqueue()
>> makes application life easy. You can call the low level DPPA2 Tx
>function in
>> rte_event_eth_tx_adapter_enqueue() to avoid any performance
>> impact(We
>> are doing the same).
>
>Yes, that’s correct regarding our H/W capability.
>Agree that the application will become complex by adding more code
>flow,
>but calling Tx functions internally may lead to additional CPU cycles.
>Give us a couple of days to analyze the performance impact, and as you
>also say, I too
>don't think it would be much. We should be able to manage it in within
>our driver.
When application calls rte_event_eth_tx_adapter_queue_add() based on
the eth_dev_id the underlying eventdevice can set
set rte_event_eth_tx_adapter_enqueue() to directly call a function which
does the platform specific Tx.
i.e if eth_dev is net/dpaa and event dev is also net/dpaa we need _not_ call
`rte_eth_tx_burst()` in ` rte_event_eth_tx_adapter_enqueue()` it can directly
Invoke the platform specific Rx function which would avoid function pointer
indirection.
>
>>
>>
>> >
>> > >
>> > > @see examples/eventdev_pipeline and app/test-
>> eventdev/test_pipeline_*.
>> >
>> > Yes, we are aware of that, They are one way of representing, how
>to build
>> a complete eventdev pipeline.
>> > They don't work on NXP HW.
>> > We plan to send patches for them to fix them for NXP HW soon.
>> >
>> > Regards,
>> > Nipun
>> >
>> > >
>> > > >
>> > > >> +
>> > > >> + if (timer_period > 0)
>> > > >> + __atomic_fetch_add(&eventdev_rsrc-
>> > > >>stats[mbuf-
>> > > >> >port].tx,
>> > > >> + 1, __ATOMIC_RELAXED);
>> > > >> + }
>> > > >> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-30 8:09 ` Pavan Nikhilesh Bhagavatula
@ 2019-09-30 17:50 ` Nipun Gupta
2019-10-01 5:59 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Nipun Gupta @ 2019-09-30 17:50 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob
Cc: Jerin Jacob Kollanukkaran, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori, dev, Hemant Agrawal
> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Monday, September 30, 2019 1:39 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob <jerinjacobk@gmail.com>
> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> bruce.richardson@intel.com; Akhil Goyal <akhil.goyal@nxp.com>; Marko
> Kovacevic <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>;
> Radu Nicolau <radu.nicolau@intel.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev
> main loop
>
>
>
> >-----Original Message-----
> >From: Nipun Gupta <nipun.gupta@nxp.com>
> >Sent: Monday, September 30, 2019 1:17 PM
> >To: Jerin Jacob <jerinjacobk@gmail.com>
> >Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
> >Jacob Kollanukkaran <jerinj@marvell.com>;
> >bruce.richardson@intel.com; Akhil Goyal <akhil.goyal@nxp.com>;
> >Marko Kovacevic <marko.kovacevic@intel.com>; Ori Kam
> ><orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> >Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> ><skori@marvell.com>; dev@dpdk.org
> >Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> >eventdev main loop
> >
> >
> >
> >> -----Original Message-----
> >> From: Jerin Jacob <jerinjacobk@gmail.com>
> >> Sent: Monday, September 30, 2019 12:08 PM
> >> To: Nipun Gupta <nipun.gupta@nxp.com>
> >> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
> >Jacob
> >> Kollanukkaran <jerinj@marvell.com>; bruce.richardson@intel.com;
> >Akhil
> >> Goyal <akhil.goyal@nxp.com>; Marko Kovacevic
> >> <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>;
> >Radu
> >> Nicolau <radu.nicolau@intel.com>; Tomasz Kantecki
> >> <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> ><skori@marvell.com>;
> >> dev@dpdk.org
> >> Subject: Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
> >> eventdev main loop
> >>
> >> On Mon, Sep 30, 2019 at 11:08 AM Nipun Gupta
> ><nipun.gupta@nxp.com>
> >> wrote:
> >> >
> >> >
> >> >
> >> > > -----Original Message-----
> >> > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> >> > > Sent: Friday, September 27, 2019 8:05 PM
> >> > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob
> >Kollanukkaran
> >> > > <jerinj@marvell.com>; bruce.richardson@intel.com; Akhil Goyal
> >> > > <akhil.goyal@nxp.com>; Marko Kovacevic
> ><marko.kovacevic@intel.com>;
> >> > > Ori Kam <orika@mellanox.com>; Radu Nicolau
> ><radu.nicolau@intel.com>;
> >> > > Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> >> > > <skori@marvell.com>
> >> > > Cc: dev@dpdk.org
> >> > > Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event:
> >add
> >> > > eventdev main loop
> >> > >
> >> > > >>
> >> > > >> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> > > >>
> >> > > >> Add event dev main loop based on enabled l2fwd options and
> >> > > >eventdev
> >> > > >> capabilities.
> >> > > >>
> >> > > >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> > > >> ---
> >> > > >
> >> > > ><snip>
> >> > > >
> >> > > >> + if (flags & L2FWD_EVENT_TX_DIRECT) {
> >> > > >> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> >> > > >> + while
> >> > > >> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> >> > > >> + port_id,
> >> > > >> + &ev, 1)
> >> > > >&&
> >> > > >> + !*done)
> >> > > >> + ;
> >> > > >> + }
> >> > > >
> >> > > >In the TX direct mode we can send packets directly to the
> >ethernet
> >> > > >device using ethdev
> >> > > >API's. This will save unnecessary indirections and event unfolds
> >within
> >> > > >the driver.
> >> > >
> >> > > How would we guarantee atomicity of access to Tx queues?
> >Between
> >> cores
> >> > > as we can only use one Tx queue.
> >> > > Also, if SCHED_TYPE is ORDERED how would we guarantee flow
> >ordering?
> >> > > The capability of MT_LOCKFREE and flow ordering is abstracted
> >through `
> >> > > rte_event_eth_tx_adapter_enqueue `.
> >> >
> >> > I understand your objective here. Probably in your case the DIRECT
> >is
> >> equivalent
> >> > to giving the packet to the scheduler, which will pass on the packet
> >to the
> >> destined device.
> >> > On NXP platform, DIRECT implies sending the packet directly to the
> >device
> >> (eth/crypto),
> >> > and scheduler will internally pitch in.
> >> > Here we will need another option to send it directly to the device.
> >> > We can set up a call to discuss the same, or send patch regarding this
> >to you
> >> to incorporate
> >> > the same in your series.
> >>
> >> Yes. Sending the patch will make us understand better.
> >>
> >> Currently, We have two different means for abstracting Tx adapter
> >fast
> >> path changes,
> >> a) SINGLE LINK QUEUE
> >> b) rte_event_eth_tx_adapter_enqueue()
> >>
> >> Could you please share why any of the above schemes do not work
> >for NXP
> >> HW?
> >> If there is no additional functionality in
> >> rte_event_eth_tx_adapter_enqueue(), you could
> >> simply call direct ethdev tx burst function pointer to make
> >> abstraction intact to avoid
> >> one more code flow in the fast path.
> >>
> >> If I guess it right since NXP HW supports MT_LOCKFREE and only
> >atomic, due
> >> to
> >> that, calling eth_dev_tx_burst will be sufficient. But abstracting
> >> over rte_event_eth_tx_adapter_enqueue()
> >> makes application life easy. You can call the low level DPPA2 Tx
> >function in
> >> rte_event_eth_tx_adapter_enqueue() to avoid any performance
> >> impact(We
> >> are doing the same).
> >
> >Yes, that’s correct regarding our H/W capability.
> >Agree that the application will become complex by adding more code
> >flow,
> >but calling Tx functions internally may lead to additional CPU cycles.
> >Give us a couple of days to analyze the performance impact, and as you
> >also say, I too
> >don't think it would be much. We should be able to manage it in within
> >our driver.
>
> When application calls rte_event_eth_tx_adapter_queue_add() based on
> the eth_dev_id the underlying eventdevice can set
> set rte_event_eth_tx_adapter_enqueue() to directly call a function which
> does the platform specific Tx.
>
> i.e if eth_dev is net/dpaa and event dev is also net/dpaa we need _not_ call
> `rte_eth_tx_burst()` in ` rte_event_eth_tx_adapter_enqueue()` it can directly
> Invoke the platform specific Rx function which would avoid function pointer
> indirection.
I have some performance concern regarding the burst mode; not w.r.t the
function call sequence, but w.r.t the burst functionality.
The API `rte_event_eth_tx_adapter_enqueue()` is called with `nb_rx` events. In case we
are calling the Ethernet API's directly from within the adapter, we will still need to send
all of them separately to the Ethernet device rather than in burst (or scan and separate
the packets internally for ethernet device, queue pair). This separation in the driver is
more complex than in the application, as application is aware of the Eth dev and queues
it is using and thus can easily bifurcate the events.
I suggest to have a flag in the `rte_event_eth_tx_adapter_enqueue()` API to determine
if the application is sending all the packets in a particular API call for a single destination,
so that driver can act smartly and send the burst to Eth Tx function, on the basis of fields
set in the first mbuf.
Seems fine to you guys? I plan to send the patch regarding this soon.
Regards,
Nipun
>
> >
> >>
> >>
> >> >
> >> > >
> >> > > @see examples/eventdev_pipeline and app/test-
> >> eventdev/test_pipeline_*.
> >> >
> >> > Yes, we are aware of that, They are one way of representing, how
> >to build
> >> a complete eventdev pipeline.
> >> > They don't work on NXP HW.
> >> > We plan to send patches for them to fix them for NXP HW soon.
> >> >
> >> > Regards,
> >> > Nipun
> >> >
> >> > >
> >> > > >
> >> > > >> +
> >> > > >> + if (timer_period > 0)
> >> > > >> + __atomic_fetch_add(&eventdev_rsrc-
> >> > > >>stats[mbuf-
> >> > > >> >port].tx,
> >> > > >> + 1, __ATOMIC_RELAXED);
> >> > > >> + }
> >> > > >> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add eventdev main loop
2019-09-30 17:50 ` Nipun Gupta
@ 2019-10-01 5:59 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-10-01 5:59 UTC (permalink / raw)
To: Nipun Gupta, Jerin Jacob
Cc: Jerin Jacob Kollanukkaran, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori, dev, Hemant Agrawal
>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Nipun Gupta
>Sent: Monday, September 30, 2019 11:21 PM
>To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
>Jacob <jerinjacobk@gmail.com>
>Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
>bruce.richardson@intel.com; Akhil Goyal <akhil.goyal@nxp.com>;
>Marko Kovacevic <marko.kovacevic@intel.com>; Ori Kam
><orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
>Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
><skori@marvell.com>; dev@dpdk.org; Hemant Agrawal
><hemant.agrawal@nxp.com>
>Subject: Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
>eventdev main loop
>
>
>
>> -----Original Message-----
>> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
>> Sent: Monday, September 30, 2019 1:39 PM
>> To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob
><jerinjacobk@gmail.com>
>> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
>> bruce.richardson@intel.com; Akhil Goyal <akhil.goyal@nxp.com>;
>Marko
>> Kovacevic <marko.kovacevic@intel.com>; Ori Kam
><orika@mellanox.com>;
>> Radu Nicolau <radu.nicolau@intel.com>; Tomasz Kantecki
>> <tomasz.kantecki@intel.com>; Sunil Kumar Kori
><skori@marvell.com>;
>> dev@dpdk.org
>> Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event: add
>eventdev
>> main loop
>>
>>
>>
>> >-----Original Message-----
>> >From: Nipun Gupta <nipun.gupta@nxp.com>
>> >Sent: Monday, September 30, 2019 1:17 PM
>> >To: Jerin Jacob <jerinjacobk@gmail.com>
>> >Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
>> >Jacob Kollanukkaran <jerinj@marvell.com>;
>> >bruce.richardson@intel.com; Akhil Goyal <akhil.goyal@nxp.com>;
>> >Marko Kovacevic <marko.kovacevic@intel.com>; Ori Kam
>> ><orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
>> >Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
>> ><skori@marvell.com>; dev@dpdk.org
>> >Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event:
>add
>> >eventdev main loop
>> >
>> >
>> >
>> >> -----Original Message-----
>> >> From: Jerin Jacob <jerinjacobk@gmail.com>
>> >> Sent: Monday, September 30, 2019 12:08 PM
>> >> To: Nipun Gupta <nipun.gupta@nxp.com>
>> >> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>;
>Jerin
>> >Jacob
>> >> Kollanukkaran <jerinj@marvell.com>;
>bruce.richardson@intel.com;
>> >Akhil
>> >> Goyal <akhil.goyal@nxp.com>; Marko Kovacevic
>> >> <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>;
>> >Radu
>> >> Nicolau <radu.nicolau@intel.com>; Tomasz Kantecki
>> >> <tomasz.kantecki@intel.com>; Sunil Kumar Kori
>> ><skori@marvell.com>;
>> >> dev@dpdk.org
>> >> Subject: Re: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-event:
>add
>> >> eventdev main loop
>> >>
>> >> On Mon, Sep 30, 2019 at 11:08 AM Nipun Gupta
>> ><nipun.gupta@nxp.com>
>> >> wrote:
>> >> >
>> >> >
>> >> >
>> >> > > -----Original Message-----
>> >> > > From: Pavan Nikhilesh Bhagavatula
><pbhagavatula@marvell.com>
>> >> > > Sent: Friday, September 27, 2019 8:05 PM
>> >> > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob
>> >Kollanukkaran
>> >> > > <jerinj@marvell.com>; bruce.richardson@intel.com; Akhil
>Goyal
>> >> > > <akhil.goyal@nxp.com>; Marko Kovacevic
>> ><marko.kovacevic@intel.com>;
>> >> > > Ori Kam <orika@mellanox.com>; Radu Nicolau
>> ><radu.nicolau@intel.com>;
>> >> > > Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar
>Kori
>> >> > > <skori@marvell.com>
>> >> > > Cc: dev@dpdk.org
>> >> > > Subject: RE: [dpdk-dev] [PATCH v4 08/10] examples/l2fwd-
>event:
>> >add
>> >> > > eventdev main loop
>> >> > >
>> >> > > >>
>> >> > > >> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> >> > > >>
>> >> > > >> Add event dev main loop based on enabled l2fwd options
>and
>> >> > > >eventdev
>> >> > > >> capabilities.
>> >> > > >>
>> >> > > >> Signed-off-by: Pavan Nikhilesh
><pbhagavatula@marvell.com>
>> >> > > >> ---
>> >> > > >
>> >> > > ><snip>
>> >> > > >
>> >> > > >> + if (flags & L2FWD_EVENT_TX_DIRECT) {
>> >> > > >> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
>> >> > > >> + while
>> >> > > >> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
>> >> > > >> + port_id,
>> >> > > >> + &ev, 1)
>> >> > > >&&
>> >> > > >> + !*done)
>> >> > > >> + ;
>> >> > > >> + }
>> >> > > >
>> >> > > >In the TX direct mode we can send packets directly to the
>> >ethernet
>> >> > > >device using ethdev
>> >> > > >API's. This will save unnecessary indirections and event
>unfolds
>> >within
>> >> > > >the driver.
>> >> > >
>> >> > > How would we guarantee atomicity of access to Tx queues?
>> >Between
>> >> cores
>> >> > > as we can only use one Tx queue.
>> >> > > Also, if SCHED_TYPE is ORDERED how would we guarantee flow
>> >ordering?
>> >> > > The capability of MT_LOCKFREE and flow ordering is abstracted
>> >through `
>> >> > > rte_event_eth_tx_adapter_enqueue `.
>> >> >
>> >> > I understand your objective here. Probably in your case the
>DIRECT
>> >is
>> >> equivalent
>> >> > to giving the packet to the scheduler, which will pass on the
>packet
>> >to the
>> >> destined device.
>> >> > On NXP platform, DIRECT implies sending the packet directly to
>the
>> >device
>> >> (eth/crypto),
>> >> > and scheduler will internally pitch in.
>> >> > Here we will need another option to send it directly to the
>device.
>> >> > We can set up a call to discuss the same, or send patch regarding
>this
>> >to you
>> >> to incorporate
>> >> > the same in your series.
>> >>
>> >> Yes. Sending the patch will make us understand better.
>> >>
>> >> Currently, We have two different means for abstracting Tx adapter
>> >fast
>> >> path changes,
>> >> a) SINGLE LINK QUEUE
>> >> b) rte_event_eth_tx_adapter_enqueue()
>> >>
>> >> Could you please share why any of the above schemes do not
>work
>> >for NXP
>> >> HW?
>> >> If there is no additional functionality in
>> >> rte_event_eth_tx_adapter_enqueue(), you could
>> >> simply call direct ethdev tx burst function pointer to make
>> >> abstraction intact to avoid
>> >> one more code flow in the fast path.
>> >>
>> >> If I guess it right since NXP HW supports MT_LOCKFREE and only
>> >atomic, due
>> >> to
>> >> that, calling eth_dev_tx_burst will be sufficient. But abstracting
>> >> over rte_event_eth_tx_adapter_enqueue()
>> >> makes application life easy. You can call the low level DPPA2 Tx
>> >function in
>> >> rte_event_eth_tx_adapter_enqueue() to avoid any performance
>> >> impact(We
>> >> are doing the same).
>> >
>> >Yes, that’s correct regarding our H/W capability.
>> >Agree that the application will become complex by adding more
>code
>> >flow,
>> >but calling Tx functions internally may lead to additional CPU cycles.
>> >Give us a couple of days to analyze the performance impact, and as
>you
>> >also say, I too
>> >don't think it would be much. We should be able to manage it in
>within
>> >our driver.
>>
>> When application calls rte_event_eth_tx_adapter_queue_add()
>based on
>> the eth_dev_id the underlying eventdevice can set
>> set rte_event_eth_tx_adapter_enqueue() to directly call a function
>which
>> does the platform specific Tx.
>>
>> i.e if eth_dev is net/dpaa and event dev is also net/dpaa we need
>_not_ call
>> `rte_eth_tx_burst()` in ` rte_event_eth_tx_adapter_enqueue()` it
>can directly
>> Invoke the platform specific Tx function which would avoid function
>pointer
>> indirection.
>
>I have some performance concern regarding the burst mode; not w.r.t
>the
>function call sequence, but w.r.t the burst functionality.
>
>The API `rte_event_eth_tx_adapter_enqueue()` is called with `nb_rx`
>events. In case we
>are calling the Ethernet API's directly from within the adapter, we will
>still need to send
>all of them separately to the Ethernet device rather than in burst (or
>scan and separate
>the packets internally for ethernet device, queue pair). This separation
>in the driver is
>more complex than in the application, as application is aware of the Eth
>dev and queues
>it is using and thus can easily bifurcate the events.
>
>I suggest to have a flag in the `rte_event_eth_tx_adapter_enqueue()`
>API to determine
>if the application is sending all the packets in a particular API call for a
>single destination,
>so that driver can act smartly and send the burst to Eth Tx function, on
>the basis of fields
>set in the first mbuf.
>
We could have a flag for the above, but the application still needs to segregate
packets based on port_id, as `rte_event_dequeue_burst()` doesn't guarantee
that all the packets arrive from the same Ethernet port/queue.
I think, since the application already sets mbuf->port and calls
`rte_event_eth_tx_adapter_txq_set()`, the segregation should be done in
`rte_event_eth_tx_adapter_enqueue()`: it would be the same logic for every
application and would reduce application complexity.
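For clarity, a condensed sketch of what that common segregation could look like in the
Tx adapter enqueue path (this is a simplification under the assumptions above, not the
actual driver code; retries and freeing of the unsent tail are omitted and the helper
name is made up):

#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Group runs of events that share (mbuf->port, tx queue) and hand each
 * run to rte_eth_tx_burst() as one burst. */
static uint16_t
txa_enqueue_segregated(struct rte_event ev[], uint16_t nb_ev)
{
        uint16_t i = 0, sent = 0;

        while (i < nb_ev) {
                struct rte_mbuf *bufs[64];
                uint16_t port = ev[i].mbuf->port;
                uint16_t txq = rte_event_eth_tx_adapter_txq_get(ev[i].mbuf);
                uint16_t n = 0;

                while (i < nb_ev && n < RTE_DIM(bufs) &&
                       ev[i].mbuf->port == port &&
                       rte_event_eth_tx_adapter_txq_get(ev[i].mbuf) == txq)
                        bufs[n++] = ev[i++].mbuf;

                /* A real implementation would retry or drop the unsent tail. */
                sent += rte_eth_tx_burst(port, txq, bufs, n);
        }

        return sent;
}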
Regards,
Pavan.
>Seems fine to you guys? I plan to send the patch regarding this soon.
>
>Regards,
>Nipun
>
>>
>> >
>> >>
>> >>
>> >> >
>> >> > >
>> >> > > @see examples/eventdev_pipeline and app/test-
>> >> eventdev/test_pipeline_*.
>> >> >
>> >> > Yes, we are aware of that, They are one way of representing,
>how
>> >to build
>> >> a complete eventdev pipeline.
>> >> > They don't work on NXP HW.
>> >> > We plan to send patches for them to fix them for NXP HW soon.
>> >> >
>> >> > Regards,
>> >> > Nipun
>> >> >
>> >> > >
>> >> > > >
>> >> > > >> +
>> >> > > >> + if (timer_period > 0)
>> >> > > >> + __atomic_fetch_add(&eventdev_rsrc-
>> >> > > >>stats[mbuf-
>> >> > > >> >port].tx,
>> >> > > >> + 1, __ATOMIC_RELAXED);
>> >> > > >> + }
>> >> > > >> +}
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-09-24 9:41 ` [dpdk-dev] [PATCH v4 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (9 preceding siblings ...)
2019-09-24 9:42 ` [dpdk-dev] [PATCH v4 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
` (12 more replies)
10 siblings, 13 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds a new application to demonstrate the usage of event
mode. The poll mode is also available to help with the transition.
The following new command line parameters are added:
--mode: Dictates the mode of operation either poll or event.
--eventq_sync: Dictates event synchronization mode either atomic or
ordered.
Based on event device capability the configuration is done as follows:
- A single event device is enabled.
- The number of event ports is equal to the number of worker
cores enabled in the core mask. Additional event ports might
be configured based on Rx/Tx adapter capability.
- The number of event queues is equal to the number of ethernet
ports. If Tx adapter doesn't have internal port capability then
an additional single link event queue is used to enqueue events
to Tx adapter.
- Each event port is linked to all existing event queues.
- Dedicated Rx/Tx adapters for each Ethernet port.
v5 Changes:
- Redo poll mode datapath by removing all the static globals.
- Fix event queue configuration when required queues are not available.
- Fix Rx/Tx adapter creation based on portmask.
- Update release notes.
- Unroll macro used to generate event mode functions.
v4 Changes:
- Fix missing eventdev args parsing.
v3 Changes:
- Remove unwanted change to example/l2fwd.
- Fix checkpatch issue
http://mails.dpdk.org/archives/test-report/2019-September/098053.html
v2 Changes:
- Remove global variables.
- Split patches to make reviews friendlier.
- Split datapath based on eventdev capability.
Pavan Nikhilesh (5):
examples/l2fwd-event: add default poll mode routines
examples/l2fwd-event: add infra for eventdev
examples/l2fwd-event: add service core setup
examples/l2fwd-event: add eventdev main loop
examples/l2fwd-event: add graceful teardown
Sunil Kumar Kori (5):
examples/l2fwd-event: add infra to split eventdev framework
examples/l2fwd-event: add event device setup
examples/l2fwd-event: add eventdev queue and port setup
examples/l2fwd-event: add event Rx/Tx adapter setup
doc: add application usage guide for l2fwd-event
MAINTAINERS | 6 +
doc/guides/rel_notes/release_19_11.rst | 6 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
doc/guides/sample_app_ug/l2_forward_event.rst | 755 ++++++++++++++++++
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 62 ++
examples/l2fwd-event/l2fwd_common.c | 148 ++++
examples/l2fwd-event/l2fwd_common.h | 134 ++++
examples/l2fwd-event/l2fwd_event.c | 455 +++++++++++
examples/l2fwd-event/l2fwd_event.h | 73 ++
examples/l2fwd-event/l2fwd_event_generic.c | 331 ++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 306 +++++++
examples/l2fwd-event/l2fwd_poll.c | 197 +++++
examples/l2fwd-event/l2fwd_poll.h | 25 +
examples/l2fwd-event/main.c | 456 +++++++++++
examples/l2fwd-event/meson.build | 17 +
17 files changed, 2978 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.c
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_event.c
create mode 100644 examples/l2fwd-event/l2fwd_event.h
create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 01/10] examples/l2fwd-event: add default poll mode routines
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-11 14:41 ` Jerin Jacob
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
` (11 subsequent siblings)
12 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Thomas Monjalon,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the default l2fwd poll mode routines similar to examples/l2fwd.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
MAINTAINERS | 5 +
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 59 +++++
examples/l2fwd-event/l2fwd_common.c | 142 +++++++++++
examples/l2fwd-event/l2fwd_common.h | 129 ++++++++++
examples/l2fwd-event/l2fwd_poll.c | 197 +++++++++++++++
examples/l2fwd-event/l2fwd_poll.h | 25 ++
examples/l2fwd-event/main.c | 374 ++++++++++++++++++++++++++++
examples/l2fwd-event/meson.build | 14 ++
9 files changed, 946 insertions(+)
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.c
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_poll.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
diff --git a/MAINTAINERS b/MAINTAINERS
index b3d9aaddd..292ac10c3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1458,6 +1458,11 @@ M: Tomasz Kantecki <tomasz.kantecki@intel.com>
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
+M: Sunil Kumar Kori <skori@marvell.com>
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+F: examples/l2fwd-event/
+T: git://dpdk.org/next/dpdk-next-eventdev
+
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
diff --git a/examples/Makefile b/examples/Makefile
index de11dd487..d18504bd2 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -34,6 +34,7 @@ endif
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += ipv4_multicast
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += kni
DIRS-y += l2fwd
+DIRS-y += l2fwd-event
ifneq ($(PQOS_INSTALL_PATH),)
DIRS-y += l2fwd-cat
endif
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
new file mode 100644
index 000000000..73f02dd3b
--- /dev/null
+++ b/examples/l2fwd-event/Makefile
@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# binary name
+APP = l2fwd-event
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+SRCS-y += l2fwd_poll.c
+SRCS-y += l2fwd_common.c
+
+# Build using pkg-config variables if possible
+ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
+
+all: shared
+.PHONY: shared static
+shared: build/$(APP)-shared
+ ln -sf $(APP)-shared build/$(APP)
+static: build/$(APP)-static
+ ln -sf $(APP)-static build/$(APP)
+
+PKGCONF=pkg-config --define-prefix
+
+PC_FILE := $(shell $(PKGCONF) --path libdpdk)
+CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)
+LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
+LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
+
+build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
+
+build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
+
+build:
+ @mkdir -p $@
+
+.PHONY: clean
+clean:
+ rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
+ test -d build && rmdir -p build || true
+
+else # Build using legacy build system
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, detect a build directory, by looking for a path with a .config
+RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
+endif
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
new file mode 100644
index 000000000..213652d72
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -0,0 +1,142 @@
+#include "l2fwd_common.h"
+
+/* Print out statistics on packets dropped */
+void
+print_stats(struct l2fwd_resources *l2fwd_rsrc)
+{
+ uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+ uint32_t port_id;
+
+ total_packets_dropped = 0;
+ total_packets_tx = 0;
+ total_packets_rx = 0;
+
+ const char clr[] = {27, '[', '2', 'J', '\0' };
+ const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' };
+
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+
+ printf("\nPort statistics ====================================");
+
+ for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {
+ /* skip disabled ports */
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ printf("\nStatistics for port %u ------------------------------"
+ "\nPackets sent: %24"PRIu64
+ "\nPackets received: %20"PRIu64
+ "\nPackets dropped: %21"PRIu64,
+ port_id,
+ l2fwd_rsrc->port_stats[port_id].tx,
+ l2fwd_rsrc->port_stats[port_id].rx,
+ l2fwd_rsrc->port_stats[port_id].dropped);
+
+ total_packets_dropped +=
+ l2fwd_rsrc->port_stats[port_id].dropped;
+ total_packets_tx += l2fwd_rsrc->port_stats[port_id].tx;
+ total_packets_rx += l2fwd_rsrc->port_stats[port_id].rx;
+ }
+ printf("\nAggregate statistics ==============================="
+ "\nTotal packets sent: %18"PRIu64
+ "\nTotal packets received: %14"PRIu64
+ "\nTotal packets dropped: %15"PRIu64,
+ total_packets_tx,
+ total_packets_rx,
+ total_packets_dropped);
+ printf("\n====================================================\n");
+}
+
+int
+l2fwd_event_init_ports(struct l2fwd_resources *l2fwd_rsrc)
+{
+ uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+ struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+ };
+ uint16_t nb_ports_available = 0;
+ uint16_t port_id;
+ int ret;
+
+ /* Initialise each port */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_eth_conf local_port_conf = port_conf;
+ struct rte_eth_dev_info dev_info;
+ struct rte_eth_rxconf rxq_conf;
+ struct rte_eth_txconf txq_conf;
+
+ /* skip ports that are not enabled */
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0) {
+ printf("Skipping disabled port %u\n", port_id);
+ continue;
+ }
+ nb_ports_available++;
+
+ /* init port */
+ printf("Initializing port %u... ", port_id);
+ fflush(stdout);
+ rte_eth_dev_info_get(port_id, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot configure device: err=%d, port=%u\n",
+ ret, port_id);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot adjust number of descriptors: err=%d, port=%u\n",
+ ret, port_id);
+
+ rte_eth_macaddr_get(port_id, &l2fwd_rsrc->eth_addr[port_id]);
+
+ /* init one RX queue */
+ fflush(stdout);
+ rxq_conf = dev_info.default_rxconf;
+ rxq_conf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
+ rte_eth_dev_socket_id(port_id),
+ &rxq_conf,
+ l2fwd_rsrc->pktmbuf_pool);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_rx_queue_setup:err=%d, port=%u\n",
+ ret, port_id);
+
+ /* init one TX queue on each port */
+ fflush(stdout);
+ txq_conf = dev_info.default_txconf;
+ txq_conf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
+ rte_eth_dev_socket_id(port_id),
+ &txq_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, port_id);
+
+ rte_eth_promiscuous_enable(port_id);
+
+ printf("Port %u,MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+ port_id,
+ l2fwd_rsrc->eth_addr[port_id].addr_bytes[0],
+ l2fwd_rsrc->eth_addr[port_id].addr_bytes[1],
+ l2fwd_rsrc->eth_addr[port_id].addr_bytes[2],
+ l2fwd_rsrc->eth_addr[port_id].addr_bytes[3],
+ l2fwd_rsrc->eth_addr[port_id].addr_bytes[4],
+ l2fwd_rsrc->eth_addr[port_id].addr_bytes[5]);
+ }
+
+ return nb_ports_available;
+}
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
new file mode 100644
index 000000000..7b5958c7d
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_COMMON_H__
+#define __L2FWD_COMMON_H__
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+#include <signal.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#define MAX_PKT_BURST 32
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+#define MEMPOOL_CACHE_SIZE 256
+
+#define DEFAULT_TIMER_PERIOD 10 /* default period is 10 seconds */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+ uint64_t dropped;
+ uint64_t tx;
+ uint64_t rx;
+} __rte_cache_aligned;
+
+struct l2fwd_resources {
+ volatile uint8_t force_quit;
+ uint8_t mac_updating;
+ uint8_t rx_queue_per_lcore;
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint32_t enabled_port_mask;
+ uint64_t timer_period;
+ struct rte_mempool *pktmbuf_pool;
+ uint32_t dst_ports[RTE_MAX_ETHPORTS];
+ struct rte_ether_addr eth_addr[RTE_MAX_ETHPORTS];
+ struct l2fwd_port_statistics port_stats[RTE_MAX_ETHPORTS];
+ void *poll_rsrc;
+} __rte_cache_aligned;
+
+static __rte_always_inline void
+l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port_id,
+ struct rte_ether_addr *addr)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = &eth->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port_id << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(addr, &eth->s_addr);
+}
+
+static __rte_always_inline struct l2fwd_resources *
+l2fwd_get_rsrc(void)
+{
+ static const char name[RTE_MEMZONE_NAMESIZE] = "l2fwd_rsrc";
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+ if (mz != NULL)
+ return mz->addr;
+
+ mz = rte_memzone_reserve(name, sizeof(struct l2fwd_resources), 0, 0);
+ if (mz != NULL) {
+ struct l2fwd_resources *l2fwd_rsrc = mz->addr;
+
+ memset(l2fwd_rsrc, 0, sizeof(struct l2fwd_resources));
+ l2fwd_rsrc->mac_updating = true;
+ l2fwd_rsrc->rx_queue_per_lcore = 1;
+ l2fwd_rsrc->timer_period = 10 * rte_get_timer_hz();
+
+ return mz->addr;
+ }
+
+ rte_exit(EXIT_FAILURE,
+ "Unable to allocate memory for l2fwd resources\n");
+
+ return NULL;
+}
+
+void print_stats(struct l2fwd_resources *l2fwd_rsrc);
+int l2fwd_event_init_ports(struct l2fwd_resources *l2fwd_rsrc);
+
+#endif /* __L2FWD_COMMON_H__ */
diff --git a/examples/l2fwd-event/l2fwd_poll.c b/examples/l2fwd-event/l2fwd_poll.c
new file mode 100644
index 000000000..a0faadc52
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_poll.c
@@ -0,0 +1,197 @@
+
+#include "l2fwd_poll.h"
+
+/* main poll mode processing loop */
+static void
+l2fwd_poll_main_loop(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc = l2fwd_rsrc->poll_rsrc;
+ uint64_t prev_tsc, diff_tsc, cur_tsc, timer_tsc, drain_tsc;
+ uint64_t timer_period = l2fwd_rsrc->timer_period;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_eth_dev_tx_buffer *buffer;
+ struct lcore_queue_conf *qconf;
+ uint32_t i, j, port_id, nb_rx;
+ struct rte_mbuf *m;
+ uint32_t lcore_id;
+ uint16_t dst_port;
+ int32_t sent;
+
+ drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S *
+ BURST_TX_DRAIN_US;
+ prev_tsc = 0;
+ timer_tsc = 0;
+
+ lcore_id = rte_lcore_id();
+ qconf = &poll_rsrc->lcore_queue_conf[lcore_id];
+
+ if (qconf->n_rx_port == 0) {
+ printf("lcore %u has nothing to do\n", lcore_id);
+ return;
+ }
+
+ printf("entering main loop on lcore %u\n", lcore_id);
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ port_id = qconf->rx_port_list[i];
+ printf(" -- lcoreid=%u port_id=%u\n", lcore_id, port_id);
+
+ }
+
+ while (!l2fwd_rsrc->force_quit) {
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ port_id =
+ l2fwd_rsrc->dst_ports[
+ qconf->rx_port_list[i]];
+ buffer = poll_rsrc->tx_buffer[port_id];
+ sent = rte_eth_tx_buffer_flush(port_id, 0,
+ buffer);
+ if (sent)
+ l2fwd_rsrc->port_stats[port_id].tx +=
+ sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id ==
+ rte_get_master_lcore()) {
+ print_stats(l2fwd_rsrc);
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+ /*
+ * Read packet from RX queues
+ */
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ port_id = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst(port_id, 0,
+ pkts_burst, MAX_PKT_BURST);
+
+ l2fwd_rsrc->port_stats[port_id].rx += nb_rx;
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ dst_port = l2fwd_rsrc->dst_ports[port_id];
+
+ if (l2fwd_rsrc->mac_updating)
+ l2fwd_mac_updating(m, dst_port,
+ &l2fwd_rsrc->eth_addr[
+ dst_port]);
+
+ buffer = poll_rsrc->tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer,
+ m);
+ if (sent)
+ l2fwd_rsrc->port_stats[dst_port].tx
+ += sent;
+
+ }
+ }
+ }
+}
+
+static void
+l2fwd_poll_lcore_config(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc = l2fwd_rsrc->poll_rsrc;
+ struct lcore_queue_conf *qconf = NULL;
+ uint32_t rx_lcore_id = 0;
+ uint16_t port_id;
+
+ /* Initialize the port/queue configuration of each logical core */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+
+ /* get the lcore_id for this port */
+ while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+ poll_rsrc->lcore_queue_conf[rx_lcore_id].n_rx_port ==
+ l2fwd_rsrc->rx_queue_per_lcore) {
+ rx_lcore_id++;
+ if (rx_lcore_id >= RTE_MAX_LCORE)
+ rte_exit(EXIT_FAILURE, "Not enough cores\n");
+ }
+
+ if (qconf != &poll_rsrc->lcore_queue_conf[rx_lcore_id]) {
+ /* Assigned a new logical core in the loop above. */
+ qconf = &poll_rsrc->lcore_queue_conf[rx_lcore_id];
+ }
+
+ qconf->rx_port_list[qconf->n_rx_port] = port_id;
+ qconf->n_rx_port++;
+ printf("Lcore %u: RX port %u\n", rx_lcore_id, port_id);
+ }
+}
+
+static void
+l2fwd_poll_init_tx_buffers(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc = l2fwd_rsrc->poll_rsrc;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* Initialize TX buffers */
+ poll_rsrc->tx_buffer[port_id] = rte_zmalloc_socket("tx_buffer",
+ RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
+ rte_eth_dev_socket_id(port_id));
+ if (poll_rsrc->tx_buffer[port_id] == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Cannot allocate buffer for tx on port %u\n",
+ port_id);
+
+ rte_eth_tx_buffer_init(poll_rsrc->tx_buffer[port_id],
+ MAX_PKT_BURST);
+
+ ret = rte_eth_tx_buffer_set_err_callback(
+ poll_rsrc->tx_buffer[port_id],
+ rte_eth_tx_buffer_count_callback,
+ &l2fwd_rsrc->port_stats[port_id].dropped);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Cannot set error callback for tx buffer on port %u\n",
+ port_id);
+ }
+}
+
+void
+l2fwd_poll_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc;
+
+ poll_rsrc = rte_zmalloc("l2fwd_poll_rsrc",
+ sizeof(struct l2fwd_poll_resources), 0);
+ if (poll_rsrc == NULL)
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate resources for l2fwd poll mode\n");
+
+ l2fwd_rsrc->poll_rsrc = poll_rsrc;
+ l2fwd_poll_lcore_config(l2fwd_rsrc);
+ l2fwd_poll_init_tx_buffers(l2fwd_rsrc);
+
+ poll_rsrc->poll_main_loop = l2fwd_poll_main_loop;
+}
diff --git a/examples/l2fwd-event/l2fwd_poll.h b/examples/l2fwd-event/l2fwd_poll.h
new file mode 100644
index 000000000..18b30870b
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_poll.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_POLL_H__
+#define __L2FWD_POLL_H__
+
+#include "l2fwd_common.h"
+
+typedef void (*poll_main_loop_cb)(struct l2fwd_resources *l2fwd_rsrc);
+
+struct lcore_queue_conf {
+ uint32_t rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ uint32_t n_rx_port;
+} __rte_cache_aligned;
+
+struct l2fwd_poll_resources {
+ poll_main_loop_cb poll_main_loop;
+ struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+};
+
+void l2fwd_poll_resource_setup(struct l2fwd_resources *l2fwd_rsrc);
+
+#endif
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
new file mode 100644
index 000000000..887a979d5
--- /dev/null
+++ b/examples/l2fwd-event/main.c
@@ -0,0 +1,374 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "l2fwd_poll.h"
+
+/* display usage */
+static void
+l2fwd_event_usage(const char *prgname)
+{
+ printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
+ " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+ " -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+ " -T PERIOD: statistics will be refreshed each PERIOD seconds "
+ " (0 to disable, 10 default, 86400 maximum)\n"
+ " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
+ " When enabled:\n"
+ " - The source MAC address is replaced by the TX port MAC address\n"
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ prgname);
+}
+
+static int
+l2fwd_event_parse_portmask(const char *portmask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(portmask, &end, 16);
+ if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (pm == 0)
+ return -1;
+
+ return pm;
+}
+
+static unsigned int
+l2fwd_event_parse_nqueue(const char *q_arg)
+{
+ char *end = NULL;
+ unsigned long n;
+
+ /* parse decimal string */
+ n = strtoul(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+ if (n == 0)
+ return 0;
+ if (n >= MAX_RX_QUEUE_PER_LCORE)
+ return 0;
+
+ return n;
+}
+
+static int
+l2fwd_event_parse_timer_period(const char *q_arg)
+{
+ char *end = NULL;
+ int n;
+
+ /* parse number string */
+ n = strtol(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+ if (n >= MAX_TIMER_PERIOD)
+ return -1;
+
+ return n;
+}
+
+static const char short_options[] =
+ "p:" /* portmask */
+ "q:" /* number of queues */
+ "T:" /* timer period */
+ ;
+
+#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
+#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+
+enum {
+ /* long options mapped to a short option */
+
+ /* first long only option value must be >= 256, so that we won't
+ * conflict with short options
+ */
+ CMD_LINE_OPT_MIN_NUM = 256,
+};
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_event_parse_args(int argc, char **argv,
+ struct l2fwd_resources *l2fwd_rsrc)
+{
+ int mac_updating = 1;
+ struct option lgopts[] = {
+ { CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
+ { CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ {NULL, 0, 0, 0}
+ };
+ int opt, ret, timer_secs;
+ char *prgname = argv[0];
+ char **argvopt;
+ int option_index;
+
+ argvopt = argv;
+ while ((opt = getopt_long(argc, argvopt, short_options,
+ lgopts, &option_index)) != EOF) {
+
+ switch (opt) {
+ /* portmask */
+ case 'p':
+ l2fwd_rsrc->enabled_port_mask =
+ l2fwd_event_parse_portmask(optarg);
+ if (l2fwd_rsrc->enabled_port_mask == 0) {
+ printf("invalid portmask\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* nqueue */
+ case 'q':
+ l2fwd_rsrc->rx_queue_per_lcore =
+ l2fwd_event_parse_nqueue(optarg);
+ if (l2fwd_rsrc->rx_queue_per_lcore == 0) {
+ printf("invalid queue number\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* timer period */
+ case 'T':
+ timer_secs = l2fwd_event_parse_timer_period(optarg);
+ if (timer_secs < 0) {
+ printf("invalid timer period\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ l2fwd_rsrc->timer_period = timer_secs;
+ /* convert to number of cycles */
+ l2fwd_rsrc->timer_period *= rte_get_timer_hz();
+ break;
+
+ /* long options */
+ case 0:
+ break;
+
+ default:
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ }
+
+ l2fwd_rsrc->mac_updating = mac_updating;
+
+ if (optind >= 0)
+ argv[optind-1] = prgname;
+
+ ret = optind-1;
+ optind = 1; /* reset getopt lib */
+ return ret;
+}
+
+static int
+l2fwd_launch_one_lcore(void *args)
+{
+ struct l2fwd_resources *l2fwd_rsrc = args;
+ struct l2fwd_poll_resources *poll_rsrc = l2fwd_rsrc->poll_rsrc;
+
+ poll_rsrc->poll_main_loop(l2fwd_rsrc);
+
+ return 0;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(struct l2fwd_resources *l2fwd_rsrc,
+ uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint16_t port_id;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status...");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ if (l2fwd_rsrc->force_quit)
+ return;
+ all_ports_up = 1;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if (l2fwd_rsrc->force_quit)
+ return;
+ if ((port_mask & (1 << port_id)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(port_id, &link);
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status)
+ printf(
+ "Port%d Link Up. Speed %u Mbps - %s\n",
+ port_id, link.link_speed,
+ (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ ("full-duplex") : ("half-duplex\n"));
+ else
+ printf("Port %d Link Down\n", port_id);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+ print_flag = 1;
+ printf("done\n");
+ }
+ }
+}
+
+static void
+signal_handler(int signum)
+{
+ struct l2fwd_resources *l2fwd_rsrc = l2fwd_get_rsrc();
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ l2fwd_rsrc->force_quit = true;
+ }
+}
+
+int
+main(int argc, char **argv)
+{
+ struct l2fwd_resources *l2fwd_rsrc;
+ uint16_t nb_ports_available = 0;
+ uint32_t nb_ports_in_mask = 0;
+ uint16_t port_id, last_port;
+ uint32_t nb_mbufs;
+ uint16_t nb_ports;
+ int ret;
+
+ /* init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+ argc -= ret;
+ argv += ret;
+
+ l2fwd_rsrc = l2fwd_get_rsrc();
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+
+ /* parse application arguments (after the EAL ones) */
+ ret = l2fwd_event_parse_args(argc, argv, l2fwd_rsrc);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+
+ printf("MAC updating %s\n", l2fwd_rsrc->mac_updating ? "enabled" :
+ "disabled");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports == 0)
+ rte_exit(EXIT_FAILURE, "No Ethernet ports - bye\n");
+
+ /* check port mask to possible port mask */
+ if (l2fwd_rsrc->enabled_port_mask & ~((1 << nb_ports) - 1))
+ rte_exit(EXIT_FAILURE, "Invalid portmask; possible (0x%x)\n",
+ (1 << nb_ports) - 1);
+
+ /* reset l2fwd_dst_ports */
+ for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++)
+ l2fwd_rsrc->dst_ports[port_id] = 0;
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_rsrc->dst_ports[port_id] = last_port;
+ l2fwd_rsrc->dst_ports[last_port] = port_id;
+ } else {
+ last_port = port_id;
+ }
+
+ nb_ports_in_mask++;
+ }
+ if (nb_ports_in_mask % 2) {
+ printf("Notice: odd number of ports in portmask.\n");
+ l2fwd_rsrc->dst_ports[last_port] = last_port;
+ }
+
+ nb_mbufs = RTE_MAX(nb_ports * (RTE_TEST_RX_DESC_DEFAULT +
+ RTE_TEST_TX_DESC_DEFAULT +
+ MAX_PKT_BURST + rte_lcore_count() *
+ MEMPOOL_CACHE_SIZE), 8192U);
+
+ /* create the mbuf pool */
+ l2fwd_rsrc->pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool",
+ nb_mbufs, MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+ if (l2fwd_rsrc->pktmbuf_pool == NULL)
+ rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");
+
+ nb_ports_available = l2fwd_event_init_ports(l2fwd_rsrc);
+ if (!nb_ports_available) {
+ rte_exit(EXIT_FAILURE,
+ "All available ports are disabled. Please set portmask.\n");
+ }
+
+ l2fwd_poll_resource_setup(l2fwd_rsrc);
+
+ /* initialize port stats */
+ memset(&l2fwd_rsrc->port_stats, 0,
+ sizeof(struct l2fwd_port_statistics));
+
+ /* All settings are done. Now enable eth devices */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((l2fwd_rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "rte_eth_dev_start:err=%d, port=%u\n", ret,
+ port_id);
+ }
+
+ check_all_ports_link_status(l2fwd_rsrc, l2fwd_rsrc->enabled_port_mask);
+
+ /* launch per-lcore init on every lcore */
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, l2fwd_rsrc,
+ CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ printf("Closing port %d...", port_id);
+ rte_eth_dev_stop(port_id);
+ rte_eth_dev_close(port_id);
+ printf(" Done\n");
+ }
+ printf("Bye...\n");
+
+ return 0;
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
new file mode 100644
index 000000000..f482e1817
--- /dev/null
+++ b/examples/l2fwd-event/meson.build
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# meson file, for building this example as part of a main DPDK build.
+#
+# To build this example as a standalone application with an already-installed
+# DPDK instance, use 'make'
+
+sources = files(
+ 'main.c',
+ 'l2fwd_poll.c',
+ 'l2fwd_common.c',
+)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 02/10] examples/l2fwd-event: add infra for eventdev
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-04 12:30 ` Nipun Gupta
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
` (10 subsequent siblings)
12 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add infra to select the event device as the packet processing mode through
command line arguments. Also, allow the user to select the schedule type,
either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
Usage:
`--mode="eventdev"` or `--mode="poll"`
`--eventq-sched="ordered"` or `--eventq-sched="atomic"`
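An illustrative invocation (the EAL core list, channel count and port mask are
placeholders for the local setup; a software eventdev would additionally need an
event vdev such as --vdev=event_sw0 plus a service core):

    ./build/l2fwd-event -l 0-3 -n 4 -- -p 0x3 -q 1 --mode=eventdev --eventq-sched=atomic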
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 1 +
examples/l2fwd-event/l2fwd_common.h | 3 ++
examples/l2fwd-event/l2fwd_event.c | 34 ++++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 21 ++++++++++++
examples/l2fwd-event/main.c | 50 +++++++++++++++++++++++++++--
examples/l2fwd-event/meson.build | 1 +
6 files changed, 108 insertions(+), 2 deletions(-)
create mode 100644 examples/l2fwd-event/l2fwd_event.c
create mode 100644 examples/l2fwd-event/l2fwd_event.h
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index 73f02dd3b..08ba1835d 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -8,6 +8,7 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
SRCS-y += l2fwd_poll.c
+SRCS-y += l2fwd_event.c
SRCS-y += l2fwd_common.c
# Build using pkg-config variables if possible
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 7b5958c7d..cdafa52c7 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -65,6 +65,8 @@ struct l2fwd_port_statistics {
struct l2fwd_resources {
volatile uint8_t force_quit;
+ uint8_t event_mode;
+ uint8_t sched_type;
uint8_t mac_updating;
uint8_t rx_queue_per_lcore;
uint16_t nb_rxd;
@@ -75,6 +77,7 @@ struct l2fwd_resources {
uint32_t dst_ports[RTE_MAX_ETHPORTS];
struct rte_ether_addr eth_addr[RTE_MAX_ETHPORTS];
struct l2fwd_port_statistics port_stats[RTE_MAX_ETHPORTS];
+ void *event_rsrc;
void *poll_rsrc;
} __rte_cache_aligned;
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
new file mode 100644
index 000000000..621ff63f0
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc;
+
+ if (!rte_event_dev_count())
+ rte_exit(EXIT_FAILURE, "No Eventdev found\n");
+
+ event_rsrc = rte_zmalloc("l2fwd_event",
+ sizeof(struct l2fwd_event_resources), 0);
+ if (event_rsrc == NULL)
+ rte_exit(EXIT_FAILURE, "failed to allocate memory\n");
+
+ l2fwd_rsrc->event_rsrc = event_rsrc;
+}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
new file mode 100644
index 000000000..8ac2dc266
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_EVENT_H__
+#define __L2FWD_EVENT_H__
+
+#include <rte_common.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+struct l2fwd_event_resources {
+};
+
+void l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc);
+
+#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 887a979d5..01b1d531d 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -2,6 +2,7 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include "l2fwd_event.h"
#include "l2fwd_poll.h"
/* display usage */
@@ -16,7 +17,12 @@ l2fwd_event_usage(const char *prgname)
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
" When enabled:\n"
" - The source MAC address is replaced by the TX port MAC address\n"
- " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n"
+ " --mode: Packet transfer mode for I/O, poll or eventdev\n"
+ " Default mode = eventdev\n"
+ " --eventq-sched: Event queue schedule type, ordered or atomic.\n"
+ " Default: atomic\n"
+ " Valid only if --mode=eventdev\n\n",
prgname);
}
@@ -71,6 +77,26 @@ l2fwd_event_parse_timer_period(const char *q_arg)
return n;
}
+static void
+l2fwd_event_parse_mode(const char *optarg,
+ struct l2fwd_resources *l2fwd_rsrc)
+{
+ if (!strncmp(optarg, "poll", 4))
+ l2fwd_rsrc->event_mode = false;
+ else if (!strncmp(optarg, "eventdev", 8))
+ l2fwd_rsrc->event_mode = true;
+}
+
+static void
+l2fwd_event_parse_eventq_sched(const char *optarg,
+ struct l2fwd_resources *l2fwd_rsrc)
+{
+ if (!strncmp(optarg, "ordered", 7))
+ l2fwd_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
+ else if (!strncmp(optarg, "atomic", 6))
+ l2fwd_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+}
+
static const char short_options[] =
"p:" /* portmask */
"q:" /* number of queues */
@@ -79,6 +105,8 @@ static const char short_options[] =
#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SCHED "eventq-sched"
enum {
/* long options mapped to a short option */
@@ -87,6 +115,8 @@ enum {
* conflict with short options
*/
CMD_LINE_OPT_MIN_NUM = 256,
+ CMD_LINE_OPT_MODE_NUM,
+ CMD_LINE_OPT_EVENTQ_SCHED_NUM,
};
/* Parse the argument given in the command line of the application */
@@ -98,6 +128,10 @@ l2fwd_event_parse_args(int argc, char **argv,
struct option lgopts[] = {
{ CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
{ CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ { CMD_LINE_OPT_MODE, required_argument, NULL,
+ CMD_LINE_OPT_MODE_NUM},
+ { CMD_LINE_OPT_EVENTQ_SCHED, required_argument, NULL,
+ CMD_LINE_OPT_EVENTQ_SCHED_NUM},
{NULL, 0, 0, 0}
};
int opt, ret, timer_secs;
@@ -145,6 +179,14 @@ l2fwd_event_parse_args(int argc, char **argv,
l2fwd_rsrc->timer_period *= rte_get_timer_hz();
break;
+ case CMD_LINE_OPT_MODE_NUM:
+ l2fwd_event_parse_mode(optarg, l2fwd_rsrc);
+ break;
+
+ case CMD_LINE_OPT_EVENTQ_SCHED_NUM:
+ l2fwd_event_parse_eventq_sched(optarg, l2fwd_rsrc);
+ break;
+
/* long options */
case 0:
break;
@@ -332,7 +374,11 @@ main(int argc, char **argv)
"All available ports are disabled. Please set portmask.\n");
}
- l2fwd_poll_resource_setup(l2fwd_rsrc);
+ /* Configure eventdev parameters if required */
+ if (l2fwd_rsrc->event_mode)
+ l2fwd_event_resource_setup(l2fwd_rsrc);
+ else
+ l2fwd_poll_resource_setup(l2fwd_rsrc);
/* initialize port stats */
memset(&l2fwd_rsrc->port_stats, 0,
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index f482e1817..84ee1af84 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -11,4 +11,5 @@ sources = files(
'main.c',
'l2fwd_poll.c',
'l2fwd_common.c',
+ 'l2fwd_event.c',
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 03/10] examples/l2fwd-event: add infra to split eventdev framework
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 04/10] examples/l2fwd-event: add event device setup pbhagavatula
` (9 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add infra to split eventdev framework based on event Tx adapter
capability.
If the event Tx adapter has internal port capability then we use
`rte_event_eth_tx_adapter_enqueue` to transmit packets; otherwise
we use a SINGLE_LINK event queue to enqueue packets to a service
core which is responsible for transmitting them.
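For context, a condensed sketch of the two transmit paths this split enables (a
simplification of the later main-loop patches in this series; the helper and variable
names here are placeholders):

#include <rte_ethdev.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

static inline void
worker_tx(uint8_t event_d_id, uint8_t event_p_id, struct rte_event *ev,
          uint8_t has_internal_port, uint8_t tx_q_id)
{
        if (has_internal_port) {
                /* Tx adapter has an internal port: transmit straight from
                 * the worker through the adapter enqueue. */
                rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);
                rte_event_eth_tx_adapter_enqueue(event_d_id, event_p_id,
                                                 ev, 1);
        } else {
                /* Generic path: forward the event to the SINGLE_LINK Tx
                 * queue; the Tx adapter service core dequeues it and calls
                 * rte_eth_tx_burst(). */
                ev->queue_id = tx_q_id;
                ev->op = RTE_EVENT_OP_FORWARD;
                rte_event_enqueue_burst(event_d_id, event_p_id, ev, 1);
        }
}

The capability check added below in l2fwd_event_capability_setup() decides which of
these two paths the rest of the series wires in.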
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 2 ++
examples/l2fwd-event/l2fwd_event.c | 27 +++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 7 +++++
examples/l2fwd-event/l2fwd_event_generic.c | 23 ++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 23 ++++++++++++++++
examples/l2fwd-event/meson.build | 2 ++
6 files changed, 84 insertions(+)
create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index 08ba1835d..6f4176882 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -10,6 +10,8 @@ SRCS-y := main.c
SRCS-y += l2fwd_poll.c
SRCS-y += l2fwd_event.c
SRCS-y += l2fwd_common.c
+SRCS-y += l2fwd_event_generic.c
+SRCS-y += l2fwd_event_internal_port.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 621ff63f0..1fb4dbeb6 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,30 @@
#include "l2fwd_event.h"
+static void
+l2fwd_event_capability_setup(struct l2fwd_event_resources *event_rsrc)
+{
+ uint32_t caps = 0;
+ uint16_t i;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(i) {
+ ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Invalid capability for Tx adptr port %d\n",
+ i);
+
+ event_rsrc->tx_mode_q |= !(caps &
+ RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+ }
+
+ if (event_rsrc->tx_mode_q)
+ l2fwd_event_set_generic_ops(&event_rsrc->ops);
+ else
+ l2fwd_event_set_internal_port_ops(&event_rsrc->ops);
+}
+
void
l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
{
@@ -31,4 +55,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
rte_exit(EXIT_FAILURE, "failed to allocate memory\n");
l2fwd_rsrc->event_rsrc = event_rsrc;
+
+ /* Setup eventdev capability callbacks */
+ l2fwd_event_capability_setup(event_rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 8ac2dc266..b9d089c2f 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -13,9 +13,16 @@
#include "l2fwd_common.h"
+struct event_setup_ops {
+};
+
struct l2fwd_event_resources {
+ uint8_t tx_mode_q;
+ struct event_setup_ops ops;
};
void l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc);
+void l2fwd_event_set_generic_ops(struct event_setup_ops *ops);
+void l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops);
#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
new file mode 100644
index 000000000..9afade7d2
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
new file mode 100644
index 000000000..ce95b8e6d
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index 84ee1af84..69e90ab2d 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -12,4 +12,6 @@ sources = files(
'l2fwd_poll.c',
'l2fwd_common.c',
'l2fwd_event.c',
+ 'l2fwd_event_internal_port.c',
+ 'l2fwd_event_generic.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 04/10] examples/l2fwd-event: add event device setup
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (2 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
` (8 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event device setup based on event eth Tx adapter
capabilities.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
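A note for readers: the choice between the generic and the internal-port setup
paths is driven by the event eth Tx adapter capability flag. A minimal sketch
of that query, illustrative only and not part of the diff (the helper name is
a placeholder):

#include <rte_ethdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Return 1 when every ethdev port reports INTERNAL_PORT capability, i.e.
 * the Tx adapter transmits on its own and no extra single-link Tx event
 * queue or Tx service core is needed.
 */
static int
tx_adapter_has_internal_port(uint8_t event_dev_id)
{
    uint32_t caps;
    uint16_t port_id;

    RTE_ETH_FOREACH_DEV(port_id) {
        if (rte_event_eth_tx_adapter_caps_get(event_dev_id, port_id, &caps))
            return 0;
        if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
            return 0;
    }

    return 1;
}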
examples/l2fwd-event/l2fwd_event.c | 3 +
examples/l2fwd-event/l2fwd_event.h | 16 ++++
examples/l2fwd-event/l2fwd_event_generic.c | 75 +++++++++++++++++-
.../l2fwd-event/l2fwd_event_internal_port.c | 77 ++++++++++++++++++-
4 files changed, 169 insertions(+), 2 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 1fb4dbeb6..188f06fd2 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -58,4 +58,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
/* Setup eventdev capability callbacks */
l2fwd_event_capability_setup(event_rsrc);
+
+ /* Event device configuration */
+ event_rsrc->ops.event_device_setup(l2fwd_rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index b9d089c2f..7ec985ae7 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -13,11 +13,27 @@
#include "l2fwd_common.h"
+typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
+
+struct event_queues {
+ uint8_t nb_queues;
+};
+
+struct event_ports {
+ uint8_t nb_ports;
+};
+
struct event_setup_ops {
+ event_device_setup_cb event_device_setup;
};
struct l2fwd_event_resources {
uint8_t tx_mode_q;
+ uint8_t has_burst;
+ uint8_t event_d_id;
+ uint8_t disable_implicit_release;
+ struct event_ports evp;
+ struct event_queues evq;
struct event_setup_ops ops;
};
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 9afade7d2..2e82c6bdf 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -16,8 +16,81 @@
#include "l2fwd_common.h"
#include "l2fwd_event.h"
+static uint32_t
+l2fwd_event_device_setup_generic(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t ethdev_count = 0;
+ uint16_t num_workers = 0;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ethdev_count++;
+ }
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+ event_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ /* One queue for each ethdev port + one Tx adapter Single link queue. */
+ event_d_conf.nb_event_queues = ethdev_count + 1;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count() - rte_service_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ event_rsrc->evp.nb_ports = num_workers;
+ event_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+ event_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ event_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->event_device_setup = l2fwd_event_device_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index ce95b8e6d..f95b1762a 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -16,8 +16,83 @@
#include "l2fwd_common.h"
#include "l2fwd_event.h"
+static uint32_t
+l2fwd_event_device_setup_internal_port(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ uint8_t disable_implicit_release;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t ethdev_count = 0;
+ uint16_t num_workers = 0;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ethdev_count++;
+ }
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+
+ disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+ event_rsrc->disable_implicit_release =
+ disable_implicit_release;
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ event_d_conf.nb_event_queues = ethdev_count;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ event_rsrc->evp.nb_ports = num_workers;
+ event_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+ event_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+ event_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->event_device_setup = l2fwd_event_device_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (3 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 04/10] examples/l2fwd-event: add event device setup pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
` (7 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event device queue and port setup based on event eth Tx adapter
capabilities.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
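For context: in the generic pipeline each worker event port is linked to all
forwarding queues, while the last queue (the single-link Tx queue) is reserved
for the Tx adapter's own port. A minimal sketch of that worker-side link,
illustrative only and not part of the diff (names are placeholders):

#include <rte_eventdev.h>

/* Link one worker port to the first nb_fwd_queues event queues, leaving
 * the final single-link queue to the Tx adapter port.
 * Returns 0 on success, -1 otherwise.
 */
static int
link_worker_port(uint8_t event_dev_id, uint8_t event_port_id,
                 const uint8_t *queue_ids, uint8_t nb_fwd_queues)
{
    int ret;

    ret = rte_event_port_link(event_dev_id, event_port_id,
                              queue_ids, NULL, nb_fwd_queues);

    return ret == nb_fwd_queues ? 0 : -1;
}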
examples/l2fwd-event/l2fwd_event.c | 9 +-
examples/l2fwd-event/l2fwd_event.h | 10 ++
examples/l2fwd-event/l2fwd_event_generic.c | 111 ++++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 106 +++++++++++++++++
4 files changed, 235 insertions(+), 1 deletion(-)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 188f06fd2..9b315e5e1 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -45,6 +45,7 @@ void
l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
{
struct l2fwd_event_resources *event_rsrc;
+ uint32_t event_queue_cfg;
if (!rte_event_dev_count())
rte_exit(EXIT_FAILURE, "No Eventdev found\n");
@@ -60,5 +61,11 @@ l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
l2fwd_event_capability_setup(event_rsrc);
/* Event device configuration */
- event_rsrc->ops.event_device_setup(l2fwd_rsrc);
+ event_queue_cfg = event_rsrc->ops.event_device_setup(l2fwd_rsrc);
+
+ /* Event queue configuration */
+ event_rsrc->ops.event_queue_setup(l2fwd_rsrc, event_queue_cfg);
+
+ /* Event port configuration */
+ event_rsrc->ops.event_port_setup(l2fwd_rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 7ec985ae7..93d254cf3 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -14,27 +14,37 @@
#include "l2fwd_common.h"
typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
+typedef void (*event_port_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
+typedef void (*event_queue_setup_cb)(struct l2fwd_resources *l2fwd_rsrc,
+ uint32_t event_queue_cfg);
struct event_queues {
+ uint8_t *event_q_id;
uint8_t nb_queues;
};
struct event_ports {
+ uint8_t *event_p_id;
uint8_t nb_ports;
+ rte_spinlock_t lock;
};
struct event_setup_ops {
event_device_setup_cb event_device_setup;
+ event_queue_setup_cb event_queue_setup;
+ event_port_setup_cb event_port_setup;
};
struct l2fwd_event_resources {
uint8_t tx_mode_q;
+ uint8_t deq_depth;
uint8_t has_burst;
uint8_t event_d_id;
uint8_t disable_implicit_release;
struct event_ports evp;
struct event_queues evq;
struct event_setup_ops ops;
+ struct rte_event_port_conf def_p_conf;
};
void l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc);
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 2e82c6bdf..6922e2e5e 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -89,8 +89,119 @@ l2fwd_event_device_setup_generic(struct l2fwd_resources *l2fwd_rsrc)
return event_queue_cfg;
}
+static void
+l2fwd_event_port_setup_generic(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ uint8_t event_d_id = event_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ event_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->evp.nb_ports);
+ if (!event_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE, " No space is available");
+
+ memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ event_rsrc->disable_implicit_release;
+ event_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+ for (event_p_id = 0; event_p_id < event_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id,
+ event_rsrc->evq.event_q_id,
+ NULL,
+ event_rsrc->evq.nb_queues - 1);
+ if (ret != (event_rsrc->evq.nb_queues - 1)) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ event_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+ }
+ /* init spinlock */
+ rte_spinlock_init(&event_rsrc->evp.lock);
+
+ event_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l2fwd_event_queue_setup_generic(struct l2fwd_resources *l2fwd_rsrc,
+ uint32_t event_queue_cfg)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ uint8_t event_d_id = event_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id;
+ int32_t ret;
+
+ event_q_conf.schedule_type = l2fwd_rsrc->sched_type;
+ event_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->evq.nb_queues);
+ if (!event_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ for (event_q_id = 0; event_q_id < (event_rsrc->evq.nb_queues - 1);
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ event_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+
+ event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+ event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+ ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue for Tx adapter");
+ }
+ event_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_generic;
+ ops->event_queue_setup = l2fwd_event_queue_setup_generic;
+ ops->event_port_setup = l2fwd_event_port_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index f95b1762a..6e3c1da5b 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -91,8 +91,114 @@ l2fwd_event_device_setup_internal_port(struct l2fwd_resources *l2fwd_rsrc)
return event_queue_cfg;
}
+static void
+l2fwd_event_port_setup_internal_port(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ uint8_t event_d_id = event_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ event_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->evp.nb_ports);
+ if (!event_rsrc->evp.event_p_id)
+ rte_exit(EXIT_FAILURE,
+ "Failed to allocate memory for Event Ports");
+
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ event_rsrc->disable_implicit_release;
+
+ for (event_p_id = 0; event_p_id < event_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ event_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+ /* init spinlock */
+ rte_spinlock_init(&event_rsrc->evp.lock);
+ }
+
+ event_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l2fwd_event_queue_setup_internal_port(struct l2fwd_resources *l2fwd_rsrc,
+ uint32_t event_queue_cfg)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ uint8_t event_d_id = event_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id = 0;
+ int32_t ret;
+
+ rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ if (def_q_conf.nb_atomic_order_sequences <
+ event_q_conf.nb_atomic_order_sequences)
+ event_q_conf.nb_atomic_order_sequences =
+ def_q_conf.nb_atomic_order_sequences;
+
+ event_q_conf.event_queue_cfg = event_queue_cfg;
+ event_q_conf.schedule_type = l2fwd_rsrc->sched_type;
+ event_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->evq.nb_queues);
+ if (!event_rsrc->evq.event_q_id)
+ rte_exit(EXIT_FAILURE, "Memory allocation failure");
+
+ for (event_q_id = 0; event_q_id < event_rsrc->evq.nb_queues;
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ event_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_internal_port;
+ ops->event_queue_setup = l2fwd_event_queue_setup_internal_port;
+ ops->event_port_setup = l2fwd_event_port_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (4 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 07/10] examples/l2fwd-event: add service core setup pbhagavatula
` (6 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
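One detail worth noting: the queue add calls below pass -1 as the Rx queue id,
which attaches every Rx queue of the ethdev port in a single call. A minimal
sketch of binding one port to an already created Rx adapter, illustrative only
and not part of the diff (names are placeholders):

#include <string.h>

#include <rte_eventdev.h>
#include <rte_event_eth_rx_adapter.h>

/* Attach all Rx queues of eth_port_id to an existing Rx adapter and
 * direct the resulting events to ev_queue_id with the requested
 * schedule type.
 */
static int
bind_port_to_rx_adapter(uint8_t rx_adapter_id, uint16_t eth_port_id,
                        uint8_t ev_queue_id, uint8_t sched_type)
{
    struct rte_event_eth_rx_adapter_queue_conf qconf;

    memset(&qconf, 0, sizeof(qconf));
    qconf.ev.queue_id = ev_queue_id;
    qconf.ev.sched_type = sched_type;
    qconf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;

    /* -1 selects all Rx queues of the port */
    return rte_event_eth_rx_adapter_queue_add(rx_adapter_id, eth_port_id,
                                              -1, &qconf);
}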
examples/l2fwd-event/l2fwd_event.c | 3 +
examples/l2fwd-event/l2fwd_event.h | 16 +++
examples/l2fwd-event/l2fwd_event_generic.c | 125 ++++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 102 ++++++++++++++
4 files changed, 246 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 9b315e5e1..293ecc129 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -68,4 +68,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
/* Event port configuration */
event_rsrc->ops.event_port_setup(l2fwd_rsrc);
+
+ /* Rx/Tx adapters configuration */
+ event_rsrc->ops.adapter_setup(l2fwd_rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 93d254cf3..e0cfee8b8 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -17,6 +17,7 @@ typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
typedef void (*event_port_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
typedef void (*event_queue_setup_cb)(struct l2fwd_resources *l2fwd_rsrc,
uint32_t event_queue_cfg);
+typedef void (*adapter_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
struct event_queues {
uint8_t *event_q_id;
@@ -29,10 +30,23 @@ struct event_ports {
rte_spinlock_t lock;
};
+struct event_rx_adptr {
+ uint32_t service_id;
+ uint8_t nb_rx_adptr;
+ uint8_t *rx_adptr;
+};
+
+struct event_tx_adptr {
+ uint32_t service_id;
+ uint8_t nb_tx_adptr;
+ uint8_t *tx_adptr;
+};
+
struct event_setup_ops {
event_device_setup_cb event_device_setup;
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
+ adapter_setup_cb adapter_setup;
};
struct l2fwd_event_resources {
@@ -44,6 +58,8 @@ struct l2fwd_event_resources {
struct event_ports evp;
struct event_queues evq;
struct event_setup_ops ops;
+ struct event_rx_adptr rx_adptr;
+ struct event_tx_adptr tx_adptr;
struct rte_event_port_conf def_p_conf;
};
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 6922e2e5e..bce00f631 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -198,10 +198,135 @@ l2fwd_event_queue_setup_generic(struct l2fwd_resources *l2fwd_rsrc,
event_rsrc->evq.event_q_id[event_q_id] = event_q_id;
}
+static void
+l2fwd_rx_tx_adapter_setup_generic(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = event_rsrc->event_d_id;
+ uint8_t rx_adptr_id = 0;
+ uint8_t tx_adptr_id = 0;
+ uint8_t tx_port_id = 0;
+ uint16_t port_id;
+ uint32_t service_id;
+ int32_t ret, i = 0;
+
+ /* Rx adapter setup */
+ event_rsrc->rx_adptr.nb_rx_adptr = 1;
+ event_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->rx_adptr.nb_rx_adptr);
+ if (!event_rsrc->rx_adptr.rx_adptr) {
+ free(event_rsrc->evp.event_p_id);
+ free(event_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+ &event_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create rx adapter");
+
+ /* Configure user requested sched type */
+ eth_q_conf.ev.sched_type = l2fwd_rsrc->sched_type;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ eth_q_conf.ev.queue_id = event_rsrc->evq.event_q_id[i];
+ ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+ -1, &eth_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+ if (i < event_rsrc->evq.nb_queues)
+ i++;
+ }
+
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error getting the service ID for rx adptr\n");
+ }
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ event_rsrc->rx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
+ rx_adptr_id);
+
+ event_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+ /* Tx adapter setup */
+ event_rsrc->tx_adptr.nb_tx_adptr = 1;
+ event_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->tx_adptr.nb_tx_adptr);
+ if (!event_rsrc->tx_adptr.tx_adptr) {
+ free(event_rsrc->rx_adptr.rx_adptr);
+ free(event_rsrc->evp.event_p_id);
+ free(event_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+ &event_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "failed to create tx adapter");
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, port_id,
+ -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+ }
+
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ event_rsrc->tx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &event_rsrc->evq.event_q_id[
+ event_rsrc->evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_exit(EXIT_FAILURE,
+ "Unable to link Tx adapter port to Tx queue:err = %d",
+ ret);
+
+ ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed",
+ tx_adptr_id);
+
+ event_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_generic;
ops->event_queue_setup = l2fwd_event_queue_setup_generic;
ops->event_port_setup = l2fwd_event_port_setup_generic;
+ ops->adapter_setup = l2fwd_rx_tx_adapter_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index 6e3c1da5b..71652986d 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -195,10 +195,112 @@ l2fwd_event_queue_setup_internal_port(struct l2fwd_resources *l2fwd_rsrc,
}
}
+static void
+l2fwd_rx_tx_adapter_setup_internal_port(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = event_rsrc->event_d_id;
+ uint16_t adapter_id = 0;
+ uint16_t nb_adapter = 0;
+ uint16_t port_id;
+ uint8_t q_id = 0;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ nb_adapter++;
+ }
+
+ event_rsrc->rx_adptr.nb_rx_adptr = nb_adapter;
+ event_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->rx_adptr.nb_rx_adptr);
+ if (!event_rsrc->rx_adptr.rx_adptr) {
+ free(event_rsrc->evp.event_p_id);
+ free(event_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+ &event_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create rx adapter[%d]", adapter_id);
+
+ /* Configure user requested sched type*/
+ eth_q_conf.ev.sched_type = l2fwd_rsrc->sched_type;
+ eth_q_conf.ev.queue_id = event_rsrc->evq.event_q_id[q_id];
+ ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+ -1, &eth_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+
+ ret = rte_event_eth_rx_adapter_start(adapter_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Rx adapter[%d] start failed", adapter_id);
+
+ event_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ if (q_id < event_rsrc->evq.nb_queues)
+ q_id++;
+ }
+
+ event_rsrc->tx_adptr.nb_tx_adptr = nb_adapter;
+ event_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ event_rsrc->tx_adptr.nb_tx_adptr);
+ if (!event_rsrc->tx_adptr.tx_adptr) {
+ free(event_rsrc->rx_adptr.rx_adptr);
+ free(event_rsrc->evp.event_p_id);
+ free(event_rsrc->evq.event_q_id);
+ rte_exit(EXIT_FAILURE,
+ "failed to allocate memery for Rx adapter");
+ }
+
+ adapter_id = 0;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+ &event_rsrc->def_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create tx adapter[%d]", adapter_id);
+
+ ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+ -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+
+ ret = rte_event_eth_tx_adapter_start(adapter_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Tx adapter[%d] start failed", adapter_id);
+
+ event_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ }
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_internal_port;
ops->event_queue_setup = l2fwd_event_queue_setup_internal_port;
ops->event_port_setup = l2fwd_event_port_setup_internal_port;
+ ops->adapter_setup = l2fwd_rx_tx_adapter_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 07/10] examples/l2fwd-event: add service core setup
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (5 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
` (5 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Setup service cores for eventdev and Rx/Tx adapter when they don't have
internal port capability.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
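A small aside on the error handling below: the service id getters return
-ESRCH when the component does not need a service core (for example when
scheduling is done in hardware, or the adapter has internal port capability),
which is why -ESRCH is treated as success. A minimal sketch of that pattern,
illustrative only and not part of the diff (names are placeholders):

#include <errno.h>

#include <rte_eventdev.h>
#include <rte_service.h>

/* Map the event device's scheduling service, if any, to a service lcore
 * and start that lcore. */
static int
map_evdev_service(uint8_t event_dev_id, uint32_t service_lcore)
{
    uint32_t service_id;
    int ret;

    ret = rte_event_dev_service_id_get(event_dev_id, &service_id);
    if (ret == -ESRCH)
        return 0; /* scheduling done in hardware, no service needed */
    if (ret)
        return ret;

    ret = rte_service_runstate_set(service_id, 1);
    if (ret)
        return ret;

    ret = rte_service_map_lcore_set(service_id, service_lcore, 1);
    if (ret)
        return ret;

    return rte_service_lcore_start(service_lcore);
}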
examples/l2fwd-event/l2fwd_event.c | 87 ++++++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 1 +
examples/l2fwd-event/main.c | 3 ++
3 files changed, 91 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 293ecc129..adba40069 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,93 @@
#include "l2fwd_event.h"
+static inline int
+l2fwd_event_service_enable(uint32_t service_id)
+{
+ uint8_t min_service_count = UINT8_MAX;
+ uint32_t slcore_array[RTE_MAX_LCORE];
+ unsigned int slcore = 0;
+ uint8_t service_count;
+ int32_t slcore_count;
+
+ if (!rte_service_lcore_count())
+ return -ENOENT;
+
+ slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+ if (slcore_count < 0)
+ return -ENOENT;
+ /* Get the core which has least number of services running. */
+ while (slcore_count--) {
+ /* Reset default mapping */
+ rte_service_map_lcore_set(service_id,
+ slcore_array[slcore_count], 0);
+ service_count = rte_service_lcore_count_services(
+ slcore_array[slcore_count]);
+ if (service_count < min_service_count) {
+ slcore = slcore_array[slcore_count];
+ min_service_count = service_count;
+ }
+ }
+ if (rte_service_map_lcore_set(service_id, slcore, 1))
+ return -ENOENT;
+ rte_service_lcore_start(slcore);
+
+ return 0;
+}
+
+void
+l2fwd_event_service_setup(struct l2fwd_resources *l2fwd_rsrc)
+{
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ struct rte_event_dev_info evdev_info;
+ uint32_t service_id, caps;
+ int ret, i;
+
+ rte_event_dev_info_get(event_rsrc->event_d_id, &evdev_info);
+ if (evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
+ ret = rte_event_dev_service_id_get(event_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Error in starting eventdev service\n");
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < event_rsrc->rx_adptr.nb_rx_adptr; i++) {
+ ret = rte_event_eth_rx_adapter_caps_get(event_rsrc->event_d_id,
+ event_rsrc->rx_adptr.rx_adptr[i], &caps);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Rx adapter[%d] caps\n",
+ event_rsrc->rx_adptr.rx_adptr[i]);
+ ret = rte_event_eth_rx_adapter_service_id_get(
+ event_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Error in starting Rx adapter[%d] service\n",
+ event_rsrc->rx_adptr.rx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < event_rsrc->tx_adptr.nb_tx_adptr; i++) {
+ ret = rte_event_eth_tx_adapter_caps_get(event_rsrc->event_d_id,
+ event_rsrc->tx_adptr.tx_adptr[i], &caps);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Rx adapter[%d] caps\n",
+ event_rsrc->tx_adptr.tx_adptr[i]);
+ ret = rte_event_eth_tx_adapter_service_id_get(
+ event_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE,
+ "Error in starting Rx adapter[%d] service\n",
+ event_rsrc->tx_adptr.tx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+}
+
static void
l2fwd_event_capability_setup(struct l2fwd_event_resources *event_rsrc)
{
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index e0cfee8b8..e0c589f93 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -66,5 +66,6 @@ struct l2fwd_event_resources {
void l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc);
void l2fwd_event_set_generic_ops(struct event_setup_ops *ops);
void l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops);
+void l2fwd_event_service_setup(struct l2fwd_resources *l2fwd_rsrc);
#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 01b1d531d..3c27bfb4f 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -398,6 +398,9 @@ main(int argc, char **argv)
port_id);
}
+ if (l2fwd_rsrc->event_mode)
+ l2fwd_event_service_setup(l2fwd_rsrc);
+
check_all_ports_link_status(l2fwd_rsrc, l2fwd_rsrc->enabled_port_mask);
/* launch per-lcore init on every lcore */
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (6 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 07/10] examples/l2fwd-event: add service core setup pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-11 14:52 ` Jerin Jacob
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
` (4 subsequent siblings)
12 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event dev main loop based on enabled l2fwd options and eventdev
capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
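For readers new to the adapter API: stripped of the stats and flag handling,
each iteration of the loops below dequeues an event, retargets the mbuf and
hands it either to the single-link Tx queue or straight to the Tx adapter.
A minimal sketch of the direct (internal port) path, illustrative only and
not part of the diff (names are placeholders):

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_mbuf.h>

/* Dequeue one event on this lcore's event port, retarget the mbuf to
 * dst_port and pass it to the Tx adapter of an internal-port capable PMD. */
static void
forward_one_event(uint8_t event_dev_id, uint8_t event_port_id,
                  uint16_t dst_port)
{
    struct rte_event ev;

    if (!rte_event_dequeue_burst(event_dev_id, event_port_id, &ev, 1, 0))
        return;

    ev.mbuf->port = dst_port;
    rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);

    /* Retry while the adapter enqueue is backpressured */
    while (!rte_event_eth_tx_adapter_enqueue(event_dev_id, event_port_id,
                                             &ev, 1))
        ;
}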
examples/l2fwd-event/l2fwd_common.c | 6 +
examples/l2fwd-event/l2fwd_common.h | 2 +
examples/l2fwd-event/l2fwd_event.c | 294 ++++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 2 +
examples/l2fwd-event/main.c | 6 +-
5 files changed, 309 insertions(+), 1 deletion(-)
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 213652d72..40e933c91 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -65,6 +65,12 @@ l2fwd_event_init_ports(struct l2fwd_resources *l2fwd_rsrc)
uint16_t port_id;
int ret;
+ if (l2fwd_rsrc->event_mode) {
+ port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ }
+
/* Initialise each port */
RTE_ETH_FOREACH_DEV(port_id) {
struct rte_eth_conf local_port_conf = port_conf;
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index cdafa52c7..852c6d321 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -114,7 +114,9 @@ l2fwd_get_rsrc(void)
memset(l2fwd_rsrc, 0, sizeof(struct l2fwd_resources));
l2fwd_rsrc->mac_updating = true;
+ l2fwd_rsrc->event_mode = true;
l2fwd_rsrc->rx_queue_per_lcore = 1;
+ l2fwd_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
l2fwd_rsrc->timer_period = 10 * rte_get_timer_hz();
return mz->addr;
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index adba40069..df0b56773 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,12 @@
#include "l2fwd_event.h"
+#define L2FWD_EVENT_SINGLE 0x1
+#define L2FWD_EVENT_BURST 0x2
+#define L2FWD_EVENT_TX_DIRECT 0x4
+#define L2FWD_EVENT_TX_ENQ 0x8
+#define L2FWD_EVENT_UPDT_MAC 0x10
+
static inline int
l2fwd_event_service_enable(uint32_t service_id)
{
@@ -128,11 +134,289 @@ l2fwd_event_capability_setup(struct l2fwd_event_resources *event_rsrc)
l2fwd_event_set_internal_port_ops(&event_rsrc->ops);
}
+static __rte_noinline int
+l2fwd_get_free_event_port(struct l2fwd_event_resources *event_rsrc)
+{
+ static int index;
+ int port_id;
+
+ rte_spinlock_lock(&event_rsrc->evp.lock);
+ if (index >= event_rsrc->evp.nb_ports) {
+ printf("No free event port is available\n");
+ return -1;
+ }
+
+ port_id = event_rsrc->evp.event_p_id[index];
+ index++;
+ rte_spinlock_unlock(&event_rsrc->evp.lock);
+
+ return port_id;
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_single(struct l2fwd_resources *l2fwd_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ const int port_id = l2fwd_get_free_event_port(event_rsrc);
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const uint64_t timer_period = l2fwd_rsrc->timer_period;
+ const uint8_t tx_q_id = event_rsrc->evq.event_q_id[
+ event_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = event_rsrc->event_d_id;
+ struct rte_mbuf *mbuf;
+ uint16_t dst_port;
+ struct rte_event ev;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!l2fwd_rsrc->force_quit) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats(l2fwd_rsrc);
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
+ continue;
+
+
+ mbuf = ev.mbuf;
+ dst_port = l2fwd_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0)
+ __atomic_fetch_add(
+ &l2fwd_rsrc->port_stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_mac_updating(mbuf, dst_port,
+ &l2fwd_rsrc->eth_addr[dst_port]);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev.queue_id = tx_q_id;
+ ev.op = RTE_EVENT_OP_FORWARD;
+ while (!rte_event_enqueue_burst(event_d_id, port_id,
+ &ev, 1) &&
+ !l2fwd_rsrc->force_quit)
+ ;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+ while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id,
+ &ev, 1) &&
+ !l2fwd_rsrc->force_quit)
+ ;
+ }
+
+ if (timer_period > 0)
+ __atomic_fetch_add(
+ &l2fwd_rsrc->port_stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_burst(struct l2fwd_resources *l2fwd_rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
+ const int port_id = l2fwd_get_free_event_port(event_rsrc);
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const uint64_t timer_period = l2fwd_rsrc->timer_period;
+ const uint8_t tx_q_id = event_rsrc->evq.event_q_id[
+ event_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = event_rsrc->event_d_id;
+ const uint8_t deq_len = event_rsrc->deq_depth;
+ struct rte_event ev[MAX_PKT_BURST];
+ struct rte_mbuf *mbuf;
+ uint16_t nb_rx, nb_tx;
+ uint16_t dst_port;
+ uint8_t i;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!l2fwd_rsrc->force_quit) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats(l2fwd_rsrc);
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, port_id, ev,
+ deq_len, 0);
+ if (nb_rx == 0)
+ continue;
+
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf = ev[i].mbuf;
+ dst_port = l2fwd_rsrc->dst_ports[mbuf->port];
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+
+ if (timer_period > 0) {
+ __atomic_fetch_add(
+ &l2fwd_rsrc->port_stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+ __atomic_fetch_add(
+ &l2fwd_rsrc->port_stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+ }
+ mbuf->port = dst_port;
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_mac_updating(mbuf, dst_port,
+ &l2fwd_rsrc->eth_addr[
+ dst_port]);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev[i].queue_id = tx_q_id;
+ ev[i].op = RTE_EVENT_OP_FORWARD;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT)
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+
+ }
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ nb_tx = rte_event_enqueue_burst(event_d_id, port_id,
+ ev, nb_rx);
+ while (nb_tx < nb_rx && !l2fwd_rsrc->force_quit)
+ nb_tx += rte_event_enqueue_burst(event_d_id,
+ port_id, ev + nb_tx,
+ nb_rx - nb_tx);
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id, ev,
+ nb_rx);
+ while (nb_tx < nb_rx && !l2fwd_rsrc->force_quit)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx);
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop(struct l2fwd_resources *l2fwd_rsrc,
+ const uint32_t flags)
+{
+ if (flags & L2FWD_EVENT_SINGLE)
+ l2fwd_event_loop_single(l2fwd_rsrc, flags);
+ if (flags & L2FWD_EVENT_BURST)
+ l2fwd_event_loop_burst(l2fwd_rsrc, flags);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc,
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_brst(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_brst(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_mac(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_brst_mac(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_mac(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_brst_mac(struct l2fwd_resources *l2fwd_rsrc)
+{
+ l2fwd_event_loop(l2fwd_rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST);
+}
+
void
l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
{
+ /* [MAC_UPDT][TX_MODE][BURST] */
+ const event_loop_cb event_loop[2][2][2] = {
+ [0][0][0] = l2fwd_event_main_loop_tx_d,
+ [0][0][1] = l2fwd_event_main_loop_tx_d_brst,
+ [0][1][0] = l2fwd_event_main_loop_tx_q,
+ [0][1][1] = l2fwd_event_main_loop_tx_q_brst,
+ [1][0][0] = l2fwd_event_main_loop_tx_d_mac,
+ [1][0][1] = l2fwd_event_main_loop_tx_d_brst_mac,
+ [1][1][0] = l2fwd_event_main_loop_tx_q_mac,
+ [1][1][1] = l2fwd_event_main_loop_tx_q_brst_mac,
+ };
struct l2fwd_event_resources *event_rsrc;
uint32_t event_queue_cfg;
+ int ret;
if (!rte_event_dev_count())
rte_exit(EXIT_FAILURE, "No Eventdev found\n");
@@ -158,4 +442,14 @@ l2fwd_event_resource_setup(struct l2fwd_resources *l2fwd_rsrc)
/* Rx/Tx adapters configuration */
event_rsrc->ops.adapter_setup(l2fwd_rsrc);
+
+ /* Start event device */
+ ret = rte_event_dev_start(event_rsrc->event_d_id);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ event_rsrc->ops.l2fwd_event_loop = event_loop
+ [l2fwd_rsrc->mac_updating]
+ [event_rsrc->tx_mode_q]
+ [event_rsrc->has_burst];
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index e0c589f93..cef7b5d1a 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -18,6 +18,7 @@ typedef void (*event_port_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
typedef void (*event_queue_setup_cb)(struct l2fwd_resources *l2fwd_rsrc,
uint32_t event_queue_cfg);
typedef void (*adapter_setup_cb)(struct l2fwd_resources *l2fwd_rsrc);
+typedef void (*event_loop_cb)(struct l2fwd_resources *l2fwd_rsrc);
struct event_queues {
uint8_t *event_q_id;
@@ -47,6 +48,7 @@ struct event_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
adapter_setup_cb adapter_setup;
+ event_loop_cb l2fwd_event_loop;
};
struct l2fwd_event_resources {
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 3c27bfb4f..958a6e0b7 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -212,8 +212,12 @@ l2fwd_launch_one_lcore(void *args)
{
struct l2fwd_resources *l2fwd_rsrc = args;
struct l2fwd_poll_resources *poll_rsrc = l2fwd_rsrc->poll_rsrc;
+ struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
- poll_rsrc->poll_main_loop(l2fwd_rsrc);
+ if (l2fwd_rsrc->event_mode)
+ event_rsrc->ops.l2fwd_event_loop(l2fwd_rsrc);
+ else
+ poll_rsrc->poll_main_loop(l2fwd_rsrc);
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 09/10] examples/l2fwd-event: add graceful teardown
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (7 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
` (3 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Marko Kovacevic, Ori Kam,
Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add graceful teardown that addresses both event mode and poll mode.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/main.c | 50 +++++++++++++++++++++++++++++--------
1 file changed, 40 insertions(+), 10 deletions(-)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 958a6e0b7..f075b3f10 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -302,7 +302,7 @@ main(int argc, char **argv)
uint16_t port_id, last_port;
uint32_t nb_mbufs;
uint16_t nb_ports;
- int ret;
+ int i, ret;
/* init EAL */
ret = rte_eal_init(argc, argv);
@@ -410,16 +410,46 @@ main(int argc, char **argv)
/* launch per-lcore init on every lcore */
rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, l2fwd_rsrc,
CALL_MASTER);
- rte_eal_mp_wait_lcore();
+ if (l2fwd_rsrc->event_mode) {
+ struct l2fwd_event_resources *event_rsrc =
+ l2fwd_rsrc->event_rsrc;
+ for (i = 0; i < event_rsrc->rx_adptr.nb_rx_adptr; i++)
+ rte_event_eth_rx_adapter_stop(
+ event_rsrc->rx_adptr.rx_adptr[i]);
+ for (i = 0; i < event_rsrc->tx_adptr.nb_tx_adptr; i++)
+ rte_event_eth_tx_adapter_stop(
+ event_rsrc->tx_adptr.tx_adptr[i]);
- RTE_ETH_FOREACH_DEV(port_id) {
- if ((l2fwd_rsrc->enabled_port_mask &
- (1 << port_id)) == 0)
- continue;
- printf("Closing port %d...", port_id);
- rte_eth_dev_stop(port_id);
- rte_eth_dev_close(port_id);
- printf(" Done\n");
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ rte_eth_dev_stop(port_id);
+ }
+
+ rte_eal_mp_wait_lcore();
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ rte_eth_dev_close(port_id);
+ }
+
+ rte_event_dev_stop(event_rsrc->event_d_id);
+ rte_event_dev_close(event_rsrc->event_d_id);
+
+ } else {
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((l2fwd_rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ printf("Closing port %d...", port_id);
+ rte_eth_dev_stop(port_id);
+ rte_eth_dev_close(port_id);
+ printf(" Done\n");
+ }
}
printf("Bye...\n");
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v5 10/10] doc: add application usage guide for l2fwd-event
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (8 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
@ 2019-10-02 20:57 ` pbhagavatula
2019-10-11 14:11 ` Jerin Jacob
2019-10-03 10:33 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example Jerin Jacob
` (2 subsequent siblings)
12 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-10-02 20:57 UTC (permalink / raw)
To: jerinj, bruce.richardson, akhil.goyal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add documentation for l2fwd-event example.
Update release notes.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
MAINTAINERS | 1 +
doc/guides/rel_notes/release_19_11.rst | 6 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
doc/guides/sample_app_ug/l2_forward_event.rst | 755 ++++++++++++++++++
5 files changed, 768 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 292ac10c3..94a49b812 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1461,6 +1461,7 @@ F: examples/l2fwd-cat/
M: Sunil Kumar Kori <skori@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
F: examples/l2fwd-event/
+F: doc/guides/sample_app_ug/l2_forward_event.rst
T: git://dpdk.org/next/dpdk-next-eventdev
F: examples/l3fwd/
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 27cfbd9e3..071593e4d 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -56,6 +56,12 @@ New Features
Also, make sure to start the actual text at the margin.
=========================================================
+* **Added new example l2fwd-event**
+
+ Added an example application ``l2fwd-event`` that adds event device support to
+ the traditional l2fwd example. The default poll mode is also preserved for
+ readability.
+
Removed Items
-------------
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index f23f8f59e..41388231a 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -26,6 +26,7 @@ Sample Applications User Guides
l2_forward_crypto
l2_forward_job_stats
l2_forward_real_virtual
+ l2_forward_event
l2_forward_cat
l3_forward
l3_forward_power_man
diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
index 90704194a..84591c0a1 100644
--- a/doc/guides/sample_app_ug/intro.rst
+++ b/doc/guides/sample_app_ug/intro.rst
@@ -87,6 +87,11 @@ examples are highlighted below.
forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
addresses like a simple switch.
+* :doc:`Network Layer 2 forwarding<l2_forward_event>`: The Network Layer 2
+ forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
+ addresses like a simple switch. It demonstrates usage of the poll and event mode
+ Rx/Tx mechanisms.
+
* :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
forwarding, or ``l3fwd`` application does forwarding based on Internet
Protocol, IPv4 or IPv6 like a simple router.
diff --git a/doc/guides/sample_app_ug/l2_forward_event.rst b/doc/guides/sample_app_ug/l2_forward_event.rst
new file mode 100644
index 000000000..250d16887
--- /dev/null
+++ b/doc/guides/sample_app_ug/l2_forward_event.rst
@@ -0,0 +1,755 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2014 Intel Corporation.
+
+.. _l2_fwd_event_app:
+
+L2 Forwarding Eventdev Sample Application
+=========================================
+
+The L2 Forwarding eventdev sample application is a simple example of packet
+processing using the Data Plane Development Kit (DPDK) to demonstrate usage of
+the poll and event mode packet I/O mechanisms.
+
+Overview
+--------
+
+The L2 Forwarding eventdev sample application performs L2 forwarding for each
+packet that is received on an RX_PORT. The destination port is the adjacent port
+from the enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
+ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other.
+Also, if MAC address updating is enabled, the MAC addresses are affected as follows:
+
+* The source MAC address is replaced by the TX_PORT MAC address
+
+* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
+
+The application receives packets from RX_PORT using one of the following methods:
+
+* Poll mode
+
+* Eventdev mode (default)
+
+This application can be used to benchmark performance using a traffic-generator,
+as shown in the :numref:`figure_l2_fwd_benchmark_setup`.
+
+.. _figure_l2_fwd_benchmark_setup:
+
+.. figure:: img/l2_fwd_benchmark_setup.*
+
+ Performance Benchmark Setup (Basic Environment)
+
+Compiling the Application
+-------------------------
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``l2fwd-event`` sub-directory.
+
+Running the Application
+-----------------------
+
+The application requires a number of command line options:
+
+.. code-block:: console
+
+ ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sync=SYNC_MODE
+
+where,
+
+* p PORTMASK: A hexadecimal bitmask of the ports to configure
+
+* q NQ: A number of queues (=ports) per lcore (default is 1)
+
+* --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default).
+
+* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.
+
+* --eventq-sync=SYNC_MODE: Event queue synchronization method, Ordered or Atomic. Atomic by default.
+
+Sample usage commands to run the application in different modes are given below:
+
+To run in poll mode with 4 lcores, 16 ports, 8 RX queues per lcore
+and MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll
+
+To run in eventdev mode with 4 lcores, 16 ports, sync method ordered
+and MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sync=ordered
+
+or
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
+
+To run the application with the S/W scheduler, the following DPDK services are used:
+
+* Software scheduler
+* Rx adapter service function
+* Tx adapter service function
+
+The application needs service cores to run the above mentioned services. Service
+cores must be provided as EAL parameters along with --vdev=event_sw0 to enable
+the S/W scheduler. A sample command is given below:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sync=ordered
+
+Explanation
+-----------
+
+The following sections provide some explanation of the code.
+
+.. _l2_fwd_event_app_cmd_arguments:
+
+Command Line Arguments
+~~~~~~~~~~~~~~~~~~~~~~
+
+The L2 Forwarding eventdev sample application takes specific parameters,
+in addition to Environment Abstraction Layer (EAL) arguments.
+The preferred way to parse parameters is to use the getopt() function,
+since it is part of a well-defined and portable library.
+
+The parsing of arguments is done in the **l2fwd_parse_args()** function for
+non-eventdev parameters and in **parse_eventdev_args()** for eventdev parameters.
+The method of argument parsing is not described here. Refer to the
+*glibc getopt(3)* man page for details.
+
+EAL arguments are parsed first, then application-specific arguments.
+This is done at the beginning of the main() function and eventdev parameters
+are parsed in eventdev_resource_setup() function during eventdev setup:
+
+.. code-block:: c
+
+ /* init EAL */
+
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");
+
+ argc -= ret;
+ argv += ret;
+
+ /* parse application arguments (after the EAL ones) */
+
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");
+ .
+ .
+ .
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(argc, argv);
+ if (ret < 0)
+ return ret;
+
+.. _l2_fwd_event_app_mbuf_init:
+
+Mbuf Pool Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the arguments are parsed, the mbuf pool is created.
+The mbuf pool contains a set of mbuf objects that will be used by the driver
+and the application to store network packet data:
+
+.. code-block:: c
+
+ /* create the mbuf pool */
+
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_panic("Cannot init mbuf pool\n");
+
+The rte_mempool is a generic structure used to handle pools of objects.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
+The memory is allocated on the NUMA socket of the initializing lcore (rte_socket_id()),
+but it is possible to extend this code to allocate one mbuf pool per socket.
+
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
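+
+For illustration, a rough sketch of an equivalent pool creation through the
+lower level mempool API is given below. It keeps the default pktmbuf
+constructors and is only meant to show where more control could be taken
+(element size, mempool ops or per-socket pools); it is not the code used by
+the application:
+
+.. code-block:: c
+
+    /* Sketch only: manual creation of an equivalent pktmbuf pool. */
+    l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF,
+            sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
+            MEMPOOL_CACHE_SIZE,
+            sizeof(struct rte_pktmbuf_pool_private),
+            rte_pktmbuf_pool_init, NULL, /* NULL selects the default data room */
+            rte_pktmbuf_init, NULL,
+            rte_socket_id(), 0);
+    if (l2fwd_pktmbuf_pool == NULL)
+        rte_panic("Cannot init mbuf pool\n");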
+
+.. _l2_fwd_event_app_dvr_init:
+
+Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~
+
+The main part of the code in the main() function relates to the initialization
+of the driver. To fully understand this code, it is recommended to study the
+chapters that relate to the Poll Mode and Event mode Driver in the
+*DPDK Programmer's Guide* and the *DPDK API Reference*.
+
+.. code-block:: c
+
+ if (rte_pci_probe() < 0)
+ rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");
+
+ /* reset l2fwd_dst_ports */
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ }
+ else
+ last_port = portid;
+
+ nb_ports_in_mask++;
+
+ rte_eth_dev_info_get((uint8_t) portid, &dev_info);
+ }
+
+Observe that:
+
+* The PMDs register themselves with the EAL automatically (as PCI or vdev
+  drivers) at startup, so no explicit per-driver initialization call is needed.
+
+* rte_pci_probe() parses the devices on the PCI bus and initializes recognized
+ devices.
+
+The next step is to configure the RX and TX queues. For each port, there is only
+one RX queue (only one lcore is able to poll a given port). The number of TX
+queues depends on the number of available lcores. The rte_eth_dev_configure()
+function is used to configure the number of queues for a port:
+
+.. code-block:: c
+
+ ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Cannot configure device: "
+ "err=%d, port=%u\n",
+ ret, portid);
+
+.. _l2_fwd_event_app_rx_init:
+
+RX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The application uses one lcore to poll one or several ports, depending on the -q
+option, which specifies the number of queues per lcore.
+
+For example, if the user specifies -q 4, the application is able to poll four
+ports with one lcore. If there are 16 ports on the target (and if the portmask
+argument is -p ffff ), the application will need four lcores to poll all the
+ports.
+
+.. code-block:: c
+
+ ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
+ &rx_conf, l2fwd_pktmbuf_pool);
+ if (ret < 0)
+
+ rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup: "
+ "err=%d, port=%u\n",
+ ret, portid);
+
+The list of queues that must be polled for a given lcore is stored in a private
+structure called struct lcore_queue_conf.
+
+.. code-block:: c
+
+ struct lcore_queue_conf {
+ unsigned n_rx_port;
+ unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
+ } rte_cache_aligned;
+
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+The values n_rx_port and rx_port_list[] are used in the main packet processing
+loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).
+
+.. _l2_fwd_event_app_tx_init:
+
+TX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Each lcore should be able to transmit on any port. For every port, a single TX
+queue is initialized.
+
+.. code-block:: c
+
+ /* init one TX queue on each port */
+
+ fflush(stdout);
+
+ ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid), &tx_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, (unsigned) portid);
+
+The global configuration for TX queues is stored in a static structure:
+
+.. code-block:: c
+
+ static const struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
+ };
+
+To configure eventdev support, the application sets up the following components,
+choosing the generic or internal port code path based on the event device
+capabilities (a capability check sketch follows the list):
+
+* Event dev
+* Event queue
+* Event Port
+* Rx/Tx adapters
+* Ethernet ports
+
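+The choice between the two code paths is driven by the Rx/Tx adapter
+capabilities reported for the event device. A minimal capability check might
+look as follows; the helpers use_internal_port_ops() and use_generic_ops() are
+hypothetical placeholders for the two setup paths:
+
+.. code-block:: c
+
+    uint32_t caps = 0;
+
+    /* Query the Tx adapter capability for a given Ethernet port. */
+    if (rte_event_eth_tx_adapter_caps_get(event_d_id, port_id, &caps))
+        rte_exit(EXIT_FAILURE, "Failed to get Tx adapter capabilities");
+
+    if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
+        use_internal_port_ops(); /* H/W style: enqueue directly to Tx */
+    else
+        use_generic_ops();       /* Generic path: Tx adapter service core */
+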
+.. _l2_fwd_event_app_event_dev_init:
+
+Event device Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The application can use either a H/W or S/W based event device scheduler
+implementation and supports a single instance of the event device. It configures
+the event device as per the below configuration:
+
+.. code-block:: c
+
+ struct rte_event_dev_config event_d_conf = {
+ .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
+ .nb_event_ports = num_workers, /* Dedicated to each lcore */
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_exit(EXIT_FAILURE, "Error in configuring event device");
+
+In case of the S/W scheduler, the application runs the eventdev scheduler
+service on a service core. The application retrieves the service id and later
+starts the service on a given lcore.
+
+.. code-block:: c
+
+ /* Start event device service */
+ ret = rte_event_dev_service_id_get(eventdev_rsrc.event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.service_id = service_id;
+
+ /* Start eventdev scheduler service */
+ rte_service_map_lcore_set(eventdev_rsrc.service_id, lcore_id[0], 1);
+ rte_service_lcore_start(lcore_id[0]);
+
+.. _l2_fwd_app_event_queue_init:
+
+Event queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each Ethernet device is assigned a dedicated event queue which will be linked
+to all available event ports, i.e. each lcore can dequeue packets from any of
+the Ethernet ports.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = 0,
+ .schedule_type = RTE_SCHED_TYPE_ATOMIC,
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
+ };
+
+ /* User requested sync mode */
+ event_q_conf.schedule_type = eventq_sync_mode;
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event queue");
+ }
+ }
+
+In case of the S/W scheduler, an extra event queue is created which is used by
+the Tx adapter service function for the enqueue operation, as sketched below.
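+
+A sketch of how this extra single link queue might be set up is given below;
+the queue id (ethdev_count) and the assumption that the event device was
+configured with one additional queue are for illustration only:
+
+.. code-block:: c
+
+    struct rte_event_queue_conf tx_q_conf = {
+        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+        .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
+    };
+
+    /* The extra queue is placed after the per-ethdev queues, so
+     * nb_event_queues is configured as ethdev_count + 1 in this case.
+     */
+    ret = rte_event_queue_setup(event_d_id, ethdev_count, &tx_q_conf);
+    if (ret < 0)
+        rte_exit(EXIT_FAILURE, "Error in configuring Tx adapter event queue");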
+
+.. _l2_fwd_app_event_port_init:
+
+Event port Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Each worker thread is assigned a dedicated event port for enq/deq operations
+to/from an event device. All event ports are linked with all available event
+queues.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error in configuring event port %d\n",
+ event_p_id);
+ }
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0) {
+ rte_exit(EXIT_FAILURE, "Error in linking event port %d "
+ "to event queue", event_p_id);
+ }
+ }
+
+In case of the S/W scheduler, an extra event port is created by the DPDK library;
+it is retrieved by the application and used by the Tx adapter service.
+
+.. code-block:: c
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &eventdev_rsrc.evq.event_q_id[
+ eventdev_rsrc.evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_exit(EXIT_FAILURE,
+ "Unable to link Tx adapter port to Tx queue:err = %d",
+ ret);
+
+.. _l2_fwd_event_app_adapter_init:
+
+Rx/Tx adapter Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx adapter.
+Each Ethernet port's Rx queues are connected to its respective event queue at
+priority 0 via the Rx adapter configuration, and each Ethernet port's Tx queues
+are connected via the Tx adapter.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_rx_adapter_create(i, event_d_id,
+ &event_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create rx adapter[%d]", i);
+
+ /* Configure user requested sync mode */
+ eth_q_conf.ev.queue_id = eventdev_rsrc.evq.event_q_id[i];
+ eth_q_conf.ev.sched_type = eventq_sync_mode;
+ ret = rte_event_eth_rx_adapter_queue_add(i, i, -1, ð_q_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Failed to add queues to Rx adapter");
+
+ ret = rte_event_eth_rx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Rx adapter[%d] start failed", i);
+
+ eventdev_rsrc.rx_adptr.rx_adptr[i] = i;
+ }
+
+ for (i = 0; i < ethdev_count; i++) {
+ ret = rte_event_eth_tx_adapter_create(i, event_d_id,
+ &event_p_conf);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to create tx adapter[%d]", i);
+
+ ret = rte_event_eth_tx_adapter_queue_add(i, i, -1);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "failed to add queues to Tx adapter");
+
+ ret = rte_event_eth_tx_adapter_start(i);
+ if (ret)
+ rte_exit(EXIT_FAILURE,
+ "Tx adapter[%d] start failed", i);
+
+ eventdev_rsrc.tx_adptr.tx_adptr[i] = i;
+ }
+
+For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
+configured and shared among all the Ethernet ports. Also, the DPDK library needs
+service cores to run the internal services for the Rx/Tx adapters. The
+application gets the service ids for the Rx/Tx adapters and, after successful
+setup, runs the services on dedicated service cores.
+
+.. code-block:: c
+
+ /* retrieving service Id for Rx adapter */
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0) {
+ rte_exit(EXIT_FAILURE,
+ "Error getting the service ID for rx adptr\n");
+ }
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.rx_adptr.service_id = service_id;
+
+ /* Start eventdev Rx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc.rx_adptr.service_id,
+ lcore_id[1], 1);
+ rte_service_lcore_start(lcore_id[1]);
+
+ /* retrieving service Id for Tx adapter */
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_exit(EXIT_FAILURE, "Failed to get Tx adapter service ID");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ eventdev_rsrc.tx_adptr.service_id = service_id;
+
+ /* Start eventdev Tx adapter service */
+ rte_service_map_lcore_set(eventdev_rsrc.tx_adptr.service_id,
+ lcore_id[2], 1);
+ rte_service_lcore_start(lcore_id[2]);
+
+.. _l2_fwd_event_app_rx_tx_packets:
+
+Receive, Process and Transmit Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
+the RX queues. This is done using the following code:
+
+.. code-block:: c
+
+ /*
+ * Read packet from RX queues
+ */
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
+ MAX_PKT_BURST);
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+
+Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
+function writes the mbuf pointers in a local table and returns the number of
+available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_simple_forward()
+function. The processing is very simple: determine the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC addresses updating
+is enabled.
+
+.. note::
+
+ In the following code, one line for getting the output port requires some
+ explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd then the packet from the last port
+will be forwarded to the first port, i.e. if portmask=0x07, then forwarding will
+take place like p0--->p1, p1--->p2, p2--->p0.
+
+Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers incoming
+mbufs up to MAX_PKT_BURST. Once the limit is reached, all packets are transmitted
+to the destination ports.
+
+.. code-block:: c
+
+ static void
+ l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+ {
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+ }
+
+For this test application, the processing is exactly the same for all packets
+arriving on the same RX port. Therefore, it would have been possible to call
+the rte_eth_tx_buffer() function directly from the main loop to send all the
+received packets on the same TX port, using the burst-oriented send function,
+which is more efficient.
+
+However, in real-life applications (such as, L3 routing),
+packet N is not necessarily forwarded on the same port as packet N-1.
+The application is implemented to illustrate that, so the same approach can be
+reused in a more complex application.
+
+To ensure that no packets remain in the tables, each lcore does a draining of TX
+queue in its main loop. This technique introduces some latency when there are
+not many packets to send, however it improves performance:
+
+.. code-block:: c
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id == rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+In the **l2fwd_main_loop_eventdev()** function, the main task is to read ingress
+packets from the event ports. This is done using the following code:
+
+.. code-block:: c
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
+ events, deq_len, 0);
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf[i] = events[i].mbuf;
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
+ }
+
+
+Before reading packets, deq_len is fetched so that the dequeue burst size
+respects the maximum dequeue depth allowed by the eventdev port (a sketch of
+this is shown below).
+The rte_event_dequeue_burst() function writes the mbuf pointers in a local table
+and returns the number of available mbufs in the table.
+
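+A sketch of how deq_len might be derived from the event port attributes is
+given below; capping it at MAX_PKT_BURST is an assumption for illustration:
+
+.. code-block:: c
+
+    uint32_t deq_len = MAX_PKT_BURST;
+    uint32_t attr = 0;
+
+    /* Cap the burst size at the dequeue depth of the event port. */
+    if (!rte_event_port_attr_get(event_d_id, event_p_id,
+                                 RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &attr))
+        deq_len = RTE_MIN(deq_len, attr);
+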
+Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
+function. The processing is very simple: determine the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC addresses updating
+is enabled.
+
+.. note::
+
+ In the following code, one line for getting the output port requires some
+ explanation.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd then the packet from the last port
+will be forwarded to the first port, i.e. if portmask=0x07, then forwarding will
+take place like p0--->p1, p1--->p2, p2--->p0.
+
+l2fwd_eventdev_forward() does not buffer incoming mbufs. Packets are forwarded
+to the destination ports via the Tx adapter or the generic eventdev enqueue API,
+depending on whether a H/W or S/W scheduler is used.
+
+.. code-block:: c
+
+ static inline void
+ l2fwd_eventdev_forward(struct rte_mbuf *m[], uint32_t portid,
+ uint16_t nb_rx, uint16_t event_p_id)
+ {
+ uint32_t dst_port, i;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ for (i = 0; i < nb_rx; i++) {
+ if (mac_updating)
+ l2fwd_mac_updating(m[i], dst_port);
+
+ m[i]->port = dst_port;
+ }
+
+ if (timer_period > 0) {
+ rte_spinlock_lock(&port_stats_lock);
+ port_statistics[dst_port].tx += nb_rx;
+ rte_spinlock_unlock(&port_stats_lock);
+ }
+ /* Registered callback is invoked for Tx */
+ eventdev_rsrc.send_burst_eventdev(m, nb_rx, event_p_id);
+ }
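+
+The send_burst_eventdev callback invoked above is registered according to the
+Tx adapter capability. A rough sketch of the generic variant (no internal Tx
+port), which forwards events to the single link Tx queue serviced by the Tx
+adapter, is given below; event_d_id, tx_q_id and MAX_PKT_BURST are assumed
+globals and the sketch is not the exact application code:
+
+.. code-block:: c
+
+    static void
+    l2fwd_send_burst_generic(struct rte_mbuf *m[], uint16_t nb_rx,
+                             uint16_t event_p_id)
+    {
+        struct rte_event events[MAX_PKT_BURST];
+        struct rte_event ev = {
+            .op = RTE_EVENT_OP_FORWARD,
+            .sched_type = RTE_SCHED_TYPE_ATOMIC,
+            .event_type = RTE_EVENT_TYPE_CPU,
+            .queue_id = tx_q_id,
+        };
+        uint16_t nb_tx, i;
+
+        for (i = 0; i < nb_rx; i++) {
+            /* Tx adapter transmits on Tx queue 0 of the mbuf's port. */
+            rte_event_eth_tx_adapter_txq_set(m[i], 0);
+            ev.mbuf = m[i];
+            events[i] = ev;
+        }
+
+        /* Retry until the whole burst reaches the Tx event queue. */
+        nb_tx = rte_event_enqueue_burst(event_d_id, event_p_id,
+                                        events, nb_rx);
+        while (nb_tx < nb_rx)
+            nb_tx += rte_event_enqueue_burst(event_d_id, event_p_id,
+                                             events + nb_tx, nb_rx - nb_tx);
+    }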
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (9 preceding siblings ...)
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-10-03 10:33 ` Jerin Jacob
2019-10-03 12:40 ` Hemant Agrawal
2019-10-09 7:50 ` Nipun Gupta
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
12 siblings, 1 reply; 107+ messages in thread
From: Jerin Jacob @ 2019-10-03 10:33 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, Akhil Goyal, dpdk-dev,
Van Haaren, Harry, mattias.ronnblom, liang.j.ma, Gujjar,
Abhinandan S, Rao, Nikhil, Hemant Agrawal, erik.g.carrillo
On Thu, Oct 3, 2019 at 2:28 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds a new application to demonstrate the usage of event
> mode. The poll mode is also available to help with the transition.
>
> The following new command line parameters are added:
> --mode: Dictates the mode of operation either poll or event.
> --eventq_sync: Dictates event synchronization mode either atomic or
> ordered.
>
> Based on event device capability the configuration is done as follows:
> - A single event device is enabled.
> - The number of event ports is equal to the number of worker
> cores enabled in the core mask. Additional event ports might
> be configured based on Rx/Tx adapter capability.
> - The number of event queues is equal to the number of ethernet
> ports. If Tx adapter doesn't have internal port capability then
> an additional single link event queue is used to enqueue events
> to Tx adapter.
> - Each event port is linked to all existing event queues.
> - Dedicated Rx/Tx adapters for each Ethernet port.
>
> v5 Changes:
> - Redo poll mode datapath by removing all the static globals.
> - Fix event queue configuration when required queues are not available.
> - Fix Rx/Tx adapter creation based on portmask.
> - Update release notes.
> - Unroll macro used to generate event mode functions.
Adding all eventdev maintainers.
I have some minor comments on Documentation. Other than that, The
series looks good to me in general.
Is anyone else planning to review this code? If yes, we will wait and
merge this patch after RC1.
If no, then we can merge it in RC1 if there is no objection.
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-03 10:33 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example Jerin Jacob
@ 2019-10-03 12:40 ` Hemant Agrawal
2019-10-03 12:47 ` Jerin Jacob
0 siblings, 1 reply; 107+ messages in thread
From: Hemant Agrawal @ 2019-10-03 12:40 UTC (permalink / raw)
To: Jerin Jacob, Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, Akhil Goyal, dpdk-dev,
Van Haaren, Harry, mattias.ronnblom, liang.j.ma, Gujjar,
Abhinandan S, Rao, Nikhil, erik.g.carrillo
Hi Jerin,
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, October 3, 2019 4:04 PM
> To: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: Jerin Jacob <jerinj@marvell.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dpdk-
> dev <dev@dpdk.org>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> mattias.ronnblom@ericsson.com; liang.j.ma@intel.com; Gujjar, Abhinandan
> S <abhinandan.gujjar@intel.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; erik.g.carrillo@intel.com
> Subject: Re: [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce
> l2fwd-event example
> Importance: High
>
> On Thu, Oct 3, 2019 at 2:28 AM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > This patchset adds a new application to demonstrate the usage of event
> > mode. The poll mode is also available to help with the transition.
> >
> > The following new command line parameters are added:
> > --mode: Dictates the mode of operation either poll or event.
> > --eventq_sync: Dictates event synchronization mode either atomic or
> > ordered.
> >
> > Based on event device capability the configuration is done as follows:
> > - A single event device is enabled.
> > - The number of event ports is equal to the number of worker
> > cores enabled in the core mask. Additional event ports might
> > be configured based on Rx/Tx adapter capability.
> > - The number of event queues is equal to the number of ethernet
> > ports. If Tx adapter doesn't have internal port capability then
> > an additional single link event queue is used to enqueue events
> > to Tx adapter.
> > - Each event port is linked to all existing event queues.
> > - Dedicated Rx/Tx adapters for each Ethernet port.
> >
> > v5 Changes:
> > - Redo poll mode datapath by removing all the static globals.
> > - Fix event queue configuration when required queues are not available.
> > - Fix Rx/Tx adapter creation based on portmask.
> > - Update release notes.
> > - Unroll macro used to generate event mode functions.
>
>
>
>
> Adding all eventdev maintainers.
>
> I have some minor comments on Documentation. Other than that, The series
> looks good to me in general.
> Anyone else planning to review this code. If yes, We will wait for merging this
> patch after RC1.
> If no, then we can merge in RC1 if no objection.
[Hemant]
On a high level this series looks good to us. However currently we are in the process of testing it.
Will you please wait for our ack? Currently we are trying to complete it before RC1.
Regards,
Hemant
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-03 12:40 ` Hemant Agrawal
@ 2019-10-03 12:47 ` Jerin Jacob
0 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-10-03 12:47 UTC (permalink / raw)
To: Hemant Agrawal
Cc: Pavan Nikhilesh, Jerin Jacob, Richardson, Bruce, Akhil Goyal,
dpdk-dev, Van Haaren, Harry, mattias.ronnblom, liang.j.ma,
Gujjar, Abhinandan S, Rao, Nikhil, erik.g.carrillo
On Thu, Oct 3, 2019 at 6:10 PM Hemant Agrawal <hemant.agrawal@nxp.com> wrote:
>
> Hi Jerin,
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, October 3, 2019 4:04 PM
> > To: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > Cc: Jerin Jacob <jerinj@marvell.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; dpdk-
> > dev <dev@dpdk.org>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> > mattias.ronnblom@ericsson.com; liang.j.ma@intel.com; Gujjar, Abhinandan
> > S <abhinandan.gujjar@intel.com>; Rao, Nikhil <nikhil.rao@intel.com>;
> > Hemant Agrawal <hemant.agrawal@nxp.com>; erik.g.carrillo@intel.com
> > Subject: Re: [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce
> > l2fwd-event example
> > Importance: High
> >
> > On Thu, Oct 3, 2019 at 2:28 AM <pbhagavatula@marvell.com> wrote:
> > >
> > > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> > >
> > > This patchset adds a new application to demonstrate the usage of event
> > > mode. The poll mode is also available to help with the transition.
> > >
> > > The following new command line parameters are added:
> > > --mode: Dictates the mode of operation either poll or event.
> > > --eventq_sync: Dictates event synchronization mode either atomic or
> > > ordered.
> > >
> > > Based on event device capability the configuration is done as follows:
> > > - A single event device is enabled.
> > > - The number of event ports is equal to the number of worker
> > > cores enabled in the core mask. Additional event ports might
> > > be configured based on Rx/Tx adapter capability.
> > > - The number of event queues is equal to the number of ethernet
> > > ports. If Tx adapter doesn't have internal port capability then
> > > an additional single link event queue is used to enqueue events
> > > to Tx adapter.
> > > - Each event port is linked to all existing event queues.
> > > - Dedicated Rx/Tx adapters for each Ethernet port.
> > >
> > > v5 Changes:
> > > - Redo poll mode datapath by removing all the static globals.
> > > - Fix event queue configuration when required queues are not available.
> > > - Fix Rx/Tx adapter creation based on portmask.
> > > - Update release notes.
> > > - Unroll macro used to generate event mode functions.
> >
> >
> >
> >
> > Adding all eventdev maintainers.
> >
> > I have some minor comments on Documentation. Other than that, The series
> > looks good to me in general.
> > Anyone else planning to review this code. If yes, We will wait for merging this
> > patch after RC1.
> > If no, then we can merge in RC1 if no objection.
> [Hemant]
> On a high level this series looks good to us. However currently we are in the process of testing it.
> Will you please wait for our ack? Currently we are trying to completed it before RC1
OK. Will wait for NXP's Ack.
>
> Regards,
> Hemant
>
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 02/10] examples/l2fwd-event: add infra for eventdev
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-10-04 12:30 ` Nipun Gupta
0 siblings, 0 replies; 107+ messages in thread
From: Nipun Gupta @ 2019-10-04 12:30 UTC (permalink / raw)
To: pbhagavatula, jerinj, bruce.richardson, Akhil Goyal,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori
Cc: dev
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Thursday, October 3, 2019 2:28 AM
> To: jerinj@marvell.com; bruce.richardson@intel.com; Akhil Goyal
> <akhil.goyal@nxp.com>; Marko Kovacevic <marko.kovacevic@intel.com>;
> Ori Kam <orika@mellanox.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Tomasz Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> <skori@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v5 02/10] examples/l2fwd-event: add infra for
> eventdev
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add infra to select event device as a mode to process packets through
> command line arguments. Also, allow the user to select the schedule type
> to be either RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC.
>
> Usage:
>
> `--mode="eventdev"` or `--mode="poll"`
> `--eventq-sched="ordered"` or `--eventq-sched="atomic"`
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> examples/l2fwd-event/Makefile | 1 +
> examples/l2fwd-event/l2fwd_common.h | 3 ++
> examples/l2fwd-event/l2fwd_event.c | 34 ++++++++++++++++++++
> examples/l2fwd-event/l2fwd_event.h | 21 ++++++++++++
> examples/l2fwd-event/main.c | 50 +++++++++++++++++++++++++++--
> examples/l2fwd-event/meson.build | 1 +
> 6 files changed, 108 insertions(+), 2 deletions(-)
> create mode 100644 examples/l2fwd-event/l2fwd_event.c
> create mode 100644 examples/l2fwd-event/l2fwd_event.h
>
<snip>
> index 887a979d5..01b1d531d 100644
> --- a/examples/l2fwd-event/main.c
> +++ b/examples/l2fwd-event/main.c
> @@ -2,6 +2,7 @@
> * Copyright(C) 2019 Marvell International Ltd.
> */
>
> +#include "l2fwd_event.h"
> #include "l2fwd_poll.h"
>
> /* display usage */
> @@ -16,7 +17,12 @@ l2fwd_event_usage(const char *prgname)
> " --[no-]mac-updating: Enable or disable MAC addresses
> updating (enabled by default)\n"
> " When enabled:\n"
> " - The source MAC address is replaced by the TX port MAC
> address\n"
> - " - The destination MAC address is replaced by
> 02:00:00:00:00:TX_PORT_ID\n",
> + " - The destination MAC address is replaced by
> 02:00:00:00:00:TX_PORT_ID\n"
> + " --mode: Packet transfer mode for I/O, poll or eventdev\n"
> + " Default mode = eventdev\n"
> + " --eventq-sched: Event queue schedule type, ordered or
> atomic.\n"
> + " Default: atomic\n"
> + " Valid only if --mode=eventdev\n\n",
> prgname);
Please also add parallel mode for completeness.
> }
>
> @@ -71,6 +77,26 @@ l2fwd_event_parse_timer_period(const char *q_arg)
> return n;
> }
>
> +static void
> +l2fwd_event_parse_mode(const char *optarg,
> + struct l2fwd_resources *l2fwd_rsrc)
> +{
> + if (!strncmp(optarg, "poll", 4))
> + l2fwd_rsrc->event_mode = false;
> + else if (!strncmp(optarg, "eventdev", 8))
> + l2fwd_rsrc->event_mode = true;
> +}
> +
> +static void
> +l2fwd_event_parse_eventq_sched(const char *optarg,
> + struct l2fwd_resources *l2fwd_rsrc)
> +{
> + if (!strncmp(optarg, "ordered", 7))
> + l2fwd_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
> + else if (!strncmp(optarg, "atomic", 6))
> + l2fwd_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
> +}
> +
> static const char short_options[] =
> "p:" /* portmask */
> "q:" /* number of queues */
> @@ -79,6 +105,8 @@ static const char short_options[] =
>
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (10 preceding siblings ...)
2019-10-03 10:33 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example Jerin Jacob
@ 2019-10-09 7:50 ` Nipun Gupta
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
12 siblings, 0 replies; 107+ messages in thread
From: Nipun Gupta @ 2019-10-09 7:50 UTC (permalink / raw)
To: pbhagavatula, jerinj, bruce.richardson, Akhil Goyal; +Cc: dev
With parallel mode support and the series based on top of:
https://patchwork.dpdk.org/patch/60762/
Series
Acked-by: Nipun Gupta <nipun.gupta@nxp.com>
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Thursday, October 3, 2019 2:28 AM
> To: jerinj@marvell.com; bruce.richardson@intel.com; Akhil Goyal
> <akhil.goyal@nxp.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Subject: [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce
> l2fwd-event example
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds a new application to demonstrate the usage of event
> mode. The poll mode is also available to help with the transition.
>
> The following new command line parameters are added:
> --mode: Dictates the mode of operation either poll or event.
> --eventq_sync: Dictates event synchronization mode either atomic or
> ordered.
>
> Based on event device capability the configuration is done as follows:
> - A single event device is enabled.
> - The number of event ports is equal to the number of worker
> cores enabled in the core mask. Additional event ports might
> be configured based on Rx/Tx adapter capability.
> - The number of event queues is equal to the number of ethernet
> ports. If Tx adapter doesn't have internal port capability then
> an additional single link event queue is used to enqueue events
> to Tx adapter.
> - Each event port is linked to all existing event queues.
> - Dedicated Rx/Tx adapters for each Ethernet port.
>
> v5 Changes:
> - Redo poll mode datapath by removing all the static globals.
> - Fix event queue configuration when required queues are not available.
> - Fix Rx/Tx adapter creation based on portmask.
> - Update release notes.
> - Unroll macro used to generate event mode functions.
>
> v4 Changes:
> - Fix missing eventdev args parsing.
>
> v3 Changes:
> - Remove unwanted change to example/l2fwd.
> - Fix checkpatch issue
> https://eur01.safelinks.protection.outlook.com/?url=http%3A%2F%2
> Fmails.dpdk.org%2Farchives%2Ftest-report%2F2019-
> September%2F098053.html&data=02%7C01%7Cnipun.gupta%40nxp.co
> m%7C059055e10dcd4ef59a8808d7477b3894%7C686ea1d3bc2b4c6fa92cd99c5
> c301635%7C0%7C0%7C637056466870802064&sdata=q6qdS3IELoe9ry3mf
> TVjtZgRLxcYIWW%2FnYaeacHpixM%3D&reserved=0
>
> v2 Changes:
> - Remove global variables.
> - Split patches to make reviews friendlier.
> - Split datapath based on eventdev capability.
>
> Pavan Nikhilesh (5):
> examples/l2fwd-event: add default poll mode routines
> examples/l2fwd-event: add infra for eventdev
> examples/l2fwd-event: add service core setup
> examples/l2fwd-event: add eventdev main loop
> examples/l2fwd-event: add graceful teardown
>
> Sunil Kumar Kori (5):
> examples/l2fwd-event: add infra to split eventdev framework
> examples/l2fwd-event: add event device setup
> examples/l2fwd-event: add eventdev queue and port setup
> examples/l2fwd-event: add event Rx/Tx adapter setup
> doc: add application usage guide for l2fwd-event
>
> MAINTAINERS | 6 +
> doc/guides/rel_notes/release_19_11.rst | 6 +
> doc/guides/sample_app_ug/index.rst | 1 +
> doc/guides/sample_app_ug/intro.rst | 5 +
> doc/guides/sample_app_ug/l2_forward_event.rst | 755
> ++++++++++++++++++
> examples/Makefile | 1 +
> examples/l2fwd-event/Makefile | 62 ++
> examples/l2fwd-event/l2fwd_common.c | 148 ++++
> examples/l2fwd-event/l2fwd_common.h | 134 ++++
> examples/l2fwd-event/l2fwd_event.c | 455 +++++++++++
> examples/l2fwd-event/l2fwd_event.h | 73 ++
> examples/l2fwd-event/l2fwd_event_generic.c | 331 ++++++++
> .../l2fwd-event/l2fwd_event_internal_port.c | 306 +++++++
> examples/l2fwd-event/l2fwd_poll.c | 197 +++++
> examples/l2fwd-event/l2fwd_poll.h | 25 +
> examples/l2fwd-event/main.c | 456 +++++++++++
> examples/l2fwd-event/meson.build | 17 +
> 17 files changed, 2978 insertions(+)
> create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
> create mode 100644 examples/l2fwd-event/Makefile
> create mode 100644 examples/l2fwd-event/l2fwd_common.c
> create mode 100644 examples/l2fwd-event/l2fwd_common.h
> create mode 100644 examples/l2fwd-event/l2fwd_event.c
> create mode 100644 examples/l2fwd-event/l2fwd_event.h
> create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
> create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
> create mode 100644 examples/l2fwd-event/l2fwd_poll.c
> create mode 100644 examples/l2fwd-event/l2fwd_poll.h
> create mode 100644 examples/l2fwd-event/main.c
> create mode 100644 examples/l2fwd-event/meson.build
>
> --
> 2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 10/10] doc: add application usage guide for l2fwd-event
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-10-11 14:11 ` Jerin Jacob
0 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-10-11 14:11 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, Akhil Goyal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Tomasz Kantecki, Sunil Kumar Kori, dpdk-dev
On Thu, Oct 3, 2019 at 2:29 AM <pbhagavatula@marvell.com> wrote:
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add documentation for l2fwd-event example.
> Update release notes.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
# Please fix the typos with the following command
aspell --lang=en_US --check doc/guides/sample_app_ug/l2_forward_event.rst
# Remove the following warning.
$ make doc-guides-html
doc/guides/sample_app_ug/l2_forward_real_virtual.rst:39: WARNING:
duplicate label figure_l2_fwd_benchmark_setup,
> ---
> MAINTAINERS | 1 +
> doc/guides/rel_notes/release_19_11.rst | 6 +
> doc/guides/sample_app_ug/index.rst | 1 +
> doc/guides/sample_app_ug/intro.rst | 5 +
> doc/guides/sample_app_ug/l2_forward_event.rst | 755 ++++++++++++++++++
> 5 files changed, 768 insertions(+)
> create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 292ac10c3..94a49b812 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1461,6 +1461,7 @@ F: examples/l2fwd-cat/
> M: Sunil Kumar Kori <skori@marvell.com>
> M: Pavan Nikhilesh <pbhagavatula@marvell.com>
> F: examples/l2fwd-event/
> +F: doc/guides/sample_app_ug/l2_forward_event.rst
> T: git://dpdk.org/next/dpdk-next-eventdev
>
> F: examples/l3fwd/
> diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
> index 27cfbd9e3..071593e4d 100644
> --- a/doc/guides/rel_notes/release_19_11.rst
> +++ b/doc/guides/rel_notes/release_19_11.rst
> @@ -56,6 +56,12 @@ New Features
> Also, make sure to start the actual text at the margin.
> =========================================================
>
> +* **Added new example l2fwd-event**
Added new example l2fwd-event application
> +
> + Added an example application `l2fwd-event` that adds event device support to
> + traditional l2fwd example. The default poll mode is also preserved for
> + readability.
Please change the last sentence to the following or something similar
"It demonstrates usage of poll and event mode IO mechanism under a
single application"
> +
>
> Removed Items
> -------------
> diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
> index f23f8f59e..41388231a 100644
> --- a/doc/guides/sample_app_ug/index.rst
> +++ b/doc/guides/sample_app_ug/index.rst
> @@ -26,6 +26,7 @@ Sample Applications User Guides
> l2_forward_crypto
> l2_forward_job_stats
> l2_forward_real_virtual
> + l2_forward_event
> l2_forward_cat
> l3_forward
> l3_forward_power_man
> diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
> index 90704194a..84591c0a1 100644
> --- a/doc/guides/sample_app_ug/intro.rst
> +++ b/doc/guides/sample_app_ug/intro.rst
> @@ -87,6 +87,11 @@ examples are highlighted below.
> forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
> addresses like a simple switch.
>
> +* :doc:`Network Layer 2 forwarding<l2_forward_event>`: The Network Layer 2
> + forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
> + addresses like a simple switch. It demonstrate usage of poll and event mode Rx/Tx
> + mechanism.
Please change the last sentence to the following or something similar
"It demonstrates usage of poll and event mode IO mechanism under a
single application"
> +
> * :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
> forwarding, or ``l3fwd`` application does forwarding based on Internet
> Protocol, IPv4 or IPv6 like a simple router.
> diff --git a/doc/guides/sample_app_ug/l2_forward_event.rst b/doc/guides/sample_app_ug/l2_forward_event.rst
> new file mode 100644
> index 000000000..250d16887
> --- /dev/null
> +++ b/doc/guides/sample_app_ug/l2_forward_event.rst
> @@ -0,0 +1,755 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2010-2014 Intel Corporation.
> +
> +.. _l2_fwd_event_app:
> +
> +L2 Forwarding Eventdev Sample Application
> +=========================================
> +
> +The L2 Forwarding eventdev sample application is a simple example of packet
> +processing using the Data Plane Development Kit (DPDK) to demonstrate usage of
> +poll and event mode packet I/O mechanism.
> +
> +Overview
> +--------
> +
> +The L2 Forwarding eventdev sample application, performs L2 forwarding for each
> +packet that is received on an RX_PORT. The destination port is the adjacent port
> +from the enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
> +ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other.
> +Also, if MAC addresses updating is enabled, the MAC addresses are affected as follows:
> +
> +* The source MAC address is replaced by the TX_PORT MAC address
> +
> +* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
> +
> +Appliation receives packets from RX_PORT using below mentioned methods:
> +
> +* Poll mode
> +
> +* Eventdev mode (default)
> +
> +This application can be used to benchmark performance using a traffic-generator,
> +as shown in the :numref:`figure_l2_fwd_benchmark_setup`.
> +
> +.. _figure_l2_fwd_benchmark_setup:
> +
> +.. figure:: img/l2_fwd_benchmark_setup.*
Looks like there is a typo in the original l2fwd application image.
The diagram shows NUT instead of DUT. Please send a separate patch.
> +
> + Performance Benchmark Setup (Basic Environment)
> +
> +Compiling the Application
> +-------------------------
> +
> +To compile the sample application see :doc:`compiling`.
> +
> +The application is located in the ``l2fwd-event`` sub-directory.
> +
> +Running the Application
> +-----------------------
> +
> +The application requires a number of command line options:
> +
> +.. code-block:: console
> +
> + ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sync=SYNC_MODE
> +
> +where,
> +
> +* p PORTMASK: A hexadecimal bitmask of the ports to configure
> +
> +* q NQ: A number of queues (=ports) per lcore (default is 1)
> +
> +* --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default).
> +
> +* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.
> +
> +* --eventq-sync=SYNC_MODE: Event queue synchronization method, Ordered or Atomic. Atomic by default.
> +
> +Sample usage commands are given below to run the application into different mode:
> +
> +Poll mode on linux environment with 4 lcores, 16 ports and 8 RX queues per lcore
Nothing specific to Linux here, so please remove Linux environment
> +Driver Initialization
> +~~~~~~~~~~~~~~~~~~~~~
> +
> +The main part of the code in the main() function relates to the initialization
> +of the driver. To fully understand this code, it is recommended to study the
> +chapters that related to the Poll Mode and Event mode Driver in the
> +*DPDK Programmer's Guide* - Rel 1.4 EAR and the *DPDK API Reference*.
> +
> +.. code-block:: c
> +
> + if (rte_pci_probe() < 0)
> + rte_exit(EXIT_FAILURE, "Cannot probe PCI\n");
> +
> + /* reset l2fwd_dst_ports */
> +
> + for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
> + l2fwd_dst_ports[portid] = 0;
> +
> + last_port = 0;
> +
> + /*
> + * Each logical core is assigned a dedicated TX queue on each port.
> + */
> +
> + RTE_ETH_FOREACH_DEV(portid) {
> + /* skip ports that are not enabled */
> +
> + if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
> + continue;
> +
> + if (nb_ports_in_mask % 2) {
> + l2fwd_dst_ports[portid] = last_port;
> + l2fwd_dst_ports[last_port] = portid;
> + }
> + else
> + last_port = portid;
> +
> + nb_ports_in_mask++;
> +
> + rte_eth_dev_info_get((uint8_t) portid, &dev_info);
> + }
> +
> +Observe that:
> +
> +* rte_igb_pmd_init_all() simultaneously registers the driver as a PCI driver
Looks like rte_igb_pmd_init_all() is left over from the legacy l2fwd doc. Please fix it in
both docs.
The original l2fwd doc fix should be sent as a separate patch.
> +.. code-block:: c
> +
> + /* Start event device service */
> + ret = rte_event_dev_service_id_get(eventdev_rsrc.event_d_id,
> + &service_id);
> + if (ret != -ESRCH && ret != 0)
> + rte_exit(EXIT_FAILURE, "Error in starting eventdev");
> +
> + rte_service_runstate_set(service_id, 1);
> + rte_service_set_runstate_mapped_check(service_id, 0);
> + eventdev_rsrc.service_id = service_id;
eventdev_rsrc was removed recently in code review. Please sync the code
in all the code sections.
> +.. note::
> +
> + In the following code, one line for getting the output port requires some
> + explanation.
Do we need this note?
> +
> +During the initialization process, a static array of destination ports
> +(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
> +is assigned that is either the next or previous enabled port from the portmask.
> +If number of ports are odd in portmask then packet from last port will be
> +forwarded to first port i.e. if portmask=0x07, then forwarding will take place
> +like p0--->p1, p1--->p2, p2--->p0.
> +
> +Also to optimize enqueue opeartion, l2fwd_simple_forward() stores incoming mbus
> +upto MAX_PKT_BURST. Once it reaches upto limit, all packets are transmitted to
> +destination ports.
> +
> +.. code-block:: c
> +
> + static void
> + l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
> + {
> + uint32_t dst_port;
> + int32_t sent;
> + struct rte_eth_dev_tx_buffer *buffer;
> +
> + dst_port = l2fwd_dst_ports[portid];
> +
> + if (mac_updating)
> + l2fwd_mac_updating(m, dst_port);
> +
> + buffer = tx_buffer[dst_port];
> + sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
> + if (sent)
> + port_statistics[dst_port].tx += sent;
> + }
> +
> +For this test application, the processing is exactly the same for all packets
> +arriving on the same RX port. Therefore, it would have been possible to call
> +the rte_eth_tx_buffer() function directly from the main loop to send all the
> +received packets on the same TX port, using the burst-oriented send function,
> +which is more efficient.
> +
> +However, in real-life applications (such as, L3 routing),
> +packet N is not necessarily forwarded on the same port as packet N-1.
> +The application is implemented to illustrate that, so the same approach can be
> +reused in a more complex application.
> +
> +To ensure that no packets remain in the tables, each lcore does a draining of TX
> +queue in its main loop. This technique introduces some latency when there are
> +not many packets to send, however it improves performance:
> +
> +.. code-block:: c
> +
> + cur_tsc = rte_rdtsc();
> +
> + /*
> + * TX burst queue drain
> + */
> + diff_tsc = cur_tsc - prev_tsc;
> + if (unlikely(diff_tsc > drain_tsc)) {
> + for (i = 0; i < qconf->n_rx_port; i++) {
> + portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
> + buffer = tx_buffer[portid];
> + sent = rte_eth_tx_buffer_flush(portid, 0,
> + buffer);
> + if (sent)
> + port_statistics[portid].tx += sent;
> + }
> +
> + /* if timer is enabled */
> + if (timer_period > 0) {
> + /* advance the timer */
> + timer_tsc += diff_tsc;
> +
> + /* if timer has reached its timeout */
> + if (unlikely(timer_tsc >= timer_period)) {
> + /* do this only on master core */
> + if (lcore_id == rte_get_master_lcore()) {
> + print_stats();
> + /* reset the timer */
> + timer_tsc = 0;
> + }
> + }
> + }
> +
> + prev_tsc = cur_tsc;
> + }
> +
> +In the **l2fwd_main_loop_eventdev()** function, the main task is to read ingress
> +packets from the event ports. This is done using the following code:
> +
> +.. code-block:: c
> +
> + /* Read packet from eventdev */
> + nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
> + events, deq_len, 0);
> + if (nb_rx == 0) {
> + rte_pause();
> + continue;
> + }
> +
> + for (i = 0; i < nb_rx; i++) {
> + mbuf[i] = events[i].mbuf;
> + rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
> + }
> +
> +
> +Before reading packets, deq_len is fetched to ensure correct allowed deq length
> +by the eventdev.
> +The rte_event_dequeue_burst() function writes the mbuf pointers in a local table
> +and returns the number of available mbufs in the table.
> +
> +Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
> +function. The processing is very simple: process the TX port from the RX port,
> +then replace the source and destination MAC addresses if MAC addresses updating
> +is enabled.
> +
> +.. note::
> +
> + In the following code, one line for getting the output port requires some
> + explanation.
Do we need this note?
> +
> +During the initialization process, a static array of destination ports
> +(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
> +is assigned that is either the next or previous enabled port from the portmask.
> +If number of ports are odd in portmask then packet from last port will be
> +forwarded to first port i.e. if portmask=0x07, then forwarding will take place
> +like p0--->p1, p1--->p2, p2--->p0.
> +
> +l2fwd_eventdev_forward() does not stores incoming mbufs. Packet will forwarded
> +be to destination ports via Tx adapter or generic event dev enqueue API
> +depending H/W or S/W scheduler is used.
> +
> +.. code-block:: c
> +
> + static inline void
> + l2fwd_eventdev_forward(struct rte_mbuf *m[], uint32_t portid,
> + uint16_t nb_rx, uint16_t event_p_id)
> + {
> + uint32_t dst_port, i;
> +
> + dst_port = l2fwd_dst_ports[portid];
> +
> + for (i = 0; i < nb_rx; i++) {
> + if (mac_updating)
> + l2fwd_mac_updating(m[i], dst_port);
> +
> + m[i]->port = dst_port;
> + }
> +
> + if (timer_period > 0) {
> + rte_spinlock_lock(&port_stats_lock);
> + port_statistics[dst_port].tx += nb_rx;
> + rte_spinlock_unlock(&port_stats_lock);
> + }
> + /* Registered callback is invoked for Tx */
> + eventdev_rsrc.send_burst_eventdev(m, nb_rx, event_p_id);
> + }
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 01/10] examples/l2fwd-event: add default poll mode routines
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-10-11 14:41 ` Jerin Jacob
0 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-10-11 14:41 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, Akhil Goyal, Thomas Monjalon,
Marko Kovacevic, Ori Kam, Radu Nicolau, Tomasz Kantecki,
Sunil Kumar Kori, dpdk-dev
On Thu, Oct 3, 2019 at 2:28 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add the default l2fwd poll mode routines similar to examples/l2fwd.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> MAINTAINERS | 5 +
> examples/Makefile | 1 +
examples/meson.build is not updated
Please test the meson output.
> examples/l2fwd-event/Makefile | 59 +++++
> examples/l2fwd-event/l2fwd_common.c | 142 +++++++++++
> examples/l2fwd-event/l2fwd_common.h | 129 ++++++++++
> examples/l2fwd-event/l2fwd_poll.c | 197 +++++++++++++++
> examples/l2fwd-event/l2fwd_poll.h | 25 ++
> examples/l2fwd-event/main.c | 374 ++++++++++++++++++++++++++++
> examples/l2fwd-event/meson.build | 14 ++
> 9 files changed, 946 insertions(+)
> create mode 100644 examples/l2fwd-event/Makefile
> create mode 100644 examples/l2fwd-event/l2fwd_common.c
> create mode 100644 examples/l2fwd-event/l2fwd_common.h
> create mode 100644 examples/l2fwd-event/l2fwd_poll.c
> create mode 100644 examples/l2fwd-event/l2fwd_poll.h
> create mode 100644 examples/l2fwd-event/main.c
> create mode 100644 examples/l2fwd-event/meson.build
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v5 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-10-11 14:52 ` Jerin Jacob
0 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-10-11 14:52 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, Akhil Goyal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Tomasz Kantecki, Sunil Kumar Kori,
dpdk-dev
On Thu, Oct 3, 2019 at 2:29 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add event dev main loop based on enabled l2fwd options and eventdev
> capabilities.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> examples/l2fwd-event/l2fwd_common.c | 6 +
> examples/l2fwd-event/l2fwd_common.h | 2 +
> examples/l2fwd-event/l2fwd_event.c | 294 ++++++++++++++++++++++++++++
> examples/l2fwd-event/l2fwd_event.h | 2 +
> examples/l2fwd-event/main.c | 6 +-
> 5 files changed, 309 insertions(+), 1 deletion(-)
>
> diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
> index 213652d72..40e933c91 100644
> --- a/examples/l2fwd-event/l2fwd_common.c
> +++ b/examples/l2fwd-event/l2fwd_common.c
> @@ -65,6 +65,12 @@ l2fwd_event_init_ports(struct l2fwd_resources *l2fwd_rsrc)
> uint16_t port_id;
> int ret;
>
> + if (l2fwd_rsrc->event_mode) {
> + port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
> + port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
> + port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
> + }
> +
> /* Initialise each port */
> RTE_ETH_FOREACH_DEV(port_id) {
> struct rte_eth_conf local_port_conf = port_conf;
> diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
> index cdafa52c7..852c6d321 100644
> --- a/examples/l2fwd-event/l2fwd_common.h
> +++ b/examples/l2fwd-event/l2fwd_common.h
> @@ -114,7 +114,9 @@ l2fwd_get_rsrc(void)
>
> memset(l2fwd_rsrc, 0, sizeof(struct l2fwd_resources));
> l2fwd_rsrc->mac_updating = true;
> + l2fwd_rsrc->event_mode = true;
> l2fwd_rsrc->rx_queue_per_lcore = 1;
> + l2fwd_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
> l2fwd_rsrc->timer_period = 10 * rte_get_timer_hz();
>
> return mz->addr;
> diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
> index adba40069..df0b56773 100644
> --- a/examples/l2fwd-event/l2fwd_event.c
> +++ b/examples/l2fwd-event/l2fwd_event.c
> @@ -17,6 +17,12 @@
>
> #include "l2fwd_event.h"
>
> +#define L2FWD_EVENT_SINGLE 0x1
> +#define L2FWD_EVENT_BURST 0x2
> +#define L2FWD_EVENT_TX_DIRECT 0x4
> +#define L2FWD_EVENT_TX_ENQ 0x8
> +#define L2FWD_EVENT_UPDT_MAC 0x10
> +
> static inline int
> l2fwd_event_service_enable(uint32_t service_id)
> {
> @@ -128,11 +134,289 @@ l2fwd_event_capability_setup(struct l2fwd_event_resources *event_rsrc)
> l2fwd_event_set_internal_port_ops(&event_rsrc->ops);
> }
>
> +static __rte_noinline int
> +l2fwd_get_free_event_port(struct l2fwd_event_resources *event_rsrc)
> +{
> + static int index;
> + int port_id;
> +
> + rte_spinlock_lock(&event_rsrc->evp.lock);
> + if (index >= event_rsrc->evp.nb_ports) {
> + printf("No free event port is available\n");
> + rte_spinlock_unlock(&event_rsrc->evp.lock);
> + return -1;
> + }
> +
> + port_id = event_rsrc->evp.event_p_id[index];
> + index++;
> + rte_spinlock_unlock(&event_rsrc->evp.lock);
> +
> + return port_id;
> +}
> +
> +static __rte_always_inline void
> +l2fwd_event_loop_single(struct l2fwd_resources *l2fwd_rsrc,
> + const uint32_t flags)
> +{
> + const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
> + struct l2fwd_event_resources *event_rsrc = l2fwd_rsrc->event_rsrc;
> + const int port_id = l2fwd_get_free_event_port(event_rsrc);
> + uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
> + const uint64_t timer_period = l2fwd_rsrc->timer_period;
> + const uint8_t tx_q_id = event_rsrc->evq.event_q_id[
> + event_rsrc->evq.nb_queues - 1];
See below
> + const uint8_t event_d_id = event_rsrc->event_d_id;
> + struct rte_mbuf *mbuf;
> + uint16_t dst_port;
> + struct rte_event ev;
> +
> + if (port_id < 0)
> + return;
> +
> + printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
> + rte_lcore_id());
> +
> + while (!l2fwd_rsrc->force_quit) {
> + /* if timer is enabled */
> + if (is_master && timer_period > 0) {
> + cur_tsc = rte_rdtsc();
> + diff_tsc = cur_tsc - prev_tsc;
> +
> + /* advance the timer */
> + timer_tsc += diff_tsc;
> +
> + /* if timer has reached its timeout */
> + if (unlikely(timer_tsc >= timer_period)) {
> + print_stats(l2fwd_rsrc);
> + /* reset the timer */
> + timer_tsc = 0;
> + }
> + prev_tsc = cur_tsc;
> + }
> +
> + /* Read packet from eventdev */
> + if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
> + continue;
> +
> +
> + mbuf = ev.mbuf;
> + dst_port = l2fwd_rsrc->dst_ports[mbuf->port];
> + rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
> +
> + if (timer_period > 0)
> + __atomic_fetch_add(
> + &l2fwd_rsrc->port_stats[mbuf->port].rx,
> + 1, __ATOMIC_RELAXED);
> +
> + mbuf->port = dst_port;
> + if (flags & L2FWD_EVENT_UPDT_MAC)
> + l2fwd_mac_updating(mbuf, dst_port,
> + &l2fwd_rsrc->eth_addr[dst_port]);
See below
> +
> + if (flags & L2FWD_EVENT_TX_ENQ) {
> + ev.queue_id = tx_q_id;
> + ev.op = RTE_EVENT_OP_FORWARD;
> + while (!rte_event_enqueue_burst(event_d_id, port_id,
> + &ev, 1) &&
> + !l2fwd_rsrc->force_quit)
> + ;
> + }
> +
> + if (flags & L2FWD_EVENT_TX_DIRECT) {
> + rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> + while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> + port_id,
> + &ev, 1) &&
See below
> + !l2fwd_rsrc->force_quit)
> + ;
> + }
> +
> + if (timer_period > 0)
> + __atomic_fetch_add(
> + &l2fwd_rsrc->port_stats[mbuf->port].tx,
> + 1, __ATOMIC_RELAXED);
As a style comment:
# There are a lot of multiline statements in the code, which reduce its
readability. Please find below some options to reduce them; could you
please check which of them can be applied?
1) shorten the structure name like
s/event_rsrc/evt_rsrc
s/l2fwd_rsrc/ rsrc
2) I think rte_exit(EXIT_FAILURE, ...) can be replaced with rte_panic in
the application's case
3) Adjusting where the continuation line starts, e.g.:
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index df0b56773..49665a102 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -87,8 +87,8 @@ l2fwd_event_service_setup(struct l2fwd_resources *l2fwd_rsrc)
&service_id);
if (ret != -ESRCH && ret != 0)
rte_exit(EXIT_FAILURE,
- "Error in starting Rx
adapter[%d] service\n",
- event_rsrc->rx_adptr.rx_adptr[i]);
+ "Error in starting Rx adapter[%d] service\n",
+ event_rsrc->rx_adptr.rx_adptr[i]);
l2fwd_event_service_enable(service_id);
}
4) Move the code inside the nested loop into a static inline function so that
the code gets enough horizontal space in the new function.
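As an illustration of option 4 (an editorial sketch, not part of the original
review or series; the helper name l2fwd_event_fwd_one is made up, and
L2FWD_EVENT_UPDT_MAC is the flag defined in l2fwd_event.c of this series):

#include "l2fwd_common.h"  /* struct l2fwd_resources, l2fwd_mac_updating() */

/* Hoist the per-event work out of the dequeue loop so the body regains a
 * full indentation budget.
 */
static __rte_always_inline void
l2fwd_event_fwd_one(struct l2fwd_resources *rsrc, struct rte_event *ev,
		    const uint32_t flags)
{
	struct rte_mbuf *mbuf = ev->mbuf;
	uint16_t dst_port = rsrc->dst_ports[mbuf->port];

	rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
	if (flags & L2FWD_EVENT_UPDT_MAC)
		l2fwd_mac_updating(mbuf, dst_port, &rsrc->eth_addr[dst_port]);
	mbuf->port = dst_port;
}

The main loop body then reduces to the dequeue, a call to
l2fwd_event_fwd_one(rsrc, &ev, flags) and the Tx enqueue.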
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-02 20:57 ` [dpdk-dev] [PATCH v5 00/10] example/l2fwd-event: introduce l2fwd-event example pbhagavatula
` (11 preceding siblings ...)
2019-10-09 7:50 ` Nipun Gupta
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
` (12 more replies)
12 siblings, 13 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds a new application to demonstrate the usage of event
mode. The poll mode is also available to help with the transition.
The following new command line parameters are added:
--mode: Dictates the mode of operation, either poll or eventdev.
--eventq-sched: Dictates the event scheduling mode: ordered, atomic or
parallel.
Based on event device capability, the configuration is done as follows (see the sketch after this list):
- A single event device is enabled.
- The number of event ports is equal to the number of worker
cores enabled in the core mask. Additional event ports might
be configured based on Rx/Tx adapter capability.
- The number of event queues is equal to the number of ethernet
ports. If Tx adapter doesn't have internal port capability then
an additional single link event queue is used to enqueue events
to Tx adapter.
- Each event port is linked to all existing event queues.
- Dedicated Rx/Tx adapters for each Ethernet port.
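For illustration, a minimal sketch of the capability-driven setup described
in the list above (not taken from the patch set; the function name
setup_eventdev_sketch and its parameters are made up, and error handling,
Rx/Tx adapter and service-core setup are omitted):

#include <rte_ethdev.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* One event device; one event queue per ethdev port plus one extra
 * single-link queue when the Tx adapter lacks internal port capability;
 * one event port per worker core, each linked to all event queues.
 */
static void
setup_eventdev_sketch(uint8_t dev_id, uint8_t nb_eth_ports, uint8_t nb_workers)
{
	struct rte_event_dev_config dev_conf = {
		.nb_events_limit = 4096,
		.nb_event_queue_flows = 1024,
		.nb_event_port_dequeue_depth = 128,
		.nb_event_port_enqueue_depth = 128,
	};
	uint8_t need_txq = 0;
	uint32_t caps = 0;
	uint16_t eth_port;
	uint8_t i;

	RTE_ETH_FOREACH_DEV(eth_port) {
		rte_event_eth_tx_adapter_caps_get(dev_id, eth_port, &caps);
		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
			need_txq = 1; /* Tx handled via a service core */
	}

	dev_conf.nb_event_queues = nb_eth_ports + need_txq;
	dev_conf.nb_event_ports = nb_workers;
	rte_event_dev_configure(dev_id, &dev_conf);

	for (i = 0; i < dev_conf.nb_event_queues; i++)
		rte_event_queue_setup(dev_id, i, NULL);

	for (i = 0; i < nb_workers; i++) {
		rte_event_port_setup(dev_id, i, NULL);
		/* A NULL queue list links the port to all event queues. */
		rte_event_port_link(dev_id, i, NULL, NULL, 0);
	}
}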
v6 Changes:
- Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
- Use rte_panic instead of rte_exit.
- Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
- Update documentation; fix errors and spelling.
- Fix meson build.
- Split functions into smaller functions for readability.
- Add parallel mode support.
v5 Changes:
- Redo poll mode datapath by removing all the static globals.
- Fix event queue configuration when required queues are not available.
- Fix Rx/Tx adapter creation based on portmask.
- Update release notes.
- Unroll macro used to generate event mode functions.
v4 Changes:
- Fix missing eventdev args parsing.
v3 Changes:
- Remove unwanted change to example/l2fwd.
- Fix checkpatch issue
http://mails.dpdk.org/archives/test-report/2019-September/098053.html
v2 Changes:
- Remove global variables.
- Split patches to make reviews friendlier.
- Split datapath based on eventdev capability.
Pavan Nikhilesh (5):
examples/l2fwd-event: add default poll mode routines
examples/l2fwd-event: add infra for eventdev
examples/l2fwd-event: add service core setup
examples/l2fwd-event: add eventdev main loop
examples/l2fwd-event: add graceful teardown
Sunil Kumar Kori (5):
examples/l2fwd-event: add infra to split eventdev framework
examples/l2fwd-event: add event device setup
examples/l2fwd-event: add eventdev queue and port setup
examples/l2fwd-event: add event Rx/Tx adapter setup
doc: add application usage guide for l2fwd-event
MAINTAINERS | 6 +
doc/guides/rel_notes/release_19_11.rst | 6 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
doc/guides/sample_app_ug/l2_forward_event.rst | 711 ++++++++++++++++++
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 62 ++
examples/l2fwd-event/l2fwd_common.c | 144 ++++
examples/l2fwd-event/l2fwd_common.h | 133 ++++
examples/l2fwd-event/l2fwd_event.c | 431 +++++++++++
examples/l2fwd-event/l2fwd_event.h | 73 ++
examples/l2fwd-event/l2fwd_event_generic.c | 315 ++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 294 ++++++++
examples/l2fwd-event/l2fwd_poll.c | 193 +++++
examples/l2fwd-event/l2fwd_poll.h | 25 +
examples/l2fwd-event/main.c | 456 +++++++++++
examples/l2fwd-event/meson.build | 18 +
examples/meson.build | 2 +-
18 files changed, 2875 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.c
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_event.c
create mode 100644 examples/l2fwd-event/l2fwd_event.h
create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-16 19:07 ` Stephen Hemminger
2019-10-21 3:29 ` Varghese, Vipin
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
` (11 subsequent siblings)
12 siblings, 2 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Thomas Monjalon,
Marko Kovacevic, Ori Kam, Radu Nicolau, Akhil Goyal,
Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the default l2fwd poll mode routines similar to examples/l2fwd.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
MAINTAINERS | 5 +
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 59 +++++
examples/l2fwd-event/l2fwd_common.c | 138 +++++++++++
examples/l2fwd-event/l2fwd_common.h | 128 ++++++++++
examples/l2fwd-event/l2fwd_poll.c | 193 +++++++++++++++
examples/l2fwd-event/l2fwd_poll.h | 25 ++
examples/l2fwd-event/main.c | 371 ++++++++++++++++++++++++++++
examples/l2fwd-event/meson.build | 15 ++
examples/meson.build | 2 +-
10 files changed, 936 insertions(+), 1 deletion(-)
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.c
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_poll.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
diff --git a/MAINTAINERS b/MAINTAINERS
index f8a56e2e2..6957b2a24 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1475,6 +1475,11 @@ M: Tomasz Kantecki <tomasz.kantecki@intel.com>
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
+M: Sunil Kumar Kori <skori@marvell.com>
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+F: examples/l2fwd-event/
+T: git://dpdk.org/next/dpdk-next-eventdev
+
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
diff --git a/examples/Makefile b/examples/Makefile
index de11dd487..d18504bd2 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -34,6 +34,7 @@ endif
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += ipv4_multicast
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += kni
DIRS-y += l2fwd
+DIRS-y += l2fwd-event
ifneq ($(PQOS_INSTALL_PATH),)
DIRS-y += l2fwd-cat
endif
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
new file mode 100644
index 000000000..73f02dd3b
--- /dev/null
+++ b/examples/l2fwd-event/Makefile
@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# binary name
+APP = l2fwd-event
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+SRCS-y += l2fwd_poll.c
+SRCS-y += l2fwd_common.c
+
+# Build using pkg-config variables if possible
+ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
+
+all: shared
+.PHONY: shared static
+shared: build/$(APP)-shared
+ ln -sf $(APP)-shared build/$(APP)
+static: build/$(APP)-static
+ ln -sf $(APP)-static build/$(APP)
+
+PKGCONF=pkg-config --define-prefix
+
+PC_FILE := $(shell $(PKGCONF) --path libdpdk)
+CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)
+LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
+LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
+
+build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
+
+build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
+
+build:
+ @mkdir -p $@
+
+.PHONY: clean
+clean:
+ rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
+ test -d build && rmdir -p build || true
+
+else # Build using legacy build system
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, detect a build directory, by looking for a path with a .config
+RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
+endif
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
new file mode 100644
index 000000000..8edbe1ba5
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -0,0 +1,138 @@
+#include "l2fwd_common.h"
+
+/* Print out statistics on packets dropped */
+void
+print_stats(struct l2fwd_resources *rsrc)
+{
+ uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+ uint32_t port_id;
+
+ total_packets_dropped = 0;
+ total_packets_tx = 0;
+ total_packets_rx = 0;
+
+ const char clr[] = {27, '[', '2', 'J', '\0' };
+ const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' };
+
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+
+ printf("\nPort statistics ====================================");
+
+ for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {
+ /* skip disabled ports */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ printf("\nStatistics for port %u ------------------------------"
+ "\nPackets sent: %24"PRIu64
+ "\nPackets received: %20"PRIu64
+ "\nPackets dropped: %21"PRIu64,
+ port_id,
+ rsrc->port_stats[port_id].tx,
+ rsrc->port_stats[port_id].rx,
+ rsrc->port_stats[port_id].dropped);
+
+ total_packets_dropped +=
+ rsrc->port_stats[port_id].dropped;
+ total_packets_tx += rsrc->port_stats[port_id].tx;
+ total_packets_rx += rsrc->port_stats[port_id].rx;
+ }
+ printf("\nAggregate statistics ==============================="
+ "\nTotal packets sent: %18"PRIu64
+ "\nTotal packets received: %14"PRIu64
+ "\nTotal packets dropped: %15"PRIu64,
+ total_packets_tx,
+ total_packets_rx,
+ total_packets_dropped);
+ printf("\n====================================================\n");
+}
+
+int
+l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
+{
+ uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+ struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+ };
+ uint16_t nb_ports_available = 0;
+ uint16_t port_id;
+ int ret;
+
+ /* Initialise each port */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_eth_conf local_port_conf = port_conf;
+ struct rte_eth_dev_info dev_info;
+ struct rte_eth_rxconf rxq_conf;
+ struct rte_eth_txconf txq_conf;
+
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0) {
+ printf("Skipping disabled port %u\n", port_id);
+ continue;
+ }
+ nb_ports_available++;
+
+ /* init port */
+ printf("Initializing port %u... ", port_id);
+ fflush(stdout);
+ rte_eth_dev_info_get(port_id, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_panic("Cannot configure device: err=%d, port=%u\n",
+ ret, port_id);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_panic("Cannot adjust number of descriptors: err=%d, port=%u\n",
+ ret, port_id);
+
+ rte_eth_macaddr_get(port_id, &rsrc->eth_addr[port_id]);
+
+ /* init one RX queue */
+ fflush(stdout);
+ rxq_conf = dev_info.default_rxconf;
+ rxq_conf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
+ rte_eth_dev_socket_id(port_id),
+ &rxq_conf,
+ rsrc->pktmbuf_pool);
+ if (ret < 0)
+ rte_panic("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+ ret, port_id);
+
+ /* init one TX queue on each port */
+ fflush(stdout);
+ txq_conf = dev_info.default_txconf;
+ txq_conf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
+ rte_eth_dev_socket_id(port_id),
+ &txq_conf);
+ if (ret < 0)
+ rte_panic("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, port_id);
+
+ rte_eth_promiscuous_enable(port_id);
+
+ printf("Port %u,MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+ port_id,
+ rsrc->eth_addr[port_id].addr_bytes[0],
+ rsrc->eth_addr[port_id].addr_bytes[1],
+ rsrc->eth_addr[port_id].addr_bytes[2],
+ rsrc->eth_addr[port_id].addr_bytes[3],
+ rsrc->eth_addr[port_id].addr_bytes[4],
+ rsrc->eth_addr[port_id].addr_bytes[5]);
+ }
+
+ return nb_ports_available;
+}
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
new file mode 100644
index 000000000..0f6101fa5
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_COMMON_H__
+#define __L2FWD_COMMON_H__
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+#include <signal.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#define MAX_PKT_BURST 32
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+#define MEMPOOL_CACHE_SIZE 256
+
+#define DEFAULT_TIMER_PERIOD 10 /* default period is 10 seconds */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+ uint64_t dropped;
+ uint64_t tx;
+ uint64_t rx;
+} __rte_cache_aligned;
+
+struct l2fwd_resources {
+ volatile uint8_t force_quit;
+ uint8_t mac_updating;
+ uint8_t rx_queue_per_lcore;
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint32_t enabled_port_mask;
+ uint64_t timer_period;
+ struct rte_mempool *pktmbuf_pool;
+ uint32_t dst_ports[RTE_MAX_ETHPORTS];
+ struct rte_ether_addr eth_addr[RTE_MAX_ETHPORTS];
+ struct l2fwd_port_statistics port_stats[RTE_MAX_ETHPORTS];
+ void *poll_rsrc;
+} __rte_cache_aligned;
+
+static __rte_always_inline void
+l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port_id,
+ struct rte_ether_addr *addr)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = &eth->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port_id << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(addr, &eth->s_addr);
+}
+
+static __rte_always_inline struct l2fwd_resources *
+l2fwd_get_rsrc(void)
+{
+ static const char name[RTE_MEMZONE_NAMESIZE] = "rsrc";
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+ if (mz != NULL)
+ return mz->addr;
+
+ mz = rte_memzone_reserve(name, sizeof(struct l2fwd_resources), 0, 0);
+ if (mz != NULL) {
+ struct l2fwd_resources *rsrc = mz->addr;
+
+ memset(rsrc, 0, sizeof(struct l2fwd_resources));
+ rsrc->mac_updating = true;
+ rsrc->rx_queue_per_lcore = 1;
+ rsrc->timer_period = 10 * rte_get_timer_hz();
+
+ return mz->addr;
+ }
+
+ rte_panic("Unable to allocate memory for l2fwd resources\n");
+
+ return NULL;
+}
+
+void print_stats(struct l2fwd_resources *rsrc);
+int l2fwd_event_init_ports(struct l2fwd_resources *rsrc);
+
+#endif /* __L2FWD_COMMON_H__ */
diff --git a/examples/l2fwd-event/l2fwd_poll.c b/examples/l2fwd-event/l2fwd_poll.c
new file mode 100644
index 000000000..b4eb39be2
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_poll.c
@@ -0,0 +1,193 @@
+#include "l2fwd_poll.h"
+
+static inline void
+l2fwd_poll_simple_forward(struct l2fwd_resources *rsrc, struct rte_mbuf *m,
+ uint32_t portid)
+{
+ struct rte_eth_dev_tx_buffer *buffer;
+ uint32_t dst_port;
+ int sent;
+
+ dst_port = rsrc->dst_ports[portid];
+
+ if (rsrc->mac_updating)
+ l2fwd_mac_updating(m, dst_port, &rsrc->eth_addr[dst_port]);
+
+ buffer = ((struct l2fwd_poll_resources *)rsrc->poll_rsrc)->tx_buffer[
+ dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ rsrc->port_stats[dst_port].tx += sent;
+}
+
+/* main poll mode processing loop */
+static void
+l2fwd_poll_main_loop(struct l2fwd_resources *rsrc)
+{
+ uint8_t is_master = rte_lcore_id() == rte_get_master_lcore();
+ uint64_t prev_tsc, diff_tsc, cur_tsc, timer_tsc, drain_tsc;
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ uint64_t timer_period = rsrc->timer_period;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_eth_dev_tx_buffer *buf;
+ struct lcore_queue_conf *qconf;
+ uint32_t i, j, port_id, nb_rx;
+ struct rte_mbuf *m;
+ uint32_t lcore_id;
+ int32_t sent;
+
+ drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S *
+ BURST_TX_DRAIN_US;
+ prev_tsc = 0;
+ timer_tsc = 0;
+
+ lcore_id = rte_lcore_id();
+ qconf = &poll_rsrc->lcore_queue_conf[lcore_id];
+
+ if (qconf->n_rx_port == 0) {
+ printf("lcore %u has nothing to do\n", lcore_id);
+ return;
+ }
+
+ printf("entering main loop on lcore %u\n", lcore_id);
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ port_id = qconf->rx_port_list[i];
+ printf(" -- lcoreid=%u port_id=%u\n", lcore_id, port_id);
+
+ }
+
+ while (!rsrc->force_quit) {
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ port_id =
+ rsrc->dst_ports[qconf->rx_port_list[i]];
+ buf = poll_rsrc->tx_buffer[port_id];
+ sent = rte_eth_tx_buffer_flush(port_id, 0, buf);
+ if (sent)
+ rsrc->port_stats[port_id].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats(rsrc);
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+ /*
+ * Read packet from RX queues
+ */
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ port_id = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst(port_id, 0, pkts_burst,
+ MAX_PKT_BURST);
+
+ rsrc->port_stats[port_id].rx += nb_rx;
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_poll_simple_forward(rsrc, m, port_id);
+ }
+ }
+ }
+}
+
+static void
+l2fwd_poll_lcore_config(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ struct lcore_queue_conf *qconf = NULL;
+ uint32_t rx_lcore_id = 0;
+ uint16_t port_id;
+
+ /* Initialize the port/queue configuration of each logical core */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+
+ /* get the lcore_id for this port */
+ while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+ poll_rsrc->lcore_queue_conf[rx_lcore_id].n_rx_port ==
+ rsrc->rx_queue_per_lcore) {
+ rx_lcore_id++;
+ if (rx_lcore_id >= RTE_MAX_LCORE)
+ rte_panic("Not enough cores\n");
+ }
+
+ if (qconf != &poll_rsrc->lcore_queue_conf[rx_lcore_id]) {
+ /* Assigned a new logical core in the loop above. */
+ qconf = &poll_rsrc->lcore_queue_conf[rx_lcore_id];
+ }
+
+ qconf->rx_port_list[qconf->n_rx_port] = port_id;
+ qconf->n_rx_port++;
+ printf("Lcore %u: RX port %u\n", rx_lcore_id, port_id);
+ }
+}
+
+static void
+l2fwd_poll_init_tx_buffers(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* Initialize TX buffers */
+ poll_rsrc->tx_buffer[port_id] = rte_zmalloc_socket("tx_buffer",
+ RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
+ rte_eth_dev_socket_id(port_id));
+ if (poll_rsrc->tx_buffer[port_id] == NULL)
+ rte_panic("Cannot allocate buffer for tx on port %u\n",
+ port_id);
+
+ rte_eth_tx_buffer_init(poll_rsrc->tx_buffer[port_id],
+ MAX_PKT_BURST);
+
+ ret = rte_eth_tx_buffer_set_err_callback(
+ poll_rsrc->tx_buffer[port_id],
+ rte_eth_tx_buffer_count_callback,
+ &rsrc->port_stats[port_id].dropped);
+ if (ret < 0)
+ rte_panic("Cannot set error callback for tx buffer on port %u\n",
+ port_id);
+ }
+}
+
+void
+l2fwd_poll_resource_setup(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc;
+
+ poll_rsrc = rte_zmalloc("l2fwd_poll_rsrc",
+ sizeof(struct l2fwd_poll_resources), 0);
+ if (poll_rsrc == NULL)
+ rte_panic("Failed to allocate resources for l2fwd poll mode\n");
+
+ rsrc->poll_rsrc = poll_rsrc;
+ l2fwd_poll_lcore_config(rsrc);
+ l2fwd_poll_init_tx_buffers(rsrc);
+
+ poll_rsrc->poll_main_loop = l2fwd_poll_main_loop;
+}
diff --git a/examples/l2fwd-event/l2fwd_poll.h b/examples/l2fwd-event/l2fwd_poll.h
new file mode 100644
index 000000000..d59b0c844
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_poll.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_POLL_H__
+#define __L2FWD_POLL_H__
+
+#include "l2fwd_common.h"
+
+typedef void (*poll_main_loop_cb)(struct l2fwd_resources *rsrc);
+
+struct lcore_queue_conf {
+ uint32_t rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ uint32_t n_rx_port;
+} __rte_cache_aligned;
+
+struct l2fwd_poll_resources {
+ poll_main_loop_cb poll_main_loop;
+ struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+};
+
+void l2fwd_poll_resource_setup(struct l2fwd_resources *rsrc);
+
+#endif
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
new file mode 100644
index 000000000..11092b0d6
--- /dev/null
+++ b/examples/l2fwd-event/main.c
@@ -0,0 +1,371 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "l2fwd_poll.h"
+
+/* display usage */
+static void
+l2fwd_event_usage(const char *prgname)
+{
+ printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
+ " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+ " -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+ " -T PERIOD: statistics will be refreshed each PERIOD seconds "
+ " (0 to disable, 10 default, 86400 maximum)\n"
+ " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
+ " When enabled:\n"
+ " - The source MAC address is replaced by the TX port MAC address\n"
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ prgname);
+}
+
+static int
+l2fwd_event_parse_portmask(const char *portmask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(portmask, &end, 16);
+ if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (pm == 0)
+ return -1;
+
+ return pm;
+}
+
+static unsigned int
+l2fwd_event_parse_nqueue(const char *q_arg)
+{
+ char *end = NULL;
+ unsigned long n;
+
+ /* parse decimal string */
+ n = strtoul(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+ if (n == 0)
+ return 0;
+ if (n >= MAX_RX_QUEUE_PER_LCORE)
+ return 0;
+
+ return n;
+}
+
+static int
+l2fwd_event_parse_timer_period(const char *q_arg)
+{
+ char *end = NULL;
+ int n;
+
+ /* parse number string */
+ n = strtol(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+ if (n >= MAX_TIMER_PERIOD)
+ return -1;
+
+ return n;
+}
+
+static const char short_options[] =
+ "p:" /* portmask */
+ "q:" /* number of queues */
+ "T:" /* timer period */
+ ;
+
+#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
+#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+
+enum {
+ /* long options mapped to a short option */
+
+ /* first long only option value must be >= 256, so that we won't
+ * conflict with short options
+ */
+ CMD_LINE_OPT_MIN_NUM = 256,
+};
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_event_parse_args(int argc, char **argv,
+ struct l2fwd_resources *rsrc)
+{
+ int mac_updating = 1;
+ struct option lgopts[] = {
+ { CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
+ { CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ {NULL, 0, 0, 0}
+ };
+ int opt, ret, timer_secs;
+ char *prgname = argv[0];
+ char **argvopt;
+ int option_index;
+
+ argvopt = argv;
+ while ((opt = getopt_long(argc, argvopt, short_options,
+ lgopts, &option_index)) != EOF) {
+
+ switch (opt) {
+ /* portmask */
+ case 'p':
+ rsrc->enabled_port_mask =
+ l2fwd_event_parse_portmask(optarg);
+ if (rsrc->enabled_port_mask == 0) {
+ printf("invalid portmask\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* nqueue */
+ case 'q':
+ rsrc->rx_queue_per_lcore =
+ l2fwd_event_parse_nqueue(optarg);
+ if (rsrc->rx_queue_per_lcore == 0) {
+ printf("invalid queue number\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* timer period */
+ case 'T':
+ timer_secs = l2fwd_event_parse_timer_period(optarg);
+ if (timer_secs < 0) {
+ printf("invalid timer period\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ rsrc->timer_period = timer_secs;
+ /* convert to number of cycles */
+ rsrc->timer_period *= rte_get_timer_hz();
+ break;
+
+ /* long options */
+ case 0:
+ break;
+
+ default:
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ }
+
+ rsrc->mac_updating = mac_updating;
+
+ if (optind >= 0)
+ argv[optind-1] = prgname;
+
+ ret = optind-1;
+ optind = 1; /* reset getopt lib */
+ return ret;
+}
+
+static int
+l2fwd_launch_one_lcore(void *args)
+{
+ struct l2fwd_resources *rsrc = args;
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+
+ poll_rsrc->poll_main_loop(rsrc);
+
+ return 0;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(struct l2fwd_resources *rsrc,
+ uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint16_t port_id;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status...");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ if (rsrc->force_quit)
+ return;
+ all_ports_up = 1;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if (rsrc->force_quit)
+ return;
+ if ((port_mask & (1 << port_id)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(port_id, &link);
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status)
+ printf(
+ "Port%d Link Up. Speed %u Mbps - %s\n",
+ port_id, link.link_speed,
+ (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ ("full-duplex") : ("half-duplex\n"));
+ else
+ printf("Port %d Link Down\n", port_id);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+ print_flag = 1;
+ printf("done\n");
+ }
+ }
+}
+
+static void
+signal_handler(int signum)
+{
+ struct l2fwd_resources *rsrc = l2fwd_get_rsrc();
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ rsrc->force_quit = true;
+ }
+}
+
+int
+main(int argc, char **argv)
+{
+ struct l2fwd_resources *rsrc;
+ uint16_t nb_ports_available = 0;
+ uint32_t nb_ports_in_mask = 0;
+ uint16_t port_id, last_port;
+ uint32_t nb_mbufs;
+ uint16_t nb_ports;
+ int ret;
+
+ /* init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_panic("Invalid EAL arguments\n");
+ argc -= ret;
+ argv += ret;
+
+ rsrc = l2fwd_get_rsrc();
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+
+ /* parse application arguments (after the EAL ones) */
+ ret = l2fwd_event_parse_args(argc, argv, rsrc);
+ if (ret < 0)
+ rte_panic("Invalid L2FWD arguments\n");
+
+ printf("MAC updating %s\n", rsrc->mac_updating ? "enabled" :
+ "disabled");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports == 0)
+ rte_panic("No Ethernet ports - bye\n");
+
+ /* check port mask to possible port mask */
+ if (rsrc->enabled_port_mask & ~((1 << nb_ports) - 1))
+ rte_panic("Invalid portmask; possible (0x%x)\n",
+ (1 << nb_ports) - 1);
+
+ /* reset l2fwd_dst_ports */
+ for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++)
+ rsrc->dst_ports[port_id] = 0;
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ rsrc->dst_ports[port_id] = last_port;
+ rsrc->dst_ports[last_port] = port_id;
+ } else {
+ last_port = port_id;
+ }
+
+ nb_ports_in_mask++;
+ }
+ if (nb_ports_in_mask % 2) {
+ printf("Notice: odd number of ports in portmask.\n");
+ rsrc->dst_ports[last_port] = last_port;
+ }
+
+ nb_mbufs = RTE_MAX(nb_ports * (RTE_TEST_RX_DESC_DEFAULT +
+ RTE_TEST_TX_DESC_DEFAULT +
+ MAX_PKT_BURST + rte_lcore_count() *
+ MEMPOOL_CACHE_SIZE), 8192U);
+
+ /* create the mbuf pool */
+ rsrc->pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool",
+ nb_mbufs, MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+ if (rsrc->pktmbuf_pool == NULL)
+ rte_panic("Cannot init mbuf pool\n");
+
+ nb_ports_available = l2fwd_event_init_ports(rsrc);
+ if (!nb_ports_available)
+ rte_panic("All available ports are disabled. Please set portmask.\n");
+
+ l2fwd_poll_resource_setup(rsrc);
+
+ /* initialize port stats */
+ memset(&rsrc->port_stats, 0, sizeof(rsrc->port_stats));
+
+ /* All settings are done. Now enable eth devices */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0)
+ rte_panic("rte_eth_dev_start:err=%d, port=%u\n", ret,
+ port_id);
+ }
+
+ check_all_ports_link_status(rsrc, rsrc->enabled_port_mask);
+
+ /* launch per-lcore init on every lcore */
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, rsrc,
+ CALL_MASTER);
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ printf("Closing port %d...", port_id);
+ rte_eth_dev_stop(port_id);
+ rte_eth_dev_close(port_id);
+ printf(" Done\n");
+ }
+ printf("Bye...\n");
+
+ return 0;
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
new file mode 100644
index 000000000..c936aa72e
--- /dev/null
+++ b/examples/l2fwd-event/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# meson file, for building this example as part of a main DPDK build.
+#
+# To build this example as a standalone application with an already-installed
+# DPDK instance, use 'make'
+
+deps += 'eventdev'
+sources = files(
+ 'main.c',
+ 'l2fwd_poll.c',
+ 'l2fwd_common.c',
+)
diff --git a/examples/meson.build b/examples/meson.build
index a046b74ad..0b02640bf 100644
--- a/examples/meson.build
+++ b/examples/meson.build
@@ -19,7 +19,7 @@ all_examples = [
'ip_fragmentation', 'ip_pipeline',
'ip_reassembly', 'ipsec-secgw',
'ipv4_multicast', 'kni',
- 'l2fwd', 'l2fwd-cat',
+ 'l2fwd', 'l2fwd-cat', 'l2fwd-event',
'l2fwd-crypto', 'l2fwd-jobstats',
'l2fwd-keepalive', 'l3fwd',
'l3fwd-acl', 'l3fwd-power',
--
2.17.1
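The l2fwd_get_rsrc() helper in l2fwd_common.h above keeps the whole
application state in a named memzone, so every file and lcore that calls it
resolves to the same object. A minimal usage sketch (illustrative only;
worker_fn and request_quit are made-up names):

#include <rte_pause.h>

#include "l2fwd_common.h"

static int
worker_fn(void *arg __rte_unused)
{
	/* Same memzone-backed structure as the one set up in main(). */
	struct l2fwd_resources *rsrc = l2fwd_get_rsrc();

	while (!rsrc->force_quit)
		rte_pause();

	return 0;
}

static void
request_quit(void)
{
	/* A signal handler or another lcore can flip the shared flag. */
	l2fwd_get_rsrc()->force_quit = true;
}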
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 02/10] examples/l2fwd-event: add infra for eventdev
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
` (10 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add infra to select the event device as the packet processing mode
through command line arguments. Also, allow the user to select the schedule type
to be RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or
RTE_SCHED_TYPE_PARALLEL.
Usage:
`--mode="eventdev"` or `--mode="poll"`
`--eventq-sched="ordered"`, `--eventq-sched="atomic"` or
`--eventq-sched="parallel"`
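For context, the scheduling type parsed here is what the event queues are
configured with later in the series; roughly (an illustrative sketch, the
helper name is made up and the field names are from rte_eventdev.h):

#include <rte_debug.h>
#include <rte_eventdev.h>

/* Apply the schedule type parsed from --eventq-sched to an event queue. */
static void
event_queue_setup_sketch(uint8_t event_d_id, uint8_t queue_id,
			 uint8_t sched_type)
{
	struct rte_event_queue_conf q_conf = {
		.nb_atomic_flows = 1024,
		.nb_atomic_order_sequences = 1024,
		.schedule_type = sched_type,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};

	if (rte_event_queue_setup(event_d_id, queue_id, &q_conf) < 0)
		rte_panic("Event queue setup failed\n");
}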
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 1 +
examples/l2fwd-event/l2fwd_common.h | 4 +++
examples/l2fwd-event/l2fwd_event.c | 34 +++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 21 ++++++++++++
examples/l2fwd-event/main.c | 52 +++++++++++++++++++++++++++--
examples/l2fwd-event/meson.build | 1 +
6 files changed, 111 insertions(+), 2 deletions(-)
create mode 100644 examples/l2fwd-event/l2fwd_event.c
create mode 100644 examples/l2fwd-event/l2fwd_event.h
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index 73f02dd3b..08ba1835d 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -8,6 +8,7 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
SRCS-y += l2fwd_poll.c
+SRCS-y += l2fwd_event.c
SRCS-y += l2fwd_common.c
# Build using pkg-config variables if possible
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 0f6101fa5..7ce78da3d 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -65,6 +65,8 @@ struct l2fwd_port_statistics {
struct l2fwd_resources {
volatile uint8_t force_quit;
+ uint8_t event_mode;
+ uint8_t sched_type;
uint8_t mac_updating;
uint8_t rx_queue_per_lcore;
uint16_t nb_rxd;
@@ -75,6 +77,7 @@ struct l2fwd_resources {
uint32_t dst_ports[RTE_MAX_ETHPORTS];
struct rte_ether_addr eth_addr[RTE_MAX_ETHPORTS];
struct l2fwd_port_statistics port_stats[RTE_MAX_ETHPORTS];
+ void *evt_rsrc;
void *poll_rsrc;
} __rte_cache_aligned;
@@ -112,6 +115,7 @@ l2fwd_get_rsrc(void)
memset(rsrc, 0, sizeof(struct l2fwd_resources));
rsrc->mac_updating = true;
rsrc->rx_queue_per_lcore = 1;
+ rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
rsrc->timer_period = 10 * rte_get_timer_hz();
return mz->addr;
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
new file mode 100644
index 000000000..48d32d718
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc;
+
+ if (!rte_event_dev_count())
+ rte_panic("No Eventdev found\n");
+
+ evt_rsrc = rte_zmalloc("l2fwd_event",
+ sizeof(struct l2fwd_event_resources), 0);
+ if (evt_rsrc == NULL)
+ rte_panic("Failed to allocate memory\n");
+
+ rsrc->evt_rsrc = evt_rsrc;
+}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
new file mode 100644
index 000000000..9a1bb1612
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_EVENT_H__
+#define __L2FWD_EVENT_H__
+
+#include <rte_common.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+struct l2fwd_event_resources {
+};
+
+void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
+
+#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 11092b0d6..f60ab0c73 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -2,6 +2,7 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include "l2fwd_event.h"
#include "l2fwd_poll.h"
/* display usage */
@@ -16,7 +17,12 @@ l2fwd_event_usage(const char *prgname)
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
" When enabled:\n"
" - The source MAC address is replaced by the TX port MAC address\n"
- " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n"
+ " --mode: Packet transfer mode for I/O, poll or eventdev\n"
+ " Default mode = eventdev\n"
+ " --eventq-sched: Event queue schedule type, ordered, atomic or parallel.\n"
+ " Default: atomic\n"
+ " Valid only if --mode=eventdev\n\n",
prgname);
}
@@ -71,6 +77,28 @@ l2fwd_event_parse_timer_period(const char *q_arg)
return n;
}
+static void
+l2fwd_event_parse_mode(const char *optarg,
+ struct l2fwd_resources *rsrc)
+{
+ if (!strncmp(optarg, "poll", 4))
+ rsrc->event_mode = false;
+ else if (!strncmp(optarg, "eventdev", 8))
+ rsrc->event_mode = true;
+}
+
+static void
+l2fwd_event_parse_eventq_sched(const char *optarg,
+ struct l2fwd_resources *rsrc)
+{
+ if (!strncmp(optarg, "ordered", 7))
+ rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
+ else if (!strncmp(optarg, "atomic", 6))
+ rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+ else if (!strncmp(optarg, "parallel", 8))
+ rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
+}
+
static const char short_options[] =
"p:" /* portmask */
"q:" /* number of queues */
@@ -79,6 +107,8 @@ static const char short_options[] =
#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SCHED "eventq-sched"
enum {
/* long options mapped to a short option */
@@ -87,6 +117,8 @@ enum {
* conflict with short options
*/
CMD_LINE_OPT_MIN_NUM = 256,
+ CMD_LINE_OPT_MODE_NUM,
+ CMD_LINE_OPT_EVENTQ_SCHED_NUM,
};
/* Parse the argument given in the command line of the application */
@@ -98,6 +130,10 @@ l2fwd_event_parse_args(int argc, char **argv,
struct option lgopts[] = {
{ CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
{ CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ { CMD_LINE_OPT_MODE, required_argument, NULL,
+ CMD_LINE_OPT_MODE_NUM},
+ { CMD_LINE_OPT_EVENTQ_SCHED, required_argument, NULL,
+ CMD_LINE_OPT_EVENTQ_SCHED_NUM},
{NULL, 0, 0, 0}
};
int opt, ret, timer_secs;
@@ -145,6 +181,14 @@ l2fwd_event_parse_args(int argc, char **argv,
rsrc->timer_period *= rte_get_timer_hz();
break;
+ case CMD_LINE_OPT_MODE_NUM:
+ l2fwd_event_parse_mode(optarg, rsrc);
+ break;
+
+ case CMD_LINE_OPT_EVENTQ_SCHED_NUM:
+ l2fwd_event_parse_eventq_sched(optarg, rsrc);
+ break;
+
/* long options */
case 0:
break;
@@ -330,7 +374,11 @@ main(int argc, char **argv)
if (!nb_ports_available)
rte_panic("All available ports are disabled. Please set portmask.\n");
- l2fwd_poll_resource_setup(rsrc);
+ /* Configure eventdev parameters if required */
+ if (rsrc->event_mode)
+ l2fwd_event_resource_setup(rsrc);
+ else
+ l2fwd_poll_resource_setup(rsrc);
/* initialize port stats */
memset(&rsrc->port_stats, 0,
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index c936aa72e..c1ae2037c 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -12,4 +12,5 @@ sources = files(
'main.c',
'l2fwd_poll.c',
'l2fwd_common.c',
+ 'l2fwd_event.c',
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 03/10] examples/l2fwd-event: add infra to split eventdev framework
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 04/10] examples/l2fwd-event: add event device setup pbhagavatula
` (9 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add infra to split eventdev framework based on event Tx adapter
capability.
If the event Tx adapter has internal port capability then we use
`rte_event_eth_tx_adapter_enqueue` to transmit packets; otherwise
we use a SINGLE_LINK event queue to enqueue packets to a service
core which is responsible for transmitting packets.
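For reference, the two transmit paths this split leads to look roughly like
the sketch below (simplified; the function name is made up, the force-quit
check and error handling are omitted, and the Tx adapter enqueue signature
follows the one used in the v5 code quoted earlier in this thread):

#include <stdbool.h>

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

static inline void
transmit_event_sketch(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
		      uint8_t tx_q_id, bool internal_port)
{
	if (internal_port) {
		/* The driver can transmit directly from the event port. */
		rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);
		while (!rte_event_eth_tx_adapter_enqueue(dev_id, port_id,
							 ev, 1))
			;
	} else {
		/* Forward to the SINGLE_LINK queue drained by the Tx
		 * adapter service core.
		 */
		ev->queue_id = tx_q_id;
		ev->op = RTE_EVENT_OP_FORWARD;
		while (!rte_event_enqueue_burst(dev_id, port_id, ev, 1))
			;
	}
}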
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 2 ++
examples/l2fwd-event/l2fwd_event.c | 26 +++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 7 +++++
examples/l2fwd-event/l2fwd_event_generic.c | 23 ++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 23 ++++++++++++++++
examples/l2fwd-event/meson.build | 2 ++
6 files changed, 83 insertions(+)
create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index 08ba1835d..6f4176882 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -10,6 +10,8 @@ SRCS-y := main.c
SRCS-y += l2fwd_poll.c
SRCS-y += l2fwd_event.c
SRCS-y += l2fwd_common.c
+SRCS-y += l2fwd_event_generic.c
+SRCS-y += l2fwd_event_internal_port.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 48d32d718..7f90e6311 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,29 @@
#include "l2fwd_event.h"
+static void
+l2fwd_event_capability_setup(struct l2fwd_event_resources *evt_rsrc)
+{
+ uint32_t caps = 0;
+ uint16_t i;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(i) {
+ ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+ if (ret)
+ rte_panic("Invalid capability for Tx adptr port %d\n",
+ i);
+
+ evt_rsrc->tx_mode_q |= !(caps &
+ RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+ }
+
+ if (evt_rsrc->tx_mode_q)
+ l2fwd_event_set_generic_ops(&evt_rsrc->ops);
+ else
+ l2fwd_event_set_internal_port_ops(&evt_rsrc->ops);
+}
+
void
l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
{
@@ -31,4 +54,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
rte_panic("Failed to allocate memory\n");
rsrc->evt_rsrc = evt_rsrc;
+
+ /* Setup eventdev capability callbacks */
+ l2fwd_event_capability_setup(evt_rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 9a1bb1612..b7aaa39f9 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -13,9 +13,16 @@
#include "l2fwd_common.h"
+struct event_setup_ops {
+};
+
struct l2fwd_event_resources {
+ uint8_t tx_mode_q;
+ struct event_setup_ops ops;
};
void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
+void l2fwd_event_set_generic_ops(struct event_setup_ops *ops);
+void l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops);
#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
new file mode 100644
index 000000000..9afade7d2
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
new file mode 100644
index 000000000..ce95b8e6d
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index c1ae2037c..4e9a069d6 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -13,4 +13,6 @@ sources = files(
'l2fwd_poll.c',
'l2fwd_common.c',
'l2fwd_event.c',
+ 'l2fwd_event_internal_port.c',
+ 'l2fwd_event_generic.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 04/10] examples/l2fwd-event: add event device setup
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (2 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
` (8 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event device setup based on event eth Tx adapter
capabilities.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 3 +
examples/l2fwd-event/l2fwd_event.h | 16 ++++
examples/l2fwd-event/l2fwd_event_generic.c | 75 +++++++++++++++++-
.../l2fwd-event/l2fwd_event_internal_port.c | 77 ++++++++++++++++++-
4 files changed, 169 insertions(+), 2 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 7f90e6311..a5c1c2c40 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -57,4 +57,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
/* Setup eventdev capability callbacks */
l2fwd_event_capability_setup(evt_rsrc);
+
+ /* Event device configuration */
+ evt_rsrc->ops.event_device_setup(rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index b7aaa39f9..6b5beb041 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -13,11 +13,27 @@
#include "l2fwd_common.h"
+typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *rsrc);
+
+struct event_queues {
+ uint8_t nb_queues;
+};
+
+struct event_ports {
+ uint8_t nb_ports;
+};
+
struct event_setup_ops {
+ event_device_setup_cb event_device_setup;
};
struct l2fwd_event_resources {
uint8_t tx_mode_q;
+ uint8_t has_burst;
+ uint8_t event_d_id;
+ uint8_t disable_implicit_release;
+ struct event_ports evp;
+ struct event_queues evq;
struct event_setup_ops ops;
};
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 9afade7d2..33e570585 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -16,8 +16,81 @@
#include "l2fwd_common.h"
#include "l2fwd_event.h"
+static uint32_t
+l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t ethdev_count = 0;
+ uint16_t num_workers = 0;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ethdev_count++;
+ }
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+ evt_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ /* One queue for each ethdev port + one Tx adapter Single link queue. */
+ event_d_conf.nb_event_queues = ethdev_count + 1;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count() - rte_service_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ evt_rsrc->evp.nb_ports = num_workers;
+ evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+ evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event device\n");
+
+ evt_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->event_device_setup = l2fwd_event_device_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index ce95b8e6d..acd98798e 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -16,8 +16,83 @@
#include "l2fwd_common.h"
#include "l2fwd_event.h"
+static uint32_t
+l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ uint8_t disable_implicit_release;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t ethdev_count = 0;
+ uint16_t num_workers = 0;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ethdev_count++;
+ }
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+
+ disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+ evt_rsrc->disable_implicit_release =
+ disable_implicit_release;
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ event_d_conf.nb_event_queues = ethdev_count;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ evt_rsrc->evp.nb_ports = num_workers;
+ evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+ evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event device\n");
+
+ evt_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->event_device_setup = l2fwd_event_device_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (3 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 04/10] examples/l2fwd-event: add event device setup pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
` (7 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event device queue and port setup based on event eth Tx adapter
capabilities.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 9 +-
examples/l2fwd-event/l2fwd_event.h | 10 ++
examples/l2fwd-event/l2fwd_event_generic.c | 104 ++++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 100 +++++++++++++++++
4 files changed, 222 insertions(+), 1 deletion(-)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index a5c1c2c40..8dd00a6d3 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -44,6 +44,7 @@ void
l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
{
struct l2fwd_event_resources *evt_rsrc;
+ uint32_t event_queue_cfg;
if (!rte_event_dev_count())
rte_panic("No Eventdev found\n");
@@ -59,5 +60,11 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
l2fwd_event_capability_setup(evt_rsrc);
/* Event device configuration */
- evt_rsrc->ops.event_device_setup(rsrc);
+ event_queue_cfg = evt_rsrc->ops.event_device_setup(rsrc);
+
+ /* Event queue configuration */
+ evt_rsrc->ops.event_queue_setup(rsrc, event_queue_cfg);
+
+ /* Event port configuration */
+ evt_rsrc->ops.event_port_setup(rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 6b5beb041..fe7857cdf 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -14,27 +14,37 @@
#include "l2fwd_common.h"
typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *rsrc);
+typedef void (*event_port_setup_cb)(struct l2fwd_resources *rsrc);
+typedef void (*event_queue_setup_cb)(struct l2fwd_resources *rsrc,
+ uint32_t event_queue_cfg);
struct event_queues {
+ uint8_t *event_q_id;
uint8_t nb_queues;
};
struct event_ports {
+ uint8_t *event_p_id;
uint8_t nb_ports;
+ rte_spinlock_t lock;
};
struct event_setup_ops {
event_device_setup_cb event_device_setup;
+ event_queue_setup_cb event_queue_setup;
+ event_port_setup_cb event_port_setup;
};
struct l2fwd_event_resources {
uint8_t tx_mode_q;
+ uint8_t deq_depth;
uint8_t has_burst;
uint8_t event_d_id;
uint8_t disable_implicit_release;
struct event_ports evp;
struct event_queues evq;
struct event_setup_ops ops;
+ struct rte_event_port_conf def_p_conf;
};
void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 33e570585..f72d21c0b 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -89,8 +89,112 @@ l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc)
return event_queue_cfg;
}
+static void
+l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evp.nb_ports);
+ if (!evt_rsrc->evp.event_p_id)
+ rte_panic("No space is available\n");
+
+ memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ evt_rsrc->disable_implicit_release;
+ evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+ for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event port %d\n",
+ event_p_id);
+
+ ret = rte_event_port_link(event_d_id, event_p_id,
+ evt_rsrc->evq.event_q_id,
+ NULL,
+ evt_rsrc->evq.nb_queues - 1);
+ if (ret != (evt_rsrc->evq.nb_queues - 1))
+ rte_panic("Error in linking event port %d to queues\n",
+ event_p_id);
+ evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+ }
+ /* init spinlock */
+ rte_spinlock_init(&evt_rsrc->evp.lock);
+
+ evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l2fwd_event_queue_setup_generic(struct l2fwd_resources *rsrc,
+ uint32_t event_queue_cfg)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id;
+ int32_t ret;
+
+ event_q_conf.schedule_type = rsrc->sched_type;
+ evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evq.nb_queues);
+ if (!evt_rsrc->evq.event_q_id)
+ rte_panic("Memory allocation failure\n");
+
+ rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ for (event_q_id = 0; event_q_id < (evt_rsrc->evq.nb_queues - 1);
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue\n");
+ evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+
+ event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+ event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+ ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue for Tx adapter\n");
+ evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_generic;
+ ops->event_queue_setup = l2fwd_event_queue_setup_generic;
+ ops->event_port_setup = l2fwd_event_port_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index acd98798e..dab3f24ee 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -91,8 +91,108 @@ l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc)
return event_queue_cfg;
}
+static void
+l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evp.nb_ports);
+ if (!evt_rsrc->evp.event_p_id)
+ rte_panic("Failed to allocate memory for Event Ports\n");
+
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ evt_rsrc->disable_implicit_release;
+
+ for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event port %d\n",
+ event_p_id);
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0)
+ rte_panic("Error in linking event port %d to queue\n",
+ event_p_id);
+ evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+ /* init spinlock */
+ rte_spinlock_init(&evt_rsrc->evp.lock);
+ }
+
+ evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l2fwd_event_queue_setup_internal_port(struct l2fwd_resources *rsrc,
+ uint32_t event_queue_cfg)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id = 0;
+ int32_t ret;
+
+ rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ if (def_q_conf.nb_atomic_order_sequences <
+ event_q_conf.nb_atomic_order_sequences)
+ event_q_conf.nb_atomic_order_sequences =
+ def_q_conf.nb_atomic_order_sequences;
+
+ event_q_conf.event_queue_cfg = event_queue_cfg;
+ event_q_conf.schedule_type = rsrc->sched_type;
+ evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evq.nb_queues);
+ if (!evt_rsrc->evq.event_q_id)
+ rte_panic("Memory allocation failure\n");
+
+ for (event_q_id = 0; event_q_id < evt_rsrc->evq.nb_queues;
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue\n");
+ evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_internal_port;
+ ops->event_queue_setup = l2fwd_event_queue_setup_internal_port;
+ ops->event_port_setup = l2fwd_event_port_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (4 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 07/10] examples/l2fwd-event: add service core setup pbhagavatula
` (6 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 3 +
examples/l2fwd-event/l2fwd_event.h | 16 +++
examples/l2fwd-event/l2fwd_event_generic.c | 115 ++++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 96 +++++++++++++++
4 files changed, 230 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 8dd00a6d3..33c702739 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -67,4 +67,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
/* Event port configuration */
evt_rsrc->ops.event_port_setup(rsrc);
+
+ /* Rx/Tx adapters configuration */
+ evt_rsrc->ops.adapter_setup(rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index fe7857cdf..1d7090ddf 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -17,6 +17,7 @@ typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *rsrc);
typedef void (*event_port_setup_cb)(struct l2fwd_resources *rsrc);
typedef void (*event_queue_setup_cb)(struct l2fwd_resources *rsrc,
uint32_t event_queue_cfg);
+typedef void (*adapter_setup_cb)(struct l2fwd_resources *rsrc);
struct event_queues {
uint8_t *event_q_id;
@@ -29,10 +30,23 @@ struct event_ports {
rte_spinlock_t lock;
};
+struct event_rx_adptr {
+ uint32_t service_id;
+ uint8_t nb_rx_adptr;
+ uint8_t *rx_adptr;
+};
+
+struct event_tx_adptr {
+ uint32_t service_id;
+ uint8_t nb_tx_adptr;
+ uint8_t *tx_adptr;
+};
+
struct event_setup_ops {
event_device_setup_cb event_device_setup;
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
+ adapter_setup_cb adapter_setup;
};
struct l2fwd_event_resources {
@@ -44,6 +58,8 @@ struct l2fwd_event_resources {
struct event_ports evp;
struct event_queues evq;
struct event_setup_ops ops;
+ struct event_rx_adptr rx_adptr;
+ struct event_tx_adptr tx_adptr;
struct rte_event_port_conf def_p_conf;
};
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index f72d21c0b..f99608173 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -191,10 +191,125 @@ l2fwd_event_queue_setup_generic(struct l2fwd_resources *rsrc,
evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
}
+static void
+l2fwd_rx_tx_adapter_setup_generic(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ uint8_t rx_adptr_id = 0;
+ uint8_t tx_adptr_id = 0;
+ uint8_t tx_port_id = 0;
+ uint16_t port_id;
+ uint32_t service_id;
+ int32_t ret, i = 0;
+
+ /* Rx adapter setup */
+ evt_rsrc->rx_adptr.nb_rx_adptr = 1;
+ evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->rx_adptr.nb_rx_adptr);
+ if (!evt_rsrc->rx_adptr.rx_adptr) {
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+ ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create rx adapter\n");
+
+ /* Configure user requested sched type */
+ eth_q_conf.ev.sched_type = rsrc->sched_type;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[i];
+ ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+ -1, ð_q_conf);
+ if (ret)
+ rte_panic("Failed to add queues to Rx adapter\n");
+ if (i < evt_rsrc->evq.nb_queues)
+ i++;
+ }
+
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error getting the service ID for rx adptr\n");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ evt_rsrc->rx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+ if (ret)
+ rte_panic("Rx adapter[%d] start Failed\n", rx_adptr_id);
+
+ evt_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+ /* Tx adapter setup */
+ evt_rsrc->tx_adptr.nb_tx_adptr = 1;
+ evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->tx_adptr.nb_tx_adptr);
+ if (!evt_rsrc->tx_adptr.tx_adptr) {
+ free(evt_rsrc->rx_adptr.rx_adptr);
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+ ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create tx adapter\n");
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, port_id,
+ -1);
+ if (ret)
+ rte_panic("Failed to add queues to Tx adapter\n");
+ }
+
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Failed to get Tx adapter service ID\n");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ evt_rsrc->tx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &evt_rsrc->evq.event_q_id[
+ evt_rsrc->evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+ ret);
+
+ ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+ if (ret)
+ rte_panic("Tx adapter[%d] start Failed\n", tx_adptr_id);
+
+ evt_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_generic;
ops->event_queue_setup = l2fwd_event_queue_setup_generic;
ops->event_port_setup = l2fwd_event_port_setup_generic;
+ ops->adapter_setup = l2fwd_rx_tx_adapter_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index dab3f24ee..bed94754f 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -189,10 +189,106 @@ l2fwd_event_queue_setup_internal_port(struct l2fwd_resources *rsrc,
}
}
+static void
+l2fwd_rx_tx_adapter_setup_internal_port(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ uint16_t adapter_id = 0;
+ uint16_t nb_adapter = 0;
+ uint16_t port_id;
+ uint8_t q_id = 0;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ nb_adapter++;
+ }
+
+ evt_rsrc->rx_adptr.nb_rx_adptr = nb_adapter;
+ evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->rx_adptr.nb_rx_adptr);
+ if (!evt_rsrc->rx_adptr.rx_adptr) {
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create rx adapter[%d]\n",
+ adapter_id);
+
+ /* Configure user requested sched type*/
+ eth_q_conf.ev.sched_type = rsrc->sched_type;
+ eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+ ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+ -1, ð_q_conf);
+ if (ret)
+ rte_panic("Failed to add queues to Rx adapter\n");
+
+ ret = rte_event_eth_rx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ if (q_id < evt_rsrc->evq.nb_queues)
+ q_id++;
+ }
+
+ evt_rsrc->tx_adptr.nb_tx_adptr = nb_adapter;
+ evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->tx_adptr.nb_tx_adptr);
+ if (!evt_rsrc->tx_adptr.tx_adptr) {
+ free(evt_rsrc->rx_adptr.rx_adptr);
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+ adapter_id = 0;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create tx adapter[%d]\n",
+ adapter_id);
+
+ ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+ -1);
+ if (ret)
+ rte_panic("Failed to add queues to Tx adapter\n");
+
+ ret = rte_event_eth_tx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ }
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_internal_port;
ops->event_queue_setup = l2fwd_event_queue_setup_internal_port;
ops->event_port_setup = l2fwd_event_port_setup_internal_port;
+ ops->adapter_setup = l2fwd_rx_tx_adapter_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 07/10] examples/l2fwd-event: add service core setup
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (5 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
` (5 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Set up service cores for the eventdev and Rx/Tx adapters when they don't
have internal port capability.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 82 ++++++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 1 +
examples/l2fwd-event/main.c | 3 ++
3 files changed, 86 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 33c702739..562d61292 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,88 @@
#include "l2fwd_event.h"
+static inline int
+l2fwd_event_service_enable(uint32_t service_id)
+{
+ uint8_t min_service_count = UINT8_MAX;
+ uint32_t slcore_array[RTE_MAX_LCORE];
+ unsigned int slcore = 0;
+ uint8_t service_count;
+ int32_t slcore_count;
+
+ if (!rte_service_lcore_count())
+ return -ENOENT;
+
+ slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+ if (slcore_count < 0)
+ return -ENOENT;
+ /* Get the core which has least number of services running. */
+ while (slcore_count--) {
+ /* Reset default mapping */
+ rte_service_map_lcore_set(service_id,
+ slcore_array[slcore_count], 0);
+ service_count = rte_service_lcore_count_services(
+ slcore_array[slcore_count]);
+ if (service_count < min_service_count) {
+ slcore = slcore_array[slcore_count];
+ min_service_count = service_count;
+ }
+ }
+ if (rte_service_map_lcore_set(service_id, slcore, 1))
+ return -ENOENT;
+ rte_service_lcore_start(slcore);
+
+ return 0;
+}
+
+void
+l2fwd_event_service_setup(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_dev_info evdev_info;
+ uint32_t service_id, caps;
+ int ret, i;
+
+ rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+ if (evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
+ ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting eventdev service\n");
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+ ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ ret = rte_event_eth_rx_adapter_service_id_get(
+ evt_rsrc->rx_adptr.rx_adptr[i],
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+ ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ ret = rte_event_eth_tx_adapter_service_id_get(
+ evt_rsrc->tx_adptr.tx_adptr[i],
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+}
+
static void
l2fwd_event_capability_setup(struct l2fwd_event_resources *evt_rsrc)
{
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 1d7090ddf..ebfbfe460 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -66,5 +66,6 @@ struct l2fwd_event_resources {
void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
void l2fwd_event_set_generic_ops(struct event_setup_ops *ops);
void l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops);
+void l2fwd_event_service_setup(struct l2fwd_resources *rsrc);
#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index f60ab0c73..48a91dc70 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -397,6 +397,9 @@ main(int argc, char **argv)
port_id);
}
+ if (rsrc->event_mode)
+ l2fwd_event_service_setup(rsrc);
+
check_all_ports_link_status(rsrc, rsrc->enabled_port_mask);
/* launch per-lcore init on every lcore */
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (6 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 07/10] examples/l2fwd-event: add service core setup pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-21 3:19 ` Varghese, Vipin
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
` (4 subsequent siblings)
12 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event dev main loop based on enabled l2fwd options and eventdev
capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_common.c | 6 +
examples/l2fwd-event/l2fwd_common.h | 1 +
examples/l2fwd-event/l2fwd_event.c | 276 ++++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 2 +
examples/l2fwd-event/main.c | 6 +-
5 files changed, 290 insertions(+), 1 deletion(-)
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index 8edbe1ba5..67441a316 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -65,6 +65,12 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t port_id;
int ret;
+ if (rsrc->event_mode) {
+ port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ }
+
/* Initialise each port */
RTE_ETH_FOREACH_DEV(port_id) {
struct rte_eth_conf local_port_conf = port_conf;
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 7ce78da3d..e484946dc 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -114,6 +114,7 @@ l2fwd_get_rsrc(void)
memset(rsrc, 0, sizeof(struct l2fwd_resources));
rsrc->mac_updating = true;
+ rsrc->event_mode = true;
rsrc->rx_queue_per_lcore = 1;
rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
rsrc->timer_period = 10 * rte_get_timer_hz();
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 562d61292..634f9974e 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,12 @@
#include "l2fwd_event.h"
+#define L2FWD_EVENT_SINGLE 0x1
+#define L2FWD_EVENT_BURST 0x2
+#define L2FWD_EVENT_TX_DIRECT 0x4
+#define L2FWD_EVENT_TX_ENQ 0x8
+#define L2FWD_EVENT_UPDT_MAC 0x10
+
static inline int
l2fwd_event_service_enable(uint32_t service_id)
{
@@ -122,11 +128,271 @@ l2fwd_event_capability_setup(struct l2fwd_event_resources *evt_rsrc)
l2fwd_event_set_internal_port_ops(&evt_rsrc->ops);
}
+static __rte_noinline int
+l2fwd_get_free_event_port(struct l2fwd_event_resources *evt_rsrc)
+{
+ static int index;
+ int port_id;
+
+ rte_spinlock_lock(&evt_rsrc->evp.lock);
+ if (index >= evt_rsrc->evp.nb_ports) {
+ printf("No free event port is available\n");
+ return -1;
+ }
+
+ port_id = evt_rsrc->evp.event_p_id[index];
+ index++;
+ rte_spinlock_unlock(&evt_rsrc->evp.lock);
+
+ return port_id;
+}
+
+static __rte_always_inline void
+l2fwd_event_fwd(struct l2fwd_resources *rsrc, struct rte_event *ev,
+ const uint8_t tx_q_id, const uint64_t timer_period,
+ const uint32_t flags)
+{
+ struct rte_mbuf *mbuf = ev->mbuf;
+ uint16_t dst_port;
+
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+ dst_port = rsrc->dst_ports[mbuf->port];
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&rsrc->port_stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+ mbuf->port = dst_port;
+
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_mac_updating(mbuf, dst_port, &rsrc->eth_addr[dst_port]);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev->queue_id = tx_q_id;
+ ev->op = RTE_EVENT_OP_FORWARD;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT)
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&rsrc->port_stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_single(struct l2fwd_resources *rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ const int port_id = l2fwd_get_free_event_port(evt_rsrc);
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const uint64_t timer_period = rsrc->timer_period;
+ const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+ evt_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event ev;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!rsrc->force_quit) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats(rsrc);
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
+ continue;
+
+ l2fwd_event_fwd(rsrc, &ev, tx_q_id, timer_period, flags);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ while (rte_event_enqueue_burst(event_d_id, port_id,
+ &ev, 1) &&
+ !rsrc->force_quit)
+ ;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id,
+ &ev, 1, 0) &&
+ !rsrc->force_quit)
+ ;
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_burst(struct l2fwd_resources *rsrc,
+ const uint32_t flags)
+{
+ const uint8_t is_master = rte_get_master_lcore() == rte_lcore_id();
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ const int port_id = l2fwd_get_free_event_port(evt_rsrc);
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const uint64_t timer_period = rsrc->timer_period;
+ const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+ evt_rsrc->evq.nb_queues - 1];
+ const uint8_t event_d_id = evt_rsrc->event_d_id;
+ const uint8_t deq_len = evt_rsrc->deq_depth;
+ struct rte_event ev[MAX_PKT_BURST];
+ uint16_t nb_rx, nb_tx;
+ uint8_t i;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!rsrc->force_quit) {
+ /* if timer is enabled */
+ if (is_master && timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats(rsrc);
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, port_id, ev,
+ deq_len, 0);
+ if (nb_rx == 0)
+ continue;
+
+ for (i = 0; i < nb_rx; i++) {
+ l2fwd_event_fwd(rsrc, &ev[i], tx_q_id, timer_period,
+ flags);
+ }
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ nb_tx = rte_event_enqueue_burst(event_d_id, port_id,
+ ev, nb_rx);
+ while (nb_tx < nb_rx && !rsrc->force_quit)
+ nb_tx += rte_event_enqueue_burst(event_d_id,
+ port_id, ev + nb_tx,
+ nb_rx - nb_tx);
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id, ev,
+ nb_rx, 0);
+ while (nb_tx < nb_rx && !rsrc->force_quit)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx, 0);
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop(struct l2fwd_resources *rsrc,
+ const uint32_t flags)
+{
+ if (flags & L2FWD_EVENT_SINGLE)
+ l2fwd_event_loop_single(rsrc, flags);
+ if (flags & L2FWD_EVENT_BURST)
+ l2fwd_event_loop_burst(rsrc, flags);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc,
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_brst(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_brst(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_brst_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_brst_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST);
+}
+
void
l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
{
+ /* [MAC_UPDT][TX_MODE][BURST] */
+ const event_loop_cb event_loop[2][2][2] = {
+ [0][0][0] = l2fwd_event_main_loop_tx_d,
+ [0][0][1] = l2fwd_event_main_loop_tx_d_brst,
+ [0][1][0] = l2fwd_event_main_loop_tx_q,
+ [0][1][1] = l2fwd_event_main_loop_tx_q_brst,
+ [1][0][0] = l2fwd_event_main_loop_tx_d_mac,
+ [1][0][1] = l2fwd_event_main_loop_tx_d_brst_mac,
+ [1][1][0] = l2fwd_event_main_loop_tx_q_mac,
+ [1][1][1] = l2fwd_event_main_loop_tx_q_brst_mac,
+ };
struct l2fwd_event_resources *evt_rsrc;
uint32_t event_queue_cfg;
+ int ret;
if (!rte_event_dev_count())
rte_panic("No Eventdev found\n");
@@ -152,4 +418,14 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
/* Rx/Tx adapters configuration */
evt_rsrc->ops.adapter_setup(rsrc);
+
+ /* Start event device */
+ ret = rte_event_dev_start(evt_rsrc->event_d_id);
+ if (ret < 0)
+ rte_panic("Error in starting eventdev\n");
+
+ evt_rsrc->ops.l2fwd_event_loop = event_loop
+ [rsrc->mac_updating]
+ [evt_rsrc->tx_mode_q]
+ [evt_rsrc->has_burst];
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index ebfbfe460..78f22e5f9 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -18,6 +18,7 @@ typedef void (*event_port_setup_cb)(struct l2fwd_resources *rsrc);
typedef void (*event_queue_setup_cb)(struct l2fwd_resources *rsrc,
uint32_t event_queue_cfg);
typedef void (*adapter_setup_cb)(struct l2fwd_resources *rsrc);
+typedef void (*event_loop_cb)(struct l2fwd_resources *rsrc);
struct event_queues {
uint8_t *event_q_id;
@@ -47,6 +48,7 @@ struct event_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
adapter_setup_cb adapter_setup;
+ event_loop_cb l2fwd_event_loop;
};
struct l2fwd_event_resources {
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 48a91dc70..c11b057aa 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -214,8 +214,12 @@ l2fwd_launch_one_lcore(void *args)
{
struct l2fwd_resources *rsrc = args;
struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
- poll_rsrc->poll_main_loop(rsrc);
+ if (rsrc->event_mode)
+ evt_rsrc->ops.l2fwd_event_loop(rsrc);
+ else
+ poll_rsrc->poll_main_loop(rsrc);
return 0;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 09/10] examples/l2fwd-event: add graceful teardown
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (7 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-21 3:12 ` Varghese, Vipin
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
` (3 subsequent siblings)
12 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add graceful teardown that addresses both event mode and poll mode.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/main.c | 50 +++++++++++++++++++++++++++++--------
1 file changed, 40 insertions(+), 10 deletions(-)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index c11b057aa..6d4215890 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -304,7 +304,7 @@ main(int argc, char **argv)
uint16_t port_id, last_port;
uint32_t nb_mbufs;
uint16_t nb_ports;
- int ret;
+ int i, ret;
/* init EAL */
ret = rte_eal_init(argc, argv);
@@ -409,16 +409,46 @@ main(int argc, char **argv)
/* launch per-lcore init on every lcore */
rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, rsrc,
CALL_MASTER);
- rte_eal_mp_wait_lcore();
+ if (rsrc->event_mode) {
+ struct l2fwd_event_resources *evt_rsrc =
+ rsrc->evt_rsrc;
+ for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
+ rte_event_eth_rx_adapter_stop(
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
+ rte_event_eth_tx_adapter_stop(
+ evt_rsrc->tx_adptr.tx_adptr[i]);
- RTE_ETH_FOREACH_DEV(port_id) {
- if ((rsrc->enabled_port_mask &
- (1 << port_id)) == 0)
- continue;
- printf("Closing port %d...", port_id);
- rte_eth_dev_stop(port_id);
- rte_eth_dev_close(port_id);
- printf(" Done\n");
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ rte_eth_dev_stop(port_id);
+ }
+
+ rte_eal_mp_wait_lcore();
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ rte_eth_dev_close(port_id);
+ }
+
+ rte_event_dev_stop(evt_rsrc->event_d_id);
+ rte_event_dev_close(evt_rsrc->event_d_id);
+
+ } else {
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ printf("Closing port %d...", port_id);
+ rte_eth_dev_stop(port_id);
+ rte_eth_dev_close(port_id);
+ printf(" Done\n");
+ }
}
printf("Bye...\n");
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v6 10/10] doc: add application usage guide for l2fwd-event
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (8 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
@ 2019-10-14 18:22 ` pbhagavatula
2019-10-16 12:38 ` [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example Jerin Jacob
` (2 subsequent siblings)
12 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-14 18:22 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add documentation for l2fwd-event example.
Update release notes.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
MAINTAINERS | 1 +
doc/guides/rel_notes/release_19_11.rst | 6 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
doc/guides/sample_app_ug/l2_forward_event.rst | 711 ++++++++++++++++++
5 files changed, 724 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 6957b2a24..8898ff252 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1478,6 +1478,7 @@ F: examples/l2fwd-cat/
M: Sunil Kumar Kori <skori@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
F: examples/l2fwd-event/
+F: doc/guides/sample_app_ug/l2_forward_event.rst
T: git://dpdk.org/next/dpdk-next-eventdev
F: examples/l3fwd/
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 85953b962..ae49b3005 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -115,6 +115,12 @@ New Features
Added eBPF JIT support for arm64 architecture to improve the eBPF program
performance.
+* **Added new example l2fwd-event application.**
+
+ Added an example application `l2fwd-event` that adds event device support to
+ the traditional l2fwd example. It demonstrates the usage of poll and event mode
+ I/O mechanisms in a single application.
+
Removed Items
-------------
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index f23f8f59e..41388231a 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -26,6 +26,7 @@ Sample Applications User Guides
l2_forward_crypto
l2_forward_job_stats
l2_forward_real_virtual
+ l2_forward_event
l2_forward_cat
l3_forward
l3_forward_power_man
diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
index 90704194a..15bbbec44 100644
--- a/doc/guides/sample_app_ug/intro.rst
+++ b/doc/guides/sample_app_ug/intro.rst
@@ -87,6 +87,11 @@ examples are highlighted below.
forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
addresses like a simple switch.
+* :doc:`Network Layer 2 forwarding<l2_forward_event>`: The Network Layer 2
+ forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
+ addresses like a simple switch. It demonstrates the usage of poll and event
+ mode I/O mechanisms in a single application.
+
* :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
forwarding, or ``l3fwd`` application does forwarding based on Internet
Protocol, IPv4 or IPv6 like a simple router.
diff --git a/doc/guides/sample_app_ug/l2_forward_event.rst b/doc/guides/sample_app_ug/l2_forward_event.rst
new file mode 100644
index 000000000..3e618d22b
--- /dev/null
+++ b/doc/guides/sample_app_ug/l2_forward_event.rst
@@ -0,0 +1,711 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2014 Intel Corporation.
+
+.. _l2_fwd_event_app:
+
+L2 Forwarding Eventdev Sample Application
+=========================================
+
+The L2 Forwarding eventdev sample application is a simple example of packet
+processing using the Data Plane Development Kit (DPDK) to demonstrate usage of
+poll and event mode packet I/O mechanism.
+
+Overview
+--------
+
+The L2 Forwarding eventdev sample application performs L2 forwarding for each
+packet that is received on an RX_PORT. The destination port is the adjacent port
+from the enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
+ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other.
+Also, if MAC address updating is enabled, the MAC addresses are affected as follows
+(a sketch of the update helper is shown after the list):
+
+* The source MAC address is replaced by the TX_PORT MAC address
+
+* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
+
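+The MAC address update described above is performed by a small helper invoked
+from the forwarding path. A minimal sketch of that rule, modelled on the
+``l2fwd_mac_updating()`` helper used by this example (assuming the ``d_addr``/
+``s_addr`` field names of ``struct rte_ether_hdr`` in use at the time of this
+patch set), is:
+
+.. code-block:: c
+
+    static void
+    l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port_id,
+                       struct rte_ether_addr *addr)
+    {
+        struct rte_ether_hdr *eth;
+        void *tmp;
+
+        eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+        /*
+         * Destination MAC becomes 02:00:00:00:00:TX_PORT_ID. The 64-bit
+         * store also spills into the first two source-address bytes,
+         * which are overwritten by the copy below.
+         */
+        tmp = &eth->d_addr.addr_bytes[0];
+        *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port_id << 40);
+
+        /* Source MAC becomes the MAC address of the TX_PORT */
+        rte_ether_addr_copy(addr, &eth->s_addr);
+    }
+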
+The application receives packets from the RX_PORT using one of the following methods:
+
+* Poll mode
+
+* Eventdev mode (default)
+
+This application can be used to benchmark performance using a traffic-generator,
+as shown in the :numref:`figure_l2fwd_event_benchmark_setup`.
+
+.. _figure_l2fwd_event_benchmark_setup:
+
+.. figure:: img/l2_fwd_benchmark_setup.*
+
+ Performance Benchmark Setup (Basic Environment)
+
+Compiling the Application
+-------------------------
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``l2fwd-event`` sub-directory.
+
+Running the Application
+-----------------------
+
+The application requires a number of command line options:
+
+.. code-block:: console
+
+ ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sched=SCHED_MODE
+
+where,
+
+* p PORTMASK: A hexadecimal bitmask of the ports to configure
+
+* q NQ: A number of queues (=ports) per lcore (default is 1)
+
+* --[no-]mac-updating: Enable or disable MAC address updating (enabled by default).
+
+* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.
+
+* --eventq-sched=SCHED_MODE: Event queue schedule mode, Ordered, Atomic or Parallel. Atomic by default.
+
+Sample commands to run the application in different modes are given below:
+
+To run in poll mode with 4 lcores, 16 ports, 8 RX queues per lcore and MAC address
+updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll
+
+To run in eventdev mode with 4 lcores, 16 ports, the ordered scheduling method and
+MAC address updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sched=ordered
+
+or
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered
+
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
+
+To run the application with the S/W scheduler, the following DPDK services are used:
+
+* Software scheduler
+* Rx adapter service function
+* Tx adapter service function
+
+The application needs service cores to run the above services. Service cores
+must be provided as EAL parameters along with --vdev=event_sw0 to enable the S/W
+scheduler. A sample command is given below:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered
+
+Explanation
+-----------
+
+The following sections provide some explanation of the code.
+
+.. _l2_fwd_event_app_cmd_arguments:
+
+Command Line Arguments
+~~~~~~~~~~~~~~~~~~~~~~
+
+The L2 Forwarding eventdev sample application takes specific parameters,
+in addition to Environment Abstraction Layer (EAL) arguments.
+The preferred way to parse parameters is to use the getopt() function,
+since it is part of a well-defined and portable library.
+
+The parsing of arguments is done in the **l2fwd_parse_args()** function for non
+eventdev parameters and in **parse_eventdev_args()** for eventdev parameters.
+The method of argument parsing is not described here. Refer to the
+*glibc getopt(3)* man page for details.
+
+EAL arguments are parsed first, then application-specific arguments.
+This is done at the beginning of the main() function and eventdev parameters
+are parsed in eventdev_resource_setup() function during eventdev setup:
+
+.. code-block:: c
+
+ /* init EAL */
+
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= ret;
+ argv += ret;
+
+ /* parse application arguments (after the EAL ones) */
+
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_panic("Invalid L2FWD arguments\n");
+ .
+ .
+ .
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(argc, argv);
+ if (ret < 0)
+ return ret;
+
+.. _l2_fwd_event_app_mbuf_init:
+
+Mbuf Pool Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the arguments are parsed, the mbuf pool is created.
+The mbuf pool contains a set of mbuf objects that will be used by the driver
+and the application to store network packet data:
+
+.. code-block:: c
+
+ /* create the mbuf pool */
+
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_panic("Cannot init mbuf pool\n");
+
+The rte_mempool is a generic structure used to handle pools of objects.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of 32 mbufs is kept.
+The memory is allocated on the NUMA socket of the master lcore,
+but it is possible to extend this code to allocate one mbuf pool per socket
+(a sketch of this extension is given below).
+
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
+
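+As a hedged illustration of that per-socket extension (not code taken from the
+example itself), one pool per NUMA socket in use could be created as sketched
+below; the ``pktmbuf_pool[]`` array name is hypothetical, while NB_MBUF and
+MEMPOOL_CACHE_SIZE are the constants already used above:
+
+.. code-block:: c
+
+    struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES] = {NULL};
+    char name[RTE_MEMPOOL_NAMESIZE];
+    unsigned int lcore_id, socket;
+
+    RTE_LCORE_FOREACH(lcore_id) {
+        socket = rte_lcore_to_socket_id(lcore_id);
+        if (pktmbuf_pool[socket] != NULL)
+            continue;
+
+        /* One pool per socket that has at least one enabled lcore */
+        snprintf(name, sizeof(name), "mbuf_pool_%u", socket);
+        pktmbuf_pool[socket] = rte_pktmbuf_pool_create(name, NB_MBUF,
+                MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
+                socket);
+        if (pktmbuf_pool[socket] == NULL)
+            rte_panic("Cannot init mbuf pool on socket %u\n", socket);
+    }
+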
+.. _l2_fwd_event_app_drv_init:
+
+Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~
+
+The main part of the code in the main() function relates to the initialization
+of the driver. To fully understand this code, it is recommended to study the
+chapters related to the Poll Mode Driver and the Event Device in the
+*DPDK Programmer's Guide* and the *DPDK API Reference*.
+
+.. code-block:: c
+
+ if (rte_pci_probe() < 0)
+ rte_panic("Cannot probe PCI\n");
+
+ /* reset l2fwd_dst_ports */
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ }
+ else
+ last_port = portid;
+
+ nb_ports_in_mask++;
+
+ rte_eth_dev_info_get((uint8_t) portid, &dev_info);
+ }
+
+Observe that:
+
+* rte_pci_probe() parses the devices on the PCI bus and initializes recognized
+ devices.
+
+The next step is to configure the RX and TX queues. For each port, there is only
+one RX queue (only one lcore is able to poll a given port). The number of TX
+queues depends on the number of available lcores. The rte_eth_dev_configure()
+function is used to configure the number of queues for a port:
+
+.. code-block:: c
+
+ ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
+ if (ret < 0)
+ rte_panic("Cannot configure device: err=%d, port=%u\n",
+ ret, portid);
+
+.. _l2_fwd_event_app_rx_init:
+
+RX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The application uses one lcore to poll one or several ports, depending on the -q
+option, which specifies the number of queues per lcore.
+
+For example, if the user specifies -q 4, the application is able to poll four
+ports with one lcore. If there are 16 ports on the target (and if the portmask
+argument is -p ffff ), the application will need four lcores to poll all the
+ports.
+
+.. code-block:: c
+
+ ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
+ &rx_conf, l2fwd_pktmbuf_pool);
+ if (ret < 0)
+ rte_panic("rte_eth_rx_queue_setup: err=%d, port=%u\n",
+ ret, portid);
+
+The list of queues that must be polled for a given lcore is stored in a private
+structure called struct lcore_queue_conf.
+
+.. code-block:: c
+
+ struct lcore_queue_conf {
+ unsigned n_rx_port;
+ unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
+ } __rte_cache_aligned;
+
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+The values n_rx_port and rx_port_list[] are used in the main packet processing
+loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).
+
+.. _l2_fwd_event_app_tx_init:
+
+TX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Each lcore should be able to transmit on any port. For every port, a single TX
+queue is initialized.
+
+.. code-block:: c
+
+ /* init one TX queue on each port */
+
+ fflush(stdout);
+
+ ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid), &tx_conf);
+ if (ret < 0)
+ rte_panic("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, (unsigned) portid);
+
+The global configuration for TX queues is stored in a static structure:
+
+.. code-block:: c
+
+ static const struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = RTE_TEST_TX_DESC_DEFAULT + 1, /* disable feature */
+ };
+
+To configure eventdev support, the application sets up the following components:
+
+* Event dev
+* Event queue
+* Event Port
+* Rx/Tx adapters
+* Ethernet ports
+
+.. _l2_fwd_event_app_event_dev_init:
+
+Event device Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The application can use either a H/W or a S/W based event device scheduler
+implementation and supports a single event device instance. The event device is
+configured as follows:
+
+.. code-block:: c
+
+ struct rte_event_dev_config event_d_conf = {
+ .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
+ .nb_event_ports = num_workers, /* Dedicated to each lcore */
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event device\n");
+
+In case of a S/W scheduler, the application runs the eventdev scheduler service
+on a service core. The application retrieves the service id and finds the best
+possible service core to run the S/W scheduler on.
+
+.. code-block:: c
+
+ rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+ if (evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
+ ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting eventdev service\n");
+ l2fwd_event_service_enable(service_id);
+ }
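+
+A helper such as l2fwd_event_service_enable() could, as a simplified sketch
+(the actual application may pick the least loaded service core rather than the
+first one in the list), map and start the service as follows:
+
+.. code-block:: c
+
+    static void
+    l2fwd_event_service_enable(uint32_t service_id)
+    {
+        uint32_t slcore_array[RTE_MAX_LCORE];
+        int32_t slcore_count;
+
+        slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+        if (slcore_count <= 0)
+            rte_panic("No service cores available\n");
+
+        /* Map the service to a service core and start both of them. */
+        rte_service_map_lcore_set(service_id, slcore_array[0], 1);
+        rte_service_set_runstate_mapped_check(service_id, 1);
+        rte_service_runstate_set(service_id, 1);
+        rte_service_lcore_start(slcore_array[0]);
+    }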
+
+.. _l2_fwd_app_event_queue_init:
+
+Event queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each Ethernet device is assigned a dedicated event queue, which is linked to
+all available event ports, i.e. each lcore can dequeue packets from any of the
+Ethernet ports.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = 0,
+ .schedule_type = RTE_SCHED_TYPE_ATOMIC,
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
+ };
+
+ /* User requested sched mode */
+ event_q_conf.schedule_type = eventq_sched_mode;
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue\n");
+ }
+
+In case of a S/W scheduler, an extra single link event queue is created; it is
+used by the Tx adapter service function for its enqueue operation.
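+
+A sketch of that extra single link queue setup is shown below (the reuse of
+event_q_conf and event_q_id from the loop above is an assumption made for
+illustration):
+
+.. code-block:: c
+
+    /* Extra queue used only by the Tx adapter service port. */
+    event_q_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+    event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+    ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+    if (ret < 0)
+        rte_panic("Error in configuring Tx adapter event queue\n");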
+
+.. _l2_fwd_app_event_port_init:
+
+Event port Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Each worker thread is assigned a dedicated event port for enq/deq operations
+to/from an event device. All event ports are linked with all available event
+queues.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event port %d\n", event_p_id);
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0)
+ rte_panic("Error in linking event port %d to queue\n",
+ event_p_id);
+ }
+
+In case of a S/W scheduler, an extra event port is created by the DPDK library;
+the application retrieves it and links it to the single link event queue used
+by the Tx adapter service.
+
+.. code-block:: c
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &evt_rsrc.evq.event_q_id[
+ evt_rsrc.evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+ ret);
+
+.. _l2_fwd_event_app_adapter_init:
+
+Rx/Tx adapter Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx adapter.
+Each Ethernet port's Rx queues are connected to its respective event queue at
+priority 0 via the Rx adapter configuration, and each Ethernet port's Tx queues
+are connected via the Tx adapter.
+
+.. code-block:: c
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create rx adapter[%d]\n",
+ adapter_id);
+
+ /* Configure user requested sched type*/
+ eth_q_conf.ev.sched_type = rsrc->sched_type;
+ eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+ ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+ -1, &eth_q_conf);
+ if (ret)
+ rte_panic("Failed to add queues to Rx adapter\n");
+
+ ret = rte_event_eth_rx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ if (q_id < evt_rsrc->evq.nb_queues)
+ q_id++;
+ }
+
+ adapter_id = 0;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create tx adapter[%d]\n",
+ adapter_id);
+
+ ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+ -1);
+ if (ret)
+ rte_panic("Failed to add queues to Tx adapter\n");
+
+ ret = rte_event_eth_tx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ }
+
+For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
+configured and shared among all the Ethernet ports. The DPDK library also needs
+service cores to run the internal services for the Rx/Tx adapters. The
+application gets the service ids for the Rx/Tx adapters and, after successful
+setup, runs the services on dedicated service cores.
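+
+For illustration, a shared Rx adapter for the S/W case might be created roughly
+as below (a sketch only; the single rx_adptr_id and the reuse of eth_q_conf and
+q_id from the per-port loop shown earlier are assumptions):
+
+.. code-block:: c
+
+    ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+                                          &evt_rsrc->def_p_conf);
+    if (ret)
+        rte_panic("Failed to create rx adapter\n");
+
+    /* All enabled Ethernet ports share the same adapter. */
+    RTE_ETH_FOREACH_DEV(port_id) {
+        if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+            continue;
+        eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id++];
+        ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+                                                 -1, &eth_q_conf);
+        if (ret)
+            rte_panic("Failed to add queues to Rx adapter\n");
+    }
+
+The service ids needed to run the shared adapters are then retrieved and the
+corresponding services are enabled: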
+
+.. code-block:: c
+
+ for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+ ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ ret = rte_event_eth_rx_adapter_service_id_get(
+ evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+ ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ ret = rte_event_eth_tx_adapter_service_id_get(
+ evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+
+.. _l2_fwd_event_app_rx_tx_packets:
+
+Receive, Process and Transmit Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
+the RX queues. This is done using the following code:
+
+.. code-block:: c
+
+ /*
+ * Read packet from RX queues
+ */
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
+ MAX_PKT_BURST);
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+
+Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
+function writes the mbuf pointers in a local table and returns the number of
+available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_simple_forward()
+function. The processing is very simple: determine the TX port from the RX
+port, then replace the source and destination MAC addresses if MAC address
+updating is enabled.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd, packets from the last port are
+forwarded to the first port, i.e. with portmask=0x07 forwarding takes place as
+p0--->p1, p1--->p2, p2--->p0.
+
+Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers
+incoming mbufs up to MAX_PKT_BURST. Once that limit is reached, all buffered
+packets are transmitted to the destination ports.
+
+.. code-block:: c
+
+ static void
+ l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+ {
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+ }
+
+For this test application, the processing is exactly the same for all packets
+arriving on the same RX port. Therefore, it would have been possible to call
+the rte_eth_tx_buffer() function directly from the main loop to send all the
+received packets on the same TX port, using the burst-oriented send function,
+which is more efficient.
+
+However, in real-life applications (such as, L3 routing),
+packet N is not necessarily forwarded on the same port as packet N-1.
+The application is implemented to illustrate that, so the same approach can be
+reused in a more complex application.
+
+To ensure that no packets remain in the tables, each lcore drains its TX
+buffers in the main loop. This technique introduces some latency when there
+are not many packets to send; however, it improves performance:
+
+.. code-block:: c
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id == rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+In the **l2fwd_event_loop()** function, the main task is to read ingress
+packets from the event ports. This is done using the following code:
+
+.. code-block:: c
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
+ events, deq_len, 0);
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf[i] = events[i].mbuf;
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
+ }
+
+
+Before reading packets, deq_len is fetched so that the dequeue burst size does
+not exceed the maximum dequeue depth allowed by the eventdev.
+The rte_event_dequeue_burst() function writes the mbuf pointers in a local table
+and returns the number of available mbufs in the table.
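+
+One way to obtain deq_len, shown here only as a sketch (the application may
+instead cache the dequeue depth chosen at event port setup time), is to query
+the event port attribute:
+
+.. code-block:: c
+
+    uint32_t deq_len = MAX_PKT_BURST;
+    uint32_t depth;
+
+    /* Respect the event port's configured dequeue depth. */
+    if (rte_event_port_attr_get(event_d_id, event_p_id,
+                RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &depth) == 0 &&
+            depth < deq_len)
+        deq_len = depth;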
+
+Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
+function. The processing is very simple: determine the TX port from the RX
+port, then replace the source and destination MAC addresses if MAC address
+updating is enabled.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of ports in the portmask is odd, packets from the last port are
+forwarded to the first port, i.e. with portmask=0x07 forwarding takes place as
+p0--->p1, p1--->p2, p2--->p0.
+
+Unlike the poll mode path, l2fwd_eventdev_forward() does not buffer incoming
+mbufs. Packets are forwarded to the destination ports via the Tx adapter or the
+generic eventdev enqueue API, depending on whether a H/W or a S/W scheduler is
+used.
+
+.. code-block:: c
+
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id, port_id, ev,
+ nb_rx);
+ while (nb_tx < nb_rx && !rsrc->force_quit)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx);
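+
+For the S/W scheduler, a rough sketch of the generic enqueue path is shown
+below; the tx_q_id variable (the single link event queue created for the Tx
+adapter) and the force_quit flag follow the setup described earlier:
+
+.. code-block:: c
+
+    /* Forward events to the Tx adapter's single link event queue. */
+    for (i = 0; i < nb_rx; i++) {
+        events[i].queue_id = tx_q_id;
+        events[i].op = RTE_EVENT_OP_FORWARD;
+    }
+
+    nb_tx = rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_rx);
+    while (nb_tx < nb_rx && !rsrc->force_quit)
+        nb_tx += rte_event_enqueue_burst(event_d_id, event_p_id,
+                        events + nb_tx, nb_rx - nb_tx);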
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (9 preceding siblings ...)
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-10-16 12:38 ` Jerin Jacob
2019-10-21 3:25 ` Varghese, Vipin
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
12 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-10-16 12:38 UTC (permalink / raw)
To: Pavan Nikhilesh, Nipun Gupta, Marko Kovacevic, Sunil Kumar Kori,
Rao, Nikhil, Van Haaren, Harry, Ori Kam, anoobj, lbartosik
Cc: Jerin Jacob, Richardson, Bruce, Hemant Agrawal, dpdk-dev,
mattias.ronnblom, Erik Gabriel Carrillo, liang.j.ma
On Mon, Oct 14, 2019 at 11:52 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds a new application to demonstrate the usage of event
> mode. The poll mode is also available to help with the transition.
>
> The following new command line parameters are added:
> --mode: Dictates the mode of operation either poll or event.
> --eventq_sched: Dictates event scheduling mode ordered, atomic or
> parallel.
>
> Based on event device capability the configuration is done as follows:
> - A single event device is enabled.
> - The number of event ports is equal to the number of worker
> cores enabled in the core mask. Additional event ports might
> be configured based on Rx/Tx adapter capability.
> - The number of event queues is equal to the number of ethernet
> ports. If Tx adapter doesn't have internal port capability then
> an additional single link event queue is used to enqueue events
> to Tx adapter.
> - Each event port is linked to all existing event queues.
> - Dedicated Rx/Tx adapters for each Ethernet port.
+ Adding all eventdev PMD maintainers.
# Got the ACK from NXP after testing with their HW.
# I will merge this patch after the RC1. i.e End of next week if there
are no more review comments.
# Thanks Anoob and Lukas for the initial versions of this l2fwd-event support.
>
> v6 Changes:
> - Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
> - Use rte_panic instead of rte_exit.
> - Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
> - Update documentation and fix error, spellcheck.
> - Fix meson build.
> - Split functions into smaller functions for redability.
> - Add parallel mode support.
>
> v5 Changes:
> - Redo poll mode datapath by removing all the static globals.
> - Fix event queue configuration when required queues are not available.
> - Fix Rx/Tx adapter creation based on portmask.
> - Update release notes.
> - Unroll macro used to generate event mode functions.
>
> v4 Changes:
> - Fix missing eventdev args parsing.
>
> v3 Changes:
> - Remove unwanted change to example/l2fwd.
> - Fix checkpatch issue
> http://mails.dpdk.org/archives/test-report/2019-September/098053.html
>
> v2 Changes:
> - Remove global variables.
> - Split patches to make reviews friendlier.
> - Split datapath based on eventdev capability.
>
> Pavan Nikhilesh (5):
> examples/l2fwd-event: add default poll mode routines
> examples/l2fwd-event: add infra for eventdev
> examples/l2fwd-event: add service core setup
> examples/l2fwd-event: add eventdev main loop
> examples/l2fwd-event: add graceful teardown
>
> Sunil Kumar Kori (5):
> examples/l2fwd-event: add infra to split eventdev framework
> examples/l2fwd-event: add event device setup
> examples/l2fwd-event: add eventdev queue and port setup
> examples/l2fwd-event: add event Rx/Tx adapter setup
> doc: add application usage guide for l2fwd-event
>
> MAINTAINERS | 6 +
> doc/guides/rel_notes/release_19_11.rst | 6 +
> doc/guides/sample_app_ug/index.rst | 1 +
> doc/guides/sample_app_ug/intro.rst | 5 +
> doc/guides/sample_app_ug/l2_forward_event.rst | 711 ++++++++++++++++++
> examples/Makefile | 1 +
> examples/l2fwd-event/Makefile | 62 ++
> examples/l2fwd-event/l2fwd_common.c | 144 ++++
> examples/l2fwd-event/l2fwd_common.h | 133 ++++
> examples/l2fwd-event/l2fwd_event.c | 431 +++++++++++
> examples/l2fwd-event/l2fwd_event.h | 73 ++
> examples/l2fwd-event/l2fwd_event_generic.c | 315 ++++++++
> .../l2fwd-event/l2fwd_event_internal_port.c | 294 ++++++++
> examples/l2fwd-event/l2fwd_poll.c | 193 +++++
> examples/l2fwd-event/l2fwd_poll.h | 25 +
> examples/l2fwd-event/main.c | 456 +++++++++++
> examples/l2fwd-event/meson.build | 18 +
> examples/meson.build | 2 +-
> 18 files changed, 2875 insertions(+), 1 deletion(-)
> create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
> create mode 100644 examples/l2fwd-event/Makefile
> create mode 100644 examples/l2fwd-event/l2fwd_common.c
> create mode 100644 examples/l2fwd-event/l2fwd_common.h
> create mode 100644 examples/l2fwd-event/l2fwd_event.c
> create mode 100644 examples/l2fwd-event/l2fwd_event.h
> create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
> create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
> create mode 100644 examples/l2fwd-event/l2fwd_poll.c
> create mode 100644 examples/l2fwd-event/l2fwd_poll.h
> create mode 100644 examples/l2fwd-event/main.c
> create mode 100644 examples/l2fwd-event/meson.build
>
> --
> 2.17.1
>
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-10-16 19:07 ` Stephen Hemminger
2019-10-21 3:29 ` Varghese, Vipin
1 sibling, 0 replies; 107+ messages in thread
From: Stephen Hemminger @ 2019-10-16 19:07 UTC (permalink / raw)
To: pbhagavatula
Cc: jerinj, bruce.richardson, hemant.agrawal, Thomas Monjalon,
Marko Kovacevic, Ori Kam, Radu Nicolau, Akhil Goyal,
Tomasz Kantecki, Sunil Kumar Kori, dev
On Mon, 14 Oct 2019 23:52:38 +0530
<pbhagavatula@marvell.com> wrote:
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add the default l2fwd poll mode routines similar to examples/l2fwd.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Patchwork CI reports build failures on this.
Every patch must build, even if part of bigger set.
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 09/10] examples/l2fwd-event: add graceful teardown
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
@ 2019-10-21 3:12 ` Varghese, Vipin
2019-10-21 16:56 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Varghese, Vipin @ 2019-10-21 3:12 UTC (permalink / raw)
To: pbhagavatula, jerinj, Richardson, Bruce, hemant.agrawal,
Kovacevic, Marko, Ori Kam, Nicolau, Radu, Akhil Goyal, Kantecki,
Tomasz, Sunil Kumar Kori
Cc: dev
Hi Pavan,
snipped
>
> Add graceful teardown that addresses both event mode and poll mode.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
snipped
> + if (rsrc->event_mode) {
> + struct l2fwd_event_resources *evt_rsrc =
> + rsrc->evt_rsrc;
> + for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
> + rte_event_eth_rx_adapter_stop(
> + evt_rsrc->rx_adptr.rx_adptr[i]);
Question from my end: for a graceful teardown, should we first stop the RX adapter, then ensure all events from the workers are either dropped or transmitted, and only then stop the TX adapter? Is this the right way?
> + for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
> + rte_event_eth_tx_adapter_stop(
> + evt_rsrc->tx_adptr.tx_adptr[i]);
Should we call `rte_cleanup` to clean up the service core usage?
> }
> printf("Bye...\n");
>
> --
> 2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-10-21 3:19 ` Varghese, Vipin
2019-10-21 16:53 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Varghese, Vipin @ 2019-10-21 3:19 UTC (permalink / raw)
To: pbhagavatula, jerinj, Richardson, Bruce, hemant.agrawal,
Kovacevic, Marko, Ori Kam, Nicolau, Radu, Akhil Goyal, Kantecki,
Tomasz, Sunil Kumar Kori
Cc: dev
HI Pavan,
snipped
> Add event dev main loop based on enabled l2fwd options and eventdev
> capabilities.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
> + if (rsrc->event_mode) {
> + port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
> + port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
> + port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
> + }
Question: is the RSS hash configured to generate flow ids for eventdev? To my understanding, RSS for a single RX port-queue pair does not require it.
snipped
> + if (is_master && timer_period > 0) {
> + cur_tsc = rte_rdtsc();
> + diff_tsc = cur_tsc - prev_tsc;
> +
> + /* advance the timer */
> + timer_tsc += diff_tsc;
> +
> + /* if timer has reached its timeout */
> + if (unlikely(timer_tsc >= timer_period)) {
> + print_stats(rsrc);
> + /* reset the timer */
> + timer_tsc = 0;
> + }
> + prev_tsc = cur_tsc;
> + }
Is it possible to move the print_stats to service core, as 'CALL_MASTER' is enabled in remote_launch making this a potential worker?
> +
> + /* Read packet from eventdev */
> + if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
> + continue;
Is not this unlikely `nb_burst == 0`
> +
> + l2fwd_event_fwd(rsrc, &ev, tx_q_id, timer_period, flags);
> +
> + if (flags & L2FWD_EVENT_TX_ENQ) {
> + while (rte_event_enqueue_burst(event_d_id, port_id,
> + &ev, 1) &&
> + !rsrc->force_quit)
> + ;
Can we place a `continue` as we are not expecting ` L2FWD_EVENT_TX_DIRECT`?
> + }
> +
> + if (flags & L2FWD_EVENT_TX_DIRECT) {
> + while
> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> + port_id,
> + &ev, 1, 0) &&
> + !rsrc->force_quit)
> + ;
> + }
> + }
snipped
> +
> + while (!rsrc->force_quit) {
> + /* if timer is enabled */
> + if (is_master && timer_period > 0) {
> + cur_tsc = rte_rdtsc();
> + diff_tsc = cur_tsc - prev_tsc;
> +
> + /* advance the timer */
> + timer_tsc += diff_tsc;
> +
> + /* if timer has reached its timeout */
> + if (unlikely(timer_tsc >= timer_period)) {
> + print_stats(rsrc);
> + /* reset the timer */
> + timer_tsc = 0;
> + }
> + prev_tsc = cur_tsc;
> + }
Can we move `print_stats` logic to service core?
> +
> + /* Read packet from eventdev */
> + nb_rx = rte_event_dequeue_burst(event_d_id, port_id, ev,
> + deq_len, 0);
> + if (nb_rx == 0)
Can we use `unlikely`?
> + continue;
> +
> + for (i = 0; i < nb_rx; i++) {
> + l2fwd_event_fwd(rsrc, &ev[i], tx_q_id, timer_period,
> + flags);
> + }
> +
> + if (flags & L2FWD_EVENT_TX_ENQ) {
> + nb_tx = rte_event_enqueue_burst(event_d_id, port_id,
> + ev, nb_rx);
> + while (nb_tx < nb_rx && !rsrc->force_quit)
> + nb_tx +=
> rte_event_enqueue_burst(event_d_id,
> + port_id, ev + nb_tx,
> + nb_rx - nb_tx);
Can we use `continue` as we do not transmit from the same worker int his case?
> + }
> +
snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (10 preceding siblings ...)
2019-10-16 12:38 ` [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example Jerin Jacob
@ 2019-10-21 3:25 ` Varghese, Vipin
2019-10-21 17:02 ` Pavan Nikhilesh Bhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
12 siblings, 1 reply; 107+ messages in thread
From: Varghese, Vipin @ 2019-10-21 3:25 UTC (permalink / raw)
To: pbhagavatula, jerinj, Richardson, Bruce, hemant.agrawal; +Cc: dev
Hi Pavan,
Thanks for sharing the write up, following are some of my thoughts.
Snipped
>
> Based on event device capability the configuration is done as follows:
> - A single event device is enabled.
> - The number of event ports is equal to the number of worker
> cores enabled in the core mask. Additional event ports might
> be configured based on Rx/Tx adapter capability.
> - The number of event queues is equal to the number of ethernet
> ports. If Tx adapter doesn't have internal port capability then
> an additional single link event queue is used to enqueue events
> to Tx adapter.
1. Do we support per-port behaviour such as 'mac_updating' and 'non_mac_updating'?
2. If not, why should each port have its own event queue, i.e. why is the `number of event queues equal to the number of ethernet` ports?
3. Will this work for vdev ports? If not, are we adding a check for the same in the `main` function?
snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-10-16 19:07 ` Stephen Hemminger
@ 2019-10-21 3:29 ` Varghese, Vipin
1 sibling, 0 replies; 107+ messages in thread
From: Varghese, Vipin @ 2019-10-21 3:29 UTC (permalink / raw)
To: pbhagavatula, jerinj, Richardson, Bruce, hemant.agrawal,
Thomas Monjalon, Kovacevic, Marko, Ori Kam, Nicolau, Radu,
Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
Cc: dev
HI Pavan,
snipped
> +
> +/* Print out statistics on packets dropped */ void print_stats(struct
> +l2fwd_resources *rsrc) {
> + uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
> + uint32_t port_id;
> +
> + total_packets_dropped = 0;
> + total_packets_tx = 0;
> + total_packets_rx = 0;
> +
> + const char clr[] = {27, '[', '2', 'J', '\0' };
> + const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' };
> +
> + /* Clear screen and move to top left */
> + printf("%s%s", clr, topLeft);
> +
> + printf("\nPort statistics ====================================");
> +
> + for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {
> + /* skip disabled ports */
> + if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
> + continue;
> + printf("\nStatistics for port %u ------------------------------"
> + "\nPackets sent: %24"PRIu64
> + "\nPackets received: %20"PRIu64
> + "\nPackets dropped: %21"PRIu64,
> + port_id,
> + rsrc->port_stats[port_id].tx,
> + rsrc->port_stats[port_id].rx,
> + rsrc->port_stats[port_id].dropped);
> +
> + total_packets_dropped +=
> + rsrc->port_stats[port_id].dropped;
> + total_packets_tx += rsrc->port_stats[port_id].tx;
> + total_packets_rx += rsrc->port_stats[port_id].rx;
> + }
> + printf("\nAggregate statistics ==============================="
> + "\nTotal packets sent: %18"PRIu64
> + "\nTotal packets received: %14"PRIu64
> + "\nTotal packets dropped: %15"PRIu64,
> + total_packets_tx,
> + total_packets_rx,
> + total_packets_dropped);
> +
> printf("\n================================================
> ====\n");
> +}
Would it not be useful to incorporate eventdev stats and Rx/Tx event adapter stats? Then one can see drops at each stage as
`RX ports-queues stats ==> RX event adapter ==> Eventdev ==> TX adapter ==> TX ports-queues stats`
snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-21 3:19 ` Varghese, Vipin
@ 2019-10-21 16:53 ` Pavan Nikhilesh Bhagavatula
2019-10-22 3:13 ` Varghese, Vipin
0 siblings, 1 reply; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-10-21 16:53 UTC (permalink / raw)
To: Varghese, Vipin, Jerin Jacob Kollanukkaran, Richardson, Bruce,
hemant.agrawal, Kovacevic, Marko, Ori Kam, Nicolau, Radu,
Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
Cc: dev
Hi Vipin,
>HI Pavan,
>
>snipped
>> Add event dev main loop based on enabled l2fwd options and
>eventdev
>> capabilities.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>> + if (rsrc->event_mode) {
>> + port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
>> + port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
>> + port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
>> + }
>Question, is RSS hash configured for generating flow id for Eventdev?
>As my understanding. RSS for single RX port-queue pair does not
>require the same.
>snipped
In case of the SW event device, i.e. vdev=event_sw0, the software Rx adapter requires mbuf->hash.rss
to distribute packets, else it has to calculate the RSS hash itself.
@see lib/librte_eventdev/rte_event_eth_rx_adapter.c +817
'
rss = do_rss ?
rxa_do_softrss(m, rx_adapter->rss_key_be) :
m->hash.rss;
'
>> + if (is_master && timer_period > 0) {
>> + cur_tsc = rte_rdtsc();
>> + diff_tsc = cur_tsc - prev_tsc;
>> +
>> + /* advance the timer */
>> + timer_tsc += diff_tsc;
>> +
>> + /* if timer has reached its timeout */
>> + if (unlikely(timer_tsc >= timer_period)) {
>> + print_stats(rsrc);
>> + /* reset the timer */
>> + timer_tsc = 0;
>> + }
>> + prev_tsc = cur_tsc;
>> + }
>Is it possible to move the print_stats to service core, as 'CALL_MASTER'
>is enabled in remote_launch making this a potential worker?
>
Since not every event device requires a service core, the user might not pass a service core mask.
Instead we could do SKIP_MASTER and print stats here.
>> +
>> + /* Read packet from eventdev */
>> + if (!rte_event_dequeue_burst(event_d_id, port_id,
>&ev, 1, 0))
>> + continue;
>Is not this unlikely `nb_burst == 0`
>
Not necessarily refer
https://mails.dpdk.org/archives/dev/2018-July/108610.html
>> +
>> + l2fwd_event_fwd(rsrc, &ev, tx_q_id, timer_period,
>flags);
>> +
>> + if (flags & L2FWD_EVENT_TX_ENQ) {
>> + while (rte_event_enqueue_burst(event_d_id,
>port_id,
>> + &ev, 1) &&
>> + !rsrc->force_quit)
>> + ;
>Can we place a `continue` as we are not expecting `
>L2FWD_EVENT_TX_DIRECT`?
>> + }
>
>> +
>> + if (flags & L2FWD_EVENT_TX_DIRECT) {
>> + while
>> (!rte_event_eth_tx_adapter_enqueue(event_d_id,
>> + port_id,
>> + &ev, 1,
>0) &&
>> + !rsrc->force_quit)
>> + ;
>> + }
>> + }
>snipped
>> +
>> + while (!rsrc->force_quit) {
>> + /* if timer is enabled */
>> + if (is_master && timer_period > 0) {
>> + cur_tsc = rte_rdtsc();
>> + diff_tsc = cur_tsc - prev_tsc;
>> +
>> + /* advance the timer */
>> + timer_tsc += diff_tsc;
>> +
>> + /* if timer has reached its timeout */
>> + if (unlikely(timer_tsc >= timer_period)) {
>> + print_stats(rsrc);
>> + /* reset the timer */
>> + timer_tsc = 0;
>> + }
>> + prev_tsc = cur_tsc;
>> + }
>Can we move `print_stats` logic to service core?
>
>> +
>> + /* Read packet from eventdev */
>> + nb_rx = rte_event_dequeue_burst(event_d_id,
>port_id, ev,
>> + deq_len, 0);
>> + if (nb_rx == 0)
>Can we use `unlikely`?
Not necessarily refer
https://mails.dpdk.org/archives/dev/2018-July/108610.html
>> + continue;
>> +
>> + for (i = 0; i < nb_rx; i++) {
>> + l2fwd_event_fwd(rsrc, &ev[i], tx_q_id,
>timer_period,
>> + flags);
>> + }
>> +
>> + if (flags & L2FWD_EVENT_TX_ENQ) {
>> + nb_tx =
>rte_event_enqueue_burst(event_d_id, port_id,
>> + ev, nb_rx);
>> + while (nb_tx < nb_rx && !rsrc->force_quit)
>> + nb_tx +=
>> rte_event_enqueue_burst(event_d_id,
>> + port_id, ev + nb_tx,
>> + nb_rx - nb_tx);
>Can we use `continue` as we do not transmit from the same worker int
>his case?
I'm not sure I follow what you meant here. We are trying to transmit the work
present on the worker till we succeed; if we do a continue then we would lose
the untransmitted packets.
@see examples/eventdev_pipeline/pipeline_worker_generic.c +109
>> + }
>> +
>snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 09/10] examples/l2fwd-event: add graceful teardown
2019-10-21 3:12 ` Varghese, Vipin
@ 2019-10-21 16:56 ` Pavan Nikhilesh Bhagavatula
2019-10-22 2:48 ` Varghese, Vipin
0 siblings, 1 reply; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-10-21 16:56 UTC (permalink / raw)
To: Varghese, Vipin, Jerin Jacob Kollanukkaran, Richardson, Bruce,
hemant.agrawal, Kovacevic, Marko, Ori Kam, Nicolau, Radu,
Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
Cc: dev
>Hi Pavan,
>
>snipped
>>
>> Add graceful teardown that addresses both event mode and poll
>mode.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>snipped
>> + if (rsrc->event_mode) {
>> + struct l2fwd_event_resources *evt_rsrc =
>> + rsrc->evt_rsrc;
>> + for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
>> + rte_event_eth_rx_adapter_stop(
>> + evt_rsrc->rx_adptr.rx_adptr[i]);
>Question from my end, for a graceful tear down first we stop the RX
>adapter then ensure after all events from worker are either dropped or
>transmit. Then we continue to TX adapter is stop. Is this right way?
The general rule of thumb is to stop producers before consumers.
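Roughly, the intended order is the sketch below (the final patch may also stop
and close the ethdev ports afterwards):
'
	/* Stop producers (Rx adapters) first, then consumers (Tx adapters),
	 * then the event device itself.
	 */
	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
		rte_event_eth_rx_adapter_stop(evt_rsrc->rx_adptr.rx_adptr[i]);
	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
		rte_event_eth_tx_adapter_stop(evt_rsrc->tx_adptr.tx_adptr[i]);
	rte_event_dev_stop(evt_rsrc->event_d_id);
	rte_event_dev_close(evt_rsrc->event_d_id);
'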
>> + for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
>> + rte_event_eth_tx_adapter_stop(
>> + evt_rsrc->tx_adptr.tx_adptr[i]);
>Should we call `rte_cleanup` to clean up the service core usage?
Since we are exiting from here I don't think we explicitly need to do a
cleanup of service config.
>
>> }
>> printf("Bye...\n");
>>
>> --
>> 2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-21 3:25 ` Varghese, Vipin
@ 2019-10-21 17:02 ` Pavan Nikhilesh Bhagavatula
2019-10-22 2:57 ` Varghese, Vipin
0 siblings, 1 reply; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-10-21 17:02 UTC (permalink / raw)
To: Varghese, Vipin, Jerin Jacob Kollanukkaran, Richardson, Bruce,
hemant.agrawal
Cc: dev
Hi Vipin,
>Hi Pavan,
>
>Thanks for sharing the write up, following are some of my thoughts.
>
>Snipped
>>
>> Based on event device capability the configuration is done as follows:
>> - A single event device is enabled.
>> - The number of event ports is equal to the number of worker
>> cores enabled in the core mask. Additional event ports might
>> be configured based on Rx/Tx adapter capability.
>> - The number of event queues is equal to the number of ethernet
>> ports. If Tx adapter doesn't have internal port capability then
>> an additional single link event queue is used to enqueue events
>> to Tx adapter.
>1. Are we support per port function as 'mac_updating' and
>'non_macupdating'?
No, it's global.
>2. If no for above, why is that each port should have ` number of event
>queues is equal to the number of ethernet`?
It's not a per-port quantity; it is across the event device that the number
of event queues is equal to the number of ethdevs used.
This is to prevent event queues from being overcrowded, i.e. if there is only
one event queue and multiple Eth devices then SW/HW will have a bottleneck
in enqueueing all the mbufs to a single event queue.
>3. With this work for vdev ports, if no are we adding check for the same
>in `main` function?
I have verified the functionality for --vdev=event_sw0 and it seems to work fine.
>
>snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 09/10] examples/l2fwd-event: add graceful teardown
2019-10-21 16:56 ` Pavan Nikhilesh Bhagavatula
@ 2019-10-22 2:48 ` Varghese, Vipin
0 siblings, 0 replies; 107+ messages in thread
From: Varghese, Vipin @ 2019-10-22 2:48 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
Richardson, Bruce, hemant.agrawal, Kovacevic, Marko, Ori Kam,
Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
Cc: dev
Hi Pavan,
snipped
> >> Add graceful teardown that addresses both event mode and poll
> >mode.
> >>
> >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> ---
> >snipped
> >> + if (rsrc->event_mode) {
> >> + struct l2fwd_event_resources *evt_rsrc =
> >> + rsrc->evt_rsrc;
> >> + for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
> >> + rte_event_eth_rx_adapter_stop(
> >> + evt_rsrc->rx_adptr.rx_adptr[i]);
> >Question from my end, for a graceful tear down first we stop the RX
> >adapter then ensure after all events from worker are either dropped or
> >transmit. Then we continue to TX adapter is stop. Is this right way?
>
> The general rule of thumb is to stop producers before consumers.
>
> >> + for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
> >> + rte_event_eth_tx_adapter_stop(
> >> + evt_rsrc->tx_adptr.tx_adptr[i]);
> >Should we call `rte_cleanup` to clean up the service core usage?
>
> Since we are exiting from here I don't think we explicitly need to do a cleanup of
> service config.
As I recollect, in DPDK 18.11 there was a bug and a fix done for service core cleanup with `rte_cleanup`. If this is taken care of implicitly in 19.11, then yes, you are right, there is no need for `rte_cleanup` when service cores are in use.
>
> >
> >> }
> >> printf("Bye...\n");
> >>
> >> --
> >> 2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-21 17:02 ` Pavan Nikhilesh Bhagavatula
@ 2019-10-22 2:57 ` Varghese, Vipin
2019-10-22 15:55 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Varghese, Vipin @ 2019-10-22 2:57 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
Richardson, Bruce, hemant.agrawal
Cc: dev
Hi Pavan,
snipped
> >>
> >> Based on event device capability the configuration is done as follows:
> >> - A single event device is enabled.
> >> - The number of event ports is equal to the number of worker
> >> cores enabled in the core mask. Additional event ports might
> >> be configured based on Rx/Tx adapter capability.
> >> - The number of event queues is equal to the number of ethernet
> >> ports. If Tx adapter doesn't have internal port capability then
> >> an additional single link event queue is used to enqueue events
> >> to Tx adapter.
> >1. Are we support per port function as 'mac_updating' and
> >'non_macupdating'?
> No, its global.
Thanks for confirming, so all ports' behaviour is affected by this flag.
>
> >2. If no for above, why is that each port should have ` number of event
> >queues is equal to the number of ethernet`?
>
> It's not a quantifier of each port, It across the eventdevice that the number of
> event queues is equal to number of ethdevices used.
>
> This is to prevent event queues being overcrowded i.e. in case there is only one
> event queue and multiple Eth devices then SW/HW will have a bottleneck in
> enqueueing all the mbufs to a single event queue.
Yes you are correct, but here is my confusion:
1. Let us assume there are 2 ports, i.e. port 0 <==> port 1.
2. We have 4 workers and 2 event queues (2 ports, so Q-0 for port-0 and Q-1 for port-1).
3. The event mode (SW or HW) is parallel.
4. In this case we need to rely on Q-2, which can absorb the events for the single-event TX adapter for L2FWD_EVENT_TX_ENQ.
5. But for L2FWD_EVENT_TX_DIRECT (which is to send packets out directly) this does not look right, as multiple parallel workers may concurrently send to the same destination port queue (as there is 1 TX queue configured).
Can you help me understand how this is worked around?
>
> >3. With this work for vdev ports, if no are we adding check for the
> >same in `main` function?
>
> I have verified the functionality for --vdev=event_sw0 and it seems to work fine.
Thanks, so whether it is a physical or virtual ethernet device, all packets have to come to the worker cores for 'no-mac-updating'.
>
> >
> >snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-21 16:53 ` Pavan Nikhilesh Bhagavatula
@ 2019-10-22 3:13 ` Varghese, Vipin
2019-10-22 17:02 ` Pavan Nikhilesh Bhagavatula
0 siblings, 1 reply; 107+ messages in thread
From: Varghese, Vipin @ 2019-10-22 3:13 UTC (permalink / raw)
To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
Richardson, Bruce, hemant.agrawal, Kovacevic, Marko, Ori Kam,
Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
Cc: dev
Hi Pavan,
snipped
> >> Add event dev main loop based on enabled l2fwd options and
> >eventdev
> >> capabilities.
> >>
> >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> ---
> >> + if (rsrc->event_mode) {
> >> + port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
> >> + port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
> >> + port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
> >> + }
> >Question, is RSS hash configured for generating flow id for Eventdev?
> >As my understanding. RSS for single RX port-queue pair does not require
> >the same.
Thanks for the confirmation.
> >snipped
>
> In case of SW event device i.e. vdev=event_sw0 the software Rx adapter requires
> mbuf::rss:hash to distribute packets else it has to calculate rss.
>
> @see lib/librte_eventdev/rte_event_eth_rx_adapter.c +817 '
> rss = do_rss ?
> rxa_do_softrss(m, rx_adapter->rss_key_be) :
> m->hash.rss;
> '
>
> >> + if (is_master && timer_period > 0) {
> >> + cur_tsc = rte_rdtsc();
> >> + diff_tsc = cur_tsc - prev_tsc;
> >> +
> >> + /* advance the timer */
> >> + timer_tsc += diff_tsc;
> >> +
> >> + /* if timer has reached its timeout */
> >> + if (unlikely(timer_tsc >= timer_period)) {
> >> + print_stats(rsrc);
> >> + /* reset the timer */
> >> + timer_tsc = 0;
> >> + }
> >> + prev_tsc = cur_tsc;
> >> + }
> >Is it possible to move the print_stats to service core, as 'CALL_MASTER'
> >is enabled in remote_launch making this a potential worker?
> >
>
> Since not every eventdevice requires Service core user might not pass service
> core mask.
> Instead we could do SKIP_MASTER and prints stats here.
Thanks
>
> >> +
> >> + /* Read packet from eventdev */
> >> + if (!rte_event_dequeue_burst(event_d_id, port_id,
> >&ev, 1, 0))
> >> + continue;
> >Is not this unlikely `nb_burst == 0`
> >
>
> Not necessarily refer
> https://mails.dpdk.org/archives/dev/2018-July/108610.html
Thanks for the article and links, they are helpful. Following the article's suggestion, should the code not be restructured to execute code rather than continue?
snipped
> >> + /* Read packet from eventdev */
> >> + nb_rx = rte_event_dequeue_burst(event_d_id,
> >port_id, ev,
> >> + deq_len, 0);
> >> + if (nb_rx == 0)
> >Can we use `unlikely`?
>
> Not necessarily refer
> https://mails.dpdk.org/archives/dev/2018-July/108610.html
Same as above.
>
> >> + continue;
> >> +
> >> + for (i = 0; i < nb_rx; i++) {
> >> + l2fwd_event_fwd(rsrc, &ev[i], tx_q_id,
> >timer_period,
> >> + flags);
> >> + }
> >> +
> >> + if (flags & L2FWD_EVENT_TX_ENQ) {
> >> + nb_tx =
> >rte_event_enqueue_burst(event_d_id, port_id,
> >> + ev, nb_rx);
> >> + while (nb_tx < nb_rx && !rsrc->force_quit)
> >> + nb_tx +=
> >> rte_event_enqueue_burst(event_d_id,
> >> + port_id, ev + nb_tx,
> >> + nb_rx - nb_tx);
> >Can we use `continue` as we do not transmit from the same worker int
> >his case?
>
> I'm not sure I follow what you meant here. We are trying to transmit the work
> present on the worker till we succeed, if we do a continue then we would loose
> the untransmitted packets.
Maybe I mistook the L2FWD against ETHDEV_EVENT flags. As per my current understanding L2FWD_EVENT_TX_ENQ uses extra queue stage for single-event-enqueue for TX adapter, while L2FWD_EVENT_TX_DIRECT allows the worker to do transmit via port. May be this is wrong expectation.
>
> @see examples/eventdev_pipeline/pipeline_worker_generic.c +109
I will check the same and get back as required.
>
> >> + }
> >> +
> >snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-22 2:57 ` Varghese, Vipin
@ 2019-10-22 15:55 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-10-22 15:55 UTC (permalink / raw)
To: Varghese, Vipin, Jerin Jacob Kollanukkaran, Richardson, Bruce,
hemant.agrawal
Cc: dev
<snip>
>>
>> It's not a quantifier of each port, It across the eventdevice that the
>number of
>> event queues is equal to number of ethdevices used.
>>
>> This is to prevent event queues being overcrowded i.e. in case there
>is only one
>> event queue and multiple Eth devices then SW/HW will have a
>bottleneck in
>> enqueueing all the mbufs to a single event queue.
>Yes you are correct, but here is my confusion
>
>1. let us assume there are 2 ports, ie: port 0 <==> port 1.
>2. We have 4 workers and 2 event queue (2 ports, so Q-0 for port-0 and
>Q-11 for port-1)
>3. The event mode (SW or HW) is parallel.
>4. In this case we need to rely of Q-2 which can absorb the events for
>single-event TX adapter for L2FWD_EVENT_TX_ENQ.
>5. But for L2FWD_EVENT_TX_DIRECT (which is to send packets out
>directly) this is not right form as multiple parallel workers may
>concurrently send out same destination port queue (as there is 1 TX
>configured).
>
>Can you help me understand, how this is worked around?
We only select TX_DIRECT when the eventdev coupled with ethdev has
RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT capability.
It is the PMD's responsibility to synchronize concurrent access to its Tx
queues if it exposes the above capability.
In case of octeontx2 we expose RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT
only when both eventdev and ethdev are octeontx2 pmds.
Since octeontx2 has DEV_TX_OFFLOAD_MT_LOCKFREE, multicore Tx queue atomicity is
taken care of by HW.
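For illustration, the selection is roughly the sketch below (L2FWD_EVENT_TX_DIRECT
and L2FWD_EVENT_TX_ENQ are the application's internal flags):
'
	uint32_t caps = 0;

	/* Pick the Tx path based on the Tx adapter capability. */
	if (rte_event_eth_tx_adapter_caps_get(event_d_id, port_id, &caps))
		rte_panic("Failed to get Tx adapter caps\n");

	if (caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)
		flags |= L2FWD_EVENT_TX_DIRECT;
	else
		flags |= L2FWD_EVENT_TX_ENQ;
'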
>
>>
>> >3. With this work for vdev ports, if no are we adding check for the
>> >same in `main` function?
>>
>> I have verified the functionality for --vdev=event_sw0 and it seems to
>work fine.
>Thanks, so whether it is physical or virtual ethernet device all packets
>has to come to worker cores for 'no-mac-updating'.
>
>>
>> >
>> >snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v6 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-22 3:13 ` Varghese, Vipin
@ 2019-10-22 17:02 ` Pavan Nikhilesh Bhagavatula
0 siblings, 0 replies; 107+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-10-22 17:02 UTC (permalink / raw)
To: Varghese, Vipin, Jerin Jacob Kollanukkaran, Richardson, Bruce,
hemant.agrawal, Kovacevic, Marko, Ori Kam, Nicolau, Radu,
Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
Cc: dev
<snip>
>> >> + /* Read packet from eventdev */
>> >> + if (!rte_event_dequeue_burst(event_d_id, port_id,
>> >&ev, 1, 0))
>> >> + continue;
>> >Is not this unlikely `nb_burst == 0`
>> >
>>
>> Not necessarily refer
>> https://mails.dpdk.org/archives/dev/2018-July/108610.html
>
>Thanks for the article and links it is helpful. Following the article
>suggestion, should not the code be enhanced for execute code rather
>than continue?
>
Do you mean there would be a difference between the below cases?
void case1() {
while (1) {
if (!(random() & 0x1))
continue;
}
}
void case2() {
while (1) {
uint8_t is_one = random() & 0x1;
if (is_one == 0)
continue;
}
}
AFAIK both the above cases compile to the same asm.
>snipped
>> >> + /* Read packet from eventdev */
>> >> + nb_rx = rte_event_dequeue_burst(event_d_id,
>> >port_id, ev,
>> >> + deq_len, 0);
>> >> + if (nb_rx == 0)
>> >Can we use `unlikely`?
>>
>> Not necessarily refer
>> https://mails.dpdk.org/archives/dev/2018-July/108610.html
>Same as above.
>
>>
>> >> + continue;
>> >> +
>> >> + for (i = 0; i < nb_rx; i++) {
>> >> + l2fwd_event_fwd(rsrc, &ev[i], tx_q_id,
>> >timer_period,
>> >> + flags);
>> >> + }
>> >> +
>> >> + if (flags & L2FWD_EVENT_TX_ENQ) {
>> >> + nb_tx =
>> >rte_event_enqueue_burst(event_d_id, port_id,
>> >> + ev, nb_rx);
>> >> + while (nb_tx < nb_rx && !rsrc->force_quit)
>> >> + nb_tx +=
>> >> rte_event_enqueue_burst(event_d_id,
>> >> + port_id, ev + nb_tx,
>> >> + nb_rx - nb_tx);
>> >Can we use `continue` as we do not transmit from the same worker
>int
>> >his case?
>>
>> I'm not sure I follow what you meant here. We are trying to transmit
>the work
>> present on the worker till we succeed, if we do a continue then we
>would loose
>> the untransmitted packets.
>Maybe I mistook the L2FWD against ETHDEV_EVENT flags. As per my
>current understanding L2FWD_EVENT_TX_ENQ uses extra queue stage
>for single-event-enqueue for TX adapter, while
>L2FWD_EVENT_TX_DIRECT allows the worker to do transmit via port.
>May be this is wrong expectation.
>
The above understanding is correct. I just don't see how a continue out of
the while loop would help.
In the above case we are going to retry enqueue(TX_ENQ) or transmit(TX_DIRECT)
until we succeed.
>>
>> @see examples/eventdev_pipeline/pipeline_worker_generic.c +109
>I will check the same and get back as required.
>
>>
>> >> + }
>> >> +
>> >snipped
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-14 18:22 ` [dpdk-dev] [PATCH v6 " pbhagavatula
` (11 preceding siblings ...)
2019-10-21 3:25 ` Varghese, Vipin
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
` (10 more replies)
12 siblings, 11 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal; +Cc: dev, Pavan Nikhilesh
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
This patchset adds a new application to demonstrate the usage of event
mode. The poll mode is also available to help with the transition.
The following new command line parameters are added:
--mode: Dictates the mode of operation either poll or event.
--eventq_sched: Dictates event scheduling mode ordered, atomic or
parallel.
Based on event device capability the configuration is done as follows:
- A single event device is enabled.
- The number of event ports is equal to the number of worker
cores enabled in the core mask. Additional event ports might
be configured based on Rx/Tx adapter capability.
- The number of event queues is equal to the number of ethernet
ports. If Tx adapter doesn't have internal port capability then
an additional single link event queue is used to enqueue events
to Tx adapter.
- Each event port is linked to all existing event queues.
- Dedicated Rx/Tx adapters for each Ethernet port.
v7 Changes:
- Use master core to print stats.
- Remove print stats from worker cores.
- Add Rx/Tx adapter stats.
v6 Changes:
- Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
- Use rte_panic instead of rte_exit.
- Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
- Update documentation and fix error, spellcheck.
- Fix meson build.
- Split functions into smaller functions for readability.
- Add parallel mode support.
v5 Changes:
- Redo poll mode datapath by removing all the static globals.
- Fix event queue configuration when required queues are not available.
- Fix Rx/Tx adapter creation based on portmask.
- Update release notes.
- Unroll macro used to generate event mode functions.
v4 Changes:
- Fix missing eventdev args parsing.
v3 Changes:
- Remove unwanted change to example/l2fwd.
- Fix checkpatch issue
http://mails.dpdk.org/archives/test-report/2019-September/098053.html
v2 Changes:
- Remove global variables.
- Split patches to make reviews friendlier.
- Split datapath based on eventdev capability.
Pavan Nikhilesh (5):
examples/l2fwd-event: add default poll mode routines
examples/l2fwd-event: add infra for eventdev
examples/l2fwd-event: add service core setup
examples/l2fwd-event: add eventdev main loop
examples/l2fwd-event: add graceful teardown
Sunil Kumar Kori (5):
examples/l2fwd-event: add infra to split eventdev framework
examples/l2fwd-event: add event device setup
examples/l2fwd-event: add eventdev queue and port setup
examples/l2fwd-event: add event Rx/Tx adapter setup
doc: add application usage guide for l2fwd-event
MAINTAINERS | 6 +
doc/guides/rel_notes/release_19_11.rst | 6 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
doc/guides/sample_app_ug/l2_forward_event.rst | 698 ++++++++++++++++++
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 62 ++
examples/l2fwd-event/l2fwd_common.c | 97 +++
examples/l2fwd-event/l2fwd_common.h | 132 ++++
examples/l2fwd-event/l2fwd_event.c | 393 ++++++++++
examples/l2fwd-event/l2fwd_event.h | 73 ++
examples/l2fwd-event/l2fwd_event_generic.c | 315 ++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 294 ++++++++
examples/l2fwd-event/l2fwd_poll.c | 177 +++++
examples/l2fwd-event/l2fwd_poll.h | 25 +
examples/l2fwd-event/main.c | 579 +++++++++++++++
examples/l2fwd-event/meson.build | 18 +
examples/meson.build | 2 +-
18 files changed, 2883 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.c
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_event.c
create mode 100644 examples/l2fwd-event/l2fwd_event.h
create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 01/10] examples/l2fwd-event: add default poll mode routines
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
` (9 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Thomas Monjalon,
Marko Kovacevic, Ori Kam, Radu Nicolau, Akhil Goyal,
Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add the default l2fwd poll mode routines similar to examples/l2fwd.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
MAINTAINERS | 5 +
examples/Makefile | 1 +
examples/l2fwd-event/Makefile | 59 ++++
examples/l2fwd-event/l2fwd_common.c | 91 ++++++
examples/l2fwd-event/l2fwd_common.h | 127 ++++++++
examples/l2fwd-event/l2fwd_poll.c | 177 +++++++++++
examples/l2fwd-event/l2fwd_poll.h | 25 ++
examples/l2fwd-event/main.c | 446 ++++++++++++++++++++++++++++
examples/l2fwd-event/meson.build | 15 +
examples/meson.build | 2 +-
10 files changed, 947 insertions(+), 1 deletion(-)
create mode 100644 examples/l2fwd-event/Makefile
create mode 100644 examples/l2fwd-event/l2fwd_common.c
create mode 100644 examples/l2fwd-event/l2fwd_common.h
create mode 100644 examples/l2fwd-event/l2fwd_poll.c
create mode 100644 examples/l2fwd-event/l2fwd_poll.h
create mode 100644 examples/l2fwd-event/main.c
create mode 100644 examples/l2fwd-event/meson.build
diff --git a/MAINTAINERS b/MAINTAINERS
index f8a56e2e2..6957b2a24 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1475,6 +1475,11 @@ M: Tomasz Kantecki <tomasz.kantecki@intel.com>
F: doc/guides/sample_app_ug/l2_forward_cat.rst
F: examples/l2fwd-cat/
+M: Sunil Kumar Kori <skori@marvell.com>
+M: Pavan Nikhilesh <pbhagavatula@marvell.com>
+F: examples/l2fwd-event/
+T: git://dpdk.org/next/dpdk-next-eventdev
+
F: examples/l3fwd/
F: doc/guides/sample_app_ug/l3_forward.rst
diff --git a/examples/Makefile b/examples/Makefile
index de11dd487..d18504bd2 100644
--- a/examples/Makefile
+++ b/examples/Makefile
@@ -34,6 +34,7 @@ endif
DIRS-$(CONFIG_RTE_LIBRTE_HASH) += ipv4_multicast
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += kni
DIRS-y += l2fwd
+DIRS-y += l2fwd-event
ifneq ($(PQOS_INSTALL_PATH),)
DIRS-y += l2fwd-cat
endif
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
new file mode 100644
index 000000000..73f02dd3b
--- /dev/null
+++ b/examples/l2fwd-event/Makefile
@@ -0,0 +1,59 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# binary name
+APP = l2fwd-event
+
+# all source are stored in SRCS-y
+SRCS-y := main.c
+SRCS-y += l2fwd_poll.c
+SRCS-y += l2fwd_common.c
+
+# Build using pkg-config variables if possible
+ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
+
+all: shared
+.PHONY: shared static
+shared: build/$(APP)-shared
+ ln -sf $(APP)-shared build/$(APP)
+static: build/$(APP)-static
+ ln -sf $(APP)-static build/$(APP)
+
+PKGCONF=pkg-config --define-prefix
+
+PC_FILE := $(shell $(PKGCONF) --path libdpdk)
+CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)
+LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)
+LDFLAGS_STATIC = -Wl,-Bstatic $(shell $(PKGCONF) --static --libs libdpdk)
+
+build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)
+
+build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build
+ $(CC) $(CFLAGS) $(SRCS-y) -o $@ $(LDFLAGS) $(LDFLAGS_STATIC)
+
+build:
+ @mkdir -p $@
+
+.PHONY: clean
+clean:
+ rm -f build/$(APP) build/$(APP)-static build/$(APP)-shared
+ test -d build && rmdir -p build || true
+
+else # Build using legacy build system
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, detect a build directory, by looking for a path with a .config
+RTE_TARGET ?= $(notdir $(abspath $(dir $(firstword $(wildcard $(RTE_SDK)/*/.config)))))
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+include $(RTE_SDK)/mk/rte.extapp.mk
+endif
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
new file mode 100644
index 000000000..c206415d0
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -0,0 +1,91 @@
+#include "l2fwd_common.h"
+
+int
+l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
+{
+ uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+ struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .max_rx_pkt_len = RTE_ETHER_MAX_LEN,
+ .split_hdr_size = 0,
+ },
+ .txmode = {
+ .mq_mode = ETH_MQ_TX_NONE,
+ },
+ };
+ uint16_t nb_ports_available = 0;
+ uint16_t port_id;
+ int ret;
+
+ /* Initialise each port */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ struct rte_eth_conf local_port_conf = port_conf;
+ struct rte_eth_dev_info dev_info;
+ struct rte_eth_rxconf rxq_conf;
+ struct rte_eth_txconf txq_conf;
+
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0) {
+ printf("Skipping disabled port %u\n", port_id);
+ continue;
+ }
+ nb_ports_available++;
+
+ /* init port */
+ printf("Initializing port %u... ", port_id);
+ fflush(stdout);
+ rte_eth_dev_info_get(port_id, &dev_info);
+ if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+ local_port_conf.txmode.offloads |=
+ DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+ ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+ if (ret < 0)
+ rte_panic("Cannot configure device: err=%d, port=%u\n",
+ ret, port_id);
+
+ ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+ &nb_txd);
+ if (ret < 0)
+ rte_panic("Cannot adjust number of descriptors: err=%d, port=%u\n",
+ ret, port_id);
+
+ rte_eth_macaddr_get(port_id, &rsrc->eth_addr[port_id]);
+
+ /* init one RX queue */
+ fflush(stdout);
+ rxq_conf = dev_info.default_rxconf;
+ rxq_conf.offloads = local_port_conf.rxmode.offloads;
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd,
+ rte_eth_dev_socket_id(port_id),
+ &rxq_conf,
+ rsrc->pktmbuf_pool);
+ if (ret < 0)
+ rte_panic("rte_eth_rx_queue_setup:err=%d, port=%u\n",
+ ret, port_id);
+
+ /* init one TX queue on each port */
+ fflush(stdout);
+ txq_conf = dev_info.default_txconf;
+ txq_conf.offloads = local_port_conf.txmode.offloads;
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
+ rte_eth_dev_socket_id(port_id),
+ &txq_conf);
+ if (ret < 0)
+ rte_panic("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, port_id);
+
+ rte_eth_promiscuous_enable(port_id);
+
+ printf("Port %u,MAC address: %02X:%02X:%02X:%02X:%02X:%02X\n\n",
+ port_id,
+ rsrc->eth_addr[port_id].addr_bytes[0],
+ rsrc->eth_addr[port_id].addr_bytes[1],
+ rsrc->eth_addr[port_id].addr_bytes[2],
+ rsrc->eth_addr[port_id].addr_bytes[3],
+ rsrc->eth_addr[port_id].addr_bytes[4],
+ rsrc->eth_addr[port_id].addr_bytes[5]);
+ }
+
+ return nb_ports_available;
+}
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
new file mode 100644
index 000000000..7b74f92b3
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -0,0 +1,127 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_COMMON_H__
+#define __L2FWD_COMMON_H__
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+#include <netinet/in.h>
+#include <setjmp.h>
+#include <stdarg.h>
+#include <ctype.h>
+#include <errno.h>
+#include <getopt.h>
+#include <signal.h>
+#include <stdbool.h>
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_eal.h>
+#include <rte_launch.h>
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_prefetch.h>
+#include <rte_lcore.h>
+#include <rte_per_lcore.h>
+#include <rte_branch_prediction.h>
+#include <rte_interrupts.h>
+#include <rte_random.h>
+#include <rte_debug.h>
+#include <rte_ether.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#define MAX_PKT_BURST 32
+#define MAX_RX_QUEUE_PER_LCORE 16
+#define MAX_TX_QUEUE_PER_PORT 16
+
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
+#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
+#define MEMPOOL_CACHE_SIZE 256
+
+#define DEFAULT_TIMER_PERIOD 10 /* default period is 10 seconds */
+#define MAX_TIMER_PERIOD 86400 /* 1 day max */
+
+/* Per-port statistics struct */
+struct l2fwd_port_statistics {
+ uint64_t dropped;
+ uint64_t tx;
+ uint64_t rx;
+} __rte_cache_aligned;
+
+struct l2fwd_resources {
+ volatile uint8_t force_quit;
+ uint8_t mac_updating;
+ uint8_t rx_queue_per_lcore;
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint32_t enabled_port_mask;
+ uint64_t timer_period;
+ struct rte_mempool *pktmbuf_pool;
+ uint32_t dst_ports[RTE_MAX_ETHPORTS];
+ struct rte_ether_addr eth_addr[RTE_MAX_ETHPORTS];
+ struct l2fwd_port_statistics port_stats[RTE_MAX_ETHPORTS];
+ void *poll_rsrc;
+} __rte_cache_aligned;
+
+static __rte_always_inline void
+l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port_id,
+ struct rte_ether_addr *addr)
+{
+ struct rte_ether_hdr *eth;
+ void *tmp;
+
+ eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+ /* 02:00:00:00:00:xx */
+ tmp = ð->d_addr.addr_bytes[0];
+ *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port_id << 40);
+
+ /* src addr */
+ rte_ether_addr_copy(addr, ð->s_addr);
+}
+
+static __rte_always_inline struct l2fwd_resources *
+l2fwd_get_rsrc(void)
+{
+ static const char name[RTE_MEMZONE_NAMESIZE] = "rsrc";
+ const struct rte_memzone *mz;
+
+ mz = rte_memzone_lookup(name);
+ if (mz != NULL)
+ return mz->addr;
+
+ mz = rte_memzone_reserve(name, sizeof(struct l2fwd_resources), 0, 0);
+ if (mz != NULL) {
+ struct l2fwd_resources *rsrc = mz->addr;
+
+ memset(rsrc, 0, sizeof(struct l2fwd_resources));
+ rsrc->mac_updating = true;
+ rsrc->rx_queue_per_lcore = 1;
+ rsrc->timer_period = 10 * rte_get_timer_hz();
+
+ return mz->addr;
+ }
+
+ rte_panic("Unable to allocate memory for l2fwd resources\n");
+
+ return NULL;
+}
+
+int l2fwd_event_init_ports(struct l2fwd_resources *rsrc);
+
+#endif /* __L2FWD_COMMON_H__ */
diff --git a/examples/l2fwd-event/l2fwd_poll.c b/examples/l2fwd-event/l2fwd_poll.c
new file mode 100644
index 000000000..cc96b14cb
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_poll.c
@@ -0,0 +1,177 @@
+#include "l2fwd_poll.h"
+
+static inline void
+l2fwd_poll_simple_forward(struct l2fwd_resources *rsrc, struct rte_mbuf *m,
+ uint32_t portid)
+{
+ struct rte_eth_dev_tx_buffer *buffer;
+ uint32_t dst_port;
+ int sent;
+
+ dst_port = rsrc->dst_ports[portid];
+
+ if (rsrc->mac_updating)
+ l2fwd_mac_updating(m, dst_port, &rsrc->eth_addr[dst_port]);
+
+ buffer = ((struct l2fwd_poll_resources *)rsrc->poll_rsrc)->tx_buffer[
+ dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ rsrc->port_stats[dst_port].tx += sent;
+}
+
+/* main poll mode processing loop */
+static void
+l2fwd_poll_main_loop(struct l2fwd_resources *rsrc)
+{
+ uint64_t prev_tsc, diff_tsc, cur_tsc, drain_tsc;
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_eth_dev_tx_buffer *buf;
+ struct lcore_queue_conf *qconf;
+ uint32_t i, j, port_id, nb_rx;
+ struct rte_mbuf *m;
+ uint32_t lcore_id;
+ int32_t sent;
+
+ drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) / US_PER_S *
+ BURST_TX_DRAIN_US;
+ prev_tsc = 0;
+
+ lcore_id = rte_lcore_id();
+ qconf = &poll_rsrc->lcore_queue_conf[lcore_id];
+
+ if (qconf->n_rx_port == 0) {
+ printf("lcore %u has nothing to do\n", lcore_id);
+ return;
+ }
+
+ printf("entering main loop on lcore %u\n", lcore_id);
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ port_id = qconf->rx_port_list[i];
+ printf(" -- lcoreid=%u port_id=%u\n", lcore_id, port_id);
+
+ }
+
+ while (!rsrc->force_quit) {
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ port_id =
+ rsrc->dst_ports[qconf->rx_port_list[i]];
+ buf = poll_rsrc->tx_buffer[port_id];
+ sent = rte_eth_tx_buffer_flush(port_id, 0, buf);
+ if (sent)
+ rsrc->port_stats[port_id].tx += sent;
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+ /*
+ * Read packet from RX queues
+ */
+ for (i = 0; i < qconf->n_rx_port; i++) {
+
+ port_id = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst(port_id, 0, pkts_burst,
+ MAX_PKT_BURST);
+
+ rsrc->port_stats[port_id].rx += nb_rx;
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_poll_simple_forward(rsrc, m, port_id);
+ }
+ }
+ }
+}
+
+static void
+l2fwd_poll_lcore_config(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ struct lcore_queue_conf *qconf = NULL;
+ uint32_t rx_lcore_id = 0;
+ uint16_t port_id;
+
+ /* Initialize the port/queue configuration of each logical core */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+
+ /* get the lcore_id for this port */
+ while (rte_lcore_is_enabled(rx_lcore_id) == 0 ||
+ poll_rsrc->lcore_queue_conf[rx_lcore_id].n_rx_port ==
+ rsrc->rx_queue_per_lcore) {
+ rx_lcore_id++;
+ if (rx_lcore_id >= RTE_MAX_LCORE)
+ rte_panic("Not enough cores\n");
+ }
+
+ if (qconf != &poll_rsrc->lcore_queue_conf[rx_lcore_id]) {
+ /* Assigned a new logical core in the loop above. */
+ qconf = &poll_rsrc->lcore_queue_conf[rx_lcore_id];
+ }
+
+ qconf->rx_port_list[qconf->n_rx_port] = port_id;
+ qconf->n_rx_port++;
+ printf("Lcore %u: RX port %u\n", rx_lcore_id, port_id);
+ }
+}
+
+static void
+l2fwd_poll_init_tx_buffers(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* Initialize TX buffers */
+ poll_rsrc->tx_buffer[port_id] = rte_zmalloc_socket("tx_buffer",
+ RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
+ rte_eth_dev_socket_id(port_id));
+ if (poll_rsrc->tx_buffer[port_id] == NULL)
+ rte_panic("Cannot allocate buffer for tx on port %u\n",
+ port_id);
+
+ rte_eth_tx_buffer_init(poll_rsrc->tx_buffer[port_id],
+ MAX_PKT_BURST);
+
+ ret = rte_eth_tx_buffer_set_err_callback(
+ poll_rsrc->tx_buffer[port_id],
+ rte_eth_tx_buffer_count_callback,
+ &rsrc->port_stats[port_id].dropped);
+ if (ret < 0)
+ rte_panic("Cannot set error callback for tx buffer on port %u\n",
+ port_id);
+ }
+}
+
+void
+l2fwd_poll_resource_setup(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_poll_resources *poll_rsrc;
+
+ poll_rsrc = rte_zmalloc("l2fwd_poll_rsrc",
+ sizeof(struct l2fwd_poll_resources), 0);
+ if (poll_rsrc == NULL)
+ rte_panic("Failed to allocate resources for l2fwd poll mode\n");
+
+ rsrc->poll_rsrc = poll_rsrc;
+ l2fwd_poll_lcore_config(rsrc);
+ l2fwd_poll_init_tx_buffers(rsrc);
+
+ poll_rsrc->poll_main_loop = l2fwd_poll_main_loop;
+}
diff --git a/examples/l2fwd-event/l2fwd_poll.h b/examples/l2fwd-event/l2fwd_poll.h
new file mode 100644
index 000000000..d59b0c844
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_poll.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_POLL_H__
+#define __L2FWD_POLL_H__
+
+#include "l2fwd_common.h"
+
+typedef void (*poll_main_loop_cb)(struct l2fwd_resources *rsrc);
+
+struct lcore_queue_conf {
+ uint32_t rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ uint32_t n_rx_port;
+} __rte_cache_aligned;
+
+struct l2fwd_poll_resources {
+ poll_main_loop_cb poll_main_loop;
+ struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+};
+
+void l2fwd_poll_resource_setup(struct l2fwd_resources *rsrc);
+
+#endif
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
new file mode 100644
index 000000000..a4e41ddb4
--- /dev/null
+++ b/examples/l2fwd-event/main.c
@@ -0,0 +1,446 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "l2fwd_poll.h"
+
+/* display usage */
+static void
+l2fwd_event_usage(const char *prgname)
+{
+ printf("%s [EAL options] -- -p PORTMASK [-q NQ]\n"
+ " -p PORTMASK: hexadecimal bitmask of ports to configure\n"
+ " -q NQ: number of queue (=ports) per lcore (default is 1)\n"
+ " -T PERIOD: statistics will be refreshed each PERIOD seconds "
+ " (0 to disable, 10 default, 86400 maximum)\n"
+ " --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
+ " When enabled:\n"
+ " - The source MAC address is replaced by the TX port MAC address\n"
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ prgname);
+}
+
+static int
+l2fwd_event_parse_portmask(const char *portmask)
+{
+ char *end = NULL;
+ unsigned long pm;
+
+ /* parse hexadecimal string */
+ pm = strtoul(portmask, &end, 16);
+ if ((portmask[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+
+ if (pm == 0)
+ return -1;
+
+ return pm;
+}
+
+static unsigned int
+l2fwd_event_parse_nqueue(const char *q_arg)
+{
+ char *end = NULL;
+ unsigned long n;
+
+ /* parse decimal string */
+ n = strtoul(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return 0;
+ if (n == 0)
+ return 0;
+ if (n >= MAX_RX_QUEUE_PER_LCORE)
+ return 0;
+
+ return n;
+}
+
+static int
+l2fwd_event_parse_timer_period(const char *q_arg)
+{
+ char *end = NULL;
+ int n;
+
+ /* parse number string */
+ n = strtol(q_arg, &end, 10);
+ if ((q_arg[0] == '\0') || (end == NULL) || (*end != '\0'))
+ return -1;
+ if (n >= MAX_TIMER_PERIOD)
+ return -1;
+
+ return n;
+}
+
+static const char short_options[] =
+ "p:" /* portmask */
+ "q:" /* number of queues */
+ "T:" /* timer period */
+ ;
+
+#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
+#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+
+enum {
+ /* long options mapped to a short option */
+
+ /* first long only option value must be >= 256, so that we won't
+ * conflict with short options
+ */
+ CMD_LINE_OPT_MIN_NUM = 256,
+};
+
+/* Parse the argument given in the command line of the application */
+static int
+l2fwd_event_parse_args(int argc, char **argv,
+ struct l2fwd_resources *rsrc)
+{
+ int mac_updating = 1;
+ struct option lgopts[] = {
+ { CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
+ { CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ {NULL, 0, 0, 0}
+ };
+ int opt, ret, timer_secs;
+ char *prgname = argv[0];
+ char **argvopt;
+ int option_index;
+
+ argvopt = argv;
+ while ((opt = getopt_long(argc, argvopt, short_options,
+ lgopts, &option_index)) != EOF) {
+
+ switch (opt) {
+ /* portmask */
+ case 'p':
+ rsrc->enabled_port_mask =
+ l2fwd_event_parse_portmask(optarg);
+ if (rsrc->enabled_port_mask == 0) {
+ printf("invalid portmask\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* nqueue */
+ case 'q':
+ rsrc->rx_queue_per_lcore =
+ l2fwd_event_parse_nqueue(optarg);
+ if (rsrc->rx_queue_per_lcore == 0) {
+ printf("invalid queue number\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ break;
+
+ /* timer period */
+ case 'T':
+ timer_secs = l2fwd_event_parse_timer_period(optarg);
+ if (timer_secs < 0) {
+ printf("invalid timer period\n");
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ rsrc->timer_period = timer_secs;
+ /* convert to number of cycles */
+ rsrc->timer_period *= rte_get_timer_hz();
+ break;
+
+ /* long options */
+ case 0:
+ break;
+
+ default:
+ l2fwd_event_usage(prgname);
+ return -1;
+ }
+ }
+
+ rsrc->mac_updating = mac_updating;
+
+ if (optind >= 0)
+ argv[optind-1] = prgname;
+
+ ret = optind-1;
+ optind = 1; /* reset getopt lib */
+ return ret;
+}
+
+static int
+l2fwd_launch_one_lcore(void *args)
+{
+ struct l2fwd_resources *rsrc = args;
+ struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+
+ poll_rsrc->poll_main_loop(rsrc);
+
+ return 0;
+}
+
+/* Check the link status of all ports in up to 9s, and print them finally */
+static void
+check_all_ports_link_status(struct l2fwd_resources *rsrc,
+ uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */
+ uint16_t port_id;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+
+ printf("\nChecking link status...");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ if (rsrc->force_quit)
+ return;
+ all_ports_up = 1;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if (rsrc->force_quit)
+ return;
+ if ((port_mask & (1 << port_id)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ rte_eth_link_get_nowait(port_id, &link);
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status)
+ printf(
+ "Port%d Link Up. Speed %u Mbps - %s\n",
+ port_id, link.link_speed,
+ (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+ ("full-duplex") : ("half-duplex\n"));
+ else
+ printf("Port %d Link Down\n", port_id);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ printf(".");
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {
+ print_flag = 1;
+ printf("done\n");
+ }
+ }
+}
+
+/* Print out statistics on packets dropped */
+static void
+print_stats(struct l2fwd_resources *rsrc)
+{
+ uint64_t total_packets_dropped, total_packets_tx, total_packets_rx;
+ uint32_t port_id;
+
+ total_packets_dropped = 0;
+ total_packets_tx = 0;
+ total_packets_rx = 0;
+
+ const char clr[] = {27, '[', '2', 'J', '\0' };
+ const char topLeft[] = {27, '[', '1', ';', '1', 'H', '\0' };
+
+ /* Clear screen and move to top left */
+ printf("%s%s", clr, topLeft);
+
+ printf("\nPort statistics ====================================");
+
+ for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++) {
+ /* skip disabled ports */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ printf("\nStatistics for port %u ------------------------------"
+ "\nPackets sent: %24"PRIu64
+ "\nPackets received: %20"PRIu64
+ "\nPackets dropped: %21"PRIu64,
+ port_id,
+ rsrc->port_stats[port_id].tx,
+ rsrc->port_stats[port_id].rx,
+ rsrc->port_stats[port_id].dropped);
+
+ total_packets_dropped +=
+ rsrc->port_stats[port_id].dropped;
+ total_packets_tx += rsrc->port_stats[port_id].tx;
+ total_packets_rx += rsrc->port_stats[port_id].rx;
+ }
+ printf("\nAggregate statistics ==============================="
+ "\nTotal packets sent: %18"PRIu64
+ "\nTotal packets received: %14"PRIu64
+ "\nTotal packets dropped: %15"PRIu64,
+ total_packets_tx,
+ total_packets_rx,
+ total_packets_dropped);
+ printf("\n====================================================\n");
+}
+
+static void
+l2fwd_event_print_stats(struct l2fwd_resources *rsrc)
+{
+ uint64_t prev_tsc = 0, diff_tsc, cur_tsc, timer_tsc = 0;
+ const uint64_t timer_period = rsrc->timer_period;
+
+ while (!rsrc->force_quit) {
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ cur_tsc = rte_rdtsc();
+ diff_tsc = cur_tsc - prev_tsc;
+
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ print_stats(rsrc);
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ prev_tsc = cur_tsc;
+ }
+ }
+}
+
+
+static void
+signal_handler(int signum)
+{
+ struct l2fwd_resources *rsrc = l2fwd_get_rsrc();
+ if (signum == SIGINT || signum == SIGTERM) {
+ printf("\n\nSignal %d received, preparing to exit...\n",
+ signum);
+ rsrc->force_quit = true;
+ }
+}
+
+int
+main(int argc, char **argv)
+{
+ struct l2fwd_resources *rsrc;
+ uint16_t nb_ports_available = 0;
+ uint32_t nb_ports_in_mask = 0;
+ uint16_t port_id, last_port;
+ uint32_t nb_mbufs;
+ uint16_t nb_ports;
+ int ret;
+
+ /* init EAL */
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_panic("Invalid EAL arguments\n");
+ argc -= ret;
+ argv += ret;
+
+ rsrc = l2fwd_get_rsrc();
+
+ signal(SIGINT, signal_handler);
+ signal(SIGTERM, signal_handler);
+
+ /* parse application arguments (after the EAL ones) */
+ ret = l2fwd_event_parse_args(argc, argv, rsrc);
+ if (ret < 0)
+ rte_panic("Invalid L2FWD arguments\n");
+
+ printf("MAC updating %s\n", rsrc->mac_updating ? "enabled" :
+ "disabled");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports == 0)
+ rte_panic("No Ethernet ports - bye\n");
+
+ /* check that the portmask only covers available ports */
+ if (rsrc->enabled_port_mask & ~((1 << nb_ports) - 1))
+ rte_panic("Invalid portmask; possible (0x%x)\n",
+ (1 << nb_ports) - 1);
+
+ /* reset l2fwd_dst_ports */
+ for (port_id = 0; port_id < RTE_MAX_ETHPORTS; port_id++)
+ rsrc->dst_ports[port_id] = 0;
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ rsrc->dst_ports[port_id] = last_port;
+ rsrc->dst_ports[last_port] = port_id;
+ } else {
+ last_port = port_id;
+ }
+
+ nb_ports_in_mask++;
+ }
+ if (nb_ports_in_mask % 2) {
+ printf("Notice: odd number of ports in portmask.\n");
+ rsrc->dst_ports[last_port] = last_port;
+ }
+
+ nb_mbufs = RTE_MAX(nb_ports * (RTE_TEST_RX_DESC_DEFAULT +
+ RTE_TEST_TX_DESC_DEFAULT +
+ MAX_PKT_BURST + rte_lcore_count() *
+ MEMPOOL_CACHE_SIZE), 8192U);
+
+ /* create the mbuf pool */
+ rsrc->pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool",
+ nb_mbufs, MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+ if (rsrc->pktmbuf_pool == NULL)
+ rte_panic("Cannot init mbuf pool\n");
+
+ nb_ports_available = l2fwd_event_init_ports(rsrc);
+ if (!nb_ports_available)
+ rte_panic("All available ports are disabled. Please set portmask.\n");
+
+ l2fwd_poll_resource_setup(rsrc);
+
+ /* initialize port stats */
+ memset(&rsrc->port_stats, 0,
+ sizeof(rsrc->port_stats));
+
+ /* All settings are done. Now enable eth devices */
+ RTE_ETH_FOREACH_DEV(port_id) {
+ /* skip ports that are not enabled */
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0)
+ rte_panic("rte_eth_dev_start:err=%d, port=%u\n", ret,
+ port_id);
+ }
+
+ check_all_ports_link_status(rsrc, rsrc->enabled_port_mask);
+
+ /* launch per-lcore init on every lcore */
+ rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, rsrc,
+ SKIP_MASTER);
+ l2fwd_event_print_stats(rsrc);
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ printf("Closing port %d...", port_id);
+ rte_eth_dev_stop(port_id);
+ rte_eth_dev_close(port_id);
+ printf(" Done\n");
+ }
+ printf("Bye...\n");
+
+ return 0;
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
new file mode 100644
index 000000000..c936aa72e
--- /dev/null
+++ b/examples/l2fwd-event/meson.build
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+# meson file, for building this example as part of a main DPDK build.
+#
+# To build this example as a standalone application with an already-installed
+# DPDK instance, use 'make'
+
+deps += 'eventdev'
+sources = files(
+ 'main.c',
+ 'l2fwd_poll.c',
+ 'l2fwd_common.c',
+)
diff --git a/examples/meson.build b/examples/meson.build
index a046b74ad..0b02640bf 100644
--- a/examples/meson.build
+++ b/examples/meson.build
@@ -19,7 +19,7 @@ all_examples = [
'ip_fragmentation', 'ip_pipeline',
'ip_reassembly', 'ipsec-secgw',
'ipv4_multicast', 'kni',
- 'l2fwd', 'l2fwd-cat',
+ 'l2fwd', 'l2fwd-cat', 'l2fwd-event',
'l2fwd-crypto', 'l2fwd-jobstats',
'l2fwd-keepalive', 'l3fwd',
'l3fwd-acl', 'l3fwd-power',
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 02/10] examples/l2fwd-event: add infra for eventdev
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
` (8 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add infra to select the event device as the packet I/O mode through
command line arguments. Also allow the user to select the schedule type
from RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or
RTE_SCHED_TYPE_PARALLEL.
Usage:
`--mode="eventdev"` or `--mode="poll"`
`--eventq-sched="ordered"`, `--eventq-sched="atomic"` or
`--eventq-sched="parallel"`
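For example, a hypothetical full command line combining these options with
the ones already present (EAL core list and portmask are illustrative):

  ./build/l2fwd-event -l 0-3 -- -p 0x3 --mode=eventdev --eventq-sched=ordered

As the help text added below states, --eventq-sched only takes effect when
--mode=eventdev is selected.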
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 1 +
examples/l2fwd-event/l2fwd_common.h | 4 +++
examples/l2fwd-event/l2fwd_event.c | 34 +++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 21 ++++++++++++
examples/l2fwd-event/main.c | 52 +++++++++++++++++++++++++++--
examples/l2fwd-event/meson.build | 1 +
6 files changed, 111 insertions(+), 2 deletions(-)
create mode 100644 examples/l2fwd-event/l2fwd_event.c
create mode 100644 examples/l2fwd-event/l2fwd_event.h
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index 73f02dd3b..08ba1835d 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -8,6 +8,7 @@ APP = l2fwd-event
# all source are stored in SRCS-y
SRCS-y := main.c
SRCS-y += l2fwd_poll.c
+SRCS-y += l2fwd_event.c
SRCS-y += l2fwd_common.c
# Build using pkg-config variables if possible
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index 7b74f92b3..a4e17ab97 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -65,6 +65,8 @@ struct l2fwd_port_statistics {
struct l2fwd_resources {
volatile uint8_t force_quit;
+ uint8_t event_mode;
+ uint8_t sched_type;
uint8_t mac_updating;
uint8_t rx_queue_per_lcore;
uint16_t nb_rxd;
@@ -75,6 +77,7 @@ struct l2fwd_resources {
uint32_t dst_ports[RTE_MAX_ETHPORTS];
struct rte_ether_addr eth_addr[RTE_MAX_ETHPORTS];
struct l2fwd_port_statistics port_stats[RTE_MAX_ETHPORTS];
+ void *evt_rsrc;
void *poll_rsrc;
} __rte_cache_aligned;
@@ -112,6 +115,7 @@ l2fwd_get_rsrc(void)
memset(rsrc, 0, sizeof(struct l2fwd_resources));
rsrc->mac_updating = true;
rsrc->rx_queue_per_lcore = 1;
+ rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
rsrc->timer_period = 10 * rte_get_timer_hz();
return mz->addr;
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
new file mode 100644
index 000000000..48d32d718
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc;
+
+ if (!rte_event_dev_count())
+ rte_panic("No Eventdev found\n");
+
+ evt_rsrc = rte_zmalloc("l2fwd_event",
+ sizeof(struct l2fwd_event_resources), 0);
+ if (evt_rsrc == NULL)
+ rte_panic("Failed to allocate memory\n");
+
+ rsrc->evt_rsrc = evt_rsrc;
+}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
new file mode 100644
index 000000000..9a1bb1612
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L2FWD_EVENT_H__
+#define __L2FWD_EVENT_H__
+
+#include <rte_common.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_mbuf.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+
+struct l2fwd_event_resources {
+};
+
+void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
+
+#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index a4e41ddb4..2a1fe4e11 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -2,6 +2,7 @@
* Copyright(C) 2019 Marvell International Ltd.
*/
+#include "l2fwd_event.h"
#include "l2fwd_poll.h"
/* display usage */
@@ -16,7 +17,12 @@ l2fwd_event_usage(const char *prgname)
" --[no-]mac-updating: Enable or disable MAC addresses updating (enabled by default)\n"
" When enabled:\n"
" - The source MAC address is replaced by the TX port MAC address\n"
- " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n",
+ " - The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID\n"
+ " --mode: Packet transfer mode for I/O, poll or eventdev\n"
+ " Default mode = eventdev\n"
+ " --eventq-sched: Event queue schedule type, ordered, atomic or parallel.\n"
+ " Default: atomic\n"
+ " Valid only if --mode=eventdev\n\n",
prgname);
}
@@ -71,6 +77,28 @@ l2fwd_event_parse_timer_period(const char *q_arg)
return n;
}
+static void
+l2fwd_event_parse_mode(const char *optarg,
+ struct l2fwd_resources *rsrc)
+{
+ if (!strncmp(optarg, "poll", 4))
+ rsrc->event_mode = false;
+ else if (!strncmp(optarg, "eventdev", 8))
+ rsrc->event_mode = true;
+}
+
+static void
+l2fwd_event_parse_eventq_sched(const char *optarg,
+ struct l2fwd_resources *rsrc)
+{
+ if (!strncmp(optarg, "ordered", 7))
+ rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
+ else if (!strncmp(optarg, "atomic", 6))
+ rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+ else if (!strncmp(optarg, "parallel", 8))
+ rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
+}
+
static const char short_options[] =
"p:" /* portmask */
"q:" /* number of queues */
@@ -79,6 +107,8 @@ static const char short_options[] =
#define CMD_LINE_OPT_MAC_UPDATING "mac-updating"
#define CMD_LINE_OPT_NO_MAC_UPDATING "no-mac-updating"
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SCHED "eventq-sched"
enum {
/* long options mapped to a short option */
@@ -87,6 +117,8 @@ enum {
* conflict with short options
*/
CMD_LINE_OPT_MIN_NUM = 256,
+ CMD_LINE_OPT_MODE_NUM,
+ CMD_LINE_OPT_EVENTQ_SCHED_NUM,
};
/* Parse the argument given in the command line of the application */
@@ -98,6 +130,10 @@ l2fwd_event_parse_args(int argc, char **argv,
struct option lgopts[] = {
{ CMD_LINE_OPT_MAC_UPDATING, no_argument, &mac_updating, 1},
{ CMD_LINE_OPT_NO_MAC_UPDATING, no_argument, &mac_updating, 0},
+ { CMD_LINE_OPT_MODE, required_argument, NULL,
+ CMD_LINE_OPT_MODE_NUM},
+ { CMD_LINE_OPT_EVENTQ_SCHED, required_argument, NULL,
+ CMD_LINE_OPT_EVENTQ_SCHED_NUM},
{NULL, 0, 0, 0}
};
int opt, ret, timer_secs;
@@ -145,6 +181,14 @@ l2fwd_event_parse_args(int argc, char **argv,
rsrc->timer_period *= rte_get_timer_hz();
break;
+ case CMD_LINE_OPT_MODE_NUM:
+ l2fwd_event_parse_mode(optarg, rsrc);
+ break;
+
+ case CMD_LINE_OPT_EVENTQ_SCHED_NUM:
+ l2fwd_event_parse_eventq_sched(optarg, rsrc);
+ break;
+
/* long options */
case 0:
break;
@@ -404,7 +448,11 @@ main(int argc, char **argv)
if (!nb_ports_available)
rte_panic("All available ports are disabled. Please set portmask.\n");
- l2fwd_poll_resource_setup(rsrc);
+ /* Configure eventdev parameters if required */
+ if (rsrc->event_mode)
+ l2fwd_event_resource_setup(rsrc);
+ else
+ l2fwd_poll_resource_setup(rsrc);
/* initialize port stats */
memset(&rsrc->port_stats, 0,
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index c936aa72e..c1ae2037c 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -12,4 +12,5 @@ sources = files(
'main.c',
'l2fwd_poll.c',
'l2fwd_common.c',
+ 'l2fwd_event.c',
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 03/10] examples/l2fwd-event: add infra to split eventdev framework
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 01/10] examples/l2fwd-event: add default poll mode routines pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 02/10] examples/l2fwd-event: add infra for eventdev pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 04/10] examples/l2fwd-event: add event device setup pbhagavatula
` (7 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add infra to split the eventdev framework based on the event Tx adapter
capability.
If the event Tx adapter has internal port capability, packets are
transmitted with `rte_event_eth_tx_adapter_enqueue`; otherwise a
SINGLE_LINK event queue is used to enqueue packets to a service core,
which is responsible for transmitting them.
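A minimal sketch of the worker-side dispatch this split leads to (the actual
main loop is added later in the series; the helper name, the last_queue_id
parameter and the included headers are assumptions for illustration, not part
of this patch):

/* Illustrative only: how a worker hands a processed event to the Tx stage.
 * Assumes <rte_eventdev.h> and <rte_event_eth_tx_adapter.h> are included.
 */
static inline void
l2fwd_tx_dispatch_sketch(uint8_t event_d_id, uint8_t event_p_id,
			 uint8_t tx_mode_q, uint8_t last_queue_id,
			 struct rte_event *ev)
{
	if (!tx_mode_q) {
		/* Internal port capable: tag the Tx queue on the mbuf and let
		 * rte_event_eth_tx_adapter_enqueue() transmit it directly
		 * (its exact parameter list depends on the DPDK release).
		 */
		rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);
	} else {
		/* Generic: forward the event to the SINGLE_LINK queue; the
		 * Tx adapter service core dequeues it and transmits via
		 * rte_eth_tx_burst().
		 */
		ev->queue_id = last_queue_id;
		ev->op = RTE_EVENT_OP_FORWARD;
		rte_event_enqueue_burst(event_d_id, event_p_id, ev, 1);
	}
}

The event_setup_ops split introduced below lets both paths share the same
setup entry points while keeping the per-capability code separate.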
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/Makefile | 2 ++
examples/l2fwd-event/l2fwd_event.c | 26 +++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 7 +++++
examples/l2fwd-event/l2fwd_event_generic.c | 23 ++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 23 ++++++++++++++++
examples/l2fwd-event/meson.build | 2 ++
6 files changed, 83 insertions(+)
create mode 100644 examples/l2fwd-event/l2fwd_event_generic.c
create mode 100644 examples/l2fwd-event/l2fwd_event_internal_port.c
diff --git a/examples/l2fwd-event/Makefile b/examples/l2fwd-event/Makefile
index 08ba1835d..6f4176882 100644
--- a/examples/l2fwd-event/Makefile
+++ b/examples/l2fwd-event/Makefile
@@ -10,6 +10,8 @@ SRCS-y := main.c
SRCS-y += l2fwd_poll.c
SRCS-y += l2fwd_event.c
SRCS-y += l2fwd_common.c
+SRCS-y += l2fwd_event_generic.c
+SRCS-y += l2fwd_event_internal_port.c
# Build using pkg-config variables if possible
ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 48d32d718..7f90e6311 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,29 @@
#include "l2fwd_event.h"
+static void
+l2fwd_event_capability_setup(struct l2fwd_event_resources *evt_rsrc)
+{
+ uint32_t caps = 0;
+ uint16_t i;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(i) {
+ ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+ if (ret)
+ rte_panic("Invalid capability for Tx adptr port %d\n",
+ i);
+
+ evt_rsrc->tx_mode_q |= !(caps &
+ RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+ }
+
+ if (evt_rsrc->tx_mode_q)
+ l2fwd_event_set_generic_ops(&evt_rsrc->ops);
+ else
+ l2fwd_event_set_internal_port_ops(&evt_rsrc->ops);
+}
+
void
l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
{
@@ -31,4 +54,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
rte_panic("Failed to allocate memory\n");
rsrc->evt_rsrc = evt_rsrc;
+
+ /* Setup eventdev capability callbacks */
+ l2fwd_event_capability_setup(evt_rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 9a1bb1612..b7aaa39f9 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -13,9 +13,16 @@
#include "l2fwd_common.h"
+struct event_setup_ops {
+};
+
struct l2fwd_event_resources {
+ uint8_t tx_mode_q;
+ struct event_setup_ops ops;
};
void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
+void l2fwd_event_set_generic_ops(struct event_setup_ops *ops);
+void l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops);
#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
new file mode 100644
index 000000000..9afade7d2
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
new file mode 100644
index 000000000..ce95b8e6d
--- /dev/null
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
+#include <rte_lcore.h>
+#include <rte_spinlock.h>
+
+#include "l2fwd_common.h"
+#include "l2fwd_event.h"
+
+void
+l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
+{
+ RTE_SET_USED(ops);
+}
diff --git a/examples/l2fwd-event/meson.build b/examples/l2fwd-event/meson.build
index c1ae2037c..4e9a069d6 100644
--- a/examples/l2fwd-event/meson.build
+++ b/examples/l2fwd-event/meson.build
@@ -13,4 +13,6 @@ sources = files(
'l2fwd_poll.c',
'l2fwd_common.c',
'l2fwd_event.c',
+ 'l2fwd_event_internal_port.c',
+ 'l2fwd_event_generic.c'
)
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 04/10] examples/l2fwd-event: add event device setup
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (2 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 03/10] examples/l2fwd-event: add infra to split eventdev framework pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
` (6 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event device setup based on event eth Tx adapter
capabilities.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 3 +
examples/l2fwd-event/l2fwd_event.h | 16 ++++
examples/l2fwd-event/l2fwd_event_generic.c | 75 +++++++++++++++++-
.../l2fwd-event/l2fwd_event_internal_port.c | 77 ++++++++++++++++++-
4 files changed, 169 insertions(+), 2 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 7f90e6311..a5c1c2c40 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -57,4 +57,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
/* Setup eventdev capability callbacks */
l2fwd_event_capability_setup(evt_rsrc);
+
+ /* Event device configuration */
+ evt_rsrc->ops.event_device_setup(rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index b7aaa39f9..6b5beb041 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -13,11 +13,27 @@
#include "l2fwd_common.h"
+typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *rsrc);
+
+struct event_queues {
+ uint8_t nb_queues;
+};
+
+struct event_ports {
+ uint8_t nb_ports;
+};
+
struct event_setup_ops {
+ event_device_setup_cb event_device_setup;
};
struct l2fwd_event_resources {
uint8_t tx_mode_q;
+ uint8_t has_burst;
+ uint8_t event_d_id;
+ uint8_t disable_implicit_release;
+ struct event_ports evp;
+ struct event_queues evq;
struct event_setup_ops ops;
};
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 9afade7d2..33e570585 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -16,8 +16,81 @@
#include "l2fwd_common.h"
#include "l2fwd_event.h"
+static uint32_t
+l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t ethdev_count = 0;
+ uint16_t num_workers = 0;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ethdev_count++;
+ }
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+ evt_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ /* One queue for each ethdev port + one Tx adapter Single link queue. */
+ event_d_conf.nb_event_queues = ethdev_count + 1;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count() - rte_service_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ evt_rsrc->evp.nb_ports = num_workers;
+ evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+ evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event device\n");
+
+ evt_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->event_device_setup = l2fwd_event_device_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index ce95b8e6d..acd98798e 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -16,8 +16,83 @@
#include "l2fwd_common.h"
#include "l2fwd_event.h"
+static uint32_t
+l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_dev_config event_d_conf = {
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+ struct rte_event_dev_info dev_info;
+ uint8_t disable_implicit_release;
+ const uint8_t event_d_id = 0; /* Always use first event device only */
+ uint32_t event_queue_cfg = 0;
+ uint16_t ethdev_count = 0;
+ uint16_t num_workers = 0;
+ uint16_t port_id;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ethdev_count++;
+ }
+
+ /* Event device configuration */
+ rte_event_dev_info_get(event_d_id, &dev_info);
+
+ disable_implicit_release = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+ evt_rsrc->disable_implicit_release =
+ disable_implicit_release;
+
+ if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+ event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+ event_d_conf.nb_event_queues = ethdev_count;
+ if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+ event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+ if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+ event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+ if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+ event_d_conf.nb_event_queue_flows =
+ dev_info.max_event_queue_flows;
+
+ if (dev_info.max_event_port_dequeue_depth <
+ event_d_conf.nb_event_port_dequeue_depth)
+ event_d_conf.nb_event_port_dequeue_depth =
+ dev_info.max_event_port_dequeue_depth;
+
+ if (dev_info.max_event_port_enqueue_depth <
+ event_d_conf.nb_event_port_enqueue_depth)
+ event_d_conf.nb_event_port_enqueue_depth =
+ dev_info.max_event_port_enqueue_depth;
+
+ num_workers = rte_lcore_count();
+ if (dev_info.max_event_ports < num_workers)
+ num_workers = dev_info.max_event_ports;
+
+ event_d_conf.nb_event_ports = num_workers;
+ evt_rsrc->evp.nb_ports = num_workers;
+ evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+ evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+ RTE_EVENT_DEV_CAP_BURST_MODE);
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event device\n");
+
+ evt_rsrc->event_d_id = event_d_id;
+ return event_queue_cfg;
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
- RTE_SET_USED(ops);
+ ops->event_device_setup = l2fwd_event_device_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 05/10] examples/l2fwd-event: add eventdev queue and port setup
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (3 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 04/10] examples/l2fwd-event: add event device setup pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
` (5 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event device queue and port setup based on event eth Tx adapter
capabilities.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 9 +-
examples/l2fwd-event/l2fwd_event.h | 10 ++
examples/l2fwd-event/l2fwd_event_generic.c | 104 ++++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 100 +++++++++++++++++
4 files changed, 222 insertions(+), 1 deletion(-)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index a5c1c2c40..8dd00a6d3 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -44,6 +44,7 @@ void
l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
{
struct l2fwd_event_resources *evt_rsrc;
+ uint32_t event_queue_cfg;
if (!rte_event_dev_count())
rte_panic("No Eventdev found\n");
@@ -59,5 +60,11 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
l2fwd_event_capability_setup(evt_rsrc);
/* Event device configuration */
- evt_rsrc->ops.event_device_setup(rsrc);
+ event_queue_cfg = evt_rsrc->ops.event_device_setup(rsrc);
+
+ /* Event queue configuration */
+ evt_rsrc->ops.event_queue_setup(rsrc, event_queue_cfg);
+
+ /* Event port configuration */
+ evt_rsrc->ops.event_port_setup(rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 6b5beb041..fe7857cdf 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -14,27 +14,37 @@
#include "l2fwd_common.h"
typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *rsrc);
+typedef void (*event_port_setup_cb)(struct l2fwd_resources *rsrc);
+typedef void (*event_queue_setup_cb)(struct l2fwd_resources *rsrc,
+ uint32_t event_queue_cfg);
struct event_queues {
+ uint8_t *event_q_id;
uint8_t nb_queues;
};
struct event_ports {
+ uint8_t *event_p_id;
uint8_t nb_ports;
+ rte_spinlock_t lock;
};
struct event_setup_ops {
event_device_setup_cb event_device_setup;
+ event_queue_setup_cb event_queue_setup;
+ event_port_setup_cb event_port_setup;
};
struct l2fwd_event_resources {
uint8_t tx_mode_q;
+ uint8_t deq_depth;
uint8_t has_burst;
uint8_t event_d_id;
uint8_t disable_implicit_release;
struct event_ports evp;
struct event_queues evq;
struct event_setup_ops ops;
+ struct rte_event_port_conf def_p_conf;
};
void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index 33e570585..f72d21c0b 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -89,8 +89,112 @@ l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc)
return event_queue_cfg;
}
+static void
+l2fwd_event_port_setup_generic(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evp.nb_ports);
+ if (!evt_rsrc->evp.event_p_id)
+ rte_panic("No space is available\n");
+
+ memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ evt_rsrc->disable_implicit_release;
+ evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+ for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event port %d\n",
+ event_p_id);
+
+ ret = rte_event_port_link(event_d_id, event_p_id,
+ evt_rsrc->evq.event_q_id,
+ NULL,
+ evt_rsrc->evq.nb_queues - 1);
+ if (ret != (evt_rsrc->evq.nb_queues - 1))
+ rte_panic("Error in linking event port %d to queues\n",
+ event_p_id);
+ evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+ }
+ /* init spinlock */
+ rte_spinlock_init(&evt_rsrc->evp.lock);
+
+ evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l2fwd_event_queue_setup_generic(struct l2fwd_resources *rsrc,
+ uint32_t event_queue_cfg)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id;
+ int32_t ret;
+
+ event_q_conf.schedule_type = rsrc->sched_type;
+ evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evq.nb_queues);
+ if (!evt_rsrc->evq.event_q_id)
+ rte_panic("Memory allocation failure\n");
+
+ rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ for (event_q_id = 0; event_q_id < (evt_rsrc->evq.nb_queues - 1);
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue\n");
+ evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+
+ event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+ event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+ ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue for Tx adapter\n");
+ evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_generic;
+ ops->event_queue_setup = l2fwd_event_queue_setup_generic;
+ ops->event_port_setup = l2fwd_event_port_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index acd98798e..dab3f24ee 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -91,8 +91,108 @@ l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc)
return event_queue_cfg;
}
+static void
+l2fwd_event_port_setup_internal_port(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+ struct rte_event_port_conf def_p_conf;
+ uint8_t event_p_id;
+ int32_t ret;
+
+ evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evp.nb_ports);
+ if (!evt_rsrc->evp.event_p_id)
+ rte_panic("Failed to allocate memory for Event Ports\n");
+
+ rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+ if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+ event_p_conf.new_event_threshold =
+ def_p_conf.new_event_threshold;
+
+ if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+ event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+ if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+ event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+ event_p_conf.disable_implicit_release =
+ evt_rsrc->disable_implicit_release;
+
+ for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+ event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event port %d\n",
+ event_p_id);
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0)
+ rte_panic("Error in linking event port %d to queue\n",
+ event_p_id);
+ evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+ /* init spinlock */
+ rte_spinlock_init(&evt_rsrc->evp.lock);
+ }
+
+ evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l2fwd_event_queue_setup_internal_port(struct l2fwd_resources *rsrc,
+ uint32_t event_queue_cfg)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = event_queue_cfg,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+ };
+ struct rte_event_queue_conf def_q_conf;
+ uint8_t event_q_id = 0;
+ int32_t ret;
+
+ rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+ if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+ event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+ if (def_q_conf.nb_atomic_order_sequences <
+ event_q_conf.nb_atomic_order_sequences)
+ event_q_conf.nb_atomic_order_sequences =
+ def_q_conf.nb_atomic_order_sequences;
+
+ event_q_conf.event_queue_cfg = event_queue_cfg;
+ event_q_conf.schedule_type = rsrc->sched_type;
+ evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->evq.nb_queues);
+ if (!evt_rsrc->evq.event_q_id)
+ rte_panic("Memory allocation failure\n");
+
+ for (event_q_id = 0; event_q_id < evt_rsrc->evq.nb_queues;
+ event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue\n");
+ evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+ }
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_internal_port;
+ ops->event_queue_setup = l2fwd_event_queue_setup_internal_port;
+ ops->event_port_setup = l2fwd_event_port_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (4 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 05/10] examples/l2fwd-event: add eventdev queue and port setup pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 07/10] examples/l2fwd-event: add service core setup pbhagavatula
` (4 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 3 +
examples/l2fwd-event/l2fwd_event.h | 16 +++
examples/l2fwd-event/l2fwd_event_generic.c | 115 ++++++++++++++++++
.../l2fwd-event/l2fwd_event_internal_port.c | 96 +++++++++++++++
4 files changed, 230 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 8dd00a6d3..33c702739 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -67,4 +67,7 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
/* Event port configuration */
evt_rsrc->ops.event_port_setup(rsrc);
+
+ /* Rx/Tx adapters configuration */
+ evt_rsrc->ops.adapter_setup(rsrc);
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index fe7857cdf..1d7090ddf 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -17,6 +17,7 @@ typedef uint32_t (*event_device_setup_cb)(struct l2fwd_resources *rsrc);
typedef void (*event_port_setup_cb)(struct l2fwd_resources *rsrc);
typedef void (*event_queue_setup_cb)(struct l2fwd_resources *rsrc,
uint32_t event_queue_cfg);
+typedef void (*adapter_setup_cb)(struct l2fwd_resources *rsrc);
struct event_queues {
uint8_t *event_q_id;
@@ -29,10 +30,23 @@ struct event_ports {
rte_spinlock_t lock;
};
+struct event_rx_adptr {
+ uint32_t service_id;
+ uint8_t nb_rx_adptr;
+ uint8_t *rx_adptr;
+};
+
+struct event_tx_adptr {
+ uint32_t service_id;
+ uint8_t nb_tx_adptr;
+ uint8_t *tx_adptr;
+};
+
struct event_setup_ops {
event_device_setup_cb event_device_setup;
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
+ adapter_setup_cb adapter_setup;
};
struct l2fwd_event_resources {
@@ -44,6 +58,8 @@ struct l2fwd_event_resources {
struct event_ports evp;
struct event_queues evq;
struct event_setup_ops ops;
+ struct event_rx_adptr rx_adptr;
+ struct event_tx_adptr tx_adptr;
struct rte_event_port_conf def_p_conf;
};
diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c
index f72d21c0b..f99608173 100644
--- a/examples/l2fwd-event/l2fwd_event_generic.c
+++ b/examples/l2fwd-event/l2fwd_event_generic.c
@@ -191,10 +191,125 @@ l2fwd_event_queue_setup_generic(struct l2fwd_resources *rsrc,
evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
}
+static void
+l2fwd_rx_tx_adapter_setup_generic(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ uint8_t rx_adptr_id = 0;
+ uint8_t tx_adptr_id = 0;
+ uint8_t tx_port_id = 0;
+ uint16_t port_id;
+ uint32_t service_id;
+ int32_t ret, i = 0;
+
+ /* Rx adapter setup */
+ evt_rsrc->rx_adptr.nb_rx_adptr = 1;
+ evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->rx_adptr.nb_rx_adptr);
+ if (!evt_rsrc->rx_adptr.rx_adptr) {
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+ ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create rx adapter\n");
+
+ /* Configure user requested sched type */
+ eth_q_conf.ev.sched_type = rsrc->sched_type;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[i];
+ ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+ -1, ð_q_conf);
+ if (ret)
+ rte_panic("Failed to add queues to Rx adapter\n");
+ if (i < evt_rsrc->evq.nb_queues)
+ i++;
+ }
+
+ ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error getting the service ID for rx adptr\n");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ evt_rsrc->rx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+ if (ret)
+ rte_panic("Rx adapter[%d] start Failed\n", rx_adptr_id);
+
+ evt_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+ /* Tx adapter setup */
+ evt_rsrc->tx_adptr.nb_tx_adptr = 1;
+ evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->tx_adptr.nb_tx_adptr);
+ if (!evt_rsrc->tx_adptr.tx_adptr) {
+ free(evt_rsrc->rx_adptr.rx_adptr);
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+ ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create tx adapter\n");
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, port_id,
+ -1);
+ if (ret)
+ rte_panic("Failed to add queues to Tx adapter\n");
+ }
+
+ ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Failed to get Tx adapter service ID\n");
+
+ rte_service_runstate_set(service_id, 1);
+ rte_service_set_runstate_mapped_check(service_id, 0);
+ evt_rsrc->tx_adptr.service_id = service_id;
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &evt_rsrc->evq.event_q_id[
+ evt_rsrc->evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+ ret);
+
+ ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+ if (ret)
+ rte_panic("Tx adapter[%d] start Failed\n", tx_adptr_id);
+
+ evt_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
void
l2fwd_event_set_generic_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_generic;
ops->event_queue_setup = l2fwd_event_queue_setup_generic;
ops->event_port_setup = l2fwd_event_port_setup_generic;
+ ops->adapter_setup = l2fwd_rx_tx_adapter_setup_generic;
}
diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c
index dab3f24ee..bed94754f 100644
--- a/examples/l2fwd-event/l2fwd_event_internal_port.c
+++ b/examples/l2fwd-event/l2fwd_event_internal_port.c
@@ -189,10 +189,106 @@ l2fwd_event_queue_setup_internal_port(struct l2fwd_resources *rsrc,
}
}
+static void
+l2fwd_rx_tx_adapter_setup_internal_port(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
+ .rx_queue_flags = 0,
+ .ev = {
+ .queue_id = 0,
+ .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+ }
+ };
+ uint8_t event_d_id = evt_rsrc->event_d_id;
+ uint16_t adapter_id = 0;
+ uint16_t nb_adapter = 0;
+ uint16_t port_id;
+ uint8_t q_id = 0;
+ int ret;
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ nb_adapter++;
+ }
+
+ evt_rsrc->rx_adptr.nb_rx_adptr = nb_adapter;
+ evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->rx_adptr.nb_rx_adptr);
+ if (!evt_rsrc->rx_adptr.rx_adptr) {
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create rx adapter[%d]\n",
+ adapter_id);
+
+ /* Configure user requested sched type*/
+ eth_q_conf.ev.sched_type = rsrc->sched_type;
+ eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+ ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+ -1, ð_q_conf);
+ if (ret)
+ rte_panic("Failed to add queues to Rx adapter\n");
+
+ ret = rte_event_eth_rx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ if (q_id < evt_rsrc->evq.nb_queues)
+ q_id++;
+ }
+
+ evt_rsrc->tx_adptr.nb_tx_adptr = nb_adapter;
+ evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+ evt_rsrc->tx_adptr.nb_tx_adptr);
+ if (!evt_rsrc->tx_adptr.tx_adptr) {
+ free(evt_rsrc->rx_adptr.rx_adptr);
+ free(evt_rsrc->evp.event_p_id);
+ free(evt_rsrc->evq.event_q_id);
+ rte_panic("Failed to allocate memery for Rx adapter\n");
+ }
+
+ adapter_id = 0;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create tx adapter[%d]\n",
+ adapter_id);
+
+ ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+ -1);
+ if (ret)
+ rte_panic("Failed to add queues to Tx adapter\n");
+
+ ret = rte_event_eth_tx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ }
+}
+
void
l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops)
{
ops->event_device_setup = l2fwd_event_device_setup_internal_port;
ops->event_queue_setup = l2fwd_event_queue_setup_internal_port;
ops->event_port_setup = l2fwd_event_port_setup_internal_port;
+ ops->adapter_setup = l2fwd_rx_tx_adapter_setup_internal_port;
}
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 07/10] examples/l2fwd-event: add service core setup
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (5 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 06/10] examples/l2fwd-event: add event Rx/Tx adapter setup pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
` (3 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Set up service cores for the eventdev and Rx/Tx adapters when they don't have
internal port capability.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_event.c | 82 ++++++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 1 +
examples/l2fwd-event/main.c | 3 ++
3 files changed, 86 insertions(+)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 33c702739..562d61292 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,88 @@
#include "l2fwd_event.h"
+static inline int
+l2fwd_event_service_enable(uint32_t service_id)
+{
+ uint8_t min_service_count = UINT8_MAX;
+ uint32_t slcore_array[RTE_MAX_LCORE];
+ unsigned int slcore = 0;
+ uint8_t service_count;
+ int32_t slcore_count;
+
+ if (!rte_service_lcore_count())
+ return -ENOENT;
+
+ slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+ if (slcore_count < 0)
+ return -ENOENT;
+ /* Get the core which has least number of services running. */
+ while (slcore_count--) {
+ /* Reset default mapping */
+ rte_service_map_lcore_set(service_id,
+ slcore_array[slcore_count], 0);
+ service_count = rte_service_lcore_count_services(
+ slcore_array[slcore_count]);
+ if (service_count < min_service_count) {
+ slcore = slcore_array[slcore_count];
+ min_service_count = service_count;
+ }
+ }
+ if (rte_service_map_lcore_set(service_id, slcore, 1))
+ return -ENOENT;
+ rte_service_lcore_start(slcore);
+
+ return 0;
+}
+
+void
+l2fwd_event_service_setup(struct l2fwd_resources *rsrc)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_dev_info evdev_info;
+ uint32_t service_id, caps;
+ int ret, i;
+
+ rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+ if (evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
+ ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting eventdev service\n");
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+ ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ ret = rte_event_eth_rx_adapter_service_id_get(
+ evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+ ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ ret = rte_event_eth_tx_adapter_service_id_get(
+ evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+}
+
static void
l2fwd_event_capability_setup(struct l2fwd_event_resources *evt_rsrc)
{
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index 1d7090ddf..ebfbfe460 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -66,5 +66,6 @@ struct l2fwd_event_resources {
void l2fwd_event_resource_setup(struct l2fwd_resources *rsrc);
void l2fwd_event_set_generic_ops(struct event_setup_ops *ops);
void l2fwd_event_set_internal_port_ops(struct event_setup_ops *ops);
+void l2fwd_event_service_setup(struct l2fwd_resources *rsrc);
#endif /* __L2FWD_EVENT_H__ */
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 2a1fe4e11..1ae18bb51 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -471,6 +471,9 @@ main(int argc, char **argv)
port_id);
}
+ if (rsrc->event_mode)
+ l2fwd_event_service_setup(rsrc);
+
check_all_ports_link_status(rsrc, rsrc->enabled_port_mask);
/* launch per-lcore init on every lcore */
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 08/10] examples/l2fwd-event: add eventdev main loop
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (6 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 07/10] examples/l2fwd-event: add service core setup pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
` (2 subsequent siblings)
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add event dev main loop based on enabled l2fwd options and eventdev
capabilities.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/l2fwd_common.c | 6 +
examples/l2fwd-event/l2fwd_common.h | 1 +
examples/l2fwd-event/l2fwd_event.c | 238 ++++++++++++++++++++++++++++
examples/l2fwd-event/l2fwd_event.h | 2 +
examples/l2fwd-event/main.c | 68 +++++++-
5 files changed, 307 insertions(+), 8 deletions(-)
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index c206415d0..4ba788550 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -18,6 +18,12 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
uint16_t port_id;
int ret;
+ if (rsrc->event_mode) {
+ port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+ port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
+ port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+ }
+
/* Initialise each port */
RTE_ETH_FOREACH_DEV(port_id) {
struct rte_eth_conf local_port_conf = port_conf;
diff --git a/examples/l2fwd-event/l2fwd_common.h b/examples/l2fwd-event/l2fwd_common.h
index a4e17ab97..7e33ee749 100644
--- a/examples/l2fwd-event/l2fwd_common.h
+++ b/examples/l2fwd-event/l2fwd_common.h
@@ -114,6 +114,7 @@ l2fwd_get_rsrc(void)
memset(rsrc, 0, sizeof(struct l2fwd_resources));
rsrc->mac_updating = true;
+ rsrc->event_mode = true;
rsrc->rx_queue_per_lcore = 1;
rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
rsrc->timer_period = 10 * rte_get_timer_hz();
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 562d61292..c7782cbc5 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -17,6 +17,12 @@
#include "l2fwd_event.h"
+#define L2FWD_EVENT_SINGLE 0x1
+#define L2FWD_EVENT_BURST 0x2
+#define L2FWD_EVENT_TX_DIRECT 0x4
+#define L2FWD_EVENT_TX_ENQ 0x8
+#define L2FWD_EVENT_UPDT_MAC 0x10
+
static inline int
l2fwd_event_service_enable(uint32_t service_id)
{
@@ -122,11 +128,233 @@ l2fwd_event_capability_setup(struct l2fwd_event_resources *evt_rsrc)
l2fwd_event_set_internal_port_ops(&evt_rsrc->ops);
}
+static __rte_noinline int
+l2fwd_get_free_event_port(struct l2fwd_event_resources *evt_rsrc)
+{
+ static int index;
+ int port_id;
+
+ rte_spinlock_lock(&evt_rsrc->evp.lock);
+ if (index >= evt_rsrc->evp.nb_ports) {
+ printf("No free event port is available\n");
+ rte_spinlock_unlock(&evt_rsrc->evp.lock);
+ return -1;
+ }
+
+ port_id = evt_rsrc->evp.event_p_id[index];
+ index++;
+ rte_spinlock_unlock(&evt_rsrc->evp.lock);
+
+ return port_id;
+}
+
+static __rte_always_inline void
+l2fwd_event_fwd(struct l2fwd_resources *rsrc, struct rte_event *ev,
+ const uint8_t tx_q_id, const uint64_t timer_period,
+ const uint32_t flags)
+{
+ struct rte_mbuf *mbuf = ev->mbuf;
+ uint16_t dst_port;
+
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf, void *));
+ dst_port = rsrc->dst_ports[mbuf->port];
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&rsrc->port_stats[mbuf->port].rx,
+ 1, __ATOMIC_RELAXED);
+ mbuf->port = dst_port;
+
+ if (flags & L2FWD_EVENT_UPDT_MAC)
+ l2fwd_mac_updating(mbuf, dst_port, &rsrc->eth_addr[dst_port]);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ ev->queue_id = tx_q_id;
+ ev->op = RTE_EVENT_OP_FORWARD;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT)
+ rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+
+ if (timer_period > 0)
+ __atomic_fetch_add(&rsrc->port_stats[mbuf->port].tx,
+ 1, __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_single(struct l2fwd_resources *rsrc,
+ const uint32_t flags)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ const int port_id = l2fwd_get_free_event_port(evt_rsrc);
+ const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+ evt_rsrc->evq.nb_queues - 1];
+ const uint64_t timer_period = rsrc->timer_period;
+ const uint8_t event_d_id = evt_rsrc->event_d_id;
+ struct rte_event ev;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!rsrc->force_quit) {
+ /* Read packet from eventdev */
+ if (!rte_event_dequeue_burst(event_d_id, port_id, &ev, 1, 0))
+ continue;
+
+ l2fwd_event_fwd(rsrc, &ev, tx_q_id, timer_period, flags);
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ while (rte_event_enqueue_burst(event_d_id, port_id,
+ &ev, 1) &&
+ !rsrc->force_quit)
+ ;
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id,
+ &ev, 1, 0) &&
+ !rsrc->force_quit)
+ ;
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop_burst(struct l2fwd_resources *rsrc,
+ const uint32_t flags)
+{
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ const int port_id = l2fwd_get_free_event_port(evt_rsrc);
+ const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+ evt_rsrc->evq.nb_queues - 1];
+ const uint64_t timer_period = rsrc->timer_period;
+ const uint8_t event_d_id = evt_rsrc->event_d_id;
+ const uint8_t deq_len = evt_rsrc->deq_depth;
+ struct rte_event ev[MAX_PKT_BURST];
+ uint16_t nb_rx, nb_tx;
+ uint8_t i;
+
+ if (port_id < 0)
+ return;
+
+ printf("%s(): entering eventdev main loop on lcore %u\n", __func__,
+ rte_lcore_id());
+
+ while (!rsrc->force_quit) {
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, port_id, ev,
+ deq_len, 0);
+ if (nb_rx == 0)
+ continue;
+
+ for (i = 0; i < nb_rx; i++) {
+ l2fwd_event_fwd(rsrc, &ev[i], tx_q_id, timer_period,
+ flags);
+ }
+
+ if (flags & L2FWD_EVENT_TX_ENQ) {
+ nb_tx = rte_event_enqueue_burst(event_d_id, port_id,
+ ev, nb_rx);
+ while (nb_tx < nb_rx && !rsrc->force_quit)
+ nb_tx += rte_event_enqueue_burst(event_d_id,
+ port_id, ev + nb_tx,
+ nb_rx - nb_tx);
+ }
+
+ if (flags & L2FWD_EVENT_TX_DIRECT) {
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id,
+ port_id, ev,
+ nb_rx, 0);
+ while (nb_tx < nb_rx && !rsrc->force_quit)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx, 0);
+ }
+ }
+}
+
+static __rte_always_inline void
+l2fwd_event_loop(struct l2fwd_resources *rsrc,
+ const uint32_t flags)
+{
+ if (flags & L2FWD_EVENT_SINGLE)
+ l2fwd_event_loop_single(rsrc, flags);
+ if (flags & L2FWD_EVENT_BURST)
+ l2fwd_event_loop_burst(rsrc, flags);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc,
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_brst(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_brst(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_d_brst_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_DIRECT | L2FWD_EVENT_BURST);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_SINGLE);
+}
+
+static void __rte_noinline
+l2fwd_event_main_loop_tx_q_brst_mac(struct l2fwd_resources *rsrc)
+{
+ l2fwd_event_loop(rsrc, L2FWD_EVENT_UPDT_MAC |
+ L2FWD_EVENT_TX_ENQ | L2FWD_EVENT_BURST);
+}
+
void
l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
{
+ /* [MAC_UPDT][TX_MODE][BURST] */
+ const event_loop_cb event_loop[2][2][2] = {
+ [0][0][0] = l2fwd_event_main_loop_tx_d,
+ [0][0][1] = l2fwd_event_main_loop_tx_d_brst,
+ [0][1][0] = l2fwd_event_main_loop_tx_q,
+ [0][1][1] = l2fwd_event_main_loop_tx_q_brst,
+ [1][0][0] = l2fwd_event_main_loop_tx_d_mac,
+ [1][0][1] = l2fwd_event_main_loop_tx_d_brst_mac,
+ [1][1][0] = l2fwd_event_main_loop_tx_q_mac,
+ [1][1][1] = l2fwd_event_main_loop_tx_q_brst_mac,
+ };
struct l2fwd_event_resources *evt_rsrc;
uint32_t event_queue_cfg;
+ int ret;
if (!rte_event_dev_count())
rte_panic("No Eventdev found\n");
@@ -152,4 +380,14 @@ l2fwd_event_resource_setup(struct l2fwd_resources *rsrc)
/* Rx/Tx adapters configuration */
evt_rsrc->ops.adapter_setup(rsrc);
+
+ /* Start event device */
+ ret = rte_event_dev_start(evt_rsrc->event_d_id);
+ if (ret < 0)
+ rte_panic("Error in starting eventdev\n");
+
+ evt_rsrc->ops.l2fwd_event_loop = event_loop
+ [rsrc->mac_updating]
+ [evt_rsrc->tx_mode_q]
+ [evt_rsrc->has_burst];
}
diff --git a/examples/l2fwd-event/l2fwd_event.h b/examples/l2fwd-event/l2fwd_event.h
index ebfbfe460..78f22e5f9 100644
--- a/examples/l2fwd-event/l2fwd_event.h
+++ b/examples/l2fwd-event/l2fwd_event.h
@@ -18,6 +18,7 @@ typedef void (*event_port_setup_cb)(struct l2fwd_resources *rsrc);
typedef void (*event_queue_setup_cb)(struct l2fwd_resources *rsrc,
uint32_t event_queue_cfg);
typedef void (*adapter_setup_cb)(struct l2fwd_resources *rsrc);
+typedef void (*event_loop_cb)(struct l2fwd_resources *rsrc);
struct event_queues {
uint8_t *event_q_id;
@@ -47,6 +48,7 @@ struct event_setup_ops {
event_queue_setup_cb event_queue_setup;
event_port_setup_cb event_port_setup;
adapter_setup_cb adapter_setup;
+ event_loop_cb l2fwd_event_loop;
};
struct l2fwd_event_resources {
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1ae18bb51..01eb19f3e 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -214,8 +214,12 @@ l2fwd_launch_one_lcore(void *args)
{
struct l2fwd_resources *rsrc = args;
struct l2fwd_poll_resources *poll_rsrc = rsrc->poll_rsrc;
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
- poll_rsrc->poll_main_loop(rsrc);
+ if (rsrc->event_mode)
+ evt_rsrc->ops.l2fwd_event_loop(rsrc);
+ else
+ poll_rsrc->poll_main_loop(rsrc);
return 0;
}
@@ -304,9 +308,9 @@ print_stats(struct l2fwd_resources *rsrc)
if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
continue;
printf("\nStatistics for port %u ------------------------------"
- "\nPackets sent: %24"PRIu64
- "\nPackets received: %20"PRIu64
- "\nPackets dropped: %21"PRIu64,
+ "\nPackets sent: %29"PRIu64
+ "\nPackets received: %25"PRIu64
+ "\nPackets dropped: %26"PRIu64,
port_id,
rsrc->port_stats[port_id].tx,
rsrc->port_stats[port_id].rx,
@@ -317,10 +321,58 @@ print_stats(struct l2fwd_resources *rsrc)
total_packets_tx += rsrc->port_stats[port_id].tx;
total_packets_rx += rsrc->port_stats[port_id].rx;
}
- printf("\nAggregate statistics ==============================="
- "\nTotal packets sent: %18"PRIu64
- "\nTotal packets received: %14"PRIu64
- "\nTotal packets dropped: %15"PRIu64,
+
+ if (rsrc->event_mode) {
+ struct l2fwd_event_resources *evt_rsrc = rsrc->evt_rsrc;
+ struct rte_event_eth_rx_adapter_stats rx_adptr_stats;
+ struct rte_event_eth_tx_adapter_stats tx_adptr_stats;
+ int ret, i;
+
+ for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+ ret = rte_event_eth_rx_adapter_stats_get(
+ evt_rsrc->rx_adptr.rx_adptr[i],
+ &rx_adptr_stats);
+ if (ret < 0)
+ continue;
+ printf("\nRx adapter[%d] statistics===================="
+ "\nReceive queue poll count: %17"PRIu64
+ "\nReceived packet count: %20"PRIu64
+ "\nEventdev enqueue count: %19"PRIu64
+ "\nEventdev enqueue retry count: %13"PRIu64
+ "\nReceived packet dropped count: %12"PRIu64
+ "\nRx enqueue start timestamp: %15"PRIu64
+ "\nRx enqueue block cycles: %18"PRIu64
+ "\nRx enqueue unblock timestamp: %13"PRIu64,
+ evt_rsrc->rx_adptr.rx_adptr[i],
+ rx_adptr_stats.rx_poll_count,
+ rx_adptr_stats.rx_packets,
+ rx_adptr_stats.rx_enq_count,
+ rx_adptr_stats.rx_enq_retry,
+ rx_adptr_stats.rx_dropped,
+ rx_adptr_stats.rx_enq_start_ts,
+ rx_adptr_stats.rx_enq_block_cycles,
+ rx_adptr_stats.rx_enq_end_ts);
+ }
+ for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+ ret = rte_event_eth_tx_adapter_stats_get(
+ evt_rsrc->tx_adptr.tx_adptr[i],
+ &tx_adptr_stats);
+ if (ret < 0)
+ continue;
+ printf("\nTx adapter[%d] statistics===================="
+ "\nNumber of transmit retries: %15"PRIu64
+ "\nNumber of packets transmitted: %12"PRIu64
+ "\nNumber of packets dropped: %16"PRIu64,
+ evt_rsrc->tx_adptr.tx_adptr[i],
+ tx_adptr_stats.tx_retry,
+ tx_adptr_stats.tx_packets,
+ tx_adptr_stats.tx_dropped);
+ }
+ }
+ printf("\nAggregate lcore statistics ========================="
+ "\nTotal packets sent: %23"PRIu64
+ "\nTotal packets received: %19"PRIu64
+ "\nTotal packets dropped: %20"PRIu64,
total_packets_tx,
total_packets_rx,
total_packets_dropped);
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 09/10] examples/l2fwd-event: add graceful teardown
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (7 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 08/10] examples/l2fwd-event: add eventdev main loop pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
2019-10-30 12:58 ` [dpdk-dev] [PATCH v7 00/10] example/l2fwd-event: introduce l2fwd-event example Jerin Jacob
10 siblings, 0 replies; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Marko Kovacevic,
Ori Kam, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Add graceful teardown that addresses both event mode and poll mode.
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
examples/l2fwd-event/main.c | 50 +++++++++++++++++++++++++++++--------
1 file changed, 40 insertions(+), 10 deletions(-)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 01eb19f3e..142c00e8f 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -426,7 +426,7 @@ main(int argc, char **argv)
uint16_t port_id, last_port;
uint32_t nb_mbufs;
uint16_t nb_ports;
- int ret;
+ int i, ret;
/* init EAL */
ret = rte_eal_init(argc, argv);
@@ -532,16 +532,46 @@ main(int argc, char **argv)
rte_eal_mp_remote_launch(l2fwd_launch_one_lcore, rsrc,
SKIP_MASTER);
l2fwd_event_print_stats(rsrc);
- rte_eal_mp_wait_lcore();
+ if (rsrc->event_mode) {
+ struct l2fwd_event_resources *evt_rsrc =
+ rsrc->evt_rsrc;
+ for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
+ rte_event_eth_rx_adapter_stop(
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
+ rte_event_eth_tx_adapter_stop(
+ evt_rsrc->tx_adptr.tx_adptr[i]);
- RTE_ETH_FOREACH_DEV(port_id) {
- if ((rsrc->enabled_port_mask &
- (1 << port_id)) == 0)
- continue;
- printf("Closing port %d...", port_id);
- rte_eth_dev_stop(port_id);
- rte_eth_dev_close(port_id);
- printf(" Done\n");
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ rte_eth_dev_stop(port_id);
+ }
+
+ rte_eal_mp_wait_lcore();
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ rte_eth_dev_close(port_id);
+ }
+
+ rte_event_dev_stop(evt_rsrc->event_d_id);
+ rte_event_dev_close(evt_rsrc->event_d_id);
+
+ } else {
+ rte_eal_mp_wait_lcore();
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask &
+ (1 << port_id)) == 0)
+ continue;
+ printf("Closing port %d...", port_id);
+ rte_eth_dev_stop(port_id);
+ rte_eth_dev_close(port_id);
+ printf(" Done\n");
+ }
}
printf("Bye...\n");
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* [dpdk-dev] [PATCH v7 10/10] doc: add application usage guide for l2fwd-event
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (8 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 09/10] examples/l2fwd-event: add graceful teardown pbhagavatula
@ 2019-10-26 11:10 ` pbhagavatula
2019-10-30 13:19 ` Jerin Jacob
2019-10-30 12:58 ` [dpdk-dev] [PATCH v7 00/10] example/l2fwd-event: introduce l2fwd-event example Jerin Jacob
10 siblings, 1 reply; 107+ messages in thread
From: pbhagavatula @ 2019-10-26 11:10 UTC (permalink / raw)
To: jerinj, bruce.richardson, hemant.agrawal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
Cc: dev
From: Sunil Kumar Kori <skori@marvell.com>
Add documentation for l2fwd-event example.
Update release notes.
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
MAINTAINERS | 1 +
doc/guides/rel_notes/release_19_11.rst | 6 +
doc/guides/sample_app_ug/index.rst | 1 +
doc/guides/sample_app_ug/intro.rst | 5 +
doc/guides/sample_app_ug/l2_forward_event.rst | 698 ++++++++++++++++++
5 files changed, 711 insertions(+)
create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
diff --git a/MAINTAINERS b/MAINTAINERS
index 6957b2a24..8898ff252 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1478,6 +1478,7 @@ F: examples/l2fwd-cat/
M: Sunil Kumar Kori <skori@marvell.com>
M: Pavan Nikhilesh <pbhagavatula@marvell.com>
F: examples/l2fwd-event/
+F: doc/guides/sample_app_ug/l2_forward_event.rst
T: git://dpdk.org/next/dpdk-next-eventdev
F: examples/l3fwd/
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 08dafa76f..c8e9158af 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -115,6 +115,12 @@ New Features
Added eBPF JIT support for arm64 architecture to improve the eBPF program
performance.
+* **Added new example l2fwd-event application.**
+
+ Added a new example application `l2fwd-event` that adds event device support
+ to the traditional l2fwd example. It demonstrates the usage of poll and event
+ mode I/O mechanisms in a single application.
+
Removed Items
-------------
diff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst
index f23f8f59e..41388231a 100644
--- a/doc/guides/sample_app_ug/index.rst
+++ b/doc/guides/sample_app_ug/index.rst
@@ -26,6 +26,7 @@ Sample Applications User Guides
l2_forward_crypto
l2_forward_job_stats
l2_forward_real_virtual
+ l2_forward_event
l2_forward_cat
l3_forward
l3_forward_power_man
diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
index 90704194a..15bbbec44 100644
--- a/doc/guides/sample_app_ug/intro.rst
+++ b/doc/guides/sample_app_ug/intro.rst
@@ -87,6 +87,11 @@ examples are highlighted below.
forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
addresses like a simple switch.
+* :doc:`Network Layer 2 forwarding<l2_forward_event>`: The Network Layer 2
+ forwarding, or ``l2fwd-event``, application does forwarding based on Ethernet MAC
+ addresses like a simple switch. It demonstrates the usage of poll and event mode
+ I/O mechanisms in a single application.
+
* :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
forwarding, or ``l3fwd`` application does forwarding based on Internet
Protocol, IPv4 or IPv6 like a simple router.
diff --git a/doc/guides/sample_app_ug/l2_forward_event.rst b/doc/guides/sample_app_ug/l2_forward_event.rst
new file mode 100644
index 000000000..52a570b97
--- /dev/null
+++ b/doc/guides/sample_app_ug/l2_forward_event.rst
@@ -0,0 +1,698 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2010-2014 Intel Corporation.
+
+.. _l2_fwd_event_app:
+
+L2 Forwarding Eventdev Sample Application
+=========================================
+
+The L2 Forwarding eventdev sample application is a simple example of packet
+processing using the Data Plane Development Kit (DPDK) to demonstrate usage of
+poll and event mode packet I/O mechanism.
+
+Overview
+--------
+
+The L2 Forwarding eventdev sample application performs L2 forwarding for each
+packet that is received on an RX_PORT. The destination port is the adjacent port
+from the enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
+ports 0 and 1 forward into each other, and ports 2 and 3 forward into each other.
+Also, if MAC address updating is enabled, the MAC addresses are affected as follows
+(a minimal sketch of this update is shown after the list below):
+
+* The source MAC address is replaced by the TX_PORT MAC address
+
+* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
+
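+The MAC rewrite itself is only a few lines. The snippet below is a minimal sketch of
+the helper invoked by the forwarding loops (it mirrors the
+``l2fwd_mac_updating(mbuf, dst_port, &rsrc->eth_addr[dst_port])`` call used in this
+patch set and assumes the DPDK 19.11 ``rte_ether_hdr`` field names):
+
+.. code-block:: c
+
+    static void
+    l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_port,
+                       struct rte_ether_addr *port_addr)
+    {
+        struct rte_ether_hdr *eth;
+        void *tmp;
+
+        eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+
+        /* Destination MAC: 02:00:00:00:00:xx, where xx is the Tx port id.
+         * The 8-byte store also clobbers the first two bytes of the
+         * source MAC, which are rewritten just below.
+         */
+        tmp = &eth->d_addr.addr_bytes[0];
+        *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_port << 40);
+
+        /* Source MAC: the MAC address of the Tx port */
+        rte_ether_addr_copy(port_addr, &eth->s_addr);
+    }
+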
+The application receives packets from RX_PORT using one of the following methods:
+
+* Poll mode
+
+* Eventdev mode (default)
+
+This application can be used to benchmark performance using a traffic-generator,
+as shown in the :numref:`figure_l2fwd_event_benchmark_setup`.
+
+.. _figure_l2fwd_event_benchmark_setup:
+
+.. figure:: img/l2_fwd_benchmark_setup.*
+
+ Performance Benchmark Setup (Basic Environment)
+
+Compiling the Application
+-------------------------
+
+To compile the sample application see :doc:`compiling`.
+
+The application is located in the ``l2fwd-event`` sub-directory.
+
+Running the Application
+-----------------------
+
+The application requires a number of command line options:
+
+.. code-block:: console
+
+ ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sched=SCHED_MODE
+
+where,
+
+* p PORTMASK: A hexadecimal bitmask of the ports to configure
+
+* q NQ: A number of queues (=ports) per lcore (default is 1)
+
+* --[no-]mac-updating: Enable or disable MAC address updating (enabled by default).
+
+* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.
+
+* --eventq-sched=SCHED_MODE: Event queue schedule mode, Ordered, Atomic or Parallel. Atomic by default.
+
+Sample commands to run the application in the different modes are given below.
+
+To run in poll mode with 4 lcores, 16 ports, 8 RX queues per lcore and MAC address
+updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll
+
+To run in eventdev mode with 4 lcores, 16 ports, ordered scheduling and MAC address
+updating enabled, issue the command:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sched=ordered
+
+or
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered
+
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
+
+When run with the S/W scheduler, the application uses the following DPDK services:
+
+* Software scheduler
+* Rx adapter service function
+* Tx adapter service function
+
+The application needs service cores to run the above services. Service cores must
+be provided as EAL parameters along with --vdev=event_sw0 to enable the S/W
+scheduler. A sample command is:
+
+.. code-block:: console
+
+ ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered
+
+Explanation
+-----------
+
+The following sections provide some explanation of the code.
+
+.. _l2_fwd_event_app_cmd_arguments:
+
+Command Line Arguments
+~~~~~~~~~~~~~~~~~~~~~~
+
+The L2 Forwarding eventdev sample application takes specific parameters,
+in addition to Environment Abstraction Layer (EAL) arguments.
+The preferred way to parse parameters is to use the getopt() function,
+since it is part of a well-defined and portable library.
+
+The parsing of arguments is done in the **l2fwd_parse_args()** function for the
+non-eventdev parameters and in **parse_eventdev_args()** for the eventdev parameters.
+The method of argument parsing is not described here. Refer to the
+*glibc getopt(3)* man page for details.
+
+EAL arguments are parsed first, then application-specific arguments.
+This is done at the beginning of the main() function, and the eventdev parameters
+are parsed in the eventdev_resource_setup() function during eventdev setup:
+
+.. code-block:: c
+
+ /* init EAL */
+
+ ret = rte_eal_init(argc, argv);
+ if (ret < 0)
+ rte_panic("Invalid EAL arguments\n");
+
+ argc -= ret;
+ argv += ret;
+
+ /* parse application arguments (after the EAL ones) */
+
+ ret = l2fwd_parse_args(argc, argv);
+ if (ret < 0)
+ rte_panic("Invalid L2FWD arguments\n");
+ .
+ .
+ .
+
+ /* Parse eventdev command line options */
+ ret = parse_eventdev_args(argc, argv);
+ if (ret < 0)
+ return ret;
+
+.. _l2_fwd_event_app_mbuf_init:
+
+Mbuf Pool Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the arguments are parsed, the mbuf pool is created.
+The mbuf pool contains a set of mbuf objects that will be used by the driver
+and the application to store network packet data:
+
+.. code-block:: c
+
+ /* create the mbuf pool */
+
+ l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE,
+ rte_socket_id());
+ if (l2fwd_pktmbuf_pool == NULL)
+ rte_panic("Cannot init mbuf pool\n");
+
+The rte_mempool is a generic structure used to handle pools of objects.
+In this case, it is necessary to create a pool that will be used by the driver.
+The number of allocated pkt mbufs is NB_MBUF, with a data room size of
+RTE_MBUF_DEFAULT_BUF_SIZE each.
+A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
+The memory is allocated on the NUMA socket of the master lcore (rte_socket_id()),
+but it is possible to extend this code to allocate one mbuf pool per socket,
+as sketched below.
+
+The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
+initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
+An advanced application may want to use the mempool API to create the
+mbuf pool with more control.
+
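+The per-socket extension mentioned above could look roughly like the following
+sketch (the loop is illustrative only and is not part of the example; NB_MBUF
+and MEMPOOL_CACHE_SIZE are the application's existing constants):
+
+.. code-block:: c
+
+    /* Sketch: one mbuf pool per detected NUMA socket. */
+    struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES] = {NULL};
+    char pool_name[RTE_MEMPOOL_NAMESIZE];
+    unsigned int idx;
+
+    for (idx = 0; idx < rte_socket_count(); idx++) {
+        int socket = rte_socket_id_by_idx(idx);
+
+        snprintf(pool_name, sizeof(pool_name), "mbuf_pool_%d", socket);
+        pktmbuf_pool[socket] = rte_pktmbuf_pool_create(pool_name, NB_MBUF,
+                MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, socket);
+        if (pktmbuf_pool[socket] == NULL)
+            rte_panic("Cannot init mbuf pool on socket %d\n", socket);
+    }
+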
+.. _l2_fwd_event_app_drv_init:
+
+Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~
+
+The main part of the code in the main() function relates to the initialization
+of the driver. To fully understand this code, it is recommended to study the
+chapters related to the Poll Mode Driver and Event Device in the
+*DPDK Programmer's Guide* and the *DPDK API Reference*.
+
+.. code-block:: c
+
+ if (rte_pci_probe() < 0)
+ rte_panic("Cannot probe PCI\n");
+
+ /* reset l2fwd_dst_ports */
+
+ for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
+ l2fwd_dst_ports[portid] = 0;
+
+ last_port = 0;
+
+ /*
+ * Each logical core is assigned a dedicated TX queue on each port.
+ */
+
+ RTE_ETH_FOREACH_DEV(portid) {
+ /* skip ports that are not enabled */
+
+ if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
+ continue;
+
+ if (nb_ports_in_mask % 2) {
+ l2fwd_dst_ports[portid] = last_port;
+ l2fwd_dst_ports[last_port] = portid;
+ }
+ else
+ last_port = portid;
+
+ nb_ports_in_mask++;
+
+ rte_eth_dev_info_get((uint8_t) portid, &dev_info);
+ }
+
+Observe that:
+
+* rte_pci_probe() parses the devices on the PCI bus and initializes recognized
+ devices.
+
+The next step is to configure the RX and TX queues. For each port, there is only
+one RX queue (only one lcore is able to poll a given port). The number of TX
+queues depends on the number of available lcores. The rte_eth_dev_configure()
+function is used to configure the number of queues for a port:
+
+.. code-block:: c
+
+ ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
+ if (ret < 0)
+ rte_panic("Cannot configure device: err=%d, port=%u\n",
+ ret, portid);
+
+.. _l2_fwd_event_app_rx_init:
+
+RX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The application uses one lcore to poll one or several ports, depending on the -q
+option, which specifies the number of queues per lcore.
+
+For example, if the user specifies -q 4, the application is able to poll four
+ports with one lcore. If there are 16 ports on the target (and if the portmask
+argument is -p ffff ), the application will need four lcores to poll all the
+ports.
+
+.. code-block:: c
+
+ ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
+ &rx_conf, l2fwd_pktmbuf_pool);
+ if (ret < 0)
+
+ rte_panic("rte_eth_rx_queue_setup: err=%d, port=%u\n",
+ ret, portid);
+
+The list of queues that must be polled for a given lcore is stored in a private
+structure called struct lcore_queue_conf.
+
+.. code-block:: c
+
+ struct lcore_queue_conf {
+ unsigned n_rx_port;
+ unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
+ struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
+ } rte_cache_aligned;
+
+ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
+
+The values n_rx_port and rx_port_list[] are used in the main packet processing
+loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).
+
+.. _l2_fwd_event_app_tx_init:
+
+TX Queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Each lcore should be able to transmit on any port. For every port, a single TX
+queue is initialized.
+
+.. code-block:: c
+
+ /* init one TX queue on each port */
+
+ fflush(stdout);
+
+ ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd,
+ rte_eth_dev_socket_id(portid), &tx_conf);
+ if (ret < 0)
+ rte_panic("rte_eth_tx_queue_setup:err=%d, port=%u\n",
+ ret, (unsigned) portid);
+
+To configure eventdev support, the application sets up the following components:
+
+* Event dev
+* Event queue
+* Event Port
+* Rx/Tx adapters
+* Ethernet ports
+
+.. _l2_fwd_event_app_event_dev_init:
+
+Event device Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The application can use either a H/W or S/W based event device scheduler
+implementation and supports a single event device instance. It configures the
+event device as follows:
+
+.. code-block:: c
+
+ struct rte_event_dev_config event_d_conf = {
+ .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
+ .nb_event_ports = num_workers, /* Dedicated to each lcore */
+ .nb_events_limit = 4096,
+ .nb_event_queue_flows = 1024,
+ .nb_event_port_dequeue_depth = 128,
+ .nb_event_port_enqueue_depth = 128
+ };
+
+ ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event device\n");
+
+In case of the S/W scheduler, the application runs the eventdev scheduler service
+on a service core. The application retrieves the service id and picks the least
+loaded service core to run the S/W scheduler.
+
+.. code-block:: c
+
+ rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+ if (evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
+ ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting eventdev service\n");
+ l2fwd_event_service_enable(service_id);
+ }
+
+.. _l2_fwd_app_event_queue_init:
+
+Event queue Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Each Ethernet device is assigned a dedicated event queue which is linked
+to all available event ports, i.e. each lcore can dequeue packets from any of the
+Ethernet ports.
+
+.. code-block:: c
+
+ struct rte_event_queue_conf event_q_conf = {
+ .nb_atomic_flows = 1024,
+ .nb_atomic_order_sequences = 1024,
+ .event_queue_cfg = 0,
+ .schedule_type = RTE_SCHED_TYPE_ATOMIC,
+ .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
+ };
+
+ /* User requested sched mode */
+ event_q_conf.schedule_type = eventq_sched_mode;
+ for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
+ ret = rte_event_queue_setup(event_d_id, event_q_id,
+ &event_q_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event queue\n");
+ }
+
+In case of the S/W scheduler, an extra single-link event queue is created; it is
+used by the Tx adapter service function for the enqueue operation, as sketched below.
+
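+The extra queue is the last entry in event_q_id[] and is set up as a single-link
+queue. The snippet below follows the generic pipeline code of this patch set:
+
+.. code-block:: c
+
+    /* The last event queue is a single-link queue feeding the Tx adapter */
+    event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+    event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+    ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+    if (ret < 0)
+        rte_panic("Error in configuring event queue for Tx adapter\n");
+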
+.. _l2_fwd_app_event_port_init:
+
+Event port Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~
+Each worker thread is assigned a dedicated event port for enq/deq operations
+to/from an event device. All event ports are linked with all available event
+queues.
+
+.. code-block:: c
+
+ struct rte_event_port_conf event_p_conf = {
+ .dequeue_depth = 32,
+ .enqueue_depth = 32,
+ .new_event_threshold = 4096
+ };
+
+ for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
+ ret = rte_event_port_setup(event_d_id, event_p_id,
+ &event_p_conf);
+ if (ret < 0)
+ rte_panic("Error in configuring event port %d\n", event_p_id);
+
+ ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+ NULL, 0);
+ if (ret < 0)
+ rte_panic("Error in linking event port %d to queue\n",
+ event_p_id);
+ }
+
+In case of the S/W scheduler, an extra event port is created by the DPDK library;
+the application retrieves it and links it to the queue used by the Tx adapter service.
+
+.. code-block:: c
+
+ ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+ if (ret)
+ rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+ ret = rte_event_port_link(event_d_id, tx_port_id,
+ &evt_rsrc.evq.event_q_id[
+ evt_rsrc.evq.nb_queues - 1],
+ NULL, 1);
+ if (ret != 1)
+ rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+ ret);
+
+.. _l2_fwd_event_app_adapter_init:
+
+Rx/Tx adapter Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+For the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx adapter.
+Each Ethernet port's Rx queues are connected to its respective event queue at
+priority 0 via the Rx adapter configuration, and its Tx queues are connected via
+the Tx adapter.
+
+.. code-block:: c
+
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create rx adapter[%d]\n",
+ adapter_id);
+
+ /* Configure user requested sched type*/
+ eth_q_conf.ev.sched_type = rsrc->sched_type;
+ eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+ ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+ -1, ð_q_conf);
+ if (ret)
+ rte_panic("Failed to add queues to Rx adapter\n");
+
+ ret = rte_event_eth_rx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ if (q_id < evt_rsrc->evq.nb_queues)
+ q_id++;
+ }
+
+ adapter_id = 0;
+ RTE_ETH_FOREACH_DEV(port_id) {
+ if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
+ continue;
+ ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+ &evt_rsrc->def_p_conf);
+ if (ret)
+ rte_panic("Failed to create tx adapter[%d]\n",
+ adapter_id);
+
+ ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+ -1);
+ if (ret)
+ rte_panic("Failed to add queues to Tx adapter\n");
+
+ ret = rte_event_eth_tx_adapter_start(adapter_id);
+ if (ret)
+ rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+ evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+ adapter_id++;
+ }
+
+For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
+configured and shared among all the Ethernet ports. The DPDK library also needs
+service cores to run the internal services for the Rx/Tx adapters. The application
+gets the service id for each Rx/Tx adapter and, after successful setup, runs the
+services on dedicated service cores.
+
+.. code-block:: c
+
+ for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+ ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ ret = rte_event_eth_rx_adapter_service_id_get(
+ evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->rx_adptr.rx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+
+ for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+ ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+ evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+ if (ret < 0)
+ rte_panic("Failed to get Rx adapter[%d] caps\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ ret = rte_event_eth_tx_adapter_service_id_get(
+ evt_rsrc->event_d_id,
+ &service_id);
+ if (ret != -ESRCH && ret != 0)
+ rte_panic("Error in starting Rx adapter[%d] service\n",
+ evt_rsrc->tx_adptr.tx_adptr[i]);
+ l2fwd_event_service_enable(service_id);
+ }
+
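+The l2fwd_event_service_enable() helper used above maps the service to the least
+loaded service core and starts that core. Its implementation in this patch set is
+roughly:
+
+.. code-block:: c
+
+    static inline int
+    l2fwd_event_service_enable(uint32_t service_id)
+    {
+        uint8_t min_service_count = UINT8_MAX;
+        uint32_t slcore_array[RTE_MAX_LCORE];
+        unsigned int slcore = 0;
+        uint8_t service_count;
+        int32_t slcore_count;
+
+        if (!rte_service_lcore_count())
+            return -ENOENT;
+
+        slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+        if (slcore_count < 0)
+            return -ENOENT;
+
+        /* Pick the service core running the fewest services. */
+        while (slcore_count--) {
+            /* Reset the default mapping */
+            rte_service_map_lcore_set(service_id,
+                    slcore_array[slcore_count], 0);
+            service_count = rte_service_lcore_count_services(
+                    slcore_array[slcore_count]);
+            if (service_count < min_service_count) {
+                slcore = slcore_array[slcore_count];
+                min_service_count = service_count;
+            }
+        }
+
+        if (rte_service_map_lcore_set(service_id, slcore, 1))
+            return -ENOENT;
+        rte_service_lcore_start(slcore);
+
+        return 0;
+    }
+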
+.. _l2_fwd_event_app_rx_tx_packets:
+
+Receive, Process and Transmit Packets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
+the RX queues. This is done using the following code:
+
+.. code-block:: c
+
+ /*
+ * Read packet from RX queues
+ */
+
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = qconf->rx_port_list[i];
+ nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
+ MAX_PKT_BURST);
+
+ for (j = 0; j < nb_rx; j++) {
+ m = pkts_burst[j];
+ rte_prefetch0(rte_pktmbuf_mtod(m, void *));
+ l2fwd_simple_forward(m, portid);
+ }
+ }
+
+Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
+function writes the mbuf pointers in a local table and returns the number of
+available mbufs in the table.
+
+Then, each mbuf in the table is processed by the l2fwd_simple_forward()
+function. The processing is very simple: determine the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC address updating
+is enabled.
+
+During the initialization process, a static array of destination ports
+(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
+is assigned that is either the next or previous enabled port from the portmask.
+If the number of enabled ports in the portmask is odd, packets from the last
+port are forwarded to the first port; for example, with portmask=0x07 the
+forwarding is p0--->p1, p1--->p2, p2--->p0.
+
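+A minimal sketch of how such a mapping could be built from the enabled port mask
+(illustrative only, not the application's exact initialization code; the
+rsrc->enabled_port_mask field from the adapter setup above is assumed):
+
+.. code-block:: c
+
+    uint16_t enabled[RTE_MAX_ETHPORTS];
+    uint16_t nb_enabled = 0;
+    uint16_t portid, i;
+
+    /* Collect the ports enabled in the port mask, in order. */
+    RTE_ETH_FOREACH_DEV(portid)
+        if (rsrc->enabled_port_mask & (1 << portid))
+            enabled[nb_enabled++] = portid;
+
+    for (i = 0; i < nb_enabled; i++) {
+        if (nb_enabled % 2 == 0)
+            /* Even count: pair each port with its neighbour. */
+            l2fwd_dst_ports[enabled[i]] = (i % 2) ? enabled[i - 1] :
+                                                    enabled[i + 1];
+        else
+            /* Odd count: forward to the next port, last wraps to first. */
+            l2fwd_dst_ports[enabled[i]] = enabled[(i + 1) % nb_enabled];
+    }
+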
+To optimize the enqueue operation, l2fwd_simple_forward() buffers incoming mbufs
+up to MAX_PKT_BURST. Once this limit is reached, all buffered packets are
+transmitted to their destination ports.
+
+.. code-block:: c
+
+ static void
+ l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
+ {
+ uint32_t dst_port;
+ int32_t sent;
+ struct rte_eth_dev_tx_buffer *buffer;
+
+ dst_port = l2fwd_dst_ports[portid];
+
+ if (mac_updating)
+ l2fwd_mac_updating(m, dst_port);
+
+ buffer = tx_buffer[dst_port];
+ sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
+ if (sent)
+ port_statistics[dst_port].tx += sent;
+ }
+
+For this test application, the processing is exactly the same for all packets
+arriving on the same RX port. Therefore, it would have been possible to call
+the rte_eth_tx_buffer() function directly from the main loop to send all the
+received packets on the same TX port, using the burst-oriented send function,
+which is more efficient.
+
+However, in real-life applications (such as L3 routing), packet N is not
+necessarily forwarded on the same port as packet N-1. The application is
+implemented to illustrate this so that the same approach can be reused in more
+complex applications.
+
+To ensure that no packets remain in the buffers, each lcore periodically drains
+its TX buffers in the main loop. This technique introduces some latency when
+there are few packets to send, but it improves overall performance:
+
+.. code-block:: c
+
+ cur_tsc = rte_rdtsc();
+
+ /*
+ * TX burst queue drain
+ */
+ diff_tsc = cur_tsc - prev_tsc;
+ if (unlikely(diff_tsc > drain_tsc)) {
+ for (i = 0; i < qconf->n_rx_port; i++) {
+ portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
+ buffer = tx_buffer[portid];
+ sent = rte_eth_tx_buffer_flush(portid, 0,
+ buffer);
+ if (sent)
+ port_statistics[portid].tx += sent;
+ }
+
+ /* if timer is enabled */
+ if (timer_period > 0) {
+ /* advance the timer */
+ timer_tsc += diff_tsc;
+
+ /* if timer has reached its timeout */
+ if (unlikely(timer_tsc >= timer_period)) {
+ /* do this only on master core */
+ if (lcore_id == rte_get_master_lcore()) {
+ print_stats();
+ /* reset the timer */
+ timer_tsc = 0;
+ }
+ }
+ }
+
+ prev_tsc = cur_tsc;
+ }
+
+In the **l2fwd_event_loop()** function, the main task is to read ingress
+packets from the event ports. This is done using the following code:
+
+.. code-block:: c
+
+ /* Read packet from eventdev */
+ nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
+ events, deq_len, 0);
+ if (nb_rx == 0) {
+ rte_pause();
+ continue;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ mbuf[i] = events[i].mbuf;
+ rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
+ }
+
+
+The rte_event_dequeue_burst() function writes the event mbuf pointers to a local
+table and returns the number of events available in the table. Before dequeuing,
+deq_len is fetched so that the burst size does not exceed the dequeue depth
+allowed by the eventdev for this event port.
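+
+One way to obtain deq_len is shown in the following sketch, assuming the event
+port attribute API (illustrative only, not the exact application code):
+
+.. code-block:: c
+
+    /* Query the event port dequeue depth and use it as the burst size;
+     * event_d_id and event_p_id are assumed to be already configured.
+     */
+    uint32_t deq_len = MAX_PKT_BURST;
+
+    if (rte_event_port_attr_get(event_d_id, event_p_id,
+                                RTE_EVENT_PORT_ATTR_DEQ_DEPTH, &deq_len))
+        deq_len = MAX_PKT_BURST; /* fall back to the default burst size */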
+
+Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
+function. The processing is very simple: determine the TX port from the RX port,
+then replace the source and destination MAC addresses if MAC address updating
+is enabled.
+
+The destination port for each packet is taken from the same static
+l2fwd_dst_ports[] array described above, which is filled during initialization
+from the portmask.
+
+l2fwd_eventdev_forward() does not buffer incoming mbufs. Packets are forwarded
+to the destination ports either through the Tx adapter enqueue API or through
+the generic eventdev enqueue API, depending on whether a H/W or S/W scheduler
+is used (a sketch of the generic enqueue path follows the snippet below):
+
+.. code-block:: c
+
+ nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id, port_id, ev,
+ nb_rx);
+ while (nb_tx < nb_rx && !rsrc->force_quit)
+ nb_tx += rte_event_eth_tx_adapter_enqueue(
+ event_d_id, port_id,
+ ev + nb_tx, nb_rx - nb_tx);
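+
+When the Tx adapter does not have internal port capability (the typical S/W
+scheduler case), events are instead forwarded to the single link Tx event queue
+through the generic eventdev enqueue API, and the Tx adapter service transmits
+the mbufs. A minimal sketch of that path, assuming the single link Tx queue id
+is available as tx_q_id (illustrative only, not the exact application code):
+
+.. code-block:: c
+
+    for (i = 0; i < nb_rx; i++) {
+        /* Direct the event to the single link Tx queue, mark it as
+         * forwarded and transmit on Tx queue 0 of the destination port.
+         */
+        ev[i].queue_id = tx_q_id;
+        ev[i].op = RTE_EVENT_OP_FORWARD;
+        rte_event_eth_tx_adapter_txq_set(ev[i].mbuf, 0);
+    }
+
+    nb_tx = rte_event_enqueue_burst(event_d_id, port_id, ev, nb_rx);
+    while (nb_tx < nb_rx && !rsrc->force_quit)
+        nb_tx += rte_event_enqueue_burst(event_d_id, port_id,
+                                         ev + nb_tx, nb_rx - nb_tx);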
--
2.17.1
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v7 00/10] example/l2fwd-event: introduce l2fwd-event example
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 " pbhagavatula
` (9 preceding siblings ...)
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-10-30 12:58 ` Jerin Jacob
10 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-10-30 12:58 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: Jerin Jacob, Richardson, Bruce, Hemant Agrawal, dpdk-dev
On Sat, Oct 26, 2019 at 4:41 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds a new application to demonstrate the usage of event
> mode. The poll mode is also available to help with the transition.
>
> The following new command line parameters are added:
> --mode: Dictates the mode of operation either poll or event.
> --eventq_sched: Dictates event scheduling mode ordered, atomic or
> parallel.
>
> Based on event device capability the configuration is done as follows:
> - A single event device is enabled.
> - The number of event ports is equal to the number of worker
> cores enabled in the core mask. Additional event ports might
> be configured based on Rx/Tx adapter capability.
> - The number of event queues is equal to the number of ethernet
> ports. If Tx adapter doesn't have internal port capability then
> an additional single link event queue is used to enqueue events
> to Tx adapter.
> - Each event port is linked to all existing event queues.
> - Dedicated Rx/Tx adapters for each Ethernet port.
Please fix the below clang build issue
http://mails.dpdk.org/archives/test-report/2019-October/104973.html
../examples/l2fwd-event/l2fwd_event.c:(.text+0xb4d): undefined
reference to `__atomic_fetch_add_8'
examples/c590b3c@@dpdk-l2fwd-event@exe/l2fwd-event_l2fwd_event.c.o:../examples/l2fwd-event/l2fwd_event.c:(.text+0xbb0):
more undefined references to `__atomic_fetch_add_8' follow
clang: error: linker command failed with exit code 1 (use -v to see invocation)
^ permalink raw reply [flat|nested] 107+ messages in thread
* Re: [dpdk-dev] [PATCH v7 10/10] doc: add application usage guide for l2fwd-event
2019-10-26 11:10 ` [dpdk-dev] [PATCH v7 10/10] doc: add application usage guide for l2fwd-event pbhagavatula
@ 2019-10-30 13:19 ` Jerin Jacob
0 siblings, 0 replies; 107+ messages in thread
From: Jerin Jacob @ 2019-10-30 13:19 UTC (permalink / raw)
To: Pavan Nikhilesh
Cc: Jerin Jacob, Richardson, Bruce, Hemant Agrawal, Thomas Monjalon,
John McNamara, Marko Kovacevic, Ori Kam, Radu Nicolau,
Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, dpdk-dev
On Sat, Oct 26, 2019 at 4:42 PM <pbhagavatula@marvell.com> wrote:
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add documentation for l2fwd-event example.
> Update release notes.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
> MAINTAINERS | 1 +
> doc/guides/rel_notes/release_19_11.rst | 6 +
> doc/guides/sample_app_ug/index.rst | 1 +
> doc/guides/sample_app_ug/intro.rst | 5 +
> doc/guides/sample_app_ug/l2_forward_event.rst | 698 ++++++++++++++++++
> 5 files changed, 711 insertions(+)
> create mode 100644 doc/guides/sample_app_ug/l2_forward_event.rst
#Please fix the below warning:
dpdk-next-eventdev/doc/guides/sample_app_ug/l2_forward_event.rst:
WARNING: document isn't included in any toctree
dpdk-next-eventdev/doc/guides/sample_app_ug/l2_forward_event.rst:32:
WARNING: no number is assigned for figure:
figure-l2fwd-event-benchmark-setup
# Please rebase this patch series to dpdk-next-eventdev tree.
^ permalink raw reply [flat|nested] 107+ messages in thread