DPDK patches and discussions
* [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support
@ 2019-12-04 14:43 pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device pbhagavatula
                   ` (12 more replies)
  0 siblings, 13 replies; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

This patchset adds support for event mode in l3fwd.
The poll mode is still the default mode of operation.

The following new command line parameters are added:
 --mode: Dictates the mode of operation, either poll or event. If unset,
         poll mode is used by default.
 --eventq-sched: Dictates the event queue synchronization mode, i.e.
		 ordered, atomic or parallel.
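
For example, a hypothetical invocation (EAL arguments and port mask chosen
only for illustration) running in event mode with ordered scheduling:

	./l3fwd -l 0-3 -n 4 -- -p 0x3 --mode="eventdev" --eventq-sched="ordered"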

Based on the event device capabilities, the configuration is done as follows:
    - A single event device is enabled.
    - The number of event ports is equal to the number of worker
      cores enabled in the core mask. Additional event ports might
      be configured based on Rx/Tx adapter capability.
    - The number of event queues is equal to the number of Ethernet
      ports. If the Tx adapter doesn't have internal port capability,
      an additional single-link event queue is used to enqueue events
      to the Tx adapter.
    - Each event port is linked to all existing event queues.
    - Dedicated Rx/Tx adapters are used for each Ethernet port.
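
For illustration, with 2 Ethernet ports and 4 worker cores enabled in the
core mask, and a Tx adapter without internal port capability, this results
in 1 event device, 4 worker event ports (plus any ports needed by the
adapters), and 2 + 1 = 3 event queues (one per Ethernet port plus the
single-link Tx queue).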

v2 Changes:
----------
- Shorten the structure names: `s/event_rsrc/evt_rsrc`, `s/l2fwd_rsrc/rsrc`.
- Use rte_panic instead of rte_exit.
- Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
- Add parallel mode support.
- Fix missing eventdev args parsing.

Pavan Nikhilesh (7):
  examples/l3fwd: add event device configuration
  examples/l3fwd: add event eth Rx/Tx adapter setup
  examples/l3fwd: add service core setup based on caps
  examples/l3fwd: add event lpm main loop
  examples/l3fwd: add event em main loop
  examples/l3fwd: add graceful teardown for eventdevice
  doc: update l3fwd user guide to support eventdev

Sunil Kumar Kori (4):
  examples/l3fwd: add framework for event device
  examples/l3fwd: split pipelines based on capability
  examples/l3fwd: add ethdev setup based on eventdev
  examples/l3fwd: add event port and queue setup

 doc/guides/sample_app_ug/l3_forward.rst    |  76 +++++-
 examples/l3fwd/Makefile                    |   3 +-
 examples/l3fwd/l3fwd.h                     |  32 +++
 examples/l3fwd/l3fwd_em.c                  | 177 ++++++++++++
 examples/l3fwd/l3fwd_em.h                  | 159 +++++++----
 examples/l3fwd/l3fwd_em_hlm.h              | 131 +++++++++
 examples/l3fwd/l3fwd_em_sequential.h       |  26 ++
 examples/l3fwd/l3fwd_event.c               | 294 ++++++++++++++++++++
 examples/l3fwd/l3fwd_event.h               | 117 ++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 300 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 279 +++++++++++++++++++
 examples/l3fwd/l3fwd_lpm.c                 | 231 ++++++++++++++++
 examples/l3fwd/main.c                      | 203 +++++++++++---
 examples/l3fwd/meson.build                 |   5 +-
 14 files changed, 1948 insertions(+), 85 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

--
2.17.1



* [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2020-01-03 12:49   ` Ananyev, Konstantin
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
                   ` (11 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add a framework to enable an event device as the producer of packets.
To switch between event mode and poll mode, the following option has
been added:
	`--mode="eventdev"` or `--mode="poll"`
Also, allow the user to select the schedule type, i.e. one of
RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_PARALLEL,
through:
	`--eventq-sched="ordered"` or `--eventq-sched="atomic"` or
		`--eventq-sched="parallel"`

Poll mode is still the default mode of operation.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/Makefile      |  2 +-
 examples/l3fwd/l3fwd.h       |  6 +++
 examples/l3fwd/l3fwd_event.c | 75 ++++++++++++++++++++++++++++++++++++
 examples/l3fwd/l3fwd_event.h | 54 ++++++++++++++++++++++++++
 examples/l3fwd/main.c        | 41 +++++++++++++++++---
 examples/l3fwd/meson.build   |  4 +-
 6 files changed, 174 insertions(+), 8 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index b2dbf2607..c892b867b 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -5,7 +5,7 @@
 APP = l3fwd
 
 # all source are stored in SRCS-y
-SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c
+SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
 
 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index 293fb1fa2..cd17a41b3 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -5,6 +5,9 @@
 #ifndef __L3_FWD_H__
 #define __L3_FWD_H__
 
+#include <stdbool.h>
+
+#include <rte_ethdev.h>
 #include <rte_vect.h>
 
 #define DO_RFC_1812_CHECKS
@@ -169,6 +172,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)
 }
 #endif /* DO_RFC_1812_CHECKS */
 
+void
+print_usage(const char *prgname);
+
 /* Function pointers for LPM or EM functionality. */
 void
 setup_lpm(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
new file mode 100644
index 000000000..3892720be
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.c
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+static void
+parse_mode(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strncmp(optarg, "poll", 4))
+		evt_rsrc->enabled = false;
+	else if (!strncmp(optarg, "eventdev", 8))
+		evt_rsrc->enabled = true;
+}
+
+static void
+parse_eventq_sync(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strncmp(optarg, "ordered", 7))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
+	if (!strncmp(optarg, "atomic", 6))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+	if (!strncmp(optarg, "parallel", 8))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
+}
+
+static void
+l3fwd_parse_eventdev_args(char **argv, int argc)
+{
+	const struct option eventdev_lgopts[] = {
+		{CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
+		{CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
+		{NULL, 0, 0, 0}
+	};
+	char *prgname = argv[0];
+	char **argvopt = argv;
+	int32_t option_index;
+	int32_t opt;
+
+	while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts,
+					&option_index)) != EOF) {
+		switch (opt) {
+		case CMD_LINE_OPT_MODE_NUM:
+			parse_mode(optarg);
+			break;
+
+		case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
+			parse_eventq_sync(optarg);
+			break;
+
+		default:
+			print_usage(prgname);
+			exit(1);
+		}
+	}
+}
+
+void
+l3fwd_event_resource_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	/* Parse eventdev command line options */
+	l3fwd_parse_eventdev_args(evt_rsrc->args, evt_rsrc->nb_args);
+	if (!evt_rsrc->enabled)
+		return;
+}
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
new file mode 100644
index 000000000..c95296c38
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L3FWD_EVENTDEV_H__
+#define __L3FWD_EVENTDEV_H__
+
+#include <rte_common.h>
+#include <rte_eventdev.h>
+#include <rte_spinlock.h>
+
+#include "l3fwd.h"
+
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
+
+enum {
+	CMD_LINE_OPT_MODE_NUM = 265,
+	CMD_LINE_OPT_EVENTQ_SYNC_NUM,
+};
+
+struct l3fwd_event_resources {
+	uint8_t sched_type;
+	uint8_t enabled;
+	uint8_t nb_args;
+	char **args;
+};
+
+static inline struct l3fwd_event_resources *
+l3fwd_get_eventdev_rsrc(void)
+{
+	static const char name[RTE_MEMZONE_NAMESIZE] = "l3fwd_event_rsrc";
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(name);
+
+	if (mz != NULL)
+		return mz->addr;
+
+	mz = rte_memzone_reserve(name, sizeof(struct l3fwd_event_resources),
+				 0, 0);
+	if (mz != NULL) {
+		memset(mz->addr, 0, sizeof(struct l3fwd_event_resources));
+		return mz->addr;
+	}
+
+	rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
+
+	return NULL;
+}
+
+void l3fwd_event_resource_setup(void);
+
+#endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4dea12a65..19ca4483c 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -13,12 +13,12 @@
 #include <errno.h>
 #include <getopt.h>
 #include <signal.h>
-#include <stdbool.h>
 
 #include <rte_common.h>
 #include <rte_vect.h>
 #include <rte_byteorder.h>
 #include <rte_log.h>
+#include <rte_malloc.h>
 #include <rte_memory.h>
 #include <rte_memcpy.h>
 #include <rte_eal.h>
@@ -33,7 +33,6 @@
 #include <rte_random.h>
 #include <rte_debug.h>
 #include <rte_ether.h>
-#include <rte_ethdev.h>
 #include <rte_mempool.h>
 #include <rte_mbuf.h>
 #include <rte_ip.h>
@@ -46,6 +45,7 @@
 #include <cmdline_parse_etheraddr.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 /*
  * Configurable number of RX/TX ring descriptors
@@ -274,7 +274,7 @@ init_lcore_rx_queues(void)
 }
 
 /* display usage */
-static void
+void
 print_usage(const char *prgname)
 {
 	fprintf(stderr, "%s [EAL options] --"
@@ -289,7 +289,9 @@ print_usage(const char *prgname)
 		" [--hash-entry-num]"
 		" [--ipv6]"
 		" [--parse-ptype]"
-		" [--per-port-pool]\n\n"
+		" [--per-port-pool]"
+		" [--mode]"
+		" [--eventq-sched]\n\n"
 
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -304,7 +306,13 @@ print_usage(const char *prgname)
 		"  --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
 		"  --ipv6: Set if running ipv6 packets\n"
 		"  --parse-ptype: Set to use software to analyze packet type\n"
-		"  --per-port-pool: Use separate buffer pool per port\n\n",
+		"  --per-port-pool: Use separate buffer pool per port\n"
+		"  --mode: Packet transfer mode for I/O, poll or eventdev\n"
+		"          Default mode = poll\n"
+		"  --eventq-sched: Event queue synchronization method "
+		"                  ordered, atomic or parallel.\n\t\t"
+		"		   Default: atomic\n\t\t"
+		"                  Valid only if --mode=eventdev\n\n",
 		prgname);
 }
 
@@ -504,11 +512,19 @@ static const struct option lgopts[] = {
 static int
 parse_args(int argc, char **argv)
 {
+	struct l3fwd_event_resources *evt_rsrc;
 	int opt, ret;
 	char **argvopt;
 	int option_index;
 	char *prgname = argv[0];
 
+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	evt_rsrc->args = rte_zmalloc("l3fwd_event_args", sizeof(char *), 0);
+	if (evt_rsrc->args == NULL)
+		rte_exit(EXIT_FAILURE,
+				"Unable to allocate memory for eventdev arg");
+	evt_rsrc->args[0] = argv[0];
+	evt_rsrc->nb_args++;
 	argvopt = argv;
 
 	/* Error or normal output strings. */
@@ -538,6 +554,15 @@ parse_args(int argc, char **argv)
 			l3fwd_lpm_on = 1;
 			break;
 
+		case '?':
+			/* Eventdev options are encountered skip for
+			 * now and processed later.
+			 */
+			evt_rsrc->args[evt_rsrc->nb_args] =
+				argv[optind - 1];
+			evt_rsrc->nb_args++;
+			break;
+
 		/* long options */
 		case CMD_LINE_OPT_CONFIG_NUM:
 			ret = parse_config(optarg);
@@ -811,6 +836,7 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 int
 main(int argc, char **argv)
 {
+	struct l3fwd_event_resources *evt_rsrc;
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf *txconf;
@@ -839,11 +865,16 @@ main(int argc, char **argv)
 		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
 	}
 
+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	/* Configure eventdev parameters if user has requested */
+	l3fwd_event_resource_setup();
+
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
 
diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 6dd4b9022..864327c7b 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -6,7 +6,7 @@
 # To build this example as a standalone application with an already-installed
 # DPDK instance, use 'make'
 
-deps += ['hash', 'lpm']
+deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
 )
-- 
2.17.1



* [dpdk-dev] [PATCH v2 02/11] examples/l3fwd: split pipelines based on capability
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 03/11] examples/l3fwd: add event device configuration pbhagavatula
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add infrastructure to split the eventdev framework based on the event
Tx adapter capability.
If the event Tx adapter has internal port capability, packets are
transmitted using `rte_event_eth_tx_adapter_enqueue`; otherwise a
SINGLE_LINK event queue is used to enqueue packets to a service core,
which is responsible for transmitting them.
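
A rough worker-side sketch of this split (illustration only, not part of
this patch; the helper and the capability flag are hypothetical, error
handling is omitted, and the Tx adapter enqueue signature is assumed to be
the one with the flags argument from the rebased Tx adapter change):

#include <stdbool.h>

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Transmit one event according to the Tx adapter capability detected at
 * setup time.
 */
static inline void
sketch_event_tx(uint8_t ev_dev_id, uint8_t ev_port_id, struct rte_event *ev,
		bool tx_internal_port, uint8_t tx_queue_id)
{
	if (tx_internal_port) {
		/* Adapter has an internal port: hand the event straight to
		 * the Tx adapter enqueue API.
		 */
		rte_event_eth_tx_adapter_enqueue(ev_dev_id, ev_port_id,
						 ev, 1, 0);
	} else {
		/* No internal port: forward the event to the SINGLE_LINK
		 * queue; the Tx adapter service running on a service core
		 * dequeues it and transmits the mbuf.
		 */
		ev->queue_id = tx_queue_id;
		ev->op = RTE_EVENT_OP_FORWARD;
		rte_event_enqueue_burst(ev_dev_id, ev_port_id, ev, 1);
	}
}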

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/Makefile                    |  1 +
 examples/l3fwd/l3fwd_event.c               | 31 ++++++++++++++++++++++
 examples/l3fwd/l3fwd_event.h               | 20 ++++++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 12 +++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 12 +++++++++
 examples/l3fwd/meson.build                 |  3 ++-
 6 files changed, 78 insertions(+), 1 deletion(-)
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index c892b867b..59a110d12 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -6,6 +6,7 @@ APP = l3fwd
 
 # all source are stored in SRCS-y
 SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
+SRCS-y += l3fwd_event_generic.c l3fwd_event_internal_port.c
 
 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 3892720be..a76195a1f 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -63,6 +63,31 @@ l3fwd_parse_eventdev_args(char **argv, int argc)
 	}
 }
 
+static void
+l3fwd_event_capability_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t caps = 0;
+	uint16_t i;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(i) {
+		ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+				 "Invalid capability for Tx adptr port %d\n",
+				 i);
+
+		evt_rsrc->tx_mode_q |= !(caps &
+				   RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+	}
+
+	if (evt_rsrc->tx_mode_q)
+		l3fwd_event_set_generic_ops(&evt_rsrc->ops);
+	else
+		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
+}
+
 void
 l3fwd_event_resource_setup(void)
 {
@@ -72,4 +97,10 @@ l3fwd_event_resource_setup(void)
 	l3fwd_parse_eventdev_args(evt_rsrc->args, evt_rsrc->nb_args);
 	if (!evt_rsrc->enabled)
 		return;
+
+	if (!rte_event_dev_count())
+		rte_exit(EXIT_FAILURE, "No Eventdev found");
+
+	/* Setup eventdev capability callbacks */
+	l3fwd_event_capability_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index c95296c38..edb0c9d45 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,6 +7,7 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_tx_adapter.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
@@ -19,8 +20,25 @@ enum {
 	CMD_LINE_OPT_EVENTQ_SYNC_NUM,
 };
 
+typedef uint32_t (*event_device_setup_cb)(void);
+typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
+typedef void (*event_port_setup_cb)(void);
+typedef void (*adapter_setup_cb)(void);
+typedef int (*event_loop_cb)(void *);
+
+struct l3fwd_event_setup_ops {
+	event_device_setup_cb event_device_setup;
+	event_queue_setup_cb event_queue_setup;
+	event_port_setup_cb event_port_setup;
+	adapter_setup_cb adapter_setup;
+	event_loop_cb lpm_event_loop;
+	event_loop_cb em_event_loop;
+};
+
 struct l3fwd_event_resources {
+	struct l3fwd_event_setup_ops ops;
 	uint8_t sched_type;
+	uint8_t tx_mode_q;
 	uint8_t enabled;
 	uint8_t nb_args;
 	char **args;
@@ -50,5 +68,7 @@ l3fwd_get_eventdev_rsrc(void)
 }
 
 void l3fwd_event_resource_setup(void);
+void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
+void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
 #endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
new file mode 100644
index 000000000..6e228dbba
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
new file mode 100644
index 000000000..b7eca15be
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 864327c7b..ebed3b518 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -8,5 +8,6 @@
 
 deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c',
+	'l3fwd_event_internal_port.c', 'l3fwd_event_generic.c', 'main.c'
 )
-- 
2.17.1



* [dpdk-dev] [PATCH v2 03/11] examples/l3fwd: add event device configuration
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event device configuration based on the capabilities of the
probed event device.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  3 +
 examples/l3fwd/l3fwd_event.h               | 36 ++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 75 ++++++++++++++++++++-
 examples/l3fwd/l3fwd_event_internal_port.c | 78 +++++++++++++++++++++-
 4 files changed, 190 insertions(+), 2 deletions(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index a76195a1f..a027e150d 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -103,4 +103,7 @@ l3fwd_event_resource_setup(void)
 
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
+
+	/* Event device configuration */
+	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index edb0c9d45..5aac0b06c 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -8,6 +8,7 @@
 #include <rte_common.h>
 #include <rte_eventdev.h>
 #include <rte_event_eth_tx_adapter.h>
+#include <rte_service.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
@@ -26,6 +27,29 @@ typedef void (*event_port_setup_cb)(void);
 typedef void (*adapter_setup_cb)(void);
 typedef int (*event_loop_cb)(void *);
 
+struct l3fwd_event_queues {
+	uint8_t *event_q_id;
+	uint8_t	nb_queues;
+};
+
+struct l3fwd_event_ports {
+	uint8_t *event_p_id;
+	uint8_t	nb_ports;
+	rte_spinlock_t lock;
+};
+
+struct l3fwd_event_rx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_rx_adptr;
+	uint8_t *rx_adptr;
+};
+
+struct l3fwd_event_tx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_tx_adptr;
+	uint8_t *tx_adptr;
+};
+
 struct l3fwd_event_setup_ops {
 	event_device_setup_cb event_device_setup;
 	event_queue_setup_cb event_queue_setup;
@@ -36,9 +60,21 @@ struct l3fwd_event_setup_ops {
 };
 
 struct l3fwd_event_resources {
+	struct rte_event_port_conf def_p_conf;
+	struct l3fwd_event_rx_adptr rx_adptr;
+	struct l3fwd_event_tx_adptr tx_adptr;
+	uint8_t disable_implicit_release;
 	struct l3fwd_event_setup_ops ops;
+	struct rte_mempool * (*pkt_pool)[NB_SOCKETS];
+	struct l3fwd_event_queues evq;
+	struct l3fwd_event_ports evp;
+	uint32_t port_mask;
+	uint8_t per_port_pool;
+	uint8_t event_d_id;
 	uint8_t sched_type;
 	uint8_t tx_mode_q;
+	uint8_t deq_depth;
+	uint8_t has_burst;
 	uint8_t enabled;
 	uint8_t nb_args;
 	char **args;
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 6e228dbba..d748dcef9 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -5,8 +5,81 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configurtion */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+	evt_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	/* One queue for each ethdev port + one Tx adapter Single link queue. */
+	event_d_conf.nb_event_queues = ethdev_count + 1;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count() - rte_service_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index b7eca15be..9c053aa6f 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -5,8 +5,84 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	uint8_t disable_implicit_release;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configurtion */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+
+	disable_implicit_release = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+	evt_rsrc->disable_implicit_release =
+						disable_implicit_release;
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	event_d_conf.nb_event_queues = ethdev_count;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 }
-- 
2.17.1



* [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (2 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 03/11] examples/l3fwd: add event device configuration pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2019-12-27 13:33   ` Nipun Gupta
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add Ethernet port Rx/Tx queue setup for the event device; these queues
are later used when setting up the event eth Rx/Tx adapters.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/l3fwd.h       |  10 +++
 examples/l3fwd/l3fwd_event.c | 129 ++++++++++++++++++++++++++++++++++-
 examples/l3fwd/l3fwd_event.h |   2 +-
 examples/l3fwd/main.c        |  15 ++--
 4 files changed, 144 insertions(+), 12 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index cd17a41b3..6d16cde74 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -18,9 +18,16 @@
 #define NO_HASH_MULTI_LOOKUP 1
 #endif
 
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
 #define MAX_PKT_BURST     32
 #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
 
+#define MEMPOOL_CACHE_SIZE 256
 #define MAX_RX_QUEUE_PER_LCORE 16
 
 /*
@@ -175,6 +182,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)
 void
 print_usage(const char *prgname);
 
+int
+init_mem(uint16_t portid, unsigned int nb_mbuf);
+
 /* Function pointers for LPM or EM functionality. */
 void
 setup_lpm(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index a027e150d..b1ff8dc31 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -8,6 +8,14 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+	char buf[RTE_ETHER_ADDR_FMT_SIZE];
+	rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+	printf("%s%s", name, buf);
+}
+
 static void
 parse_mode(const char *optarg)
 {
@@ -63,6 +71,122 @@ l3fwd_parse_eventdev_args(char **argv, int argc)
 	}
 }
 
+static void
+l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint16_t nb_ports = rte_eth_dev_count_avail();
+	uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+	uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+	unsigned int nb_lcores = rte_lcore_count();
+	struct rte_eth_conf local_port_conf;
+	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf txconf;
+	struct rte_eth_rxconf rxconf;
+	unsigned int nb_mbuf;
+	uint16_t port_id;
+	int32_t ret;
+
+	/* initialize all ports */
+	RTE_ETH_FOREACH_DEV(port_id) {
+		local_port_conf = *port_conf;
+		/* skip ports that are not enabled */
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0) {
+			printf("\nSkipping disabled port %d\n", port_id);
+			continue;
+		}
+
+		/* init port */
+		printf("Initializing port %d ... ", port_id);
+		fflush(stdout);
+		printf("Creating queues: nb_rxq=1 nb_txq=1...\n");
+
+		rte_eth_dev_info_get(port_id, &dev_info);
+		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+			local_port_conf.txmode.offloads |=
+						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+						dev_info.flow_type_rss_offloads;
+		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+				port_conf->rx_adv_conf.rss_conf.rss_hf) {
+			printf("Port %u modified RSS hash function "
+			       "based on hardware support,"
+			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+			       port_id,
+			       port_conf->rx_adv_conf.rss_conf.rss_hf,
+			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+		}
+
+		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot configure device: err=%d, port=%d\n",
+				 ret, port_id);
+
+		ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+						       &nb_txd);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot adjust number of descriptors: err=%d, "
+				 "port=%d\n", ret, port_id);
+
+		rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+		print_ethaddr(" Address:", &ports_eth_addr[port_id]);
+		printf(", ");
+		print_ethaddr("Destination:",
+			(const struct rte_ether_addr *)&dest_eth_addr[port_id]);
+		printf(", ");
+
+		/* prepare source MAC for each port. */
+		rte_ether_addr_copy(&ports_eth_addr[port_id],
+			(struct rte_ether_addr *)(val_eth + port_id) + 1);
+
+		/* init memory */
+		if (!evt_rsrc->per_port_pool) {
+			/* port_id = 0; this is *not* signifying the first port,
+			 * rather, it signifies that port_id is ignored.
+			 */
+			nb_mbuf = RTE_MAX(nb_ports * nb_rxd +
+					  nb_ports * nb_txd +
+					  nb_ports * nb_lcores *
+							MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(0, nb_mbuf);
+		} else {
+			nb_mbuf = RTE_MAX(nb_rxd + nb_rxd +
+					  nb_lcores * MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(port_id, nb_mbuf);
+		}
+		/* init one Rx queue per port */
+		rxconf = dev_info.default_rxconf;
+		rxconf.offloads = local_port_conf.rxmode.offloads;
+		if (!evt_rsrc->per_port_pool)
+			ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0,
+					&rxconf, evt_rsrc->pkt_pool[0][0]);
+		else
+			ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, 0,
+					&rxconf,
+					evt_rsrc->pkt_pool[port_id][0]);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "rte_eth_rx_queue_setup: err=%d, "
+				 "port=%d\n", ret, port_id);
+
+		/* init one Tx queue per port */
+		txconf = dev_info.default_txconf;
+		txconf.offloads = local_port_conf.txmode.offloads;
+		ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "rte_eth_tx_queue_setup: err=%d, "
+				 "port=%d\n", ret, port_id);
+	}
+}
+
 static void
 l3fwd_event_capability_setup(void)
 {
@@ -89,7 +213,7 @@ l3fwd_event_capability_setup(void)
 }
 
 void
-l3fwd_event_resource_setup(void)
+l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 
@@ -104,6 +228,9 @@ l3fwd_event_resource_setup(void)
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
 
+	/* Ethernet device configuration */
+	l3fwd_eth_dev_port_setup(port_conf);
+
 	/* Event device configuration */
 	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 5aac0b06c..cd36d99ae 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -103,7 +103,7 @@ l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
-void l3fwd_event_resource_setup(void);
+void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 19ca4483c..20df12748 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -47,12 +47,6 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
-/*
- * Configurable number of RX/TX ring descriptors
- */
-#define RTE_TEST_RX_DESC_DEFAULT 1024
-#define RTE_TEST_TX_DESC_DEFAULT 1024
-
 #define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
 #define MAX_RX_QUEUE_PER_PORT 128
 
@@ -449,7 +443,6 @@ parse_eth_dest(const char *optarg)
 }
 
 #define MAX_JUMBO_PKT_LEN  9600
-#define MEMPOOL_CACHE_SIZE 256
 
 static const char short_options[] =
 	"p:"  /* portmask */
@@ -679,7 +672,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
 	printf("%s%s", name, buf);
 }
 
-static int
+int
 init_mem(uint16_t portid, unsigned int nb_mbuf)
 {
 	struct lcore_conf *qconf;
@@ -866,14 +859,16 @@ main(int argc, char **argv)
 	}
 
 	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
 	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup();
+	l3fwd_event_resource_setup(&port_conf);
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
-- 
2.17.1



* [dpdk-dev] [PATCH v2 05/11] examples/l3fwd: add event port and queue setup
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (3 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add event device queue and port setup based on event eth Tx adapter
capabilities.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  28 +++++-
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 103 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  98 ++++++++++++++++++++
 4 files changed, 229 insertions(+), 1 deletion(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index b1ff8dc31..4133dd167 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -212,10 +212,30 @@ l3fwd_event_capability_setup(void)
 		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
 }
 
+int
+l3fwd_get_free_event_port(struct l3fwd_event_resources *evt_rsrc)
+{
+	static int index;
+	int port_id;
+
+	rte_spinlock_lock(&evt_rsrc->evp.lock);
+	if (index >= evt_rsrc->evp.nb_ports) {
+		printf("No free event port is available\n");
+		return -1;
+	}
+
+	port_id = evt_rsrc->evp.event_p_id[index];
+	index++;
+	rte_spinlock_unlock(&evt_rsrc->evp.lock);
+
+	return port_id;
+}
+
 void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t event_queue_cfg;
 
 	/* Parse eventdev command line options */
 	l3fwd_parse_eventdev_args(evt_rsrc->args, evt_rsrc->nb_args);
@@ -232,5 +252,11 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	l3fwd_eth_dev_port_setup(port_conf);
 
 	/* Event device configuration */
-	evt_rsrc->ops.event_device_setup();
+	event_queue_cfg = evt_rsrc->ops.event_device_setup();
+
+	/* Event queue configuration */
+	evt_rsrc->ops.event_queue_setup(event_queue_cfg);
+
+	/* Event port configuration */
+	evt_rsrc->ops.event_port_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index cd36d99ae..8e0b70509 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -104,6 +104,7 @@ l3fwd_get_eventdev_rsrc(void)
 }
 
 void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
+int l3fwd_get_free_event_port(struct l3fwd_event_resources *eventdev_rsrc);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index d748dcef9..27bc60038 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -78,8 +78,111 @@ l3fwd_event_device_setup_generic(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("No space is available\n");
+
+	memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+			def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id,
+					  evt_rsrc->evq.event_q_id,
+					  NULL,
+					  evt_rsrc->evq.nb_queues - 1);
+		if (ret != (evt_rsrc->evq.nb_queues - 1))
+			rte_panic("Error in linking event port %d to queues\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+	}
+	/* init spinlock */
+	rte_spinlock_init(&evt_rsrc->evp.lock);
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id;
+	int32_t ret;
+
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	for (event_q_id = 0; event_q_id < (evt_rsrc->evq.nb_queues - 1);
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+
+	event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+	ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event queue for Tx adapter\n");
+	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
+	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
+	ops->event_port_setup = l3fwd_event_port_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 9c053aa6f..2216366d9 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -80,9 +80,107 @@ l3fwd_event_device_setup_internal_port(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("Failed to allocate memory for Event Ports\n");
+
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+						def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+					  NULL, 0);
+		if (ret < 0)
+			rte_panic("Error in linking event port %d to queue\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+		/* init spinlock */
+		rte_spinlock_init(&evt_rsrc->evp.lock);
+	}
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id = 0;
+	int32_t ret;
+
+	rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	if (def_q_conf.nb_atomic_order_sequences <
+					event_q_conf.nb_atomic_order_sequences)
+		event_q_conf.nb_atomic_order_sequences =
+					def_q_conf.nb_atomic_order_sequences;
+
+	event_q_conf.event_queue_cfg = event_queue_cfg;
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	for (event_q_id = 0; event_q_id < evt_rsrc->evq.nb_queues;
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+}
 
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
+	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
+	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
 }
-- 
2.17.1



* [dpdk-dev] [PATCH v2 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (4 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |   3 +
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 112 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  93 +++++++++++++++++
 4 files changed, 209 insertions(+)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 4133dd167..3a43f30a3 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -259,4 +259,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Event port configuration */
 	evt_rsrc->ops.event_port_setup();
+
+	/* Rx/Tx adapters configuration */
+	evt_rsrc->ops.adapter_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 8e0b70509..9d8bd5a36 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,6 +7,7 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
 #include <rte_event_eth_tx_adapter.h>
 #include <rte_service.h>
 #include <rte_spinlock.h>
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 27bc60038..74fa9509d 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -179,10 +179,122 @@ l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
 	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint8_t rx_adptr_id = 0;
+	uint8_t tx_adptr_id = 0;
+	uint8_t tx_port_id = 0;
+	uint16_t port_id;
+	uint32_t service_id;
+	int32_t ret, i = 0;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	/* Rx adapter setup */
+	evt_rsrc->rx_adptr.nb_rx_adptr = 1;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create rx adapter\n");
+
+	/* Configure user requested sched type */
+	eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[i];
+		ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+		if (i < evt_rsrc->evq.nb_queues)
+			i++;
+	}
+
+	ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Error getting the service ID for rx adptr\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->rx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+	if (ret)
+		rte_panic("Rx adapter[%d] start Failed\n", rx_adptr_id);
+
+	evt_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+	/* Tx adapter setup */
+	evt_rsrc->tx_adptr.nb_tx_adptr = 1;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create tx adapter\n");
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Failed to get Tx adapter service ID\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->tx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+	if (ret)
+		rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+	ret = rte_event_port_link(event_d_id, tx_port_id,
+				  &evt_rsrc->evq.event_q_id[
+					evt_rsrc->evq.nb_queues - 1],
+				  NULL, 1);
+	if (ret != 1)
+		rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+			 ret);
+
+	ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+	if (ret)
+		rte_panic("Tx adapter[%d] start Failed\n", tx_adptr_id);
+
+	evt_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
 	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
 	ops->event_port_setup = l3fwd_event_port_setup_generic;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 2216366d9..15103db17 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -177,10 +177,103 @@ l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
 	}
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint16_t adapter_id = 0;
+	uint16_t nb_adapter = 0;
+	uint16_t port_id;
+	uint8_t q_id = 0;
+	int ret;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		nb_adapter++;
+	}
+
+	evt_rsrc->rx_adptr.nb_rx_adptr = nb_adapter;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memory for Rx adapter\n");
+	}
+
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create rx adapter[%d]\n",
+				  adapter_id);
+
+		/* Configure user requested sched type*/
+		eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+		ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+
+		ret = rte_event_eth_rx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+		if (q_id < evt_rsrc->evq.nb_queues)
+			q_id++;
+	}
+
+	evt_rsrc->tx_adptr.nb_tx_adptr = nb_adapter;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memory for Tx adapter\n");
+	}
+
+	adapter_id = 0;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create tx adapter[%d]\n",
+				  adapter_id);
+
+		ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+
+		ret = rte_event_eth_tx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+	}
+}
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
 	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v2 07/11] examples/l3fwd: add service core setup based on caps
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (5 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2020-01-03 13:07   ` Ananyev, Konstantin
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add service core setup when eventdev and Rx/Tx adapter don't have
internal port capability.
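
The underlying pattern is to query the capabilities and, when no internal
port is available, map the corresponding service onto one of the service
lcores passed via EAL. A minimal sketch of that pattern (the helper name
and ids below are placeholders; includes and error handling omitted):

    static void
    rx_adapter_service_setup(uint8_t event_dev_id, uint16_t eth_port_id,
                             uint8_t rx_adapter_id, uint32_t service_lcore)
    {
        uint32_t caps, service_id;

        /* A service core is needed only when the Rx adapter has no
         * internal port capability.
         */
        rte_event_eth_rx_adapter_caps_get(event_dev_id, eth_port_id, &caps);
        if (caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)
            return;

        rte_event_eth_rx_adapter_service_id_get(rx_adapter_id, &service_id);
        rte_service_map_lcore_set(service_id, service_lcore, 1);
        rte_service_lcore_start(service_lcore);
    }

The same is done for the Tx adapter service and for the event device's own
scheduler service when one is required.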

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c |  6 +++
 examples/l3fwd/main.c        | 93 ++++++++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 3a43f30a3..0e796f003 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -236,6 +236,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 	uint32_t event_queue_cfg;
+	int ret;
 
 	/* Parse eventdev command line options */
 	l3fwd_parse_eventdev_args(evt_rsrc->args, evt_rsrc->nb_args);
@@ -262,4 +263,9 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Rx/Tx adapters configuration */
 	evt_rsrc->ops.adapter_setup();
+
+	/* Start event device */
+	ret = rte_event_dev_start(evt_rsrc->event_d_id);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 20df12748..69d212bc2 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -826,6 +826,93 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 	return 0;
 }
 
+static inline int
+l3fwd_service_enable(uint32_t service_id)
+{
+	uint8_t min_service_count = UINT8_MAX;
+	uint32_t slcore_array[RTE_MAX_LCORE];
+	unsigned int slcore = 0;
+	uint8_t service_count;
+	int32_t slcore_count;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+	if (slcore_count < 0)
+		return -ENOENT;
+	/* Get the core which has least number of services running. */
+	while (slcore_count--) {
+		/* Reset default mapping */
+		rte_service_map_lcore_set(service_id,
+				slcore_array[slcore_count], 0);
+		service_count = rte_service_lcore_count_services(
+				slcore_array[slcore_count]);
+		if (service_count < min_service_count) {
+			slcore = slcore_array[slcore_count];
+			min_service_count = service_count;
+		}
+	}
+	if (rte_service_map_lcore_set(service_id, slcore, 1))
+		return -ENOENT;
+	rte_service_lcore_start(slcore);
+
+	return 0;
+}
+
+static void
+l3fwd_event_service_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_info evdev_info;
+	uint32_t service_id, caps;
+	int ret, i;
+
+	rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+	if (evdev_info.event_dev_cap  & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
+		ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting eventdev service\n");
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+		ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Rx adapter[%d] caps\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		ret = rte_event_eth_rx_adapter_service_id_get(
+				evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Rx adapter[%d] service\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+		ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Tx adapter[%d] caps\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		ret = rte_event_eth_tx_adapter_service_id_get(
+				evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Tx adapter[%d] service\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+}
+
 int
 main(int argc, char **argv)
 {
@@ -869,6 +956,8 @@ main(int argc, char **argv)
 	evt_rsrc->port_mask = enabled_port_mask;
 	/* Configure eventdev parameters if user has requested */
 	l3fwd_event_resource_setup(&port_conf);
+	if (evt_rsrc->enabled)
+		goto skip_port_config;
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
@@ -1054,6 +1143,7 @@ main(int argc, char **argv)
 		}
 	}
 
+skip_port_config:
 	printf("\n");
 
 	/* start ports */
@@ -1083,6 +1173,9 @@ main(int argc, char **argv)
 		}
 	}
 
+	if (evt_rsrc->enabled)
+		l3fwd_event_service_setup();
+
 	printf("\n");
 
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v2 08/11] examples/l3fwd: add event lpm main loop
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (6 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2020-01-03 13:16   ` Ananyev, Konstantin
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em " pbhagavatula
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add lpm main loop for handling events based on capabilities of the
event device.
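
Which loop variant runs is chosen at setup time from two properties: whether
events are transmitted directly through the Tx adapter or forwarded to a
single-link Tx event queue, and whether the event port supports burst
dequeue. A condensed sketch of that selection (names follow this patch):

    const event_loop_cb lpm_event_loop[2][2] = {
        [0][0] = lpm_event_main_loop_tx_d,       /* direct Tx, single */
        [0][1] = lpm_event_main_loop_tx_d_burst, /* direct Tx, burst */
        [1][0] = lpm_event_main_loop_tx_q,       /* Tx queue, single */
        [1][1] = lpm_event_main_loop_tx_q_burst, /* Tx queue, burst */
    };

    evt_rsrc->ops.lpm_event_loop =
        lpm_event_loop[evt_rsrc->tx_mode_q][evt_rsrc->has_burst];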

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h       |   9 ++
 examples/l3fwd/l3fwd_event.c |   9 ++
 examples/l3fwd/l3fwd_event.h |   5 +
 examples/l3fwd/l3fwd_lpm.c   | 231 +++++++++++++++++++++++++++++++++++
 examples/l3fwd/main.c        |  10 +-
 5 files changed, 260 insertions(+), 4 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index 6d16cde74..8f2e4be23 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -212,6 +212,15 @@ em_main_loop(__attribute__((unused)) void *dummy);
 int
 lpm_main_loop(__attribute__((unused)) void *dummy);
 
+int
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 0e796f003..c7de046e3 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -235,6 +235,12 @@ void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	const event_loop_cb lpm_event_loop[2][2] = {
+		[0][0] = lpm_event_main_loop_tx_d,
+		[0][1] = lpm_event_main_loop_tx_d_burst,
+		[1][0] = lpm_event_main_loop_tx_q,
+		[1][1] = lpm_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -268,4 +274,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	ret = rte_event_dev_start(evt_rsrc->event_d_id);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 9d8bd5a36..fcc0ce51a 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -14,6 +14,11 @@
 
 #include "l3fwd.h"
 
+#define L3FWD_EVENT_SINGLE     0x1
+#define L3FWD_EVENT_BURST      0x2
+#define L3FWD_EVENT_TX_DIRECT  0x4
+#define L3FWD_EVENT_TX_ENQ     0x8
+
 #define CMD_LINE_OPT_MODE "mode"
 #define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
 
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index 349de2703..c4669d6d5 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -28,6 +28,7 @@
 #include <rte_lpm6.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 struct ipv4_l3fwd_lpm_route {
 	uint32_t ip;
@@ -254,6 +255,236 @@ lpm_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline void
+lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		struct rte_mbuf *mbuf = ev.mbuf;
+		mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
+				|| defined RTE_ARCH_PPC_64
+		process_packet(mbuf, &mbuf->port);
+#else
+
+		struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
+							struct rte_ether_hdr *);
+#ifdef DO_RFC_1812_CHECKS
+		struct rte_ipv4_hdr *ipv4_hdr;
+		if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
+			/* Handle IPv4 headers.*/
+			ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
+					struct rte_ipv4_hdr *,
+					sizeof(struct rte_ether_hdr));
+
+			if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
+					< 0) {
+				mbuf->port = BAD_PORT;
+				continue;
+			}
+			/* Update time to live and header checksum */
+			--(ipv4_hdr->time_to_live);
+			++(ipv4_hdr->hdr_checksum);
+		}
+#endif
+		/* dst addr */
+		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
+
+		/* src addr */
+		rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
+				&eth_hdr->s_addr);
+#endif
+		if (mbuf->port == BAD_PORT) {
+			rte_pktmbuf_free(mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_deq; i++) {
+			struct rte_mbuf *mbuf = events[i].mbuf;
+
+			mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
+					|| defined RTE_ARCH_PPC_64
+			process_packet(mbuf, &mbuf->port);
+#else
+			struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
+						struct rte_ether_hdr *);
+
+#ifdef DO_RFC_1812_CHECKS
+			struct rte_ipv4_hdr *ipv4_hdr;
+			if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
+				/* Handle IPv4 headers.*/
+				ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
+						struct rte_ipv4_hdr *,
+						sizeof(struct rte_ether_hdr));
+
+				if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
+						< 0) {
+					mbuf->port = BAD_PORT;
+					continue;
+				}
+				/* Update time to live and header checksum */
+				--(ipv4_hdr->time_to_live);
+				++(ipv4_hdr->hdr_checksum);
+			}
+#endif
+			/* dst addr */
+			*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[
+								mbuf->port];
+			/* src addr */
+			rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
+					&eth_hdr->s_addr);
+#endif
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		lpm_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		lpm_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 void
 setup_lpm(const int socketid)
 {
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 69d212bc2..b88fd88db 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -951,13 +951,18 @@ main(int argc, char **argv)
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	/* Setup function pointers for lookup method. */
+	setup_l3fwd_lookup_tables();
+
 	evt_rsrc->per_port_pool = per_port_pool;
 	evt_rsrc->pkt_pool = pktmbuf_pool;
 	evt_rsrc->port_mask = enabled_port_mask;
 	/* Configure eventdev parameters if user has requested */
 	l3fwd_event_resource_setup(&port_conf);
-	if (evt_rsrc->enabled)
+	if (evt_rsrc->enabled) {
+		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		goto skip_port_config;
+	}
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
@@ -973,9 +978,6 @@ main(int argc, char **argv)
 
 	nb_lcores = rte_lcore_count();
 
-	/* Setup function pointers for lookup method. */
-	setup_l3fwd_lookup_tables();
-
 	/* initialize all ports */
 	RTE_ETH_FOREACH_DEV(portid) {
 		struct rte_eth_conf local_port_conf = port_conf;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em main loop
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (7 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2020-01-03 13:45   ` Ananyev, Konstantin
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add em main loop for handling events based on capabilities of the
event device.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h               |  10 ++
 examples/l3fwd/l3fwd_em.c            | 177 +++++++++++++++++++++++++++
 examples/l3fwd/l3fwd_em.h            | 159 +++++++++++++++++-------
 examples/l3fwd/l3fwd_em_hlm.h        | 131 ++++++++++++++++++++
 examples/l3fwd/l3fwd_em_sequential.h |  26 ++++
 examples/l3fwd/l3fwd_event.c         |   9 ++
 examples/l3fwd/main.c                |   5 +-
 7 files changed, 470 insertions(+), 47 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index 8f2e4be23..2d02fa731 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -221,6 +221,16 @@ lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
 int
 lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
 
+int
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c
index 74a7c8fa4..1a8bc9168 100644
--- a/examples/l3fwd/l3fwd_em.c
+++ b/examples/l3fwd/l3fwd_em.c
@@ -26,6 +26,7 @@
 #include <rte_hash.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 #if defined(RTE_ARCH_X86) || defined(RTE_MACHINE_CPUFLAG_CRC32)
 #define EM_HASH_CRC 1
@@ -699,6 +700,182 @@ em_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline void
+em_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		struct rte_mbuf *mbuf = ev.mbuf;
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		mbuf->port = em_get_dst_port(lconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+#else
+		l3fwd_em_simple_process(mbuf, lconf);
+#endif
+		if (mbuf->port == BAD_PORT) {
+			rte_pktmbuf_free(mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		l3fwd_em_process_events(nb_deq, (struct rte_event **)&events,
+					lconf);
+#else
+		l3fwd_em_no_opt_process_events(nb_deq,
+					       (struct rte_event **)&events,
+					       lconf);
+#endif
+		for (i = 0; i < nb_deq; i++) {
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf,
+								 0);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		em_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		em_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 /*
  * Initialize exact match (hash) parameters.
  */
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index 090c1b448..b992a21da 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -5,73 +5,92 @@
 #ifndef __L3FWD_EM_H__
 #define __L3FWD_EM_H__
 
-static __rte_always_inline void
-l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
-		struct lcore_conf *qconf)
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid,
+		     struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
 {
-	struct rte_ether_hdr *eth_hdr;
 	struct rte_ipv4_hdr *ipv4_hdr;
 	uint16_t dst_port;
-	uint32_t tcp_or_udp;
-	uint32_t l3_ptypes;
-
-	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
-	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
 
-	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
-		/* Handle IPv4 headers.*/
-		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-						sizeof(struct rte_ether_hdr));
+	/* Handle IPv4 headers.*/
+	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
+			sizeof(struct rte_ether_hdr));
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Check to make sure the packet is valid (RFC1812) */
-		if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
-			rte_pktmbuf_free(m);
-			return;
-		}
+	/* Check to make sure the packet is valid (RFC1812) */
+	if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
+		rte_pktmbuf_free(m);
+		return BAD_PORT;
+	}
 #endif
-		dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
-						qconf->ipv4_lookup_struct);
+	dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
+			qconf->ipv4_lookup_struct);
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Update time to live and header checksum */
-		--(ipv4_hdr->time_to_live);
-		++(ipv4_hdr->hdr_checksum);
+	/* Update time to live and header checksum */
+	--(ipv4_hdr->time_to_live);
+	++(ipv4_hdr->hdr_checksum);
 #endif
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		send_single_packet(qconf, m, dst_port);
-	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
-		/* Handle IPv6 headers.*/
-		struct rte_ipv6_hdr *ipv6_hdr;
+	return dst_port;
+}
 
-		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
-						sizeof(struct rte_ether_hdr));
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid,
+		struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
+{
+	/* Handle IPv6 headers.*/
+	struct rte_ipv6_hdr *ipv6_hdr;
+	uint16_t dst_port;
 
-		dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
-					qconf->ipv6_lookup_struct);
+	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
+			sizeof(struct rte_ether_hdr));
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
+			qconf->ipv6_lookup_struct);
+
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
+
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	return dst_port;
+}
 
+static __rte_always_inline void
+l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
+		struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint16_t dst_port;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
+		dst_port = l3fwd_em_handle_ipv4(m, portid, eth_hdr, qconf);
+		send_single_packet(qconf, m, dst_port);
+	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
+		dst_port = l3fwd_em_handle_ipv6(m, portid, eth_hdr, qconf);
 		send_single_packet(qconf, m, dst_port);
 	} else {
 		/* Free the mbuf that contains non-IPV4/IPV6 packet */
@@ -79,6 +98,25 @@ l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
 	}
 }
 
+static __rte_always_inline void
+l3fwd_em_simple_process(struct rte_mbuf *m, struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4))
+		m->port = l3fwd_em_handle_ipv4(m, m->port, eth_hdr, qconf);
+	else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6))
+		m->port = l3fwd_em_handle_ipv6(m, m->port, eth_hdr, qconf);
+	else
+		m->port = BAD_PORT;
+}
+
 /*
  * Buffer non-optimized handling of packets, invoked
  * from main_loop.
@@ -108,4 +146,33 @@ l3fwd_em_no_opt_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 		l3fwd_em_simple_forward(pkts_burst[j], portid, qconf);
 }
 
+/*
+ * Buffer non-optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_no_opt_process_events(int nb_rx, struct rte_event **events,
+			       struct lcore_conf *qconf)
+{
+	int32_t j;
+
+	/* Prefetch first packets */
+	for (j = 0; j < PREFETCH_OFFSET && j < nb_rx; j++)
+		rte_prefetch0(rte_pktmbuf_mtod(events[j]->mbuf, void *));
+
+	/*
+	 * Prefetch and forward already prefetched
+	 * packets.
+	 */
+	for (j = 0; j < (nb_rx - PREFETCH_OFFSET); j++) {
+		rte_prefetch0(rte_pktmbuf_mtod(events[
+				j + PREFETCH_OFFSET]->mbuf, void *));
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+	}
+
+	/* Forward remaining prefetched packets */
+	for (; j < nb_rx; j++)
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+}
+
 #endif /* __L3FWD_EM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index ad8b9ce87..79812716c 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -75,6 +75,60 @@ em_get_dst_port_ipv6xN(struct lcore_conf *qconf, struct rte_mbuf *m[],
 	}
 }
 
+static __rte_always_inline void
+em_get_dst_port_ipv4xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv4_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv4_5tuple(m[i], mask0.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv4_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv4_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
+static __rte_always_inline void
+em_get_dst_port_ipv6xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv6_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv6_5tuple(m[i], mask1.x, mask2.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv6_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv6_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
 static __rte_always_inline uint16_t
 em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		uint16_t portid)
@@ -187,4 +241,81 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **ev,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j, pos;
+	uint16_t dst_port[MAX_PKT_BURST];
+	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+
+	/*
+	 * Send nb_rx - nb_rx % EM_HASH_LOOKUP_COUNT packets
+	 * in groups of EM_HASH_LOOKUP_COUNT.
+	 */
+	int32_t n = RTE_ALIGN_FLOOR(nb_rx, EM_HASH_LOOKUP_COUNT);
+
+	for (j = 0; j < EM_HASH_LOOKUP_COUNT && j < nb_rx; j++) {
+		pkts_burst[j] = ev[j]->mbuf;
+		rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j],
+					       struct rte_ether_hdr *) + 1);
+	}
+
+	for (j = 0; j < n; j += EM_HASH_LOOKUP_COUNT) {
+
+		uint32_t pkt_type = RTE_PTYPE_L3_MASK |
+				    RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP;
+		uint32_t l3_type, tcp_or_udp;
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkt_type &= pkts_burst[j + i]->packet_type;
+
+		l3_type = pkt_type & RTE_PTYPE_L3_MASK;
+		tcp_or_udp = pkt_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+
+		for (i = 0, pos = j + EM_HASH_LOOKUP_COUNT;
+		     i < EM_HASH_LOOKUP_COUNT && pos < nb_rx; i++, pos++) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					pkts_burst[pos],
+					struct rte_ether_hdr *) + 1);
+		}
+
+		if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV4)) {
+
+			em_get_dst_port_ipv4xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV6)) {
+
+			em_get_dst_port_ipv6xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else {
+			for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+				pkts_burst[j + i]->port = em_get_dst_port(qconf,
+						pkts_burst[j + i],
+						pkts_burst[j + i]->port);
+				process_packet(pkts_burst[j + i],
+						&pkts_burst[j + i]->port);
+			}
+			continue;
+		}
+		processx4_step3(&pkts_burst[j], &dst_port[j]);
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkts_burst[j + i]->port = dst_port[j + i];
+
+	}
+
+	for (; j < nb_rx; j++) {
+		pkts_burst[j]->port = em_get_dst_port(qconf, pkts_burst[j],
+						      pkts_burst[j]->port);
+		process_packet(pkts_burst[j], &pkts_burst[j]->port);
+	}
+}
 #endif /* __L3FWD_EM_HLM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_sequential.h b/examples/l3fwd/l3fwd_em_sequential.h
index 23fe9dec8..b231b9994 100644
--- a/examples/l3fwd/l3fwd_em_sequential.h
+++ b/examples/l3fwd/l3fwd_em_sequential.h
@@ -95,4 +95,30 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **events,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j;
+
+	rte_prefetch0(rte_pktmbuf_mtod(events[0]->mbuf,
+		      struct rte_ether_hdr *) + 1);
+
+	for (i = 1, j = 0; j < nb_rx; i++, j++) {
+		struct rte_mbuf *mbuf = events[j]->mbuf;
+
+		if (i < nb_rx) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					events[i]->mbuf,
+					struct rte_ether_hdr *) + 1);
+		}
+		mbuf->port = em_get_dst_port(qconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+	}
+}
 #endif /* __L3FWD_EM_SEQUENTIAL_H__ */
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index c7de046e3..9fea11bd9 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -241,6 +241,12 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 		[1][0] = lpm_event_main_loop_tx_q,
 		[1][1] = lpm_event_main_loop_tx_q_burst,
 	};
+	const event_loop_cb em_event_loop[2][2] = {
+		[0][0] = em_event_main_loop_tx_d,
+		[0][1] = em_event_main_loop_tx_d_burst,
+		[1][0] = em_event_main_loop_tx_q,
+		[1][1] = em_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -277,4 +283,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
 						       [evt_rsrc->has_burst];
+
+	evt_rsrc->ops.em_event_loop = em_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index b88fd88db..0ae64dd41 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -960,7 +960,10 @@ main(int argc, char **argv)
 	/* Configure eventdev parameters if user has requested */
 	l3fwd_event_resource_setup(&port_conf);
 	if (evt_rsrc->enabled) {
-		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
+		if (l3fwd_em_on)
+			l3fwd_lkp.main_loop = evt_rsrc->ops.em_event_loop;
+		else
+			l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		goto skip_port_config;
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (8 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em " pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2020-01-03 13:52   ` Ananyev, Konstantin
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add graceful teardown that addresses both event mode and poll mode.
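
In event mode the shutdown order is: stop the Rx/Tx adapters so no new events
are injected, stop the ethernet ports, wait for the worker lcores to exit,
then close the ports and finally stop and close the event device. A condensed
sketch of that order (per-adapter/per-port loops and error checks omitted;
the ids are placeholders):

    /* after the signal handler sets force_quit */
    rte_event_eth_rx_adapter_stop(rx_adapter_id);
    rte_event_eth_tx_adapter_stop(tx_adapter_id);
    rte_eth_dev_stop(port_id);
    rte_eal_mp_wait_lcore();        /* join worker lcores */
    rte_eth_dev_close(port_id);
    rte_event_dev_stop(event_dev_id);
    rte_event_dev_close(event_dev_id);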

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/main.c | 49 ++++++++++++++++++++++++++++++-------------
 1 file changed, 34 insertions(+), 15 deletions(-)

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 0ae64dd41..68998f42c 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -920,7 +920,7 @@ main(int argc, char **argv)
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf *txconf;
-	int ret;
+	int i, ret;
 	unsigned nb_ports;
 	uint16_t queueid, portid;
 	unsigned lcore_id;
@@ -1195,27 +1195,46 @@ main(int argc, char **argv)
 		}
 	}
 
-
 	check_all_ports_link_status(enabled_port_mask);
 
 	ret = 0;
 	/* launch per-lcore init on every lcore */
 	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
-		if (rte_eal_wait_lcore(lcore_id) < 0) {
-			ret = -1;
-			break;
+	if (evt_rsrc->enabled) {
+		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
+			rte_event_eth_rx_adapter_stop(
+					evt_rsrc->rx_adptr.rx_adptr[i]);
+		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
+			rte_event_eth_tx_adapter_stop(
+					evt_rsrc->tx_adptr.tx_adptr[i]);
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_stop(portid);
 		}
-	}
 
-	/* stop ports */
-	RTE_ETH_FOREACH_DEV(portid) {
-		if ((enabled_port_mask & (1 << portid)) == 0)
-			continue;
-		printf("Closing port %d...", portid);
-		rte_eth_dev_stop(portid);
-		rte_eth_dev_close(portid);
-		printf(" Done\n");
+		rte_eal_mp_wait_lcore();
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_close(portid);
+		}
+
+		rte_event_dev_stop(evt_rsrc->event_d_id);
+		rte_event_dev_close(evt_rsrc->event_d_id);
+
+	} else {
+		rte_eal_mp_wait_lcore();
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			printf("Closing port %d...", portid);
+			rte_eth_dev_stop(portid);
+			rte_eth_dev_close(portid);
+			printf(" Done\n");
+		}
 	}
 	printf("Bye...\n");
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v2 11/11] doc: update l3fwd user guide to support eventdev
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (9 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
@ 2019-12-04 14:43 ` pbhagavatula
  2019-12-04 15:16 ` [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support Jerin Jacob
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2019-12-04 14:43 UTC (permalink / raw)
  To: jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh,
	John McNamara
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Update l3fwd user guide to include event device related information.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/sample_app_ug/l3_forward.rst | 76 +++++++++++++++++++++++--
 1 file changed, 70 insertions(+), 6 deletions(-)

diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 4cb4b18da..e86c4dfd5 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -4,16 +4,23 @@
 L3 Forwarding Sample Application
 ================================
 
-The L3 Forwarding application is a simple example of packet processing using the DPDK.
+The L3 Forwarding application is a simple example of packet processing using
+DPDK to demonstrate the usage of the poll and event mode packet I/O mechanisms.
 The application performs L3 forwarding.
 
 Overview
 --------
 
-The application demonstrates the use of the hash and LPM libraries in the DPDK to implement packet forwarding.
-The initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
-The main difference from the L2 Forwarding sample application is that the forwarding decision
-is made based on information read from the input packet.
+The application demonstrates the use of the hash and LPM libraries in the DPDK
+to implement packet forwarding using poll or event mode PMDs for packet I/O.
+The initialization and run-time paths are very similar to those of the
+:doc:`l2_forward_real_virtual` and :doc:`l2_forward_event_real_virtual`.
+The main difference from the L2 Forwarding sample application is that packets
+can optionally be received from and sent to an eventdev instead of the port
+directly, and the forwarding decision is based on information read from the input packet.
+
+The eventdev can optionally use an S/W or H/W (if supported by the platform)
+scheduler implementation for packet I/O, selected via run-time parameters.
 
 The lookup method is either hash-based or LPM-based and is selected at run time. When the selected lookup method is hash-based,
 a hash object is used to emulate the flow classification stage.
@@ -56,6 +63,8 @@ The application has a number of command line options::
                              [--ipv6]
                              [--parse-ptype]
                              [--per-port-pool]
+                             [--mode]
+                             [--eventq-sched]
 
 Where,
 
@@ -86,6 +95,11 @@ Where,
 
 * ``--per-port-pool:`` Optional, set to use independent buffer pools per port. Without this option, single buffer pool is used for all ports.
 
+* ``--mode:`` Optional, Packet transfer mode for I/O, poll or eventdev.
+
+* ``--eventq-sched:`` Optional, Event queue synchronization method, Ordered or Atomic. Only valid if --mode=eventdev.
+
+
 For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7 and 16-23 appear on socket 0,
 while cores 8-15 and 24-31 appear on socket 1.
 
@@ -116,6 +130,51 @@ In this command:
 |          |           |           |                                     |
 +----------+-----------+-----------+-------------------------------------+
 
+To use eventdev mode with the **ordered** sync method on the above environment,
+the following is a sample command:
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+
+or
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+
+In this command:
+
+*   The -w option whitelists the event device supported by the platform. The way to pass this device may vary based on the platform.
+
+*   The --mode option selects the packet transfer mode (poll or eventdev) used for I/O.
+
+*   The --eventq-sched option selects the event queue synchronization method so that packets are scheduled accordingly.
+
+If the application uses the S/W scheduler, it uses the following DPDK services:
+
+*   Software scheduler
+*   Rx adapter service function
+*   Tx adapter service function
+
+The application needs service cores to run the above services. Service cores
+must be provided as EAL parameters along with --vdev=event_sw0 to enable the
+S/W scheduler. The following is a sample command:
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -p 0x3 --mode=eventdev --eventq-sched=ordered
+
+In case of eventdev mode, the *--config* option is not used for ethernet port
+configuration. Instead, each ethernet port is configured with the following
+setup:
+
+*   Single Rx/Tx queue
+
+*   Each Rx queue will be connected to an event queue via the Rx adapter.
+
+*   Each Tx queue will be connected via the Tx adapter.
+
 Refer to the *DPDK Getting Started Guide* for general information on running applications and
 the Environment Abstraction Layer (EAL) options.
 
@@ -125,7 +184,7 @@ Explanation
 -----------
 
 The following sections provide some explanation of the sample application code. As mentioned in the overview section,
-the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
+the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual` and :doc:`l2_forward_event_real_virtual`.
 The following sections describe aspects that are specific to the L3 Forwarding sample application.
 
 Hash Initialization
@@ -315,3 +374,8 @@ for LPM-based lookups is done by the get_ipv4_dst_port() function below:
 
         return ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct, rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)? next_hop : portid);
     }
+
+Eventdev Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Eventdev driver initialization is the same as for the L2 forwarding eventdev
+application. Refer to :doc:`l2_forward_event_real_virtual` for more details.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (10 preceding siblings ...)
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
@ 2019-12-04 15:16 ` Jerin Jacob
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
  12 siblings, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2019-12-04 15:16 UTC (permalink / raw)
  To: Pavan Nikhilesh
  Cc: Jerin Jacob, dpdk-dev, Ananyev, Konstantin, Thomas Monjalon,
	Richardson, Bruce, Hemant Agrawal

On Wed, Dec 4, 2019 at 8:13 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds support for event mode in l3fwd.
> The poll mode is still the default mode of operation.
>
> The following new command line parameters are added:
>  --mode: Dictates the mode of operation either poll or event. If unset then
>          by default poll mode is used.
>  --eventq_sched: Dictates event synchronization mode i.e. either ordered,
>                  atomic or parallel.
>
> Based on event device capability the configuration is done as follows:
>     - A single event device is enabled.
>     - The number of event ports is equal to the number of worker
>       cores enabled in the core mask. Additional event ports might
>       be configured based on Rx/Tx adapter capability.
>     - The number of event queues is equal to the number of ethernet
>       ports. If Tx adapter doesn't have internal port capability then
>       an additional single link event queue is used to enqueue events
>       to Tx adapter.
>     - Each event port is linked to all existing event queues.
>     - Dedicated Rx/Tx adapters for each Ethernet port.
>
> v2 Changes:
> ----------
> - Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
> - Use rte_panic instead of rte_exit.
> - Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
> - Add parallel mode support.
> - Fix missing eventdev args parsing.

Since there is no maintainer for l3fwd and the tech board decided to have
a unified l3fwd application for poll mode and event mode, I request a
review of the common code changes (at least patches 8/11 and 9/11).
This series has been pushed from the last release to this one to allow
enough time for review. We would like to merge it in this release, so a
timely review is requested.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
@ 2019-12-27 13:33   ` Nipun Gupta
  2019-12-29 15:42     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Nipun Gupta @ 2019-12-27 13:33 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Marko Kovacevic, Ori Kam, Bruce Richardson,
	Radu Nicolau, Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori
  Cc: dev



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Wednesday, December 4, 2019 8:14 PM
> To: jerinj@marvell.com; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> Kam <orika@mellanox.com>; Bruce Richardson
> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> Pavan Nikhilesh <pbhagavatula@marvell.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> From: Sunil Kumar Kori <skori@marvell.com>
> 
> Add ethernet port Rx/Tx queue setup for event device which are later
> used for setting up event eth Rx/Tx adapters.
> 
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
>  examples/l3fwd/l3fwd.h       |  10 +++
>  examples/l3fwd/l3fwd_event.c | 129
> ++++++++++++++++++++++++++++++++++-
>  examples/l3fwd/l3fwd_event.h |   2 +-
>  examples/l3fwd/main.c        |  15 ++--
>  4 files changed, 144 insertions(+), 12 deletions(-)
> 

<snip>

> +
> +		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
> +
> 	dev_info.flow_type_rss_offloads;
> +		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
> +				port_conf->rx_adv_conf.rss_conf.rss_hf) {
> +			printf("Port %u modified RSS hash function "
> +			       "based on hardware support,"
> +			       "requested:%#"PRIx64"
> configured:%#"PRIx64"\n",
> +			       port_id,
> +			       port_conf->rx_adv_conf.rss_conf.rss_hf,
> +			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
> +		}

We are using 1 queue, but using RSS hash function?

> +
> +		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> +		if (ret < 0)
> +			rte_exit(EXIT_FAILURE,
> +				 "Cannot configure device: err=%d,
> port=%d\n",
> +				 ret, port_id);
> +

We should be using number of RX queues as per the config option provided in the arguments.
L3fwd is supposed to support multiple queue. Right?

Regards,
Nipun



^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2019-12-27 13:33   ` Nipun Gupta
@ 2019-12-29 15:42     ` Pavan Nikhilesh Bhagavatula
  2019-12-30  7:40       ` Nipun Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2019-12-29 15:42 UTC (permalink / raw)
  To: Nipun Gupta, Jerin Jacob Kollanukkaran, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori
  Cc: dev


>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of
>> pbhagavatula@marvell.com
>> Sent: Wednesday, December 4, 2019 8:14 PM
>> To: jerinj@marvell.com; Marko Kovacevic
><marko.kovacevic@intel.com>; Ori
>> Kam <orika@mellanox.com>; Bruce Richardson
>> <bruce.richardson@intel.com>; Radu Nicolau
><radu.nicolau@intel.com>;
>> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
>> <tomasz.kantecki@intel.com>; Sunil Kumar Kori
><skori@marvell.com>;
>> Pavan Nikhilesh <pbhagavatula@marvell.com>
>> Cc: dev@dpdk.org
>> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev
>setup
>> based on eventdev
>>
>> From: Sunil Kumar Kori <skori@marvell.com>
>>
>> Add ethernet port Rx/Tx queue setup for event device which are later
>> used for setting up event eth Rx/Tx adapters.
>>
>> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
>> ---
>>  examples/l3fwd/l3fwd.h       |  10 +++
>>  examples/l3fwd/l3fwd_event.c | 129
>> ++++++++++++++++++++++++++++++++++-
>>  examples/l3fwd/l3fwd_event.h |   2 +-
>>  examples/l3fwd/main.c        |  15 ++--
>>  4 files changed, 144 insertions(+), 12 deletions(-)
>>
>
><snip>
>
>> +
>> +		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
>> +
>> 	dev_info.flow_type_rss_offloads;
>> +		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
>> +				port_conf-
>>rx_adv_conf.rss_conf.rss_hf) {
>> +			printf("Port %u modified RSS hash function "
>> +			       "based on hardware support,"
>> +			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
>> +			       port_id,
>> +			       port_conf->rx_adv_conf.rss_conf.rss_hf,
>> +			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
>> +		}
>
>We are using 1 queue, but using RSS hash function?

rte_event::flow_id, which uniquely identifies a given flow, is generated by applying the
RSS hash function to the required fields in the packet.
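
For illustration only (this is not the patch code), the Rx adapter conceptually
fills the event roughly like this, assuming the usual rte_eventdev/rte_mbuf headers:

/* Sketch: flow_id comes from the RSS hash the NIC computed over the
 * fields enabled in rss_hf (flow_id is 20 bits, so the 32-bit hash is
 * effectively truncated).
 */
static inline void
fill_event_from_mbuf(struct rte_event *ev, struct rte_mbuf *mbuf,
		     uint8_t ev_queue_id)
{
	ev->flow_id = mbuf->hash.rss;
	ev->queue_id = ev_queue_id;
	ev->sched_type = RTE_SCHED_TYPE_ATOMIC;
	ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER;
	ev->op = RTE_EVENT_OP_NEW;
	ev->mbuf = mbuf;
}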

>
>> +
>> +		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
>> +		if (ret < 0)
>> +			rte_exit(EXIT_FAILURE,
>> +				 "Cannot configure device: err=%d, port=%d\n",
>> +				 ret, port_id);
>> +
>
>We should be using number of RX queues as per the config option
>provided in the arguments.
>L3fwd is supposed to support multiple queue. Right?

The entire premise of using an event device is to showcase packet scheduling to cores
without having to split packets across multiple Rx queues.

The queue config option is ignored when event mode is selected.
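
To illustrate (rough sketch, not the exact patch code, names assumed), the single
Rx queue is simply handed to the event Rx adapter, which then owns the
distribution to cores:

	struct rte_event_eth_rx_adapter_queue_conf qconf;
	int ret;

	memset(&qconf, 0, sizeof(qconf));
	qconf.ev.queue_id = ev_queue_id;
	qconf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;

	/* -1 attaches all Rx queues of the port (here just the one). */
	ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, eth_port_id,
						 -1, &qconf);
	if (ret)
		rte_panic("Failed to add Rx queues to adapter\n");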
 
>
>Regards,
>Nipun
>

Regards,
Pavan.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2019-12-29 15:42     ` Pavan Nikhilesh Bhagavatula
@ 2019-12-30  7:40       ` Nipun Gupta
  2020-01-02  6:21         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Nipun Gupta @ 2019-12-30  7:40 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Hemant Agrawal
  Cc: dev

Hi Pavan,

> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Sunday, December 29, 2019 9:12 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> Kam <orika@mellanox.com>; Bruce Richardson
> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> 
> >> -----Original Message-----
> >> From: dev <dev-bounces@dpdk.org> On Behalf Of
> >> pbhagavatula@marvell.com
> >> Sent: Wednesday, December 4, 2019 8:14 PM
> >> To: jerinj@marvell.com; Marko Kovacevic
> ><marko.kovacevic@intel.com>; Ori
> >> Kam <orika@mellanox.com>; Bruce Richardson
> >> <bruce.richardson@intel.com>; Radu Nicolau
> ><radu.nicolau@intel.com>;
> >> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> >> <tomasz.kantecki@intel.com>; Sunil Kumar Kori
> ><skori@marvell.com>;
> >> Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> Cc: dev@dpdk.org
> >> Subject: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev
> >setup
> >> based on eventdev
> >>
> >> From: Sunil Kumar Kori <skori@marvell.com>
> >>
> >> Add ethernet port Rx/Tx queue setup for event device which are later
> >> used for setting up event eth Rx/Tx adapters.
> >>
> >> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> >> ---
> >>  examples/l3fwd/l3fwd.h       |  10 +++
> >>  examples/l3fwd/l3fwd_event.c | 129
> >> ++++++++++++++++++++++++++++++++++-
> >>  examples/l3fwd/l3fwd_event.h |   2 +-
> >>  examples/l3fwd/main.c        |  15 ++--
> >>  4 files changed, 144 insertions(+), 12 deletions(-)
> >>
> >
> ><snip>
> >
> >> +
> >> +		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
> >> +					dev_info.flow_type_rss_offloads;
> >> +		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
> >> +				port_conf->rx_adv_conf.rss_conf.rss_hf) {
> >> +			printf("Port %u modified RSS hash function "
> >> +			       "based on hardware support,"
> >> +			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
> >> +			       port_id,
> >> +			       port_conf->rx_adv_conf.rss_conf.rss_hf,
> >> +			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
> >> +		}
> >
> >We are using 1 queue, but using RSS hash function?
> 
> rte_event::flow_id which uniquely identifies a given flow is generated using
> RSS Hash function on the required fields in the packet.

Okay. Got it.

> 
> >
> >> +
> >> +		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
> >> +		if (ret < 0)
> >> +			rte_exit(EXIT_FAILURE,
> >> +				 "Cannot configure device: err=%d, port=%d\n",
> >> +				 ret, port_id);
> >> +
> >
> >We should be using number of RX queues as per the config option
> >provided in the arguments.
> >L3fwd is supposed to support multiple queue. Right?
> 
> The entire premise of using event device is to showcase packet scheduling to
> cores
> without the need for splitting packets across multiple queues.
> 
> Queue config is ignored when event mode is selected.

For atomic queues, a single queue provides packets to only one core at a time, until processing on that core is completed, irrespective of the flows on that hardware queue.
Multiple queues are therefore required to distribute packets across separate cores, with these atomic queues maintaining the ordering and not scheduling to another core until the processing core has completed its job.
To keep this solution generic, we should also take a config parameter - (port, number of queues) - to enable multiple ethernet RX queues.
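
Roughly something like this (hypothetical sketch, the option name and mempool
are made up):

	uint16_t nb_rx_queues = evt_rsrc->eth_rx_queues; /* assumed new option */
	uint16_t q;
	int ret;

	ret = rte_eth_dev_configure(port_id, nb_rx_queues, 1, &local_port_conf);
	if (ret < 0)
		rte_panic("Cannot configure port %u: err=%d\n", port_id, ret);

	for (q = 0; q < nb_rx_queues; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, nb_rxd,
					     rte_eth_dev_socket_id(port_id),
					     NULL, mb_pool /* assumed */);
		if (ret < 0)
			rte_panic("Rx queue %u setup failed: err=%d\n", q, ret);
	}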

Regards,
Nipun

> 
> >
> >Regards,
> >Nipun
> >
> 
> Regards,
> Pavan.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2019-12-30  7:40       ` Nipun Gupta
@ 2020-01-02  6:21         ` Pavan Nikhilesh Bhagavatula
  2020-01-02  8:49           ` Nipun Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-02  6:21 UTC (permalink / raw)
  To: Nipun Gupta, Jerin Jacob Kollanukkaran, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Hemant Agrawal
  Cc: dev

>> >&local_port_conf);
>> >> +		if (ret < 0)
>> >> +			rte_exit(EXIT_FAILURE,
>> >> +				 "Cannot configure device: err=%d,
>> >> port=%d\n",
>> >> +				 ret, port_id);
>> >> +
>> >
>> >We should be using number of RX queues as per the config option
>> >provided in the arguments.
>> >L3fwd is supposed to support multiple queue. Right?
>>
>> The entire premise of using event device is to showcase packet
>scheduling to
>> cores
>> without the need for splitting packets across multiple queues.
>>
>> Queue config is ignored when event mode is selected.
>
>For atomic queues, we have single queue providing packets to a single
>core at a time till processing on that core is completed, irrespective of
>the flows on that hardware queue.
>And multiple queues are required to distribute separate packets on
>separate cores, with these atomic queues maintaining the ordering and
>not scheduling on other core, until processing core has completed its
>job.
>To have this solution generic, we should also take config parameter -
>(port, number of queues) to enable multiple ethernet RX queues.
>

Not sure I follow: we connect the Rx queue to an event queue, which is then linked to multiple event ports that are polled on
by the respective cores.
How would increasing the number of Rx queues help? Distributing flows from a single event queue to multiple event ports is the responsibility
of the event device as per the spec.
Does DPAA/2 function differently?
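
i.e. conceptually (sketch only, assuming event_d_id and nb_workers are set up
elsewhere):

	/* One event queue, one event port per worker core, every port
	 * linked to that queue; the eventdev scheduler distributes flows
	 * across the linked ports.
	 */
	uint8_t ev_queue = 0;
	uint8_t p;

	for (p = 0; p < nb_workers; p++) {
		if (rte_event_port_setup(event_d_id, p, NULL) < 0)
			rte_panic("Event port %u setup failed\n", p);
		if (rte_event_port_link(event_d_id, p, &ev_queue, NULL, 1) != 1)
			rte_panic("Event port %u link failed\n", p);
	}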

Regards,
Pavan.

>Regards,
>Nipun
>
>>
>> >
>> >Regards,
>> >Nipun
>> >
>>
>> Regards,
>> Pavan.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2020-01-02  6:21         ` Pavan Nikhilesh Bhagavatula
@ 2020-01-02  8:49           ` Nipun Gupta
  2020-01-02  9:33             ` Jerin Jacob
  0 siblings, 1 reply; 78+ messages in thread
From: Nipun Gupta @ 2020-01-02  8:49 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Hemant Agrawal
  Cc: dev



> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: Thursday, January 2, 2020 11:52 AM
> To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> Kam <orika@mellanox.com>; Bruce Richardson
> <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> >> >&local_port_conf);
> >> >> +		if (ret < 0)
> >> >> +			rte_exit(EXIT_FAILURE,
> >> >> +				 "Cannot configure device: err=%d,
> >> >> port=%d\n",
> >> >> +				 ret, port_id);
> >> >> +
> >> >
> >> >We should be using number of RX queues as per the config option
> >> >provided in the arguments.
> >> >L3fwd is supposed to support multiple queue. Right?
> >>
> >> The entire premise of using event device is to showcase packet
> >scheduling to
> >> cores
> >> without the need for splitting packets across multiple queues.
> >>
> >> Queue config is ignored when event mode is selected.
> >
> >For atomic queues, we have single queue providing packets to a single
> >core at a time till processing on that core is completed, irrespective of
> >the flows on that hardware queue.
> >And multiple queues are required to distribute separate packets on
> >separate cores, with these atomic queues maintaining the ordering and
> >not scheduling on other core, until processing core has completed its
> >job.
> >To have this solution generic, we should also take config parameter -
> >(port, number of queues) to enable multiple ethernet RX queues.
> >
> 
> Not sure I follow we connect Rx queue to an event queue which is then
> linked to multiple event ports which are polled on
> by respective cores.

This is what we support too, but in the atomic queue case the scenario gets a little more complex.
Each atomic queue can be scheduled to only one event port at a time, until all the events from
that event port are processed; only then can it move to another event port.

To have separate event ports processing packets at the same time in the atomic scenario, multiple queues
are required. As l3fwd supports multiple queues, it seems legitimate to add the support.
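
For reference, the atomic queue being discussed is configured roughly like this
(sketch, ids assumed):

	struct rte_event_queue_conf q_conf;

	rte_event_queue_default_conf_get(event_d_id, queue_id, &q_conf);
	q_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC;
	q_conf.nb_atomic_flows = 1024;

	if (rte_event_queue_setup(event_d_id, queue_id, &q_conf) < 0)
		rte_panic("Event queue %u setup failed\n", queue_id);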

Thanks,
Nipun

> How would increasing Rx queues help? Distributing flows from single event
> queue to multiple event ports is the responsibility
> of Event device as per spec.
> Does DPAA/2 function differently?
> 
> Regards,
> Pavan.
> 
> >Regards,
> >Nipun
> >
> >>
> >> >
> >> >Regards,
> >> >Nipun
> >> >
> >>
> >> Regards,
> >> Pavan.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2020-01-02  8:49           ` Nipun Gupta
@ 2020-01-02  9:33             ` Jerin Jacob
  2020-01-03  9:06               ` Nipun Gupta
  0 siblings, 1 reply; 78+ messages in thread
From: Jerin Jacob @ 2020-01-02  9:33 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Hemant Agrawal,
	dev

On Thu, Jan 2, 2020 at 2:20 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > Sent: Thursday, January 2, 2020 11:52 AM
> > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> > Kam <orika@mellanox.com>; Bruce Richardson
> > <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> > <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > Hemant Agrawal <hemant.agrawal@nxp.com>
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > based on eventdev
> >
> > >> >&local_port_conf);
> > >> >> +               if (ret < 0)
> > >> >> +                       rte_exit(EXIT_FAILURE,
> > >> >> +                                "Cannot configure device: err=%d,
> > >> >> port=%d\n",
> > >> >> +                                ret, port_id);
> > >> >> +
> > >> >
> > >> >We should be using number of RX queues as per the config option
> > >> >provided in the arguments.
> > >> >L3fwd is supposed to support multiple queue. Right?
> > >>
> > >> The entire premise of using event device is to showcase packet
> > >scheduling to
> > >> cores
> > >> without the need for splitting packets across multiple queues.
> > >>
> > >> Queue config is ignored when event mode is selected.
> > >
> > >For atomic queues, we have single queue providing packets to a single
> > >core at a time till processing on that core is completed, irrespective of
> > >the flows on that hardware queue.
> > >And multiple queues are required to distribute separate packets on
> > >separate cores, with these atomic queues maintaining the ordering and
> > >not scheduling on other core, until processing core has completed its
> > >job.
> > >To have this solution generic, we should also take config parameter -
> > >(port, number of queues) to enable multiple ethernet RX queues.
> > >
> >
> > Not sure I follow we connect Rx queue to an event queue which is then
> > linked to multiple event ports which are polled on
> > by respective cores.
>
> This is what we too support, but with atomic queue case the scenario gets little complex.
> Each atomic queue can be scheduled only to one event port at a time, until all the events from
> that event port are processed. Then only it can move to other event port.

This would make it a poll mode. We might as well use a normal PMD + RSS
for the same instead,
i.e. use l3fwd in poll mode. It will be the same in terms of performance. Right?

>
> To have separate event ports process packets at same time in atomic scenario, multiple queues
> are required. As l3fwd supports multiple queues, it seems legitimate to add the support.
>
> Thanks,
> Nipun
>
> > How would increasing Rx queues help? Distributing flows from single event
> > queue to multiple event ports is the responsibility
> > of Event device as per spec.
> > Does DPAA/2 function differently?
> >
> > Regards,
> > Pavan.
> >
> > >Regards,
> > >Nipun
> > >
> > >>
> > >> >
> > >> >Regards,
> > >> >Nipun
> > >> >
> > >>
> > >> Regards,
> > >> Pavan.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2020-01-02  9:33             ` Jerin Jacob
@ 2020-01-03  9:06               ` Nipun Gupta
  2020-01-03  9:09                 ` Jerin Jacob
  0 siblings, 1 reply; 78+ messages in thread
From: Nipun Gupta @ 2020-01-03  9:06 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Hemant Agrawal,
	dev



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, January 2, 2020 3:04 PM
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>; Bruce
> Richardson <bruce.richardson@intel.com>; Radu Nicolau
> <radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; Tomasz
> Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> based on eventdev
> 
> On Thu, Jan 2, 2020 at 2:20 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > > Sent: Thursday, January 2, 2020 11:52 AM
> > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > > <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> > > Kam <orika@mellanox.com>; Bruce Richardson
> > > <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > > Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> > > <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > > Hemant Agrawal <hemant.agrawal@nxp.com>
> > > Cc: dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > > based on eventdev
> > >
> > > >> >&local_port_conf);
> > > >> >> +               if (ret < 0)
> > > >> >> +                       rte_exit(EXIT_FAILURE,
> > > >> >> +                                "Cannot configure device: err=%d,
> > > >> >> port=%d\n",
> > > >> >> +                                ret, port_id);
> > > >> >> +
> > > >> >
> > > >> >We should be using number of RX queues as per the config option
> > > >> >provided in the arguments.
> > > >> >L3fwd is supposed to support multiple queue. Right?
> > > >>
> > > >> The entire premise of using event device is to showcase packet
> > > >scheduling to
> > > >> cores
> > > >> without the need for splitting packets across multiple queues.
> > > >>
> > > >> Queue config is ignored when event mode is selected.
> > > >
> > > >For atomic queues, we have single queue providing packets to a single
> > > >core at a time till processing on that core is completed, irrespective of
> > > >the flows on that hardware queue.
> > > >And multiple queues are required to distribute separate packets on
> > > >separate cores, with these atomic queues maintaining the ordering and
> > > >not scheduling on other core, until processing core has completed its
> > > >job.
> > > >To have this solution generic, we should also take config parameter -
> > > >(port, number of queues) to enable multiple ethernet RX queues.
> > > >
> > >
> > > Not sure I follow we connect Rx queue to an event queue which is then
> > > linked to multiple event ports which are polled on
> > > by respective cores.
> >
> > This is what we too support, but with atomic queue case the scenario gets
> little complex.
> > Each atomic queue can be scheduled only to one event port at a time, until all
> the events from
> > that event port are processed. Then only it can move to other event port.
> 
> This would make it a poll mode. We might as well use normal PMD + RSS
> for the same instead.
> i.e use l3fwd in poll mode. It will be the same in terms of performance. Right?

We do not need a complete config; a parameter for the number of RX
queues per port would be enough. We will send a patch on top of this to support the same.

Thanks,
Nipun

> 
> >
> > To have separate event ports process packets at same time in atomic scenario,
> multiple queues
> > are required. As l3fwd supports multiple queues, it seems legitimate to add the
> support.
> >
> > Thanks,
> > Nipun
> >
> > > How would increasing Rx queues help? Distributing flows from single event
> > > queue to multiple event ports is the responsibility
> > > of Event device as per spec.
> > > Does DPAA/2 function differently?
> > >
> > > Regards,
> > > Pavan.
> > >
> > > >Regards,
> > > >Nipun
> > > >
> > > >>
> > > >> >
> > > >> >Regards,
> > > >> >Nipun
> > > >> >
> > > >>
> > > >> Regards,
> > > >> Pavan.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2020-01-03  9:06               ` Nipun Gupta
@ 2020-01-03  9:09                 ` Jerin Jacob
  0 siblings, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2020-01-03  9:09 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Hemant Agrawal,
	dev

On Fri, Jan 3, 2020 at 2:36 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, January 2, 2020 3:04 PM
> > To: Nipun Gupta <nipun.gupta@nxp.com>
> > Cc: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin Jacob
> > Kollanukkaran <jerinj@marvell.com>; Marko Kovacevic
> > <marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>; Bruce
> > Richardson <bruce.richardson@intel.com>; Radu Nicolau
> > <radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>; Tomasz
> > Kantecki <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > based on eventdev
> >
> > On Thu, Jan 2, 2020 at 2:20 PM Nipun Gupta <nipun.gupta@nxp.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> > > > Sent: Thursday, January 2, 2020 11:52 AM
> > > > To: Nipun Gupta <nipun.gupta@nxp.com>; Jerin Jacob Kollanukkaran
> > > > <jerinj@marvell.com>; Marko Kovacevic <marko.kovacevic@intel.com>; Ori
> > > > Kam <orika@mellanox.com>; Bruce Richardson
> > > > <bruce.richardson@intel.com>; Radu Nicolau <radu.nicolau@intel.com>;
> > > > Akhil Goyal <akhil.goyal@nxp.com>; Tomasz Kantecki
> > > > <tomasz.kantecki@intel.com>; Sunil Kumar Kori <skori@marvell.com>;
> > > > Hemant Agrawal <hemant.agrawal@nxp.com>
> > > > Cc: dev@dpdk.org
> > > > Subject: RE: [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup
> > > > based on eventdev
> > > >
> > > > >> >&local_port_conf);
> > > > >> >> +               if (ret < 0)
> > > > >> >> +                       rte_exit(EXIT_FAILURE,
> > > > >> >> +                                "Cannot configure device: err=%d,
> > > > >> >> port=%d\n",
> > > > >> >> +                                ret, port_id);
> > > > >> >> +
> > > > >> >
> > > > >> >We should be using number of RX queues as per the config option
> > > > >> >provided in the arguments.
> > > > >> >L3fwd is supposed to support multiple queue. Right?
> > > > >>
> > > > >> The entire premise of using event device is to showcase packet
> > > > >scheduling to
> > > > >> cores
> > > > >> without the need for splitting packets across multiple queues.
> > > > >>
> > > > >> Queue config is ignored when event mode is selected.
> > > > >
> > > > >For atomic queues, we have single queue providing packets to a single
> > > > >core at a time till processing on that core is completed, irrespective of
> > > > >the flows on that hardware queue.
> > > > >And multiple queues are required to distribute separate packets on
> > > > >separate cores, with these atomic queues maintaining the ordering and
> > > > >not scheduling on other core, until processing core has completed its
> > > > >job.
> > > > >To have this solution generic, we should also take config parameter -
> > > > >(port, number of queues) to enable multiple ethernet RX queues.
> > > > >
> > > >
> > > > Not sure I follow we connect Rx queue to an event queue which is then
> > > > linked to multiple event ports which are polled on
> > > > by respective cores.
> > >
> > > This is what we too support, but with atomic queue case the scenario gets
> > little complex.
> > > Each atomic queue can be scheduled only to one event port at a time, until all
> > the events from
> > > that event port are processed. Then only it can move to other event port.
> >
> > This would make it a poll mode. We might as well use normal PMD + RSS
> > for the same instead.
> > i.e use l3fwd in poll mode. It will be the same in terms of performance. Right?
>
> We do not need to have a complete config, but can have a parameter as number of RX
> queues per port. We will send a patch on top of this to support the same.

Looks good to me.

>
> Thanks,
> Nipun
>
> >
> > >
> > > To have separate event ports process packets at same time in atomic scenario,
> > multiple queues
> > > are required. As l3fwd supports multiple queues, it seems legitimate to add the
> > support.
> > >
> > > Thanks,
> > > Nipun
> > >
> > > > How would increasing Rx queues help? Distributing flows from single event
> > > > queue to multiple event ports is the responsibility
> > > > of Event device as per spec.
> > > > Does DPAA/2 function differently?
> > > >
> > > > Regards,
> > > > Pavan.
> > > >
> > > > >Regards,
> > > > >Nipun
> > > > >
> > > > >>
> > > > >> >
> > > > >> >Regards,
> > > > >> >Nipun
> > > > >> >
> > > > >>
> > > > >> Regards,
> > > > >> Pavan.

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device pbhagavatula
@ 2020-01-03 12:49   ` Ananyev, Konstantin
  2020-01-06  4:27     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-03 12:49 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Kovacevic, Marko, Ori Kam, Richardson,
	Bruce, Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz,
	Sunil Kumar Kori
  Cc: dev


> Add framework to enable event device as a producer of packets.
> To switch between event mode and poll mode the following options
> have been added:
> 	`--mode="eventdev"` or `--mode="poll"`
> Also, allow the user to select the schedule type to be either
> RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_PARALLEL
> through:
> 	`--eventq-sched="ordered"` or `--eventq-sched="atomic"` or
> 		`--eventq-sched="parallel"`
> 
> Poll mode is still the default operation mode.
> 
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> ---
>  examples/l3fwd/Makefile      |  2 +-
>  examples/l3fwd/l3fwd.h       |  6 +++
>  examples/l3fwd/l3fwd_event.c | 75 ++++++++++++++++++++++++++++++++++++
>  examples/l3fwd/l3fwd_event.h | 54 ++++++++++++++++++++++++++
>  examples/l3fwd/main.c        | 41 +++++++++++++++++---
>  examples/l3fwd/meson.build   |  4 +-
>  6 files changed, 174 insertions(+), 8 deletions(-)
>  create mode 100644 examples/l3fwd/l3fwd_event.c
>  create mode 100644 examples/l3fwd/l3fwd_event.h
> 
> diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
> index b2dbf2607..c892b867b 100644
> --- a/examples/l3fwd/Makefile
> +++ b/examples/l3fwd/Makefile
> @@ -5,7 +5,7 @@
>  APP = l3fwd
> 
>  # all source are stored in SRCS-y
> -SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c
> +SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
> 
>  # Build using pkg-config variables if possible
>  ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
> index 293fb1fa2..cd17a41b3 100644
> --- a/examples/l3fwd/l3fwd.h
> +++ b/examples/l3fwd/l3fwd.h
> @@ -5,6 +5,9 @@
>  #ifndef __L3_FWD_H__
>  #define __L3_FWD_H__
> 
> +#include <stdbool.h>
> +

Why do we need it here?

> +#include <rte_ethdev.h>
>  #include <rte_vect.h>
> 
>  #define DO_RFC_1812_CHECKS
> @@ -169,6 +172,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)
>  }
>  #endif /* DO_RFC_1812_CHECKS */
> 
> +void
> +print_usage(const char *prgname);
> +
>  /* Function pointers for LPM or EM functionality. */
>  void
>  setup_lpm(const int socketid);
> diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
> new file mode 100644
> index 000000000..3892720be
> --- /dev/null
> +++ b/examples/l3fwd/l3fwd_event.c
> @@ -0,0 +1,75 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2019 Marvell International Ltd.
> + */
> +
> +#include <stdbool.h>
> +#include <getopt.h>
> +
> +#include "l3fwd.h"
> +#include "l3fwd_event.h"
> +
> +static void
> +parse_mode(const char *optarg)
> +{
> +	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
> +
> +	if (!strncmp(optarg, "poll", 4))

That looks a bit clumsy and error-prone.
Just strcmp(optarg, "poll") seems enough here.
Same for other similar places.
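
i.e. something like the below (sketch only, optionally also rejecting unknown
values):

	if (strcmp(optarg, "poll") == 0)
		evt_rsrc->enabled = false;
	else if (strcmp(optarg, "eventdev") == 0)
		evt_rsrc->enabled = true;
	else
		rte_exit(EXIT_FAILURE, "Invalid --mode value: %s\n", optarg);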

> +		evt_rsrc->enabled = false;
> +	else if (!strncmp(optarg, "eventdev", 8))
> +		evt_rsrc->enabled = true;
> +}
> +
> +static void
> +parse_eventq_sync(const char *optarg)
> +{
> +	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
> +
> +	if (!strncmp(optarg, "ordered", 7))
> +		evt_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
> +	if (!strncmp(optarg, "atomic", 6))
> +		evt_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
> +	if (!strncmp(optarg, "parallel", 8))
> +		evt_rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
> +}
> +
> +static void
> +l3fwd_parse_eventdev_args(char **argv, int argc)
> +{
> +	const struct option eventdev_lgopts[] = {
> +		{CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
> +		{CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
> +		{NULL, 0, 0, 0}
> +	};
> +	char *prgname = argv[0];
> +	char **argvopt = argv;
> +	int32_t option_index;
> +	int32_t opt;
> +
> +	while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts,
> +					&option_index)) != EOF) {
> +		switch (opt) {
> +		case CMD_LINE_OPT_MODE_NUM:
> +			parse_mode(optarg);
> +			break;
> +
> +		case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
> +			parse_eventq_sync(optarg);
> +			break;
> +
> +		default:
> +			print_usage(prgname);
> +			exit(1);
> +		}
> +	}
> +}
> +
> +void
> +l3fwd_event_resource_setup(void)
> +{
> +	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
> +
> +	/* Parse eventdev command line options */
> +	l3fwd_parse_eventdev_args(evt_rsrc->args, evt_rsrc->nb_args);
> +	if (!evt_rsrc->enabled)
> +		return;
> +}
> diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
> new file mode 100644
> index 000000000..c95296c38
> --- /dev/null
> +++ b/examples/l3fwd/l3fwd_event.h
> @@ -0,0 +1,54 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2019 Marvell International Ltd.
> + */
> +
> +#ifndef __L3FWD_EVENTDEV_H__
> +#define __L3FWD_EVENTDEV_H__
> +
> +#include <rte_common.h>
> +#include <rte_eventdev.h>
> +#include <rte_spinlock.h>
> +
> +#include "l3fwd.h"
> +
> +#define CMD_LINE_OPT_MODE "mode"
> +#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
> +
> +enum {
> +	CMD_LINE_OPT_MODE_NUM = 265,
> +	CMD_LINE_OPT_EVENTQ_SYNC_NUM,
> +};
> +
> +struct l3fwd_event_resources {
> +	uint8_t sched_type;
> +	uint8_t enabled;
> +	uint8_t nb_args;
> +	char **args;
> +};
> +
> +static inline struct l3fwd_event_resources *
> +l3fwd_get_eventdev_rsrc(void)
> +{
> +	static const char name[RTE_MEMZONE_NAMESIZE] = "l3fwd_event_rsrc";
> +	const struct rte_memzone *mz;
> +
> +	mz = rte_memzone_lookup(name);
> +
> +	if (mz != NULL)
> +		return mz->addr;
> +
> +	mz = rte_memzone_reserve(name, sizeof(struct l3fwd_event_resources),
> +				 0, 0);
> +	if (mz != NULL) {
> +		memset(mz->addr, 0, sizeof(struct l3fwd_event_resources));
> +		return mz->addr;
> +	}
> +
> +	rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
> +
> +	return NULL;
> +}

Does this function really need to be inline?
It wouldn't be fast anyway.
Another question - do you really need a memzone here?
Wouldn't just rte_malloc() be enough?
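
e.g. a rough sketch of the rte_malloc() variant, assuming the structure only
needs to be visible within the primary process:

static struct l3fwd_event_resources *
l3fwd_get_eventdev_rsrc(void)
{
	/* Allocate once and cache the pointer instead of doing a
	 * memzone lookup on every call.
	 */
	static struct l3fwd_event_resources *rsrc;

	if (rsrc != NULL)
		return rsrc;

	rsrc = rte_zmalloc("l3fwd_event_rsrc", sizeof(*rsrc), 0);
	if (rsrc == NULL)
		rte_exit(EXIT_FAILURE,
			 "Unable to allocate memory for eventdev cfg\n");

	return rsrc;
}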

> +
> +void l3fwd_event_resource_setup(void);
> +
> +#endif /* __L3FWD_EVENTDEV_H__ */
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 4dea12a65..19ca4483c 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -13,12 +13,12 @@
>  #include <errno.h>
>  #include <getopt.h>
>  #include <signal.h>
> -#include <stdbool.h>
> 
>  #include <rte_common.h>
>  #include <rte_vect.h>
>  #include <rte_byteorder.h>
>  #include <rte_log.h>
> +#include <rte_malloc.h>
>  #include <rte_memory.h>
>  #include <rte_memcpy.h>
>  #include <rte_eal.h>
> @@ -33,7 +33,6 @@
>  #include <rte_random.h>
>  #include <rte_debug.h>
>  #include <rte_ether.h>
> -#include <rte_ethdev.h>
>  #include <rte_mempool.h>
>  #include <rte_mbuf.h>
>  #include <rte_ip.h>
> @@ -46,6 +45,7 @@
>  #include <cmdline_parse_etheraddr.h>
> 
>  #include "l3fwd.h"
> +#include "l3fwd_event.h"
> 
>  /*
>   * Configurable number of RX/TX ring descriptors
> @@ -274,7 +274,7 @@ init_lcore_rx_queues(void)
>  }
> 
>  /* display usage */
> -static void
> +void
>  print_usage(const char *prgname)
>  {
>  	fprintf(stderr, "%s [EAL options] --"
> @@ -289,7 +289,9 @@ print_usage(const char *prgname)
>  		" [--hash-entry-num]"
>  		" [--ipv6]"
>  		" [--parse-ptype]"
> -		" [--per-port-pool]\n\n"
> +		" [--per-port-pool]"
> +		" [--mode]"
> +		" [--eventq-sched]\n\n"
> 
>  		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
>  		"  -P : Enable promiscuous mode\n"
> @@ -304,7 +306,13 @@ print_usage(const char *prgname)
>  		"  --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
>  		"  --ipv6: Set if running ipv6 packets\n"
>  		"  --parse-ptype: Set to use software to analyze packet type\n"
> -		"  --per-port-pool: Use separate buffer pool per port\n\n",
> +		"  --per-port-pool: Use separate buffer pool per port\n"
> +		"  --mode: Packet transfer mode for I/O, poll or eventdev\n"
> +		"          Default mode = poll\n"
> +		"  --eventq-sched: Event queue synchronization method "
> +		"                  ordered, atomic or parallel.\n\t\t"
> +		"		   Default: atomic\n\t\t"
> +		"                  Valid only if --mode=eventdev\n\n",
>  		prgname);
>  }
> 
> @@ -504,11 +512,19 @@ static const struct option lgopts[] = {
>  static int
>  parse_args(int argc, char **argv)
>  {
> +	struct l3fwd_event_resources *evt_rsrc;
>  	int opt, ret;
>  	char **argvopt;
>  	int option_index;
>  	char *prgname = argv[0];
> 
> +	evt_rsrc = l3fwd_get_eventdev_rsrc();
> +	evt_rsrc->args = rte_zmalloc("l3fwd_event_args", sizeof(char *), 0);
> +	if (evt_rsrc->args == NULL)
> +		rte_exit(EXIT_FAILURE,
> +				"Unable to allocate memory for eventdev arg");
> +	evt_rsrc->args[0] = argv[0];
> +	evt_rsrc->nb_args++;
>  	argvopt = argv;
> 
>  	/* Error or normal output strings. */
> @@ -538,6 +554,15 @@ parse_args(int argc, char **argv)
>  			l3fwd_lpm_on = 1;
>  			break;
> 
> +		case '?':
> +			/* Eventdev options are encountered skip for
> +			 * now and processed later.
> +			 */
> +			evt_rsrc->args[evt_rsrc->nb_args] =
> +				argv[optind - 1];
> +			evt_rsrc->nb_args++;
> +			break;
> +

All this construction - first allocating space for the eventdev-specific params,
then copying them and parsing them in a special function - looks like overkill to me.
Why not just call the parse_mode()/parse_eventq.. functions straight from here?
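
i.e. register the two long options in lgopts and handle them in the existing
switch (sketch):

		/* Sketch: no second parsing pass over argv needed. */
		case CMD_LINE_OPT_MODE_NUM:
			parse_mode(optarg);
			break;

		case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
			parse_eventq_sync(optarg);
			break;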


>  		/* long options */
>  		case CMD_LINE_OPT_CONFIG_NUM:
>  			ret = parse_config(optarg);
> @@ -811,6 +836,7 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
>  int
>  main(int argc, char **argv)
>  {
> +	struct l3fwd_event_resources *evt_rsrc;
>  	struct lcore_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
>  	struct rte_eth_txconf *txconf;
> @@ -839,11 +865,16 @@ main(int argc, char **argv)
>  		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
>  	}
> 
> +	evt_rsrc = l3fwd_get_eventdev_rsrc();
> +	RTE_SET_USED(evt_rsrc);
>  	/* parse application arguments (after the EAL ones) */
>  	ret = parse_args(argc, argv);
>  	if (ret < 0)
>  		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
> 
> +	/* Configure eventdev parameters if user has requested */
> +	l3fwd_event_resource_setup();
> +
>  	if (check_lcore_params() < 0)
>  		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
> 
> diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
> index 6dd4b9022..864327c7b 100644
> --- a/examples/l3fwd/meson.build
> +++ b/examples/l3fwd/meson.build
> @@ -6,7 +6,7 @@
>  # To build this example as a standalone application with an already-installed
>  # DPDK instance, use 'make'
> 
> -deps += ['hash', 'lpm']
> +deps += ['hash', 'lpm', 'eventdev']
>  sources = files(
> -	'l3fwd_em.c', 'l3fwd_lpm.c', 'main.c'
> +	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
>  )
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/11] examples/l3fwd: add service core setup based on caps
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
@ 2020-01-03 13:07   ` Ananyev, Konstantin
  2020-01-06  4:29     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-03 13:07 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Kovacevic, Marko, Ori Kam, Richardson,
	Bruce, Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz,
	Sunil Kumar Kori
  Cc: dev

> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 20df12748..69d212bc2 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -826,6 +826,93 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
>  	return 0;
>  }
> 
> +static inline int
> +l3fwd_service_enable(uint32_t service_id)
> +{
> +	uint8_t min_service_count = UINT8_MAX;
> +	uint32_t slcore_array[RTE_MAX_LCORE];
> +	unsigned int slcore = 0;
> +	uint8_t service_count;
> +	int32_t slcore_count;
> +
> +	if (!rte_service_lcore_count())
> +		return -ENOENT;
> +
> +	slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
> +	if (slcore_count < 0)
> +		return -ENOENT;
> +	/* Get the core which has least number of services running. */
> +	while (slcore_count--) {
> +		/* Reset default mapping */
> +		rte_service_map_lcore_set(service_id,
> +				slcore_array[slcore_count], 0);
> +		service_count = rte_service_lcore_count_services(
> +				slcore_array[slcore_count]);
> +		if (service_count < min_service_count) {
> +			slcore = slcore_array[slcore_count];
> +			min_service_count = service_count;
> +		}
> +	}
> +	if (rte_service_map_lcore_set(service_id, slcore, 1))
> +		return -ENOENT;
> +	rte_service_lcore_start(slcore);
> +
> +	return 0;
> +}
> +
> +static void
> +l3fwd_event_service_setup(void)
> +{
> +	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
> +	struct rte_event_dev_info evdev_info;
> +	uint32_t service_id, caps;
> +	int ret, i;
> +
> +	rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
> +	if (evdev_info.event_dev_cap  & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
> +		ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
> +				&service_id);
> +		if (ret != -ESRCH && ret != 0)
> +			rte_exit(EXIT_FAILURE,
> +				 "Error in starting eventdev service\n");
> +		l3fwd_service_enable(service_id);
> +	}
> +
> +	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
> +		ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
> +				evt_rsrc->rx_adptr.rx_adptr[i], &caps);
> +		if (ret < 0)
> +			rte_exit(EXIT_FAILURE,
> +				 "Failed to get Rx adapter[%d] caps\n",
> +				 evt_rsrc->rx_adptr.rx_adptr[i]);
> +		ret = rte_event_eth_rx_adapter_service_id_get(
> +				evt_rsrc->event_d_id,
> +				&service_id);
> +		if (ret != -ESRCH && ret != 0)
> +			rte_exit(EXIT_FAILURE,
> +				 "Error in starting Rx adapter[%d] service\n",
> +				 evt_rsrc->rx_adptr.rx_adptr[i]);
> +		l3fwd_service_enable(service_id);
> +	}
> +
> +	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
> +		ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
> +				evt_rsrc->tx_adptr.tx_adptr[i], &caps);
> +		if (ret < 0)
> +			rte_exit(EXIT_FAILURE,
> +				 "Failed to get Rx adapter[%d] caps\n",
> +				 evt_rsrc->tx_adptr.tx_adptr[i]);
> +		ret = rte_event_eth_tx_adapter_service_id_get(
> +				evt_rsrc->event_d_id,
> +				&service_id);
> +		if (ret != -ESRCH && ret != 0)
> +			rte_exit(EXIT_FAILURE,
> +				 "Error in starting Rx adapter[%d] service\n",
> +				 evt_rsrc->tx_adptr.tx_adptr[i]);
> +		l3fwd_service_enable(service_id);
> +	}
> +}
> +
>  int
>  main(int argc, char **argv)
>  {
> @@ -869,6 +956,8 @@ main(int argc, char **argv)
>  	evt_rsrc->port_mask = enabled_port_mask;
>  	/* Configure eventdev parameters if user has requested */
>  	l3fwd_event_resource_setup(&port_conf);
> +	if (evt_rsrc->enabled)
> +		goto skip_port_config;

Please try to avoid introducing more gotos.
If the init code below is poll mode specific, let's move it into a separate function.
Then we can have something like:
if (evt_rsrc->enabled) {
	...
	ret = l3fwd_event_resource_setup(...);
	l3fwd_event_service_setup();
	...
} else {
	ret = l3fwd_poll_resource_setup(...);
}

...

> 
>  	if (check_lcore_params() < 0)
>  		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
> @@ -1054,6 +1143,7 @@ main(int argc, char **argv)
>  		}
>  	}
> 
> +skip_port_config:
>  	printf("\n");
> 
>  	/* start ports */
> @@ -1083,6 +1173,9 @@ main(int argc, char **argv)
>  		}
>  	}
> 
> +	if (evt_rsrc->enabled)
> +		l3fwd_event_service_setup();
> +
>  	printf("\n");
> 
>  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 08/11] examples/l3fwd: add event lpm main loop
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
@ 2020-01-03 13:16   ` Ananyev, Konstantin
  2020-01-06  4:43     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-03 13:16 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Kovacevic, Marko, Ori Kam, Richardson,
	Bruce, Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz,
	Sunil Kumar Kori
  Cc: dev

> Add lpm main loop for handling events based on capabilities of the
> event device.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  examples/l3fwd/l3fwd.h       |   9 ++
>  examples/l3fwd/l3fwd_event.c |   9 ++
>  examples/l3fwd/l3fwd_event.h |   5 +
>  examples/l3fwd/l3fwd_lpm.c   | 231 +++++++++++++++++++++++++++++++++++
>  examples/l3fwd/main.c        |  10 +-
>  5 files changed, 260 insertions(+), 4 deletions(-)
> 
> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
> index 6d16cde74..8f2e4be23 100644
> --- a/examples/l3fwd/l3fwd.h
> +++ b/examples/l3fwd/l3fwd.h
> @@ -212,6 +212,15 @@ em_main_loop(__attribute__((unused)) void *dummy);
>  int
>  lpm_main_loop(__attribute__((unused)) void *dummy);
> 
> +int
> +lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
> +int
> +lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
> +int
> +lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
> +int
> +lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);

No need to add the unused attribute in a function declaration.
BTW, if none of the event_loop_cb functions use the parameter, why not just
make the type 'typedef int (*event_loop_cb)(void)'?
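
i.e. (sketch):

typedef int (*event_loop_cb)(void);

int lpm_event_main_loop_tx_d(void);
int lpm_event_main_loop_tx_d_burst(void);
int lpm_event_main_loop_tx_q(void);
int lpm_event_main_loop_tx_q_burst(void);

/* Note: if these are still launched through rte_eal_mp_remote_launch(),
 * which expects int (*)(void *), a thin wrapper taking the unused
 * void * argument would still be needed somewhere.
 */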

> +
>  /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
>  void *
>  em_get_ipv4_l3fwd_lookup_struct(const int socketid);
> diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
> index 0e796f003..c7de046e3 100644
> --- a/examples/l3fwd/l3fwd_event.c
> +++ b/examples/l3fwd/l3fwd_event.c
> @@ -235,6 +235,12 @@ void
>  l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
>  {
>  	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
> +	const event_loop_cb lpm_event_loop[2][2] = {
> +		[0][0] = lpm_event_main_loop_tx_d,
> +		[0][1] = lpm_event_main_loop_tx_d_burst,
> +		[1][0] = lpm_event_main_loop_tx_q,
> +		[1][1] = lpm_event_main_loop_tx_q_burst,
> +	};
>  	uint32_t event_queue_cfg;
>  	int ret;
> 
> @@ -268,4 +274,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
>  	ret = rte_event_dev_start(evt_rsrc->event_d_id);
>  	if (ret < 0)
>  		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
> +
> +	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
> +						       [evt_rsrc->has_burst];
>  }
> diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
> index 9d8bd5a36..fcc0ce51a 100644
> --- a/examples/l3fwd/l3fwd_event.h
> +++ b/examples/l3fwd/l3fwd_event.h
> @@ -14,6 +14,11 @@
> 
>  #include "l3fwd.h"
> 
> +#define L3FWD_EVENT_SINGLE     0x1
> +#define L3FWD_EVENT_BURST      0x2
> +#define L3FWD_EVENT_TX_DIRECT  0x4
> +#define L3FWD_EVENT_TX_ENQ     0x8
> +
>  #define CMD_LINE_OPT_MODE "mode"
>  #define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
> 
> diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
> index 349de2703..c4669d6d5 100644
> --- a/examples/l3fwd/l3fwd_lpm.c
> +++ b/examples/l3fwd/l3fwd_lpm.c
> @@ -28,6 +28,7 @@
>  #include <rte_lpm6.h>
> 
>  #include "l3fwd.h"
> +#include "l3fwd_event.h"
> 
>  struct ipv4_l3fwd_lpm_route {
>  	uint32_t ip;
> @@ -254,6 +255,236 @@ lpm_main_loop(__attribute__((unused)) void *dummy)
>  	return 0;
>  }
> 
> +static __rte_always_inline void
> +lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
> +		const uint8_t flags)
> +{
> +	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
> +	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
> +		evt_rsrc->evq.nb_queues - 1];
> +	const uint8_t event_d_id = evt_rsrc->event_d_id;
> +	struct lcore_conf *lconf;
> +	unsigned int lcore_id;
> +	struct rte_event ev;
> +
> +	if (event_p_id < 0)
> +		return;
> +
> +	lcore_id = rte_lcore_id();
> +	lconf = &lcore_conf[lcore_id];
> +
> +	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
> +	while (!force_quit) {
> +		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
> +			continue;
> +
> +		struct rte_mbuf *mbuf = ev.mbuf;
> +		mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);
> +
> +#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
> +				|| defined RTE_ARCH_PPC_64
> +		process_packet(mbuf, &mbuf->port);
> +#else
> +
> +		struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
> +							struct rte_ether_hdr *);
> +#ifdef DO_RFC_1812_CHECKS
> +		struct rte_ipv4_hdr *ipv4_hdr;
> +		if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
> +			/* Handle IPv4 headers.*/
> +			ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
> +					struct rte_ipv4_hdr *,
> +					sizeof(struct rte_ether_hdr));
> +
> +			if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
> +					< 0) {
> +				mbuf->port = BAD_PORT;
> +				continue;
> +			}
> +			/* Update time to live and header checksum */
> +			--(ipv4_hdr->time_to_live);
> +			++(ipv4_hdr->hdr_checksum);
> +		}
> +#endif
> +		/* dst addr */
> +		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
> +
> +		/* src addr */
> +		rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
> +				&eth_hdr->s_addr);
> +#endif

The code snippet above looks pretty much the same as l3fwd_lpm_simple_forward(),
except the IPv6 part is missing.
Same for the loop_burst() below.
Instead of duplicating the code in a dozen places, can we put this common code
into a function and use it?
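
e.g. a rough sketch, based on the quoted code above (IPv6 and the RFC1812
checks elided here):

static __rte_always_inline void
lpm_process_event_pkt(struct lcore_conf *lconf, struct rte_mbuf *mbuf)
{
	mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);

#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
	|| defined RTE_ARCH_PPC_64
	process_packet(mbuf, &mbuf->port);
#else
	struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
						struct rte_ether_hdr *);

	/* dst addr */
	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];

	/* src addr */
	rte_ether_addr_copy(&ports_eth_addr[mbuf->port], &eth_hdr->s_addr);
#endif
}

Both the single and burst loops (and later the em variants) could then call
this helper per mbuf.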



> +		if (mbuf->port == BAD_PORT) {
> +			rte_pktmbuf_free(mbuf);
> +			continue;
> +		}
> +
> +		if (flags & L3FWD_EVENT_TX_ENQ) {
> +			ev.queue_id = tx_q_id;
> +			ev.op = RTE_EVENT_OP_FORWARD;
> +			while (rte_event_enqueue_burst(event_d_id, event_p_id,
> +						&ev, 1) && !force_quit)
> +				;
> +		}
> +
> +		if (flags & L3FWD_EVENT_TX_DIRECT) {
> +			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> +			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
> +						event_p_id, &ev, 1, 0) &&
> +					!force_quit)
> +				;
> +		}
> +	}
> +}
> +
> +static __rte_always_inline void
> +lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
> +		const uint8_t flags)
> +{
> +	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
> +	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
> +		evt_rsrc->evq.nb_queues - 1];
> +	const uint8_t event_d_id = evt_rsrc->event_d_id;
> +	const uint16_t deq_len = evt_rsrc->deq_depth;
> +	struct rte_event events[MAX_PKT_BURST];
> +	struct lcore_conf *lconf;
> +	unsigned int lcore_id;
> +	int i, nb_enq, nb_deq;
> +
> +	if (event_p_id < 0)
> +		return;
> +
> +	lcore_id = rte_lcore_id();
> +
> +	lconf = &lcore_conf[lcore_id];
> +
> +	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
> +
> +	while (!force_quit) {
> +		/* Read events from RX queues */
> +		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
> +				events, deq_len, 0);
> +		if (nb_deq == 0) {
> +			rte_pause();
> +			continue;
> +		}
> +
> +		for (i = 0; i < nb_deq; i++) {
> +			struct rte_mbuf *mbuf = events[i].mbuf;
> +
> +			mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);
> +
> +#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
> +					|| defined RTE_ARCH_PPC_64
> +			process_packet(mbuf, &mbuf->port);
> +#else
> +			struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
> +						struct rte_ether_hdr *);
> +
> +#ifdef DO_RFC_1812_CHECKS
> +			struct rte_ipv4_hdr *ipv4_hdr;
> +			if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
> +				/* Handle IPv4 headers.*/
> +				ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
> +						struct rte_ipv4_hdr *,
> +						sizeof(struct rte_ether_hdr));
> +
> +				if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
> +						< 0) {
> +					mbuf->port = BAD_PORT;
> +					continue;
> +				}
> +				/* Update time to live and header checksum */
> +				--(ipv4_hdr->time_to_live);
> +				++(ipv4_hdr->hdr_checksum);
> +			}
> +#endif
> +			/* dst addr */
> +			*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[
> +								mbuf->port];
> +			/* src addr */
> +			rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
> +					&eth_hdr->s_addr);
> +#endif
> +			if (flags & L3FWD_EVENT_TX_ENQ) {
> +				events[i].queue_id = tx_q_id;
> +				events[i].op = RTE_EVENT_OP_FORWARD;
> +			}
> +
> +			if (flags & L3FWD_EVENT_TX_DIRECT)
> +				rte_event_eth_tx_adapter_txq_set(mbuf, 0);
> +		}
> +
> +		if (flags & L3FWD_EVENT_TX_ENQ) {
> +			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
> +					events, nb_deq);
> +			while (nb_enq < nb_deq && !force_quit)
> +				nb_enq += rte_event_enqueue_burst(event_d_id,
> +						event_p_id, events + nb_enq,
> +						nb_deq - nb_enq);
> +		}
> +
> +		if (flags & L3FWD_EVENT_TX_DIRECT) {
> +			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
> +					event_p_id, events, nb_deq, 0);
> +			while (nb_enq < nb_deq && !force_quit)
> +				nb_enq += rte_event_eth_tx_adapter_enqueue(
> +						event_d_id, event_p_id,
> +						events + nb_enq,
> +						nb_deq - nb_enq, 0);
> +		}
> +	}
> +}
> +
> +static __rte_always_inline void
> +lpm_event_loop(struct l3fwd_event_resources *evt_rsrc,
> +		 const uint8_t flags)
> +{
> +	if (flags & L3FWD_EVENT_SINGLE)
> +		lpm_event_loop_single(evt_rsrc, flags);
> +	if (flags & L3FWD_EVENT_BURST)
> +		lpm_event_loop_burst(evt_rsrc, flags);
> +}
> +
> +int __rte_noinline
> +lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
> +{
> +	struct l3fwd_event_resources *evt_rsrc =
> +					l3fwd_get_eventdev_rsrc();
> +
> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
> +	return 0;
> +}
> +
> +int __rte_noinline
> +lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
> +{
> +	struct l3fwd_event_resources *evt_rsrc =
> +					l3fwd_get_eventdev_rsrc();
> +
> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
> +	return 0;
> +}
> +
> +int __rte_noinline
> +lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
> +{
> +	struct l3fwd_event_resources *evt_rsrc =
> +					l3fwd_get_eventdev_rsrc();
> +
> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
> +	return 0;
> +}
> +
> +int __rte_noinline
> +lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
> +{
> +	struct l3fwd_event_resources *evt_rsrc =
> +					l3fwd_get_eventdev_rsrc();
> +
> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
> +	return 0;
> +}
> +
>  void
>  setup_lpm(const int socketid)
>  {
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 69d212bc2..b88fd88db 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -951,13 +951,18 @@ main(int argc, char **argv)
>  	if (ret < 0)
>  		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
> 
> +	/* Setup function pointers for lookup method. */
> +	setup_l3fwd_lookup_tables();
> +
>  	evt_rsrc->per_port_pool = per_port_pool;
>  	evt_rsrc->pkt_pool = pktmbuf_pool;
>  	evt_rsrc->port_mask = enabled_port_mask;
>  	/* Configure eventdev parameters if user has requested */
>  	l3fwd_event_resource_setup(&port_conf);
> -	if (evt_rsrc->enabled)
> +	if (evt_rsrc->enabled) {
> +		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
>  		goto skip_port_config;
> +	}
> 
>  	if (check_lcore_params() < 0)
>  		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
> @@ -973,9 +978,6 @@ main(int argc, char **argv)
> 
>  	nb_lcores = rte_lcore_count();
> 
> -	/* Setup function pointers for lookup method. */
> -	setup_l3fwd_lookup_tables();
> -
>  	/* initialize all ports */
>  	RTE_ETH_FOREACH_DEV(portid) {
>  		struct rte_eth_conf local_port_conf = port_conf;
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em main loop
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em " pbhagavatula
@ 2020-01-03 13:45   ` Ananyev, Konstantin
  2020-01-06  4:44     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-03 13:45 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Kovacevic, Marko, Ori Kam, Richardson,
	Bruce, Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz,
	Sunil Kumar Kori
  Cc: dev



> Add em main loop for handling events based on capabilities of the
> event device.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  examples/l3fwd/l3fwd.h               |  10 ++
>  examples/l3fwd/l3fwd_em.c            | 177 +++++++++++++++++++++++++++
>  examples/l3fwd/l3fwd_em.h            | 159 +++++++++++++++++-------
>  examples/l3fwd/l3fwd_em_hlm.h        | 131 ++++++++++++++++++++
>  examples/l3fwd/l3fwd_em_sequential.h |  26 ++++
>  examples/l3fwd/l3fwd_event.c         |   9 ++
>  examples/l3fwd/main.c                |   5 +-
>  7 files changed, 470 insertions(+), 47 deletions(-)
> 
> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
> index 8f2e4be23..2d02fa731 100644
> --- a/examples/l3fwd/l3fwd.h
> +++ b/examples/l3fwd/l3fwd.h
> @@ -221,6 +221,16 @@ lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
>  int
>  lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
> 
> +int
> +em_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
> +int
> +em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
> +int
> +em_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
> +int
> +em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);

Same question as for lpm: if your functions don't need params,
why not define them without params?

> +
> +
>  /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
>  void *
>  em_get_ipv4_l3fwd_lookup_struct(const int socketid);

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
@ 2020-01-03 13:52   ` Ananyev, Konstantin
  2020-01-06  4:50     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-03 13:52 UTC (permalink / raw)
  To: pbhagavatula, jerinj, Kovacevic, Marko, Ori Kam, Richardson,
	Bruce, Nicolau, Radu, Akhil Goyal, Kantecki, Tomasz,
	Sunil Kumar Kori
  Cc: dev

> Add graceful teardown that addresses both event mode and poll mode.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  examples/l3fwd/main.c | 49 ++++++++++++++++++++++++++++++-------------
>  1 file changed, 34 insertions(+), 15 deletions(-)
> 
> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> index 0ae64dd41..68998f42c 100644
> --- a/examples/l3fwd/main.c
> +++ b/examples/l3fwd/main.c
> @@ -920,7 +920,7 @@ main(int argc, char **argv)
>  	struct lcore_conf *qconf;
>  	struct rte_eth_dev_info dev_info;
>  	struct rte_eth_txconf *txconf;
> -	int ret;
> +	int i, ret;
>  	unsigned nb_ports;
>  	uint16_t queueid, portid;
>  	unsigned lcore_id;
> @@ -1195,27 +1195,46 @@ main(int argc, char **argv)
>  		}
>  	}
> 
> -
>  	check_all_ports_link_status(enabled_port_mask);
> 
>  	ret = 0;
>  	/* launch per-lcore init on every lcore */
>  	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);
> -	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
> -		if (rte_eal_wait_lcore(lcore_id) < 0) {
> -			ret = -1;
> -			break;
> +	if (evt_rsrc->enabled) {
> +		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
> +			rte_event_eth_rx_adapter_stop(
> +					evt_rsrc->rx_adptr.rx_adptr[i]);
> +		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
> +			rte_event_eth_tx_adapter_stop(
> +					evt_rsrc->tx_adptr.tx_adptr[i]);
> +
> +		RTE_ETH_FOREACH_DEV(portid) {
> +			if ((enabled_port_mask & (1 << portid)) == 0)
> +				continue;
> +			rte_eth_dev_stop(portid);
>  		}
> -	}
> 
> -	/* stop ports */
> -	RTE_ETH_FOREACH_DEV(portid) {
> -		if ((enabled_port_mask & (1 << portid)) == 0)
> -			continue;
> -		printf("Closing port %d...", portid);
> -		rte_eth_dev_stop(portid);
> -		rte_eth_dev_close(portid);
> -		printf(" Done\n");

Why stop ports *before* making sure all lcores are stopped?
Shouldn't that piece of code be identical for both poll and event mode?
Something like:
rte_eal_mp_wait_lcore();

                RTE_ETH_FOREACH_DEV(portid) {
                        if ((enabled_port_mask & (1 << portid)) == 0)
                                continue;
                        rte_eth_dev_stop(portid);
                        rte_eth_dev_close(portid);
                }
?

> +		rte_eal_mp_wait_lcore();
> +		RTE_ETH_FOREACH_DEV(portid) {
> +			if ((enabled_port_mask & (1 << portid)) == 0)
> +				continue;
> +			rte_eth_dev_close(portid);
> +		}
> +
> +		rte_event_dev_stop(evt_rsrc->event_d_id);
> +		rte_event_dev_close(evt_rsrc->event_d_id);
> +
> +	} else {
> +		rte_eal_mp_wait_lcore();
> +
> +		RTE_ETH_FOREACH_DEV(portid) {
> +			if ((enabled_port_mask & (1 << portid)) == 0)
> +				continue;
> +			printf("Closing port %d...", portid);
> +			rte_eth_dev_stop(portid);
> +			rte_eth_dev_close(portid);
> +			printf(" Done\n");
> +		}
>  	}
>  	printf("Bye...\n");
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device
  2020-01-03 12:49   ` Ananyev, Konstantin
@ 2020-01-06  4:27     ` Pavan Nikhilesh Bhagavatula
  2020-01-06 12:32       ` Ananyev, Konstantin
  0 siblings, 1 reply; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-06  4:27 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob Kollanukkaran, Kovacevic, Marko,
	Ori Kam, Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki,
	Tomasz, Sunil Kumar Kori
  Cc: dev



>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Ananyev,
>Konstantin
>Sent: Friday, January 3, 2020 6:19 PM
>To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
>Jacob Kollanukkaran <jerinj@marvell.com>; Kovacevic, Marko
><marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>;
>Richardson, Bruce <bruce.richardson@intel.com>; Nicolau, Radu
><radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>;
>Kantecki, Tomasz <tomasz.kantecki@intel.com>; Sunil Kumar Kori
><skori@marvell.com>
>Cc: dev@dpdk.org
>Subject: Re: [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add
>framework for event device
>
>
>> Add framework to enable event device as a producer of packets.
>> To switch between event mode and poll mode the following options
>> have been added:
>> 	`--mode="eventdev"` or `--mode="poll"`
>> Also, allow the user to select the schedule type to be either
>> RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or
>RTE_SCHED_TYPE_PARALLEL
>> through:
>> 	`--eventq-sched="ordered"` or `--eventq-sched="atomic"` or
>> 		`--eventq-sched="parallel"`
>>
>> Poll mode is still the default operation mode.
>>
>> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
>> ---
>>  examples/l3fwd/Makefile      |  2 +-
>>  examples/l3fwd/l3fwd.h       |  6 +++
>>  examples/l3fwd/l3fwd_event.c | 75
>++++++++++++++++++++++++++++++++++++
>>  examples/l3fwd/l3fwd_event.h | 54
>++++++++++++++++++++++++++
>>  examples/l3fwd/main.c        | 41 +++++++++++++++++---
>>  examples/l3fwd/meson.build   |  4 +-
>>  6 files changed, 174 insertions(+), 8 deletions(-)
>>  create mode 100644 examples/l3fwd/l3fwd_event.c
>>  create mode 100644 examples/l3fwd/l3fwd_event.h
>>
>> diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
>> index b2dbf2607..c892b867b 100644
>> --- a/examples/l3fwd/Makefile
>> +++ b/examples/l3fwd/Makefile
>> @@ -5,7 +5,7 @@
>>  APP = l3fwd
>>
>>  # all source are stored in SRCS-y
>> -SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c
>> +SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
>>
>>  # Build using pkg-config variables if possible
>>  ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
>> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
>> index 293fb1fa2..cd17a41b3 100644
>> --- a/examples/l3fwd/l3fwd.h
>> +++ b/examples/l3fwd/l3fwd.h
>> @@ -5,6 +5,9 @@
>>  #ifndef __L3_FWD_H__
>>  #define __L3_FWD_H__
>>
>> +#include <stdbool.h>
>> +
>
>Why do we need it here?

Looks like an artifact; I will remove it in the next version.

>
>> +#include <rte_ethdev.h>
>>  #include <rte_vect.h>
>>
>>  #define DO_RFC_1812_CHECKS
>> @@ -169,6 +172,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt,
>uint32_t link_len)
>>  }
>>  #endif /* DO_RFC_1812_CHECKS */
>>
>> +void
>> +print_usage(const char *prgname);
>> +
>>  /* Function pointers for LPM or EM functionality. */
>>  void
>>  setup_lpm(const int socketid);
>> diff --git a/examples/l3fwd/l3fwd_event.c
>b/examples/l3fwd/l3fwd_event.c
>> new file mode 100644
>> index 000000000..3892720be
>> --- /dev/null
>> +++ b/examples/l3fwd/l3fwd_event.c
>> @@ -0,0 +1,75 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(C) 2019 Marvell International Ltd.
>> + */
>> +
>> +#include <stdbool.h>
>> +#include <getopt.h>
>> +
>> +#include "l3fwd.h"
>> +#include "l3fwd_event.h"
>> +
>> +static void
>> +parse_mode(const char *optarg)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>l3fwd_get_eventdev_rsrc();
>> +
>> +	if (!strncmp(optarg, "poll", 4))
>
>That looks a bit clumsy and error-prone.
>Just strcmp(optarg, "poll") seems enough here.
>Same for other similar places.

Will simplify in the next version.

>
>> +		evt_rsrc->enabled = false;
>> +	else if (!strncmp(optarg, "eventdev", 8))
>> +		evt_rsrc->enabled = true;
>> +}
>> +
>> +static void
>> +parse_eventq_sync(const char *optarg)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>l3fwd_get_eventdev_rsrc();
>> +
>> +	if (!strncmp(optarg, "ordered", 7))
>> +		evt_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
>> +	if (!strncmp(optarg, "atomic", 6))
>> +		evt_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
>> +	if (!strncmp(optarg, "parallel", 8))
>> +		evt_rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
>> +}
>> +
>> +static void
>> +l3fwd_parse_eventdev_args(char **argv, int argc)
>> +{
>> +	const struct option eventdev_lgopts[] = {
>> +		{CMD_LINE_OPT_MODE, 1, 0,
>CMD_LINE_OPT_MODE_NUM},
>> +		{CMD_LINE_OPT_EVENTQ_SYNC, 1, 0,
>CMD_LINE_OPT_EVENTQ_SYNC_NUM},
>> +		{NULL, 0, 0, 0}
>> +	};
>> +	char *prgname = argv[0];
>> +	char **argvopt = argv;
>> +	int32_t option_index;
>> +	int32_t opt;
>> +
>> +	while ((opt = getopt_long(argc, argvopt, "", eventdev_lgopts,
>> +					&option_index)) != EOF) {
>> +		switch (opt) {
>> +		case CMD_LINE_OPT_MODE_NUM:
>> +			parse_mode(optarg);
>> +			break;
>> +
>> +		case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
>> +			parse_eventq_sync(optarg);
>> +			break;
>> +
>> +		default:
>> +			print_usage(prgname);
>> +			exit(1);
>> +		}
>> +	}
>> +}
>> +
>> +void
>> +l3fwd_event_resource_setup(void)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>l3fwd_get_eventdev_rsrc();
>> +
>> +	/* Parse eventdev command line options */
>> +	l3fwd_parse_eventdev_args(evt_rsrc->args, evt_rsrc-
>>nb_args);
>> +	if (!evt_rsrc->enabled)
>> +		return;
>> +}
>> diff --git a/examples/l3fwd/l3fwd_event.h
>b/examples/l3fwd/l3fwd_event.h
>> new file mode 100644
>> index 000000000..c95296c38
>> --- /dev/null
>> +++ b/examples/l3fwd/l3fwd_event.h
>> @@ -0,0 +1,54 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(C) 2019 Marvell International Ltd.
>> + */
>> +
>> +#ifndef __L3FWD_EVENTDEV_H__
>> +#define __L3FWD_EVENTDEV_H__
>> +
>> +#include <rte_common.h>
>> +#include <rte_eventdev.h>
>> +#include <rte_spinlock.h>
>> +
>> +#include "l3fwd.h"
>> +
>> +#define CMD_LINE_OPT_MODE "mode"
>> +#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
>> +
>> +enum {
>> +	CMD_LINE_OPT_MODE_NUM = 265,
>> +	CMD_LINE_OPT_EVENTQ_SYNC_NUM,
>> +};
>> +
>> +struct l3fwd_event_resources {
>> +	uint8_t sched_type;
>> +	uint8_t enabled;
>> +	uint8_t nb_args;
>> +	char **args;
>> +};
>> +
>> +static inline struct l3fwd_event_resources *
>> +l3fwd_get_eventdev_rsrc(void)
>> +{
>> +	static const char name[RTE_MEMZONE_NAMESIZE] =
>"l3fwd_event_rsrc";
>> +	const struct rte_memzone *mz;
>> +
>> +	mz = rte_memzone_lookup(name);
>> +
>> +	if (mz != NULL)
>> +		return mz->addr;
>> +
>> +	mz = rte_memzone_reserve(name, sizeof(struct
>l3fwd_event_resources),
>> +				 0, 0);
>> +	if (mz != NULL) {
>> +		memset(mz->addr, 0, sizeof(struct
>l3fwd_event_resources));
>> +		return mz->addr;
>> +	}
>> +
>> +	rte_exit(EXIT_FAILURE, "Unable to allocate memory for
>eventdev cfg\n");
>> +
>> +	return NULL;
>> +}
>
>Does this function really need to be inline?
>It wouldn't be fast anyway.
>Another question - do you really need memzone here?
>Wouldn't just rte_malloc() be enough?

Will remove the inline in the next version.
rte_malloc would call for a global variable, which I'm
trying to avoid. I don't think there is any harm in using a
named memzone.

>
>> +
>> +void l3fwd_event_resource_setup(void);
>> +
>> +#endif /* __L3FWD_EVENTDEV_H__ */
>> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
>> index 4dea12a65..19ca4483c 100644
>> --- a/examples/l3fwd/main.c
>> +++ b/examples/l3fwd/main.c
>> @@ -13,12 +13,12 @@
>>  #include <errno.h>
>>  #include <getopt.h>
>>  #include <signal.h>
>> -#include <stdbool.h>
>>
>>  #include <rte_common.h>
>>  #include <rte_vect.h>
>>  #include <rte_byteorder.h>
>>  #include <rte_log.h>
>> +#include <rte_malloc.h>
>>  #include <rte_memory.h>
>>  #include <rte_memcpy.h>
>>  #include <rte_eal.h>
>> @@ -33,7 +33,6 @@
>>  #include <rte_random.h>
>>  #include <rte_debug.h>
>>  #include <rte_ether.h>
>> -#include <rte_ethdev.h>
>>  #include <rte_mempool.h>
>>  #include <rte_mbuf.h>
>>  #include <rte_ip.h>
>> @@ -46,6 +45,7 @@
>>  #include <cmdline_parse_etheraddr.h>
>>
>>  #include "l3fwd.h"
>> +#include "l3fwd_event.h"
>>
>>  /*
>>   * Configurable number of RX/TX ring descriptors
>> @@ -274,7 +274,7 @@ init_lcore_rx_queues(void)
>>  }
>>
>>  /* display usage */
>> -static void
>> +void
>>  print_usage(const char *prgname)
>>  {
>>  	fprintf(stderr, "%s [EAL options] --"
>> @@ -289,7 +289,9 @@ print_usage(const char *prgname)
>>  		" [--hash-entry-num]"
>>  		" [--ipv6]"
>>  		" [--parse-ptype]"
>> -		" [--per-port-pool]\n\n"
>> +		" [--per-port-pool]"
>> +		" [--mode]"
>> +		" [--eventq-sched]\n\n"
>>
>>  		"  -p PORTMASK: Hexadecimal bitmask of ports to
>configure\n"
>>  		"  -P : Enable promiscuous mode\n"
>> @@ -304,7 +306,13 @@ print_usage(const char *prgname)
>>  		"  --hash-entry-num: Specify the hash entry number in
>hexadecimal to be setup\n"
>>  		"  --ipv6: Set if running ipv6 packets\n"
>>  		"  --parse-ptype: Set to use software to analyze packet
>type\n"
>> -		"  --per-port-pool: Use separate buffer pool per
>port\n\n",
>> +		"  --per-port-pool: Use separate buffer pool per port\n"
>> +		"  --mode: Packet transfer mode for I/O, poll or
>eventdev\n"
>> +		"          Default mode = poll\n"
>> +		"  --eventq-sched: Event queue synchronization
>method "
>> +		"                  ordered, atomic or parallel.\n\t\t"
>> +		"		   Default: atomic\n\t\t"
>> +		"                  Valid only if --mode=eventdev\n\n",
>>  		prgname);
>>  }
>>
>> @@ -504,11 +512,19 @@ static const struct option lgopts[] = {
>>  static int
>>  parse_args(int argc, char **argv)
>>  {
>> +	struct l3fwd_event_resources *evt_rsrc;
>>  	int opt, ret;
>>  	char **argvopt;
>>  	int option_index;
>>  	char *prgname = argv[0];
>>
>> +	evt_rsrc = l3fwd_get_eventdev_rsrc();
>> +	evt_rsrc->args = rte_zmalloc("l3fwd_event_args", sizeof(char
>*), 0);
>> +	if (evt_rsrc->args == NULL)
>> +		rte_exit(EXIT_FAILURE,
>> +				"Unable to allocate memory for
>eventdev arg");
>> +	evt_rsrc->args[0] = argv[0];
>> +	evt_rsrc->nb_args++;
>>  	argvopt = argv;
>>
>>  	/* Error or normal output strings. */
>> @@ -538,6 +554,15 @@ parse_args(int argc, char **argv)
>>  			l3fwd_lpm_on = 1;
>>  			break;
>>
>> +		case '?':
>> +			/* Eventdev options are encountered skip for
>> +			 * now and processed later.
>> +			 */
>> +			evt_rsrc->args[evt_rsrc->nb_args] =
>> +				argv[optind - 1];
>> +			evt_rsrc->nb_args++;
>> +			break;
>> +
>
>All this construction with first allocating space for eventdev specific
>params
>copying them and parsing in a special function - looks like an overkill to
>me.
>Why not just call parse_mode()/parse_eventq.. functions straight
>from here?

We were trying to make minimal changes to l3fwd, but I guess we could
have common cmdline parse functions.

>
>
>>  		/* long options */
>>  		case CMD_LINE_OPT_CONFIG_NUM:
>>  			ret = parse_config(optarg);
>> @@ -811,6 +836,7 @@ prepare_ptype_parser(uint16_t portid,
>uint16_t queueid)
>>  int
>>  main(int argc, char **argv)
>>  {
>> +	struct l3fwd_event_resources *evt_rsrc;
>>  	struct lcore_conf *qconf;
>>  	struct rte_eth_dev_info dev_info;
>>  	struct rte_eth_txconf *txconf;
>> @@ -839,11 +865,16 @@ main(int argc, char **argv)
>>  		*(uint64_t *)(val_eth + portid) =
>dest_eth_addr[portid];
>>  	}
>>
>> +	evt_rsrc = l3fwd_get_eventdev_rsrc();
>> +	RTE_SET_USED(evt_rsrc);
>>  	/* parse application arguments (after the EAL ones) */
>>  	ret = parse_args(argc, argv);
>>  	if (ret < 0)
>>  		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
>>
>> +	/* Configure eventdev parameters if user has requested */
>> +	l3fwd_event_resource_setup();
>> +
>>  	if (check_lcore_params() < 0)
>>  		rte_exit(EXIT_FAILURE, "check_lcore_params
>failed\n");
>>
>> diff --git a/examples/l3fwd/meson.build
>b/examples/l3fwd/meson.build
>> index 6dd4b9022..864327c7b 100644
>> --- a/examples/l3fwd/meson.build
>> +++ b/examples/l3fwd/meson.build
>> @@ -6,7 +6,7 @@
>>  # To build this example as a standalone application with an already-
>installed
>>  # DPDK instance, use 'make'
>>
>> -deps += ['hash', 'lpm']
>> +deps += ['hash', 'lpm', 'eventdev']
>>  sources = files(
>> -	'l3fwd_em.c', 'l3fwd_lpm.c', 'main.c'
>> +	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
>>  )
>> --
>> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 07/11] examples/l3fwd: add service core setup based on caps
  2020-01-03 13:07   ` Ananyev, Konstantin
@ 2020-01-06  4:29     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-06  4:29 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob Kollanukkaran, Kovacevic, Marko,
	Ori Kam, Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki,
	Tomasz, Sunil Kumar Kori
  Cc: dev


>> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
>> index 20df12748..69d212bc2 100644
>> --- a/examples/l3fwd/main.c
>> +++ b/examples/l3fwd/main.c
>> @@ -826,6 +826,93 @@ prepare_ptype_parser(uint16_t portid,
>uint16_t queueid)
>>  	return 0;
>>  }
>>
>> +static inline int
>> +l3fwd_service_enable(uint32_t service_id)
>> +{
>> +	uint8_t min_service_count = UINT8_MAX;
>> +	uint32_t slcore_array[RTE_MAX_LCORE];
>> +	unsigned int slcore = 0;
>> +	uint8_t service_count;
>> +	int32_t slcore_count;
>> +
>> +	if (!rte_service_lcore_count())
>> +		return -ENOENT;
>> +
>> +	slcore_count = rte_service_lcore_list(slcore_array,
>RTE_MAX_LCORE);
>> +	if (slcore_count < 0)
>> +		return -ENOENT;
>> +	/* Get the core which has least number of services running. */
>> +	while (slcore_count--) {
>> +		/* Reset default mapping */
>> +		rte_service_map_lcore_set(service_id,
>> +				slcore_array[slcore_count], 0);
>> +		service_count = rte_service_lcore_count_services(
>> +				slcore_array[slcore_count]);
>> +		if (service_count < min_service_count) {
>> +			slcore = slcore_array[slcore_count];
>> +			min_service_count = service_count;
>> +		}
>> +	}
>> +	if (rte_service_map_lcore_set(service_id, slcore, 1))
>> +		return -ENOENT;
>> +	rte_service_lcore_start(slcore);
>> +
>> +	return 0;
>> +}
>> +
>> +static void
>> +l3fwd_event_service_setup(void)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>l3fwd_get_eventdev_rsrc();
>> +	struct rte_event_dev_info evdev_info;
>> +	uint32_t service_id, caps;
>> +	int ret, i;
>> +
>> +	rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
>> +	if (evdev_info.event_dev_cap  &
>RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
>> +		ret = rte_event_dev_service_id_get(evt_rsrc-
>>event_d_id,
>> +				&service_id);
>> +		if (ret != -ESRCH && ret != 0)
>> +			rte_exit(EXIT_FAILURE,
>> +				 "Error in starting eventdev service\n");
>> +		l3fwd_service_enable(service_id);
>> +	}
>> +
>> +	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
>> +		ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc-
>>event_d_id,
>> +				evt_rsrc->rx_adptr.rx_adptr[i], &caps);
>> +		if (ret < 0)
>> +			rte_exit(EXIT_FAILURE,
>> +				 "Failed to get Rx adapter[%d] caps\n",
>> +				 evt_rsrc->rx_adptr.rx_adptr[i]);
>> +		ret = rte_event_eth_rx_adapter_service_id_get(
>> +				evt_rsrc->event_d_id,
>> +				&service_id);
>> +		if (ret != -ESRCH && ret != 0)
>> +			rte_exit(EXIT_FAILURE,
>> +				 "Error in starting Rx adapter[%d]
>service\n",
>> +				 evt_rsrc->rx_adptr.rx_adptr[i]);
>> +		l3fwd_service_enable(service_id);
>> +	}
>> +
>> +	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
>> +		ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc-
>>event_d_id,
>> +				evt_rsrc->tx_adptr.tx_adptr[i], &caps);
>> +		if (ret < 0)
>> +			rte_exit(EXIT_FAILURE,
>> +				 "Failed to get Rx adapter[%d] caps\n",
>> +				 evt_rsrc->tx_adptr.tx_adptr[i]);
>> +		ret = rte_event_eth_tx_adapter_service_id_get(
>> +				evt_rsrc->event_d_id,
>> +				&service_id);
>> +		if (ret != -ESRCH && ret != 0)
>> +			rte_exit(EXIT_FAILURE,
>> +				 "Error in starting Rx adapter[%d]
>service\n",
>> +				 evt_rsrc->tx_adptr.tx_adptr[i]);
>> +		l3fwd_service_enable(service_id);
>> +	}
>> +}
>> +
>>  int
>>  main(int argc, char **argv)
>>  {
>> @@ -869,6 +956,8 @@ main(int argc, char **argv)
>>  	evt_rsrc->port_mask = enabled_port_mask;
>>  	/* Configure eventdev parameters if user has requested */
>>  	l3fwd_event_resource_setup(&port_conf);
>> +	if (evt_rsrc->enabled)
>> +		goto skip_port_config;
>
>Please try to avoid introducing more gotos.
>If init code below is poll mode specific, let's move it into a separate
>function.
>Then we can have something like:
>If (evt_rsrc->enabled)  {
>  ...
>  ret =  l3fwd_event_resource_setup(...);
>  l3fwd_event_service_setup();
> ...
>} else
>   ret = l3fwd_poll_resource_setup(...);
>
>...
>

Sure, if we are OK with changing/moving the existing l3fwd code, it makes things a lot
easier.

>>
>>  	if (check_lcore_params() < 0)
>>  		rte_exit(EXIT_FAILURE, "check_lcore_params
>failed\n");
>> @@ -1054,6 +1143,7 @@ main(int argc, char **argv)
>>  		}
>>  	}
>>
>> +skip_port_config:
>>  	printf("\n");
>>
>>  	/* start ports */
>> @@ -1083,6 +1173,9 @@ main(int argc, char **argv)
>>  		}
>>  	}
>>
>> +	if (evt_rsrc->enabled)
>> +		l3fwd_event_service_setup();
>> +
>>  	printf("\n");
>>
>>  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>> --
>> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 08/11] examples/l3fwd: add event lpm main loop
  2020-01-03 13:16   ` Ananyev, Konstantin
@ 2020-01-06  4:43     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-06  4:43 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob Kollanukkaran, Kovacevic, Marko,
	Ori Kam, Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki,
	Tomasz, Sunil Kumar Kori
  Cc: dev



>> Add lpm main loop for handling events based on capabilities of the
>> event device.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>>  examples/l3fwd/l3fwd.h       |   9 ++
>>  examples/l3fwd/l3fwd_event.c |   9 ++
>>  examples/l3fwd/l3fwd_event.h |   5 +
>>  examples/l3fwd/l3fwd_lpm.c   | 231
>+++++++++++++++++++++++++++++++++++
>>  examples/l3fwd/main.c        |  10 +-
>>  5 files changed, 260 insertions(+), 4 deletions(-)
>>
>> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
>> index 6d16cde74..8f2e4be23 100644
>> --- a/examples/l3fwd/l3fwd.h
>> +++ b/examples/l3fwd/l3fwd.h
>> @@ -212,6 +212,15 @@ em_main_loop(__attribute__((unused)) void
>*dummy);
>>  int
>>  lpm_main_loop(__attribute__((unused)) void *dummy);
>>
>> +int
>> +lpm_event_main_loop_tx_d(__attribute__((unused)) void
>*dummy);
>> +int
>> +lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void
>*dummy);
>> +int
>> +lpm_event_main_loop_tx_q(__attribute__((unused)) void
>*dummy);
>> +int
>> +lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void
>*dummy);
>
>No need to add unused attribute in function declaration.
>BTW, if all event_loop_cb functions don't use the parameter, why not just
>make them 'typedef int (*event_loop_cb)(void)'?
>

These functions are used in remote launch, so we can't modify the prototype.

rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);
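
For reference, roughly the declarations from rte_launch.h that pin the
prototype (so the unused void * parameter has to stay):

	/* Every function handed to the EAL launch APIs must match this type. */
	typedef int (lcore_function_t)(void *);

	int rte_eal_mp_remote_launch(lcore_function_t *f, void *arg,
				     enum rte_rmt_call_master_t call_master);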

I will remove the unnecessary attribute in the next version.

>> +
>>  /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
>>  void *
>>  em_get_ipv4_l3fwd_lookup_struct(const int socketid);
>> diff --git a/examples/l3fwd/l3fwd_event.c
>b/examples/l3fwd/l3fwd_event.c
>> index 0e796f003..c7de046e3 100644
>> --- a/examples/l3fwd/l3fwd_event.c
>> +++ b/examples/l3fwd/l3fwd_event.c
>> @@ -235,6 +235,12 @@ void
>>  l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
>>  {
>>  	struct l3fwd_event_resources *evt_rsrc =
>l3fwd_get_eventdev_rsrc();
>> +	const event_loop_cb lpm_event_loop[2][2] = {
>> +		[0][0] = lpm_event_main_loop_tx_d,
>> +		[0][1] = lpm_event_main_loop_tx_d_burst,
>> +		[1][0] = lpm_event_main_loop_tx_q,
>> +		[1][1] = lpm_event_main_loop_tx_q_burst,
>> +	};
>>  	uint32_t event_queue_cfg;
>>  	int ret;
>>
>> @@ -268,4 +274,7 @@ l3fwd_event_resource_setup(struct
>rte_eth_conf *port_conf)
>>  	ret = rte_event_dev_start(evt_rsrc->event_d_id);
>>  	if (ret < 0)
>>  		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
>> +
>> +	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc-
>>tx_mode_q]
>> +						       [evt_rsrc-
>>has_burst];
>>  }
>> diff --git a/examples/l3fwd/l3fwd_event.h
>b/examples/l3fwd/l3fwd_event.h
>> index 9d8bd5a36..fcc0ce51a 100644
>> --- a/examples/l3fwd/l3fwd_event.h
>> +++ b/examples/l3fwd/l3fwd_event.h
>> @@ -14,6 +14,11 @@
>>
>>  #include "l3fwd.h"
>>
>> +#define L3FWD_EVENT_SINGLE     0x1
>> +#define L3FWD_EVENT_BURST      0x2
>> +#define L3FWD_EVENT_TX_DIRECT  0x4
>> +#define L3FWD_EVENT_TX_ENQ     0x8
>> +
>>  #define CMD_LINE_OPT_MODE "mode"
>>  #define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
>>
>> diff --git a/examples/l3fwd/l3fwd_lpm.c
>b/examples/l3fwd/l3fwd_lpm.c
>> index 349de2703..c4669d6d5 100644
>> --- a/examples/l3fwd/l3fwd_lpm.c
>> +++ b/examples/l3fwd/l3fwd_lpm.c
>> @@ -28,6 +28,7 @@
>>  #include <rte_lpm6.h>
>>
>>  #include "l3fwd.h"
>> +#include "l3fwd_event.h"
>>
>>  struct ipv4_l3fwd_lpm_route {
>>  	uint32_t ip;
>> @@ -254,6 +255,236 @@ lpm_main_loop(__attribute__((unused))
>void *dummy)
>>  	return 0;
>>  }
>>
>> +static __rte_always_inline void
>> +lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
>> +		const uint8_t flags)
>> +{
>> +	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
>> +	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
>> +		evt_rsrc->evq.nb_queues - 1];
>> +	const uint8_t event_d_id = evt_rsrc->event_d_id;
>> +	struct lcore_conf *lconf;
>> +	unsigned int lcore_id;
>> +	struct rte_event ev;
>> +
>> +	if (event_p_id < 0)
>> +		return;
>> +
>> +	lcore_id = rte_lcore_id();
>> +	lconf = &lcore_conf[lcore_id];
>> +
>> +	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__,
>lcore_id);
>> +	while (!force_quit) {
>> +		if (!rte_event_dequeue_burst(event_d_id,
>event_p_id, &ev, 1, 0))
>> +			continue;
>> +
>> +		struct rte_mbuf *mbuf = ev.mbuf;
>> +		mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf-
>>port);
>> +
>> +#if defined RTE_ARCH_X86 || defined
>RTE_MACHINE_CPUFLAG_NEON \
>> +				|| defined RTE_ARCH_PPC_64
>> +		process_packet(mbuf, &mbuf->port);
>> +#else
>> +
>> +		struct rte_ether_hdr *eth_hdr =
>rte_pktmbuf_mtod(mbuf,
>> +							struct
>rte_ether_hdr *);
>> +#ifdef DO_RFC_1812_CHECKS
>> +		struct rte_ipv4_hdr *ipv4_hdr;
>> +		if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
>> +			/* Handle IPv4 headers.*/
>> +			ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
>> +					struct rte_ipv4_hdr *,
>> +					sizeof(struct rte_ether_hdr));
>> +
>> +			if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
>> +					< 0) {
>> +				mbuf->port = BAD_PORT;
>> +				continue;
>> +			}
>> +			/* Update time to live and header checksum */
>> +			--(ipv4_hdr->time_to_live);
>> +			++(ipv4_hdr->hdr_checksum);
>> +		}
>> +#endif
>> +		/* dst addr */
>> +		*(uint64_t *)&eth_hdr->d_addr =
>dest_eth_addr[mbuf->port];
>> +
>> +		/* src addr */
>> +		rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
>> +				&eth_hdr->s_addr);
>> +#endif
>
>The code snippet above looks pretty much the same as in
>l3fwd_lpm_simple_forward()
>except missing IPv6 part.
>Same for loop_burst() below.
>Instead of duplicating the code in a dozen places, can we put this
>common code
>into a function and use it?

Sure, I will move it to l3fwd_lpm.h and use it across.
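Something along these lines, as a sketch (the helper name and placement are
illustrative only; the body below is just the per-packet snippet from the
patch, so the IPv6 handling would still need to be folded in):

static __rte_always_inline uint16_t
lpm_process_event_pkt(const struct lcore_conf *lconf, struct rte_mbuf *mbuf)
{
	mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);

#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
	|| defined RTE_ARCH_PPC_64
	process_packet(mbuf, &mbuf->port);
#else
	struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
						struct rte_ether_hdr *);

#ifdef DO_RFC_1812_CHECKS
	struct rte_ipv4_hdr *ipv4_hdr;

	if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
		/* Handle IPv4 headers. */
		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
				struct rte_ipv4_hdr *,
				sizeof(struct rte_ether_hdr));
		if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len) < 0)
			return BAD_PORT;
		/* Update time to live and header checksum. */
		--(ipv4_hdr->time_to_live);
		++(ipv4_hdr->hdr_checksum);
	}
#endif
	/* Rewrite destination and source MAC addresses. */
	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
	rte_ether_addr_copy(&ports_eth_addr[mbuf->port], &eth_hdr->s_addr);
#endif
	return mbuf->port;
}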

>
>
>
>> +		if (mbuf->port == BAD_PORT) {
>> +			rte_pktmbuf_free(mbuf);
>> +			continue;
>> +		}
>> +
>> +		if (flags & L3FWD_EVENT_TX_ENQ) {
>> +			ev.queue_id = tx_q_id;
>> +			ev.op = RTE_EVENT_OP_FORWARD;
>> +			while (rte_event_enqueue_burst(event_d_id,
>event_p_id,
>> +						&ev, 1) && !force_quit)
>> +				;
>> +		}
>> +
>> +		if (flags & L3FWD_EVENT_TX_DIRECT) {
>> +			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
>> +			while
>(!rte_event_eth_tx_adapter_enqueue(event_d_id,
>> +						event_p_id, &ev, 1, 0)
>&&
>> +					!force_quit)
>> +				;
>> +		}
>> +	}
>> +}
>> +
>> +static __rte_always_inline void
>> +lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
>> +		const uint8_t flags)
>> +{
>> +	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
>> +	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
>> +		evt_rsrc->evq.nb_queues - 1];
>> +	const uint8_t event_d_id = evt_rsrc->event_d_id;
>> +	const uint16_t deq_len = evt_rsrc->deq_depth;
>> +	struct rte_event events[MAX_PKT_BURST];
>> +	struct lcore_conf *lconf;
>> +	unsigned int lcore_id;
>> +	int i, nb_enq, nb_deq;
>> +
>> +	if (event_p_id < 0)
>> +		return;
>> +
>> +	lcore_id = rte_lcore_id();
>> +
>> +	lconf = &lcore_conf[lcore_id];
>> +
>> +	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__,
>lcore_id);
>> +
>> +	while (!force_quit) {
>> +		/* Read events from RX queues */
>> +		nb_deq = rte_event_dequeue_burst(event_d_id,
>event_p_id,
>> +				events, deq_len, 0);
>> +		if (nb_deq == 0) {
>> +			rte_pause();
>> +			continue;
>> +		}
>> +
>> +		for (i = 0; i < nb_deq; i++) {
>> +			struct rte_mbuf *mbuf = events[i].mbuf;
>> +
>> +			mbuf->port = lpm_get_dst_port(lconf, mbuf,
>mbuf->port);
>> +
>> +#if defined RTE_ARCH_X86 || defined
>RTE_MACHINE_CPUFLAG_NEON \
>> +					|| defined RTE_ARCH_PPC_64
>> +			process_packet(mbuf, &mbuf->port);
>> +#else
>> +			struct rte_ether_hdr *eth_hdr =
>rte_pktmbuf_mtod(mbuf,
>> +						struct rte_ether_hdr *);
>> +
>> +#ifdef DO_RFC_1812_CHECKS
>> +			struct rte_ipv4_hdr *ipv4_hdr;
>> +			if (RTE_ETH_IS_IPV4_HDR(mbuf-
>>packet_type)) {
>> +				/* Handle IPv4 headers.*/
>> +				ipv4_hdr =
>rte_pktmbuf_mtod_offset(mbuf,
>> +						struct rte_ipv4_hdr *,
>> +						sizeof(struct
>rte_ether_hdr));
>> +
>> +				if (is_valid_ipv4_pkt(ipv4_hdr, mbuf-
>>pkt_len)
>> +						< 0) {
>> +					mbuf->port = BAD_PORT;
>> +					continue;
>> +				}
>> +				/* Update time to live and header
>checksum */
>> +				--(ipv4_hdr->time_to_live);
>> +				++(ipv4_hdr->hdr_checksum);
>> +			}
>> +#endif
>> +			/* dst addr */
>> +			*(uint64_t *)&eth_hdr->d_addr =
>dest_eth_addr[
>> +								mbuf-
>>port];
>> +			/* src addr */
>> +			rte_ether_addr_copy(&ports_eth_addr[mbuf-
>>port],
>> +					&eth_hdr->s_addr);
>> +#endif
>> +			if (flags & L3FWD_EVENT_TX_ENQ) {
>> +				events[i].queue_id = tx_q_id;
>> +				events[i].op =
>RTE_EVENT_OP_FORWARD;
>> +			}
>> +
>> +			if (flags & L3FWD_EVENT_TX_DIRECT)
>> +
>	rte_event_eth_tx_adapter_txq_set(mbuf, 0);
>> +		}
>> +
>> +		if (flags & L3FWD_EVENT_TX_ENQ) {
>> +			nb_enq =
>rte_event_enqueue_burst(event_d_id, event_p_id,
>> +					events, nb_deq);
>> +			while (nb_enq < nb_deq && !force_quit)
>> +				nb_enq +=
>rte_event_enqueue_burst(event_d_id,
>> +						event_p_id, events +
>nb_enq,
>> +						nb_deq - nb_enq);
>> +		}
>> +
>> +		if (flags & L3FWD_EVENT_TX_DIRECT) {
>> +			nb_enq =
>rte_event_eth_tx_adapter_enqueue(event_d_id,
>> +					event_p_id, events, nb_deq, 0);
>> +			while (nb_enq < nb_deq && !force_quit)
>> +				nb_enq +=
>rte_event_eth_tx_adapter_enqueue(
>> +						event_d_id,
>event_p_id,
>> +						events + nb_enq,
>> +						nb_deq - nb_enq, 0);
>> +		}
>> +	}
>> +}
>> +
>> +static __rte_always_inline void
>> +lpm_event_loop(struct l3fwd_event_resources *evt_rsrc,
>> +		 const uint8_t flags)
>> +{
>> +	if (flags & L3FWD_EVENT_SINGLE)
>> +		lpm_event_loop_single(evt_rsrc, flags);
>> +	if (flags & L3FWD_EVENT_BURST)
>> +		lpm_event_loop_burst(evt_rsrc, flags);
>> +}
>> +
>> +int __rte_noinline
>> +lpm_event_main_loop_tx_d(__attribute__((unused)) void
>*dummy)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>> +					l3fwd_get_eventdev_rsrc();
>> +
>> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT |
>L3FWD_EVENT_SINGLE);
>> +	return 0;
>> +}
>> +
>> +int __rte_noinline
>> +lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void
>*dummy)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>> +					l3fwd_get_eventdev_rsrc();
>> +
>> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT |
>L3FWD_EVENT_BURST);
>> +	return 0;
>> +}
>> +
>> +int __rte_noinline
>> +lpm_event_main_loop_tx_q(__attribute__((unused)) void
>*dummy)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>> +					l3fwd_get_eventdev_rsrc();
>> +
>> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ |
>L3FWD_EVENT_SINGLE);
>> +	return 0;
>> +}
>> +
>> +int __rte_noinline
>> +lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void
>*dummy)
>> +{
>> +	struct l3fwd_event_resources *evt_rsrc =
>> +					l3fwd_get_eventdev_rsrc();
>> +
>> +	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ |
>L3FWD_EVENT_BURST);
>> +	return 0;
>> +}
>> +
>>  void
>>  setup_lpm(const int socketid)
>>  {
>> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
>> index 69d212bc2..b88fd88db 100644
>> --- a/examples/l3fwd/main.c
>> +++ b/examples/l3fwd/main.c
>> @@ -951,13 +951,18 @@ main(int argc, char **argv)
>>  	if (ret < 0)
>>  		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
>>
>> +	/* Setup function pointers for lookup method. */
>> +	setup_l3fwd_lookup_tables();
>> +
>>  	evt_rsrc->per_port_pool = per_port_pool;
>>  	evt_rsrc->pkt_pool = pktmbuf_pool;
>>  	evt_rsrc->port_mask = enabled_port_mask;
>>  	/* Configure eventdev parameters if user has requested */
>>  	l3fwd_event_resource_setup(&port_conf);
>> -	if (evt_rsrc->enabled)
>> +	if (evt_rsrc->enabled) {
>> +		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
>>  		goto skip_port_config;
>> +	}
>>
>>  	if (check_lcore_params() < 0)
>>  		rte_exit(EXIT_FAILURE, "check_lcore_params
>failed\n");
>> @@ -973,9 +978,6 @@ main(int argc, char **argv)
>>
>>  	nb_lcores = rte_lcore_count();
>>
>> -	/* Setup function pointers for lookup method. */
>> -	setup_l3fwd_lookup_tables();
>> -
>>  	/* initialize all ports */
>>  	RTE_ETH_FOREACH_DEV(portid) {
>>  		struct rte_eth_conf local_port_conf = port_conf;
>> --
>> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em main loop
  2020-01-03 13:45   ` Ananyev, Konstantin
@ 2020-01-06  4:44     ` Pavan Nikhilesh Bhagavatula
  2020-01-06 12:27       ` Ananyev, Konstantin
  0 siblings, 1 reply; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-06  4:44 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob Kollanukkaran, Kovacevic, Marko,
	Ori Kam, Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki,
	Tomasz, Sunil Kumar Kori
  Cc: dev

>> Add em main loop for handling events based on capabilities of the
>> event device.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>>  examples/l3fwd/l3fwd.h               |  10 ++
>>  examples/l3fwd/l3fwd_em.c            | 177
>+++++++++++++++++++++++++++
>>  examples/l3fwd/l3fwd_em.h            | 159 +++++++++++++++++-------
>>  examples/l3fwd/l3fwd_em_hlm.h        | 131
>++++++++++++++++++++
>>  examples/l3fwd/l3fwd_em_sequential.h |  26 ++++
>>  examples/l3fwd/l3fwd_event.c         |   9 ++
>>  examples/l3fwd/main.c                |   5 +-
>>  7 files changed, 470 insertions(+), 47 deletions(-)
>>
>> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
>> index 8f2e4be23..2d02fa731 100644
>> --- a/examples/l3fwd/l3fwd.h
>> +++ b/examples/l3fwd/l3fwd.h
>> @@ -221,6 +221,16 @@
>lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
>>  int
>>  lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void
>*dummy);
>>
>> +int
>> +em_event_main_loop_tx_d(__attribute__((unused)) void
>*dummy);
>> +int
>> +em_event_main_loop_tx_d_burst(__attribute__((unused)) void
>*dummy);
>> +int
>> +em_event_main_loop_tx_q(__attribute__((unused)) void
>*dummy);
>> +int
>> +em_event_main_loop_tx_q_burst(__attribute__((unused)) void
>*dummy);
>
>Same question as for lpm: if your functions don't need params,
>why not to define them as ones without params?

Need to satisfy the prototype requirements for
rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);

>
>> +
>> +
>>  /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
>>  void *
>>  em_get_ipv4_l3fwd_lookup_struct(const int socketid);

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2020-01-03 13:52   ` Ananyev, Konstantin
@ 2020-01-06  4:50     ` Pavan Nikhilesh Bhagavatula
  2020-01-06 12:12       ` Ananyev, Konstantin
  0 siblings, 1 reply; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-06  4:50 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob Kollanukkaran, Kovacevic, Marko,
	Ori Kam, Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki,
	Tomasz, Sunil Kumar Kori
  Cc: dev

>> Add graceful teardown that addresses both event mode and poll
>mode.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> ---
>>  examples/l3fwd/main.c | 49 ++++++++++++++++++++++++++++++-
>------------
>>  1 file changed, 34 insertions(+), 15 deletions(-)
>>
>> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
>> index 0ae64dd41..68998f42c 100644
>> --- a/examples/l3fwd/main.c
>> +++ b/examples/l3fwd/main.c
>> @@ -920,7 +920,7 @@ main(int argc, char **argv)
>>  	struct lcore_conf *qconf;
>>  	struct rte_eth_dev_info dev_info;
>>  	struct rte_eth_txconf *txconf;
>> -	int ret;
>> +	int i, ret;
>>  	unsigned nb_ports;
>>  	uint16_t queueid, portid;
>>  	unsigned lcore_id;
>> @@ -1195,27 +1195,46 @@ main(int argc, char **argv)
>>  		}
>>  	}
>>
>> -
>>  	check_all_ports_link_status(enabled_port_mask);
>>
>>  	ret = 0;
>>  	/* launch per-lcore init on every lcore */
>>  	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL,
>CALL_MASTER);
>> -	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
>> -		if (rte_eal_wait_lcore(lcore_id) < 0) {
>> -			ret = -1;
>> -			break;
>> +	if (evt_rsrc->enabled) {
>> +		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
>> +			rte_event_eth_rx_adapter_stop(
>> +					evt_rsrc->rx_adptr.rx_adptr[i]);
>> +		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
>> +			rte_event_eth_tx_adapter_stop(
>> +					evt_rsrc->tx_adptr.tx_adptr[i]);
>> +
>> +		RTE_ETH_FOREACH_DEV(portid) {
>> +			if ((enabled_port_mask & (1 << portid)) == 0)
>> +				continue;
>> +			rte_eth_dev_stop(portid);
>>  		}
>> -	}
>>
>> -	/* stop ports */
>> -	RTE_ETH_FOREACH_DEV(portid) {
>> -		if ((enabled_port_mask & (1 << portid)) == 0)
>> -			continue;
>> -		printf("Closing port %d...", portid);
>> -		rte_eth_dev_stop(portid);
>> -		rte_eth_dev_close(portid);
>> -		printf(" Done\n");
>
>Why to stop ports *before* making sure all lcores are stopped?
>Shouldn't that peace of code be identical for both poll and event mode?
>Something like:
>rte_eal_mp_wait_lcore();
>
>                RTE_ETH_FOREACH_DEV(portid) {
>                        if ((enabled_port_mask & (1 << portid)) == 0)
>                                continue;
>                        rte_eth_dev_stop(portid);
>                        rte_eth_dev_close(portid);
>                }
>?
>

The eventdev spec requires stopping producers before consumers, else we might
run into a deadlock in some cases.
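
Roughly, this is the ordering the patch follows in event mode; sketched here
as a hypothetical helper (reusing the structures from the patch) purely to
show the sequence:

static void
l3fwd_event_teardown(struct l3fwd_event_resources *evt_rsrc,
		     uint32_t port_mask)
{
	uint16_t portid;
	int i;

	/* Stop the producers first: Rx/Tx adapters, then the ethdev ports,
	 * so no new events are injected into the event device.
	 */
	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
		rte_event_eth_rx_adapter_stop(evt_rsrc->rx_adptr.rx_adptr[i]);
	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
		rte_event_eth_tx_adapter_stop(evt_rsrc->tx_adptr.tx_adptr[i]);
	RTE_ETH_FOREACH_DEV(portid) {
		if ((port_mask & (1 << portid)) == 0)
			continue;
		rte_eth_dev_stop(portid);
	}

	/* Workers observe force_quit, drain their event ports and exit. */
	rte_eal_mp_wait_lcore();

	/* Only now tear down the consumers. */
	RTE_ETH_FOREACH_DEV(portid) {
		if ((port_mask & (1 << portid)) == 0)
			continue;
		rte_eth_dev_close(portid);
	}
	rte_event_dev_stop(evt_rsrc->event_d_id);
	rte_event_dev_close(evt_rsrc->event_d_id);
}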

>> +		rte_eal_mp_wait_lcore();
>> +		RTE_ETH_FOREACH_DEV(portid) {
>> +			if ((enabled_port_mask & (1 << portid)) == 0)
>> +				continue;
>> +			rte_eth_dev_close(portid);
>> +		}
>> +
>> +		rte_event_dev_stop(evt_rsrc->event_d_id);
>> +		rte_event_dev_close(evt_rsrc->event_d_id);
>> +
>> +	} else {
>> +		rte_eal_mp_wait_lcore();
>> +
>> +		RTE_ETH_FOREACH_DEV(portid) {
>> +			if ((enabled_port_mask & (1 << portid)) == 0)
>> +				continue;
>> +			printf("Closing port %d...", portid);
>> +			rte_eth_dev_stop(portid);
>> +			rte_eth_dev_close(portid);
>> +			printf(" Done\n");
>> +		}
>>  	}
>>  	printf("Bye...\n");
>>
>> --
>> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2020-01-06  4:50     ` Pavan Nikhilesh Bhagavatula
@ 2020-01-06 12:12       ` Ananyev, Konstantin
  2020-01-07 16:28         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-06 12:12 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Kovacevic, Marko, Ori Kam, Richardson, Bruce, Nicolau, Radu,
	Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
  Cc: dev


> >> Add graceful teardown that addresses both event mode and poll
> >mode.
> >>
> >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> ---
> >>  examples/l3fwd/main.c | 49 ++++++++++++++++++++++++++++++-
> >------------
> >>  1 file changed, 34 insertions(+), 15 deletions(-)
> >>
> >> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> >> index 0ae64dd41..68998f42c 100644
> >> --- a/examples/l3fwd/main.c
> >> +++ b/examples/l3fwd/main.c
> >> @@ -920,7 +920,7 @@ main(int argc, char **argv)
> >>  	struct lcore_conf *qconf;
> >>  	struct rte_eth_dev_info dev_info;
> >>  	struct rte_eth_txconf *txconf;
> >> -	int ret;
> >> +	int i, ret;
> >>  	unsigned nb_ports;
> >>  	uint16_t queueid, portid;
> >>  	unsigned lcore_id;
> >> @@ -1195,27 +1195,46 @@ main(int argc, char **argv)
> >>  		}
> >>  	}
> >>
> >> -
> >>  	check_all_ports_link_status(enabled_port_mask);
> >>
> >>  	ret = 0;
> >>  	/* launch per-lcore init on every lcore */
> >>  	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL,
> >CALL_MASTER);
> >> -	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
> >> -		if (rte_eal_wait_lcore(lcore_id) < 0) {
> >> -			ret = -1;
> >> -			break;
> >> +	if (evt_rsrc->enabled) {
> >> +		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
> >> +			rte_event_eth_rx_adapter_stop(
> >> +					evt_rsrc->rx_adptr.rx_adptr[i]);
> >> +		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
> >> +			rte_event_eth_tx_adapter_stop(
> >> +					evt_rsrc->tx_adptr.tx_adptr[i]);
> >> +
> >> +		RTE_ETH_FOREACH_DEV(portid) {
> >> +			if ((enabled_port_mask & (1 << portid)) == 0)
> >> +				continue;
> >> +			rte_eth_dev_stop(portid);
> >>  		}
> >> -	}
> >>
> >> -	/* stop ports */
> >> -	RTE_ETH_FOREACH_DEV(portid) {
> >> -		if ((enabled_port_mask & (1 << portid)) == 0)
> >> -			continue;
> >> -		printf("Closing port %d...", portid);
> >> -		rte_eth_dev_stop(portid);
> >> -		rte_eth_dev_close(portid);
> >> -		printf(" Done\n");
> >
> >Why to stop ports *before* making sure all lcores are stopped?
> >Shouldn't that peace of code be identical for both poll and event mode?
> >Something like:
> >rte_eal_mp_wait_lcore();
> >
> >                RTE_ETH_FOREACH_DEV(portid) {
> >                        if ((enabled_port_mask & (1 << portid)) == 0)
> >                                continue;
> >                        rte_eth_dev_stop(portid);
> >                        rte_eth_dev_close(portid);
> >                }
> >?
> >
> 
> Event dev spec requires stopping producers before consumers else we might run into
> deadlock in some cases.

Ok... but for the TX path wouldn't the core be a producer?
Also, for that wouldn't rte_event_eth_(rx|tx)_adapter_stop() be enough?
I am not familiar with the event-dev spec at all, so forgive me for possibly dumb questions 😉

> 
> >> +		rte_eal_mp_wait_lcore();
> >> +		RTE_ETH_FOREACH_DEV(portid) {
> >> +			if ((enabled_port_mask & (1 << portid)) == 0)
> >> +				continue;
> >> +			rte_eth_dev_close(portid);
> >> +		}
> >> +
> >> +		rte_event_dev_stop(evt_rsrc->event_d_id);
> >> +		rte_event_dev_close(evt_rsrc->event_d_id);
> >> +
> >> +	} else {
> >> +		rte_eal_mp_wait_lcore();
> >> +
> >> +		RTE_ETH_FOREACH_DEV(portid) {
> >> +			if ((enabled_port_mask & (1 << portid)) == 0)
> >> +				continue;
> >> +			printf("Closing port %d...", portid);
> >> +			rte_eth_dev_stop(portid);
> >> +			rte_eth_dev_close(portid);
> >> +			printf(" Done\n");
> >> +		}
> >>  	}
> >>  	printf("Bye...\n");
> >>
> >> --
> >> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em main loop
  2020-01-06  4:44     ` Pavan Nikhilesh Bhagavatula
@ 2020-01-06 12:27       ` Ananyev, Konstantin
  0 siblings, 0 replies; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-06 12:27 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Kovacevic, Marko, Ori Kam, Richardson, Bruce, Nicolau, Radu,
	Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
  Cc: dev

> 
> >> Add em main loop for handling events based on capabilities of the
> >> event device.
> >>
> >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >> ---
> >>  examples/l3fwd/l3fwd.h               |  10 ++
> >>  examples/l3fwd/l3fwd_em.c            | 177
> >+++++++++++++++++++++++++++
> >>  examples/l3fwd/l3fwd_em.h            | 159 +++++++++++++++++-------
> >>  examples/l3fwd/l3fwd_em_hlm.h        | 131
> >++++++++++++++++++++
> >>  examples/l3fwd/l3fwd_em_sequential.h |  26 ++++
> >>  examples/l3fwd/l3fwd_event.c         |   9 ++
> >>  examples/l3fwd/main.c                |   5 +-
> >>  7 files changed, 470 insertions(+), 47 deletions(-)
> >>
> >> diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
> >> index 8f2e4be23..2d02fa731 100644
> >> --- a/examples/l3fwd/l3fwd.h
> >> +++ b/examples/l3fwd/l3fwd.h
> >> @@ -221,6 +221,16 @@
> >lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
> >>  int
> >>  lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void
> >*dummy);
> >>
> >> +int
> >> +em_event_main_loop_tx_d(__attribute__((unused)) void
> >*dummy);
> >> +int
> >> +em_event_main_loop_tx_d_burst(__attribute__((unused)) void
> >*dummy);
> >> +int
> >> +em_event_main_loop_tx_q(__attribute__((unused)) void
> >*dummy);
> >> +int
> >> +em_event_main_loop_tx_q_burst(__attribute__((unused)) void
> >*dummy);
> >
> >Same question as for lpm: if your functions don't need params,
> >why not to define them as ones without params?
> 
> Need to satisfy the prototype requirements for
> rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);

Ok.

> 
> >
> >> +
> >> +
> >>  /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
> >>  void *
> >>  em_get_ipv4_l3fwd_lookup_struct(const int socketid);

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device
  2020-01-06  4:27     ` Pavan Nikhilesh Bhagavatula
@ 2020-01-06 12:32       ` Ananyev, Konstantin
  2020-01-07 16:34         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 78+ messages in thread
From: Ananyev, Konstantin @ 2020-01-06 12:32 UTC (permalink / raw)
  To: Pavan Nikhilesh Bhagavatula, Jerin Jacob Kollanukkaran,
	Kovacevic, Marko, Ori Kam, Richardson, Bruce, Nicolau, Radu,
	Akhil Goyal, Kantecki, Tomasz, Sunil Kumar Kori
  Cc: dev

> >> +struct l3fwd_event_resources {
> >> +	uint8_t sched_type;
> >> +	uint8_t enabled;
> >> +	uint8_t nb_args;
> >> +	char **args;
> >> +};
> >> +
> >> +static inline struct l3fwd_event_resources *
> >> +l3fwd_get_eventdev_rsrc(void)
> >> +{
> >> +	static const char name[RTE_MEMZONE_NAMESIZE] =
> >"l3fwd_event_rsrc";
> >> +	const struct rte_memzone *mz;
> >> +
> >> +	mz = rte_memzone_lookup(name);
> >> +
> >> +	if (mz != NULL)
> >> +		return mz->addr;
> >> +
> >> +	mz = rte_memzone_reserve(name, sizeof(struct
> >l3fwd_event_resources),
> >> +				 0, 0);
> >> +	if (mz != NULL) {
> >> +		memset(mz->addr, 0, sizeof(struct
> >l3fwd_event_resources));
> >> +		return mz->addr;
> >> +	}
> >> +
> >> +	rte_exit(EXIT_FAILURE, "Unable to allocate memory for
> >eventdev cfg\n");
> >> +
> >> +	return NULL;
> >> +}
> >
> >Does this function really need to be inline?
> >It wouldn't be fast anyway.
> >Another question - do you really need memzone here?
> >Wouldn't just rte_malloc() be enough?
> 
> Will remove inline in next version.
> rte_malloc would call for a global variable which I'm
> trying to avoid.

If you plan to move that function into a .c file,
you don't need a global var; it could be a static local one.

> I don't think there is any harm in using
> named memzone.

I don't see any harm, though malloc+var will be faster, I think.
Up to you - no strong opinion here.
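
Just to illustrate the idea (a sketch only, reusing your struct and error
handling as they are):

/* In l3fwd_event.c - no file-scope global needed. */
struct l3fwd_event_resources *
l3fwd_get_eventdev_rsrc(void)
{
	static struct l3fwd_event_resources *rsrc;

	if (rsrc != NULL)
		return rsrc;

	rsrc = rte_zmalloc("l3fwd_event_rsrc",
			   sizeof(struct l3fwd_event_resources), 0);
	if (rsrc == NULL)
		rte_exit(EXIT_FAILURE,
			 "Unable to allocate memory for eventdev cfg\n");

	return rsrc;
}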

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2020-01-06 12:12       ` Ananyev, Konstantin
@ 2020-01-07 16:28         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-07 16:28 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob Kollanukkaran, Kovacevic, Marko,
	Ori Kam, Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki,
	Tomasz, Sunil Kumar Kori
  Cc: dev



>-----Original Message-----
>From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
>Sent: Monday, January 6, 2020 5:43 PM
>To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>; Jerin
>Jacob Kollanukkaran <jerinj@marvell.com>; Kovacevic, Marko
><marko.kovacevic@intel.com>; Ori Kam <orika@mellanox.com>;
>Richardson, Bruce <bruce.richardson@intel.com>; Nicolau, Radu
><radu.nicolau@intel.com>; Akhil Goyal <akhil.goyal@nxp.com>;
>Kantecki, Tomasz <tomasz.kantecki@intel.com>; Sunil Kumar Kori
><skori@marvell.com>
>Cc: dev@dpdk.org
>Subject: [EXT] RE: [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add
>graceful teardown for eventdevice
>
>
>> >> Add graceful teardown that addresses both event mode and poll
>> >mode.
>> >>
>> >> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
>> >> ---
>> >>  examples/l3fwd/main.c | 49
>++++++++++++++++++++++++++++++-
>> >------------
>> >>  1 file changed, 34 insertions(+), 15 deletions(-)
>> >>
>> >> diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
>> >> index 0ae64dd41..68998f42c 100644
>> >> --- a/examples/l3fwd/main.c
>> >> +++ b/examples/l3fwd/main.c
>> >> @@ -920,7 +920,7 @@ main(int argc, char **argv)
>> >>  	struct lcore_conf *qconf;
>> >>  	struct rte_eth_dev_info dev_info;
>> >>  	struct rte_eth_txconf *txconf;
>> >> -	int ret;
>> >> +	int i, ret;
>> >>  	unsigned nb_ports;
>> >>  	uint16_t queueid, portid;
>> >>  	unsigned lcore_id;
>> >> @@ -1195,27 +1195,46 @@ main(int argc, char **argv)
>> >>  		}
>> >>  	}
>> >>
>> >> -
>> >>  	check_all_ports_link_status(enabled_port_mask);
>> >>
>> >>  	ret = 0;
>> >>  	/* launch per-lcore init on every lcore */
>> >>  	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL,
>> >CALL_MASTER);
>> >> -	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
>> >> -		if (rte_eal_wait_lcore(lcore_id) < 0) {
>> >> -			ret = -1;
>> >> -			break;
>> >> +	if (evt_rsrc->enabled) {
>> >> +		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
>> >> +			rte_event_eth_rx_adapter_stop(
>> >> +					evt_rsrc->rx_adptr.rx_adptr[i]);
>> >> +		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
>> >> +			rte_event_eth_tx_adapter_stop(
>> >> +					evt_rsrc->tx_adptr.tx_adptr[i]);
>> >> +
>> >> +		RTE_ETH_FOREACH_DEV(portid) {
>> >> +			if ((enabled_port_mask & (1 << portid)) == 0)
>> >> +				continue;
>> >> +			rte_eth_dev_stop(portid);
>> >>  		}
>> >> -	}
>> >>
>> >> -	/* stop ports */
>> >> -	RTE_ETH_FOREACH_DEV(portid) {
>> >> -		if ((enabled_port_mask & (1 << portid)) == 0)
>> >> -			continue;
>> >> -		printf("Closing port %d...", portid);
>> >> -		rte_eth_dev_stop(portid);
>> >> -		rte_eth_dev_close(portid);
>> >> -		printf(" Done\n");
>> >
>> >Why to stop ports *before* making sure all lcores are stopped?
>> >Shouldn't that peace of code be identical for both poll and event
>mode?
>> >Something like:
>> >rte_eal_mp_wait_lcore();
>> >
>> >                RTE_ETH_FOREACH_DEV(portid) {
>> >                        if ((enabled_port_mask & (1 << portid)) == 0)
>> >                                continue;
>> >                        rte_eth_dev_stop(portid);
>> >                        rte_eth_dev_close(portid);
>> >                }
>> >?
>> >
>>
>> Event dev spec requires stopping producers before consumers else
>we might run into
>> deadlock in some cases.
>
>Ok... but for TX path wouldn't core be a producer?

Not in all cases; it is true for the SW event device, but implementation-defined for HW event devices.

Also, both Tx cases poll on force_quit in case the Tx path fails, so that the cores can exit.

                if (flags & L3FWD_EVENT_TX_ENQ) {
                        ev.queue_id = tx_q_id;
                        ev.op = RTE_EVENT_OP_FORWARD;
                        while (rte_event_enqueue_burst(event_d_id, event_p_id,
                                                &ev, 1) && !force_quit)
                                ;
                }

                if (flags & L3FWD_EVENT_TX_DIRECT) {
                        rte_event_eth_tx_adapter_txq_set(mbuf, 0);
                        while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
                                                event_p_id, &ev, 1, 0) &&
                                        !force_quit)
                                ;
                }

>Also for that wouldn't rte_event_eth_(rx|tx)_adapter_stop(0 be
>enough?

In the case of HW event devices the above might be a no-op, as control lies with the driver/net.
And since we need to stop the ethernet device anyway, we might as well do it here.

>I am not familiar with event-dev spec at all, so forgive for possibly dumb
>questions 😉

😊

>
>>
>> >> +		rte_eal_mp_wait_lcore();
>> >> +		RTE_ETH_FOREACH_DEV(portid) {
>> >> +			if ((enabled_port_mask & (1 << portid)) == 0)
>> >> +				continue;
>> >> +			rte_eth_dev_close(portid);
>> >> +		}
>> >> +
>> >> +		rte_event_dev_stop(evt_rsrc->event_d_id);
>> >> +		rte_event_dev_close(evt_rsrc->event_d_id);
>> >> +
>> >> +	} else {
>> >> +		rte_eal_mp_wait_lcore();
>> >> +
>> >> +		RTE_ETH_FOREACH_DEV(portid) {
>> >> +			if ((enabled_port_mask & (1 << portid)) == 0)
>> >> +				continue;
>> >> +			printf("Closing port %d...", portid);
>> >> +			rte_eth_dev_stop(portid);
>> >> +			rte_eth_dev_close(portid);
>> >> +			printf(" Done\n");
>> >> +		}
>> >>  	}
>> >>  	printf("Bye...\n");
>> >>
>> >> --
>> >> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device
  2020-01-06 12:32       ` Ananyev, Konstantin
@ 2020-01-07 16:34         ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 78+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-01-07 16:34 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob Kollanukkaran, Kovacevic, Marko,
	Ori Kam, Richardson, Bruce, Nicolau, Radu, Akhil Goyal, Kantecki,
	Tomasz, Sunil Kumar Kori
  Cc: dev

>> >> +struct l3fwd_event_resources {
>> >> +	uint8_t sched_type;
>> >> +	uint8_t enabled;
>> >> +	uint8_t nb_args;
>> >> +	char **args;
>> >> +};
>> >> +
>> >> +static inline struct l3fwd_event_resources *
>> >> +l3fwd_get_eventdev_rsrc(void)
>> >> +{
>> >> +	static const char name[RTE_MEMZONE_NAMESIZE] =
>> >"l3fwd_event_rsrc";
>> >> +	const struct rte_memzone *mz;
>> >> +
>> >> +	mz = rte_memzone_lookup(name);
>> >> +
>> >> +	if (mz != NULL)
>> >> +		return mz->addr;
>> >> +
>> >> +	mz = rte_memzone_reserve(name, sizeof(struct
>> >l3fwd_event_resources),
>> >> +				 0, 0);
>> >> +	if (mz != NULL) {
>> >> +		memset(mz->addr, 0, sizeof(struct
>> >l3fwd_event_resources));
>> >> +		return mz->addr;
>> >> +	}
>> >> +
>> >> +	rte_exit(EXIT_FAILURE, "Unable to allocate memory for
>> >eventdev cfg\n");
>> >> +
>> >> +	return NULL;
>> >> +}
>> >
>> >Does this function really need to be inline?
>> >It wouldn't be fast anyway.
>> >Another question - do you really need memzone here?
>> >Wouldn't just rte_malloc() be enough?
>>
>> Will remove inline in next version.
>> rte_malloc would call for a global variable which I'm
>> trying to avoid.
>
>If you plan to move that function into .c file,
>you don't need a global var.  it could be static local one.
>
>> I don't think there is any harm in using
>> named memzone.
>
>I don't see any harm, though malloc+var will be faster I think.
>Though up to you - no strong opinion here.

Maybe we can cut down some init time. I will move it to .c in the next version.
Thanks,
Pavan

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 00/11] example/l3fwd: introduce event device support
  2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
                   ` (11 preceding siblings ...)
  2019-12-04 15:16 ` [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support Jerin Jacob
@ 2020-01-11 13:47 ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 01/11] examples/l3fwd: add framework for event device pbhagavatula
                     ` (12 more replies)
  12 siblings, 13 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

This patchset adds support for event mode in l3fwd.
The poll mode is still the default mode of operation.

The following new command line parameters are added:
 --mode: Dictates the mode of operation either poll or event. If unset then
         by default poll mode is used.
 --eventq_sched: Dictates event synchronization mode i.e. either ordered,
		 atomic or parallel.
 --event-eth-rxqs: Number of ethernet RX queues per device.

Based on event device capability the configuration is done as follows:
    - A single event device is enabled.
    - The number of event ports is equal to the number of worker
      cores enabled in the core mask. Additional event ports might
      be configured based on Rx/Tx adapter capability.
    - The number of event queues is equal to the number of ethernet
      ports. If Tx adapter doesn't have internal port capability then
      an additional single link event queue is used to enqueue events
      to Tx adapter.
    - Each event port is linked to all existing event queues.
    - Dedicated Rx/Tx adapters for each Ethernet port.

v3 Changes:
----------
- Unify argument parsing.
- Segregate poll mode and event mode initialization.
- Simplify event resource creation and accesses.
- Integrate http://patches.dpdk.org/project/dpdk/list/?series=8002 series.
- Reduce code duplication in lpm.

v2 Changes:
----------
- Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
- Use rte_panic instead of rte_exit.
- Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
- Add parallel mode support.
- Fix missing eventdev args parsing.

Pavan Nikhilesh (7):
  examples/l3fwd: add event device configuration
  examples/l3fwd: add event eth Rx/Tx adapter setup
  examples/l3fwd: add service core setup based on caps
  examples/l3fwd: add event lpm main loop
  examples/l3fwd: add event em main loop
  examples/l3fwd: add graceful teardown for eventdevice
  doc: update l3fwd user guide to support eventdev

Sunil Kumar Kori (4):
  examples/l3fwd: add framework for event device
  examples/l3fwd: split pipelines based on capability
  examples/l3fwd: add ethdev setup based on eventdev
  examples/l3fwd: add event port and queue setup

 doc/guides/sample_app_ug/l3_forward.rst    |  76 ++++-
 examples/l3fwd/Makefile                    |   3 +-
 examples/l3fwd/l3fwd.h                     |  30 ++
 examples/l3fwd/l3fwd_em.c                  | 177 ++++++++++++
 examples/l3fwd/l3fwd_em.h                  | 159 ++++++++---
 examples/l3fwd/l3fwd_em_hlm.h              | 131 +++++++++
 examples/l3fwd/l3fwd_em_sequential.h       |  26 ++
 examples/l3fwd/l3fwd_event.c               | 263 +++++++++++++++++
 examples/l3fwd/l3fwd_event.h               |  86 ++++++
 examples/l3fwd/l3fwd_event_generic.c       | 302 ++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 281 ++++++++++++++++++
 examples/l3fwd/l3fwd_lpm.c                 | 231 +++++++++++++++
 examples/l3fwd/main.c                      | 316 +++++++++++++++++----
 examples/l3fwd/meson.build                 |   5 +-
 14 files changed, 1971 insertions(+), 115 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

--
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 01/11] examples/l3fwd: add framework for event device
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-21  8:44     ` Jerin Jacob
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
                     ` (11 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, Nipun Gupta

From: Sunil Kumar Kori <skori@marvell.com>

Add framework to enable event device as a producer of packets.
To switch between event mode and poll mode the following options
have been added:
	`--mode="eventdev"` or `--mode="poll"`
Also, allow the user to select the schedule type to be either
RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_PARALLEL
through:
	`--eventq-sched="ordered"` or `--eventq-sched="atomic"` or
		`--eventq-sched="parallel"`

Poll mode is still the default operation mode.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 examples/l3fwd/Makefile      |  2 +-
 examples/l3fwd/l3fwd.h       |  1 +
 examples/l3fwd/l3fwd_event.c | 40 ++++++++++++++++
 examples/l3fwd/l3fwd_event.h | 23 ++++++++++
 examples/l3fwd/main.c        | 88 ++++++++++++++++++++++++++++++++++--
 examples/l3fwd/meson.build   |  4 +-
 6 files changed, 152 insertions(+), 6 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index b2dbf2607..c892b867b 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -5,7 +5,7 @@
 APP = l3fwd
 
 # all source are stored in SRCS-y
-SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c
+SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
 
 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index 293fb1fa2..e2399d76b 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -5,6 +5,7 @@
 #ifndef __L3_FWD_H__
 #define __L3_FWD_H__
 
+#include <rte_ethdev.h>
 #include <rte_vect.h>
 
 #define DO_RFC_1812_CHECKS
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
new file mode 100644
index 000000000..1040da4ea
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.c
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_malloc.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+struct l3fwd_event_resources *
+l3fwd_get_eventdev_rsrc(void)
+{
+	static struct l3fwd_event_resources *rsrc;
+
+	if (rsrc != NULL)
+		return rsrc;
+
+	rsrc = rte_zmalloc("l3fwd", sizeof(struct l3fwd_event_resources), 0);
+	if (rsrc != NULL) {
+		rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+		rsrc->eth_rx_queues = 1;
+		return rsrc;
+	}
+
+	rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
+
+	return NULL;
+}
+
+void
+l3fwd_event_resource_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!evt_rsrc->enabled)
+		return;
+}
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
new file mode 100644
index 000000000..4c23c4e1a
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L3FWD_EVENTDEV_H__
+#define __L3FWD_EVENTDEV_H__
+
+#include <rte_common.h>
+#include <rte_eventdev.h>
+#include <rte_spinlock.h>
+
+#include "l3fwd.h"
+
+struct l3fwd_event_resources {
+	uint8_t sched_type;
+	uint8_t enabled;
+	uint8_t eth_rx_queues;
+};
+
+struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
+void l3fwd_event_resource_setup(void);
+
+#endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4dea12a65..582f2f135 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -19,6 +19,7 @@
 #include <rte_vect.h>
 #include <rte_byteorder.h>
 #include <rte_log.h>
+#include <rte_malloc.h>
 #include <rte_memory.h>
 #include <rte_memcpy.h>
 #include <rte_eal.h>
@@ -33,7 +34,6 @@
 #include <rte_random.h>
 #include <rte_debug.h>
 #include <rte_ether.h>
-#include <rte_ethdev.h>
 #include <rte_mempool.h>
 #include <rte_mbuf.h>
 #include <rte_ip.h>
@@ -46,6 +46,7 @@
 #include <cmdline_parse_etheraddr.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 /*
  * Configurable number of RX/TX ring descriptors
@@ -289,7 +290,9 @@ print_usage(const char *prgname)
 		" [--hash-entry-num]"
 		" [--ipv6]"
 		" [--parse-ptype]"
-		" [--per-port-pool]\n\n"
+		" [--per-port-pool]"
+		" [--mode]"
+		" [--eventq-sched]\n\n"
 
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -304,7 +307,16 @@ print_usage(const char *prgname)
 		"  --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
 		"  --ipv6: Set if running ipv6 packets\n"
 		"  --parse-ptype: Set to use software to analyze packet type\n"
-		"  --per-port-pool: Use separate buffer pool per port\n\n",
+		"  --per-port-pool: Use separate buffer pool per port\n"
+		"  --mode: Packet transfer mode for I/O, poll or eventdev\n"
+		"          Default mode = poll\n"
+		"  --eventq-sched: Event queue synchronization method\n"
+		"                  ordered, atomic or parallel.\n"
+		"                  Default: atomic\n"
+		"                  Valid only if --mode=eventdev\n"
+		"  --event-eth-rxqs: Number of ethernet RX queues per device.\n"
+		"                    Default: 1\n"
+		"                    Valid only if --mode=eventdev\n\n",
 		prgname);
 }
 
@@ -440,6 +452,48 @@ parse_eth_dest(const char *optarg)
 	*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
 }
 
+static void
+parse_mode(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strcmp(optarg, "poll"))
+		evt_rsrc->enabled = false;
+	else if (!strcmp(optarg, "eventdev"))
+		evt_rsrc->enabled = true;
+}
+
+static void
+parse_eventq_sync(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strcmp(optarg, "ordered"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
+	if (!strcmp(optarg, "atomic"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+	if (!strcmp(optarg, "parallel"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
+}
+
+static void
+parse_event_eth_rx_queues(const char *eth_rx_queues)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	char *end = NULL;
+	uint8_t num_eth_rx_queues;
+
+	/* parse decimal string */
+	num_eth_rx_queues = strtoul(eth_rx_queues, &end, 10);
+	if ((eth_rx_queues[0] == '\0') || (end == NULL) || (*end != '\0'))
+		return;
+
+	if (num_eth_rx_queues == 0)
+		return;
+
+	evt_rsrc->eth_rx_queues = num_eth_rx_queues;
+}
+
 #define MAX_JUMBO_PKT_LEN  9600
 #define MEMPOOL_CACHE_SIZE 256
 
@@ -458,6 +512,9 @@ static const char short_options[] =
 #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
 #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
 #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
+#define CMD_LINE_OPT_EVENT_ETH_RX_QUEUES "event-eth-rxqs"
 enum {
 	/* long options mapped to a short option */
 
@@ -472,6 +529,9 @@ enum {
 	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
 	CMD_LINE_OPT_PARSE_PTYPE_NUM,
 	CMD_LINE_OPT_PARSE_PER_PORT_POOL,
+	CMD_LINE_OPT_MODE_NUM,
+	CMD_LINE_OPT_EVENTQ_SYNC_NUM,
+	CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM,
 };
 
 static const struct option lgopts[] = {
@@ -483,6 +543,10 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
 	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
 	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
+	{CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
+	{CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
+	{CMD_LINE_OPT_EVENT_ETH_RX_QUEUES, 1, 0,
+					CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -607,6 +671,18 @@ parse_args(int argc, char **argv)
 			per_port_pool = 1;
 			break;
 
+		case CMD_LINE_OPT_MODE_NUM:
+			parse_mode(optarg);
+			break;
+
+		case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
+			parse_eventq_sync(optarg);
+			break;
+
+		case CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM:
+			parse_event_eth_rx_queues(optarg);
+			break;
+
 		default:
 			print_usage(prgname);
 			return -1;
@@ -811,6 +887,7 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 int
 main(int argc, char **argv)
 {
+	struct l3fwd_event_resources *evt_rsrc;
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf *txconf;
@@ -839,11 +916,16 @@ main(int argc, char **argv)
 		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
 	}
 
+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	/* Configure eventdev parameters if user has requested */
+	l3fwd_event_resource_setup();
+
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
 
diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 6dd4b9022..864327c7b 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -6,7 +6,7 @@
 # To build this example as a standalone application with an already-installed
 # DPDK instance, use 'make'
 
-deps += ['hash', 'lpm']
+deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
 )
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 02/11] examples/l3fwd: split pipelines based on capability
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 01/11] examples/l3fwd: add framework for event device pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-21  8:46     ` Jerin Jacob
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 03/11] examples/l3fwd: add event device configuration pbhagavatula
                     ` (10 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add infra to split the eventdev framework based on event Tx adapter
capability.
If the event Tx adapter has internal port capability then we use
`rte_event_eth_tx_adapter_enqueue` to transmit packets; otherwise we
use a SINGLE_LINK event queue to enqueue packets to a service core
which is responsible for transmitting packets.
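
In worker terms the split boils down to the sketch below (illustrative only:
the helper and ids are not part of this patch, and the Tx adapter enqueue is
shown with the 5-argument prototype that takes a trailing flags argument):

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Hand off a completed event depending on Tx adapter capability. */
static inline void
worker_tx_event(uint8_t event_d_id, uint8_t event_p_id, struct rte_event *ev,
		bool tx_has_internal_port, uint8_t tx_single_link_queue)
{
	if (tx_has_internal_port) {
		/* internal port: enqueue straight to the Tx adapter */
		rte_event_eth_tx_adapter_enqueue(event_d_id, event_p_id,
						 ev, 1, 0);
	} else {
		/* otherwise forward to the SINGLE_LINK queue drained by the
		 * Tx adapter running as a service */
		ev->queue_id = tx_single_link_queue;
		ev->op = RTE_EVENT_OP_FORWARD;
		rte_event_enqueue_burst(event_d_id, event_p_id, ev, 1);
	}
}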

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/Makefile                    |  1 +
 examples/l3fwd/l3fwd_event.c               | 31 ++++++++++++++++++++++
 examples/l3fwd/l3fwd_event.h               | 20 ++++++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 14 ++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 14 ++++++++++
 examples/l3fwd/meson.build                 |  3 ++-
 6 files changed, 82 insertions(+), 1 deletion(-)
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index c892b867b..59a110d12 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -6,6 +6,7 @@ APP = l3fwd
 
 # all source are stored in SRCS-y
 SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
+SRCS-y += l3fwd_event_generic.c l3fwd_event_internal_port.c
 
 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 1040da4ea..62218f3ca 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -30,6 +30,31 @@ l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
+static void
+l3fwd_event_capability_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t caps = 0;
+	uint16_t i;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(i) {
+		ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+				 "Invalid capability for Tx adptr port %d\n",
+				 i);
+
+		evt_rsrc->tx_mode_q |= !(caps &
+				   RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+	}
+
+	if (evt_rsrc->tx_mode_q)
+		l3fwd_event_set_generic_ops(&evt_rsrc->ops);
+	else
+		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
+}
+
 void
 l3fwd_event_resource_setup(void)
 {
@@ -37,4 +62,10 @@ l3fwd_event_resource_setup(void)
 
 	if (!evt_rsrc->enabled)
 		return;
+
+	if (!rte_event_dev_count())
+		rte_exit(EXIT_FAILURE, "No Eventdev found");
+
+	/* Setup eventdev capability callbacks */
+	l3fwd_event_capability_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 4c23c4e1a..d25c8d222 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,17 +7,37 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_tx_adapter.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
 
+typedef uint32_t (*event_device_setup_cb)(void);
+typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
+typedef void (*event_port_setup_cb)(void);
+typedef void (*adapter_setup_cb)(void);
+typedef int (*event_loop_cb)(void *);
+
+struct l3fwd_event_setup_ops {
+	event_device_setup_cb event_device_setup;
+	event_queue_setup_cb event_queue_setup;
+	event_port_setup_cb event_port_setup;
+	adapter_setup_cb adapter_setup;
+	event_loop_cb lpm_event_loop;
+	event_loop_cb em_event_loop;
+};
+
 struct l3fwd_event_resources {
+	struct l3fwd_event_setup_ops ops;
 	uint8_t sched_type;
+	uint8_t tx_mode_q;
 	uint8_t enabled;
 	uint8_t eth_rx_queues;
 };
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
 void l3fwd_event_resource_setup(void);
+void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
+void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
 #endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
new file mode 100644
index 000000000..7fff850e5
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
new file mode 100644
index 000000000..085e9c825
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 864327c7b..ebed3b518 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -8,5 +8,6 @@
 
 deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c',
+	'l3fwd_event_internal_port.c', 'l3fwd_event_generic.c', 'main.c'
 )
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 03/11] examples/l3fwd: add event device configuration
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 01/11] examples/l3fwd: add framework for event device pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-21  8:51     ` Jerin Jacob
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
                     ` (9 subsequent siblings)
  12 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event device configuration based on the capabilities of the
probed event device.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  3 +
 examples/l3fwd/l3fwd_event.h               | 36 ++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 75 ++++++++++++++++++++-
 examples/l3fwd/l3fwd_event_internal_port.c | 78 +++++++++++++++++++++-
 4 files changed, 190 insertions(+), 2 deletions(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 62218f3ca..9cb46c711 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -68,4 +68,7 @@ l3fwd_event_resource_setup(void)
 
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
+
+	/* Event device configuration */
+	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index d25c8d222..fc3862ec1 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -8,6 +8,7 @@
 #include <rte_common.h>
 #include <rte_eventdev.h>
 #include <rte_event_eth_tx_adapter.h>
+#include <rte_service.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
@@ -18,6 +19,29 @@ typedef void (*event_port_setup_cb)(void);
 typedef void (*adapter_setup_cb)(void);
 typedef int (*event_loop_cb)(void *);
 
+struct l3fwd_event_queues {
+	uint8_t *event_q_id;
+	uint8_t	nb_queues;
+};
+
+struct l3fwd_event_ports {
+	uint8_t *event_p_id;
+	uint8_t	nb_ports;
+	rte_spinlock_t lock;
+};
+
+struct l3fwd_event_rx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_rx_adptr;
+	uint8_t *rx_adptr;
+};
+
+struct l3fwd_event_tx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_tx_adptr;
+	uint8_t *tx_adptr;
+};
+
 struct l3fwd_event_setup_ops {
 	event_device_setup_cb event_device_setup;
 	event_queue_setup_cb event_queue_setup;
@@ -28,9 +52,21 @@ struct l3fwd_event_setup_ops {
 };
 
 struct l3fwd_event_resources {
+	struct rte_event_port_conf def_p_conf;
+	struct l3fwd_event_rx_adptr rx_adptr;
+	struct l3fwd_event_tx_adptr tx_adptr;
+	uint8_t disable_implicit_release;
 	struct l3fwd_event_setup_ops ops;
+	struct rte_mempool * (*pkt_pool)[NB_SOCKETS];
+	struct l3fwd_event_queues evq;
+	struct l3fwd_event_ports evp;
+	uint32_t port_mask;
+	uint8_t per_port_pool;
+	uint8_t event_d_id;
 	uint8_t sched_type;
 	uint8_t tx_mode_q;
+	uint8_t deq_depth;
+	uint8_t has_burst;
 	uint8_t enabled;
 	uint8_t eth_rx_queues;
 };
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 7fff850e5..d6f23a196 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -7,8 +7,81 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+	evt_rsrc->disable_implicit_release = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	/* One queue for each ethdev port + one Tx adapter Single link queue. */
+	event_d_conf.nb_event_queues = ethdev_count + 1;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count() - rte_service_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 085e9c825..05e2aba53 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -7,8 +7,84 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	uint8_t disable_implicit_release;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+
+	disable_implicit_release = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
+	evt_rsrc->disable_implicit_release =
+						disable_implicit_release;
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	event_d_conf.nb_event_queues = ethdev_count;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (2 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 03/11] examples/l3fwd: add event device configuration pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
                     ` (8 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, Nipun Gupta

From: Sunil Kumar Kori <skori@marvell.com>

Add Ethernet port Rx/Tx queue setup for the event device; these queues are
later used for setting up the event eth Rx/Tx adapters.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 examples/l3fwd/l3fwd.h       |  10 +++
 examples/l3fwd/l3fwd_event.c | 138 ++++++++++++++++++++++++++++++++++-
 examples/l3fwd/l3fwd_event.h |   2 +-
 examples/l3fwd/main.c        |  15 ++--
 4 files changed, 153 insertions(+), 12 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index e2399d76b..ed5ba5f3b 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -16,9 +16,16 @@
 #define NO_HASH_MULTI_LOOKUP 1
 #endif
 
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
 #define MAX_PKT_BURST     32
 #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
 
+#define MEMPOOL_CACHE_SIZE 256
 #define MAX_RX_QUEUE_PER_LCORE 16
 
 /*
@@ -170,6 +177,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)
 }
 #endif /* DO_RFC_1812_CHECKS */
 
+int
+init_mem(uint16_t portid, unsigned int nb_mbuf);
+
 /* Function pointers for LPM or EM functionality. */
 void
 setup_lpm(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 9cb46c711..f9491ecc6 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -10,6 +10,14 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+	char buf[RTE_ETHER_ADDR_FMT_SIZE];
+	rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+	printf("%s%s", name, buf);
+}
+
 struct l3fwd_event_resources *
 l3fwd_get_eventdev_rsrc(void)
 {
@@ -30,6 +38,131 @@ l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
+static void
+l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint16_t nb_ports = rte_eth_dev_count_avail();
+	uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+	uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+	unsigned int nb_lcores = rte_lcore_count();
+	struct rte_eth_conf local_port_conf;
+	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf txconf;
+	struct rte_eth_rxconf rxconf;
+	unsigned int nb_mbuf;
+	uint16_t port_id;
+	uint8_t eth_qid;
+	int32_t ret;
+
+	/* initialize all ports */
+	RTE_ETH_FOREACH_DEV(port_id) {
+		local_port_conf = *port_conf;
+		/* skip ports that are not enabled */
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0) {
+			printf("\nSkipping disabled port %d\n", port_id);
+			continue;
+		}
+
+		/* init port */
+		printf("Initializing port %d ... ", port_id);
+		fflush(stdout);
+		printf("Creating queues: nb_rxq=%d nb_txq=1...\n",
+		       evt_rsrc->eth_rx_queues);
+
+		rte_eth_dev_info_get(port_id, &dev_info);
+		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+			local_port_conf.txmode.offloads |=
+						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+						dev_info.flow_type_rss_offloads;
+		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+				port_conf->rx_adv_conf.rss_conf.rss_hf) {
+			printf("Port %u modified RSS hash function "
+			       "based on hardware support,"
+			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+			       port_id,
+			       port_conf->rx_adv_conf.rss_conf.rss_hf,
+			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+		}
+
+		ret = rte_eth_dev_configure(port_id, evt_rsrc->eth_rx_queues,
+					    1, &local_port_conf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot configure device: err=%d, port=%d\n",
+				 ret, port_id);
+
+		ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+						       &nb_txd);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot adjust number of descriptors: err=%d, "
+				 "port=%d\n", ret, port_id);
+
+		rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+		print_ethaddr(" Address:", &ports_eth_addr[port_id]);
+		printf(", ");
+		print_ethaddr("Destination:",
+			(const struct rte_ether_addr *)&dest_eth_addr[port_id]);
+		printf(", ");
+
+		/* prepare source MAC for each port. */
+		rte_ether_addr_copy(&ports_eth_addr[port_id],
+			(struct rte_ether_addr *)(val_eth + port_id) + 1);
+
+		/* init memory */
+		if (!evt_rsrc->per_port_pool) {
+			/* port_id = 0; this is *not* signifying the first port,
+			 * rather, it signifies that port_id is ignored.
+			 */
+			nb_mbuf = RTE_MAX(nb_ports * nb_rxd +
+					  nb_ports * nb_txd +
+					  nb_ports * nb_lcores *
+							MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(0, nb_mbuf);
+		} else {
+			nb_mbuf = RTE_MAX(nb_rxd + nb_rxd +
+					  nb_lcores * MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(port_id, nb_mbuf);
+		}
+		/* init Rx queues per port */
+		rxconf = dev_info.default_rxconf;
+		rxconf.offloads = local_port_conf.rxmode.offloads;
+
+		for (eth_qid = 0; eth_qid < evt_rsrc->eth_rx_queues;
+		     eth_qid++) {
+			if (!evt_rsrc->per_port_pool)
+				ret = rte_eth_rx_queue_setup(port_id, eth_qid,
+					nb_rxd, 0, &rxconf,
+					evt_rsrc->pkt_pool[0][0]);
+			else
+				ret = rte_eth_rx_queue_setup(port_id, eth_qid,
+					nb_rxd, 0, &rxconf,
+					evt_rsrc->pkt_pool[port_id][0]);
+			if (ret < 0)
+				rte_exit(EXIT_FAILURE,
+					 "rte_eth_rx_queue_setup: err=%d, "
+					 "port=%d, eth_qid: %d\n",
+					 ret, port_id, eth_qid);
+		}
+
+		/* init one Tx queue per port */
+		txconf = dev_info.default_txconf;
+		txconf.offloads = local_port_conf.txmode.offloads;
+		ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "rte_eth_tx_queue_setup: err=%d, "
+				 "port=%d\n", ret, port_id);
+	}
+}
+
 static void
 l3fwd_event_capability_setup(void)
 {
@@ -56,7 +189,7 @@ l3fwd_event_capability_setup(void)
 }
 
 void
-l3fwd_event_resource_setup(void)
+l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 
@@ -69,6 +202,9 @@ l3fwd_event_resource_setup(void)
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
 
+	/* Ethernet device configuration */
+	l3fwd_eth_dev_port_setup(port_conf);
+
 	/* Event device configuration */
 	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index fc3862ec1..53feea069 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -72,7 +72,7 @@ struct l3fwd_event_resources {
 };
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
-void l3fwd_event_resource_setup(void);
+void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 582f2f135..0618daa38 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -48,12 +48,6 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
-/*
- * Configurable number of RX/TX ring descriptors
- */
-#define RTE_TEST_RX_DESC_DEFAULT 1024
-#define RTE_TEST_TX_DESC_DEFAULT 1024
-
 #define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
 #define MAX_RX_QUEUE_PER_PORT 128
 
@@ -495,7 +489,6 @@ parse_event_eth_rx_queues(const char *eth_rx_queues)
 }
 
 #define MAX_JUMBO_PKT_LEN  9600
-#define MEMPOOL_CACHE_SIZE 256
 
 static const char short_options[] =
 	"p:"  /* portmask */
@@ -730,7 +723,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
 	printf("%s%s", name, buf);
 }
 
-static int
+int
 init_mem(uint16_t portid, unsigned int nb_mbuf)
 {
 	struct lcore_conf *qconf;
@@ -917,14 +910,16 @@ main(int argc, char **argv)
 	}
 
 	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
 	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup();
+	l3fwd_event_resource_setup(&port_conf);
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 05/11] examples/l3fwd: add event port and queue setup
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (3 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
                     ` (7 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add event device queue and port setup based on event eth Tx adapter
capabilities.
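
The worker port linking differs between the two modes; a rough sketch of
that difference (helper and ids are illustrative, not code from this patch):

#include <stdbool.h>
#include <rte_debug.h>
#include <rte_eventdev.h>

/* Link a worker port: with an internal Tx port every queue is usable, so
 * NULL ("all queues") is fine; without it, the last queue is the
 * SINGLE_LINK Tx queue and only the Tx adapter port may link to it. */
static void
link_worker_port(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
		 uint8_t nb_queues, bool tx_has_internal_port)
{
	int ret;

	if (tx_has_internal_port)
		ret = rte_event_port_link(dev_id, port_id, NULL, NULL, 0);
	else
		ret = rte_event_port_link(dev_id, port_id, queues, NULL,
					  nb_queues - 1);
	if (ret < 0)
		rte_panic("Failed to link event port %d\n", port_id);
}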

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  28 +++++-
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 103 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  98 ++++++++++++++++++++
 4 files changed, 229 insertions(+), 1 deletion(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index f9491ecc6..b58f9b79a 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -188,10 +188,30 @@ l3fwd_event_capability_setup(void)
 		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
 }
 
+int
+l3fwd_get_free_event_port(struct l3fwd_event_resources *evt_rsrc)
+{
+	static int index;
+	int port_id;
+
+	rte_spinlock_lock(&evt_rsrc->evp.lock);
+	if (index >= evt_rsrc->evp.nb_ports) {
+		printf("No free event port is available\n");
+		return -1;
+	}
+
+	port_id = evt_rsrc->evp.event_p_id[index];
+	index++;
+	rte_spinlock_unlock(&evt_rsrc->evp.lock);
+
+	return port_id;
+}
+
 void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t event_queue_cfg;
 
 	if (!evt_rsrc->enabled)
 		return;
@@ -206,5 +226,11 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	l3fwd_eth_dev_port_setup(port_conf);
 
 	/* Event device configuration */
-	evt_rsrc->ops.event_device_setup();
+	event_queue_cfg = evt_rsrc->ops.event_device_setup();
+
+	/* Event queue configuration */
+	evt_rsrc->ops.event_queue_setup(event_queue_cfg);
+
+	/* Event port configuration */
+	evt_rsrc->ops.event_port_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 53feea069..4bceca920 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -73,6 +73,7 @@ struct l3fwd_event_resources {
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
 void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
+int l3fwd_get_free_event_port(struct l3fwd_event_resources *eventdev_rsrc);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index d6f23a196..3c88fe7c1 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -80,8 +80,111 @@ l3fwd_event_device_setup_generic(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("No space is available\n");
+
+	memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+			def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id,
+					  evt_rsrc->evq.event_q_id,
+					  NULL,
+					  evt_rsrc->evq.nb_queues - 1);
+		if (ret != (evt_rsrc->evq.nb_queues - 1))
+			rte_panic("Error in linking event port %d to queues\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+	}
+	/* init spinlock */
+	rte_spinlock_init(&evt_rsrc->evp.lock);
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id;
+	int32_t ret;
+
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	for (event_q_id = 0; event_q_id < (evt_rsrc->evq.nb_queues - 1);
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+
+	event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+	ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event queue for Tx adapter\n");
+	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
+	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
+	ops->event_port_setup = l3fwd_event_port_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 05e2aba53..4fc193a3b 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -82,9 +82,107 @@ l3fwd_event_device_setup_internal_port(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("Failed to allocate memory for Event Ports\n");
+
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+						def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+					  NULL, 0);
+		if (ret < 0)
+			rte_panic("Error in linking event port %d to queue\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+		/* init spinlock */
+		rte_spinlock_init(&evt_rsrc->evp.lock);
+	}
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id = 0;
+	int32_t ret;
+
+	rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	if (def_q_conf.nb_atomic_order_sequences <
+					event_q_conf.nb_atomic_order_sequences)
+		event_q_conf.nb_atomic_order_sequences =
+					def_q_conf.nb_atomic_order_sequences;
+
+	event_q_conf.event_queue_cfg = event_queue_cfg;
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	for (event_q_id = 0; event_q_id < evt_rsrc->evq.nb_queues;
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+}
 
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
+	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
+	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (4 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
                     ` (6 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |   3 +
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 112 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  93 +++++++++++++++++
 4 files changed, 209 insertions(+)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index b58f9b79a..0a75e39ee 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -233,4 +233,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Event port configuration */
 	evt_rsrc->ops.event_port_setup();
+
+	/* Rx/Tx adapters configuration */
+	evt_rsrc->ops.adapter_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 4bceca920..b1ae970d4 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,6 +7,7 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
 #include <rte_event_eth_tx_adapter.h>
 #include <rte_service.h>
 #include <rte_spinlock.h>
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 3c88fe7c1..ad371e44c 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -181,10 +181,122 @@ l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
 	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint8_t rx_adptr_id = 0;
+	uint8_t tx_adptr_id = 0;
+	uint8_t tx_port_id = 0;
+	uint16_t port_id;
+	uint32_t service_id;
+	int32_t ret, i = 0;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	/* Rx adapter setup */
+	evt_rsrc->rx_adptr.nb_rx_adptr = 1;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create rx adapter\n");
+
+	/* Configure user requested sched type */
+	eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[i];
+		ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+		if (i < evt_rsrc->evq.nb_queues)
+			i++;
+	}
+
+	ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Error getting the service ID for rx adptr\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->rx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+	if (ret)
+		rte_panic("Rx adapter[%d] start Failed\n", rx_adptr_id);
+
+	evt_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+	/* Tx adapter setup */
+	evt_rsrc->tx_adptr.nb_tx_adptr = 1;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create tx adapter\n");
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Failed to get Tx adapter service ID\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->tx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+	if (ret)
+		rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+	ret = rte_event_port_link(event_d_id, tx_port_id,
+				  &evt_rsrc->evq.event_q_id[
+					evt_rsrc->evq.nb_queues - 1],
+				  NULL, 1);
+	if (ret != 1)
+		rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+			 ret);
+
+	ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+	if (ret)
+		rte_panic("Tx adapter[%d] start Failed\n", tx_adptr_id);
+
+	evt_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
 	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
 	ops->event_port_setup = l3fwd_event_port_setup_generic;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 4fc193a3b..8269fdf95 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -179,10 +179,103 @@ l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
 	}
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint16_t adapter_id = 0;
+	uint16_t nb_adapter = 0;
+	uint16_t port_id;
+	uint8_t q_id = 0;
+	int ret;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		nb_adapter++;
+	}
+
+	evt_rsrc->rx_adptr.nb_rx_adptr = nb_adapter;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create rx adapter[%d]\n",
+				  adapter_id);
+
+		/* Configure user requested sched type*/
+		eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+		ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+
+		ret = rte_event_eth_rx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+		if (q_id < evt_rsrc->evq.nb_queues)
+			q_id++;
+	}
+
+	evt_rsrc->tx_adptr.nb_tx_adptr = nb_adapter;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	adapter_id = 0;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create tx adapter[%d]\n",
+				  adapter_id);
+
+		ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+
+		ret = rte_event_eth_tx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+	}
+}
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
 	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 07/11] examples/l3fwd: add service core setup based on caps
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (5 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
                     ` (5 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add service core setup when eventdev and Rx/Tx adapter don't have
internal port capability.
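
For context, whether a service core is needed at all boils down to a
capability check per adapter; below is a minimal sketch of that check
(hypothetical helper, not part of this patch, using the public Rx adapter
API):

#include <rte_event_eth_rx_adapter.h>

/* Return 1 when the Rx adapter for eth_port_id has no internal port and
 * therefore runs as a software service that must be mapped to a service
 * lcore, 0 when the HW injects events on its own, negative on error.
 */
static int
rx_adapter_needs_service_core(uint8_t event_d_id, uint16_t eth_port_id)
{
	uint32_t caps = 0;

	if (rte_event_eth_rx_adapter_caps_get(event_d_id, eth_port_id, &caps))
		return -1;

	return !(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT);
}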

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c |   6 ++
 examples/l3fwd/main.c        | 188 +++++++++++++++++++++++++++--------
 2 files changed, 150 insertions(+), 44 deletions(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 0a75e39ee..1ed42c3ab 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -212,6 +212,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 	uint32_t event_queue_cfg;
+	int ret;
 
 	if (!evt_rsrc->enabled)
 		return;
@@ -236,4 +237,9 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Rx/Tx adapters configuration */
 	evt_rsrc->ops.adapter_setup();
+
+	/* Start event device */
+	ret = rte_event_dev_start(evt_rsrc->event_d_id);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 0618daa38..b39dc6ff6 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -877,49 +877,18 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 	return 0;
 }
 
-int
-main(int argc, char **argv)
+static void
+l3fwd_poll_resource_setup(void)
 {
-	struct l3fwd_event_resources *evt_rsrc;
-	struct lcore_conf *qconf;
+	uint8_t nb_rx_queue, queue, socketid;
 	struct rte_eth_dev_info dev_info;
+	uint32_t n_tx_queue, nb_lcores;
 	struct rte_eth_txconf *txconf;
-	int ret;
-	unsigned nb_ports;
+	struct lcore_conf *qconf;
 	uint16_t queueid, portid;
-	unsigned lcore_id;
-	uint32_t n_tx_queue, nb_lcores;
-	uint8_t nb_rx_queue, queue, socketid;
-
-	/* init EAL */
-	ret = rte_eal_init(argc, argv);
-	if (ret < 0)
-		rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
-	argc -= ret;
-	argv += ret;
-
-	force_quit = false;
-	signal(SIGINT, signal_handler);
-	signal(SIGTERM, signal_handler);
-
-	/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */
-	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
-		dest_eth_addr[portid] =
-			RTE_ETHER_LOCAL_ADMIN_ADDR + ((uint64_t)portid << 40);
-		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
-	}
-
-	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	/* parse application arguments (after the EAL ones) */
-	ret = parse_args(argc, argv);
-	if (ret < 0)
-		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
-
-	evt_rsrc->per_port_pool = per_port_pool;
-	evt_rsrc->pkt_pool = pktmbuf_pool;
-	evt_rsrc->port_mask = enabled_port_mask;
-	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup(&port_conf);
+	unsigned int nb_ports;
+	unsigned int lcore_id;
+	int ret;
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
@@ -935,9 +904,6 @@ main(int argc, char **argv)
 
 	nb_lcores = rte_lcore_count();
 
-	/* Setup function pointers for lookup method. */
-	setup_l3fwd_lookup_tables();
-
 	/* initialize all ports */
 	RTE_ETH_FOREACH_DEV(portid) {
 		struct rte_eth_conf local_port_conf = port_conf;
@@ -1105,7 +1071,142 @@ main(int argc, char **argv)
 		}
 	}
 
-	printf("\n");
+
+}
+
+static inline int
+l3fwd_service_enable(uint32_t service_id)
+{
+	uint8_t min_service_count = UINT8_MAX;
+	uint32_t slcore_array[RTE_MAX_LCORE];
+	unsigned int slcore = 0;
+	uint8_t service_count;
+	int32_t slcore_count;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+	if (slcore_count < 0)
+		return -ENOENT;
+	/* Get the core which has least number of services running. */
+	while (slcore_count--) {
+		/* Reset default mapping */
+		rte_service_map_lcore_set(service_id,
+				slcore_array[slcore_count], 0);
+		service_count = rte_service_lcore_count_services(
+				slcore_array[slcore_count]);
+		if (service_count < min_service_count) {
+			slcore = slcore_array[slcore_count];
+			min_service_count = service_count;
+		}
+	}
+	if (rte_service_map_lcore_set(service_id, slcore, 1))
+		return -ENOENT;
+	rte_service_lcore_start(slcore);
+
+	return 0;
+}
+
+static void
+l3fwd_event_service_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_info evdev_info;
+	uint32_t service_id, caps;
+	int ret, i;
+
+	rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+	if (!(evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
+		ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting eventdev service\n");
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+		ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Rx adapter[%d] caps\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		ret = rte_event_eth_rx_adapter_service_id_get(
+				evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Rx adapter[%d] service\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+		ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Tx adapter[%d] caps\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		ret = rte_event_eth_tx_adapter_service_id_get(
+				evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Tx adapter[%d] service\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+}
+
+int
+main(int argc, char **argv)
+{
+	struct l3fwd_event_resources *evt_rsrc;
+	struct lcore_conf *qconf;
+	uint16_t queueid, portid;
+	unsigned int lcore_id;
+	uint8_t queue;
+	int ret;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
+	argc -= ret;
+	argv += ret;
+
+	force_quit = false;
+	signal(SIGINT, signal_handler);
+	signal(SIGTERM, signal_handler);
+
+	/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		dest_eth_addr[portid] =
+			RTE_ETHER_LOCAL_ADMIN_ADDR + ((uint64_t)portid << 40);
+		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
+	}
+
+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	/* parse application arguments (after the EAL ones) */
+	ret = parse_args(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
+
+	/* Setup function pointers for lookup method. */
+	setup_l3fwd_lookup_tables();
+
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
+	/* Configure eventdev parameters if user has requested */
+	if (evt_rsrc->enabled) {
+		l3fwd_event_resource_setup(&port_conf);
+		l3fwd_event_service_setup();
+	} else
+		l3fwd_poll_resource_setup();
 
 	/* start ports */
 	RTE_ETH_FOREACH_DEV(portid) {
@@ -1148,7 +1249,6 @@ main(int argc, char **argv)
 		}
 	}
 
-
 	check_all_ports_link_status(enabled_port_mask);
 
 	ret = 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 08/11] examples/l3fwd: add event lpm main loop
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (6 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 09/11] examples/l3fwd: add event em " pbhagavatula
                     ` (4 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add lpm main loop for handling events based on capabilities of the
event device.
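
The dequeue/lookup/transmit cycle each worker runs is summarised below; this
is a stripped-down sketch of the single-event, internal-port (TX_DIRECT)
flavour added in this patch, with port-mask handling and enqueue retries
elided:

	struct rte_event ev;

	while (!force_quit) {
		/* Pull one event from the scheduler. */
		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
			continue;

		/* LPM lookup rewrites ev.mbuf->port to the destination port. */
		if (lpm_process_event_pkt(lconf, ev.mbuf) == BAD_PORT) {
			rte_pktmbuf_free(ev.mbuf);
			continue;
		}

		/* Tx adapter has an internal port: hand the event straight
		 * to it from the worker, using Tx queue 0 of the port.
		 */
		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
		rte_event_eth_tx_adapter_enqueue(event_d_id, event_p_id,
						 &ev, 1, 0);
	}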

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h       |   9 ++
 examples/l3fwd/l3fwd_event.c |   9 ++
 examples/l3fwd/l3fwd_event.h |   5 +
 examples/l3fwd/l3fwd_lpm.c   | 203 +++++++++++++++++++++++++++++++++++
 examples/l3fwd/main.c        |   1 +
 5 files changed, 227 insertions(+)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index ed5ba5f3b..b3cdcd496 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -207,6 +207,15 @@ em_main_loop(__attribute__((unused)) void *dummy);
 int
 lpm_main_loop(__attribute__((unused)) void *dummy);
 
+int
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 1ed42c3ab..ebddd8f97 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -211,6 +211,12 @@ void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	const event_loop_cb lpm_event_loop[2][2] = {
+		[0][0] = lpm_event_main_loop_tx_d,
+		[0][1] = lpm_event_main_loop_tx_d_burst,
+		[1][0] = lpm_event_main_loop_tx_q,
+		[1][1] = lpm_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -242,4 +248,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	ret = rte_event_dev_start(evt_rsrc->event_d_id);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index b1ae970d4..0e4616417 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -14,6 +14,11 @@
 
 #include "l3fwd.h"
 
+#define L3FWD_EVENT_SINGLE     0x1
+#define L3FWD_EVENT_BURST      0x2
+#define L3FWD_EVENT_TX_DIRECT  0x4
+#define L3FWD_EVENT_TX_ENQ     0x8
+
 typedef uint32_t (*event_device_setup_cb)(void);
 typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
 typedef void (*event_port_setup_cb)(void);
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index 349de2703..058a60b1d 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -28,6 +28,7 @@
 #include <rte_lpm6.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 struct ipv4_l3fwd_lpm_route {
 	uint32_t ip;
@@ -254,6 +255,208 @@ lpm_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline uint16_t
+lpm_process_event_pkt(const struct lcore_conf *lconf, struct rte_mbuf *mbuf)
+{
+	mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
+	|| defined RTE_ARCH_PPC_64
+	process_packet(mbuf, &mbuf->port);
+#else
+
+	struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
+			struct rte_ether_hdr *);
+#ifdef DO_RFC_1812_CHECKS
+	struct rte_ipv4_hdr *ipv4_hdr;
+	if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
+		/* Handle IPv4 headers.*/
+		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
+				struct rte_ipv4_hdr *,
+				sizeof(struct rte_ether_hdr));
+
+		if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
+				< 0) {
+			mbuf->port = BAD_PORT;
+			return BAD_PORT;
+		}
+		/* Update time to live and header checksum */
+		--(ipv4_hdr->time_to_live);
+		++(ipv4_hdr->hdr_checksum);
+	}
+#endif
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
+
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
+			&eth_hdr->s_addr);
+#endif
+	return mbuf->port;
+}
+
+static __rte_always_inline void
+lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		if (lpm_process_event_pkt(lconf, ev.mbuf) == BAD_PORT) {
+			rte_pktmbuf_free(ev.mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_deq; i++) {
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf,
+								 0);
+
+			lpm_process_event_pkt(lconf, events[i].mbuf);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		lpm_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		lpm_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 void
 setup_lpm(const int socketid)
 {
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index b39dc6ff6..5ef6576bc 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1204,6 +1204,7 @@ main(int argc, char **argv)
 	/* Configure eventdev parameters if user has requested */
 	if (evt_rsrc->enabled) {
 		l3fwd_event_resource_setup(&port_conf);
+		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		l3fwd_event_service_setup();
 	} else
 		l3fwd_poll_resource_setup();
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 09/11] examples/l3fwd: add event em main loop
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (7 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
                     ` (3 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add em main loop for handling events based on capabilities of the
event device.
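
The per-packet step reuses the existing exact-match helpers and only changes
where the result goes: the destination is stored back into mbuf->port so the
event loop can decide how to transmit or drop it. Roughly (sketch, condensed
from l3fwd_em_simple_process() added below):

	struct rte_ether_hdr *eth_hdr;
	uint32_t tcp_or_udp, l3_ptypes;

	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;

	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4))
		m->port = l3fwd_em_handle_ipv4(m, m->port, eth_hdr, qconf);
	else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6))
		m->port = l3fwd_em_handle_ipv6(m, m->port, eth_hdr, qconf);
	else
		m->port = BAD_PORT;	/* non-IP: freed by the event loop */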

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h               |  10 ++
 examples/l3fwd/l3fwd_em.c            | 177 +++++++++++++++++++++++++++
 examples/l3fwd/l3fwd_em.h            | 159 +++++++++++++++++-------
 examples/l3fwd/l3fwd_em_hlm.h        | 131 ++++++++++++++++++++
 examples/l3fwd/l3fwd_em_sequential.h |  26 ++++
 examples/l3fwd/l3fwd_event.c         |   9 ++
 examples/l3fwd/main.c                |   5 +-
 7 files changed, 470 insertions(+), 47 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index b3cdcd496..c786f9062 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -216,6 +216,16 @@ lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
 int
 lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
 
+int
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c
index 74a7c8fa4..1a8bc9168 100644
--- a/examples/l3fwd/l3fwd_em.c
+++ b/examples/l3fwd/l3fwd_em.c
@@ -26,6 +26,7 @@
 #include <rte_hash.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 #if defined(RTE_ARCH_X86) || defined(RTE_MACHINE_CPUFLAG_CRC32)
 #define EM_HASH_CRC 1
@@ -699,6 +700,182 @@ em_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline void
+em_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		struct rte_mbuf *mbuf = ev.mbuf;
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		mbuf->port = em_get_dst_port(lconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+#else
+		l3fwd_em_simple_process(mbuf, lconf);
+#endif
+		if (mbuf->port == BAD_PORT) {
+			rte_pktmbuf_free(mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		l3fwd_em_process_events(nb_deq, (struct rte_event **)&events,
+					lconf);
+#else
+		l3fwd_em_no_opt_process_events(nb_deq,
+					       (struct rte_event **)&events,
+					       lconf);
+#endif
+		for (i = 0; i < nb_deq; i++) {
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf,
+								 0);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		em_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		em_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 /*
  * Initialize exact match (hash) parameters.
  */
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index 090c1b448..b992a21da 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -5,73 +5,92 @@
 #ifndef __L3FWD_EM_H__
 #define __L3FWD_EM_H__
 
-static __rte_always_inline void
-l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
-		struct lcore_conf *qconf)
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid,
+		     struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
 {
-	struct rte_ether_hdr *eth_hdr;
 	struct rte_ipv4_hdr *ipv4_hdr;
 	uint16_t dst_port;
-	uint32_t tcp_or_udp;
-	uint32_t l3_ptypes;
-
-	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
-	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
 
-	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
-		/* Handle IPv4 headers.*/
-		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-						sizeof(struct rte_ether_hdr));
+	/* Handle IPv4 headers.*/
+	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
+			sizeof(struct rte_ether_hdr));
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Check to make sure the packet is valid (RFC1812) */
-		if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
-			rte_pktmbuf_free(m);
-			return;
-		}
+	/* Check to make sure the packet is valid (RFC1812) */
+	if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
+		rte_pktmbuf_free(m);
+		return BAD_PORT;
+	}
 #endif
-		dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
-						qconf->ipv4_lookup_struct);
+	dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
+			qconf->ipv4_lookup_struct);
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Update time to live and header checksum */
-		--(ipv4_hdr->time_to_live);
-		++(ipv4_hdr->hdr_checksum);
+	/* Update time to live and header checksum */
+	--(ipv4_hdr->time_to_live);
+	++(ipv4_hdr->hdr_checksum);
 #endif
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		send_single_packet(qconf, m, dst_port);
-	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
-		/* Handle IPv6 headers.*/
-		struct rte_ipv6_hdr *ipv6_hdr;
+	return dst_port;
+}
 
-		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
-						sizeof(struct rte_ether_hdr));
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid,
+		struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
+{
+	/* Handle IPv6 headers.*/
+	struct rte_ipv6_hdr *ipv6_hdr;
+	uint16_t dst_port;
 
-		dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
-					qconf->ipv6_lookup_struct);
+	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
+			sizeof(struct rte_ether_hdr));
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
+			qconf->ipv6_lookup_struct);
+
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
+
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	return dst_port;
+}
 
+static __rte_always_inline void
+l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
+		struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint16_t dst_port;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
+		dst_port = l3fwd_em_handle_ipv4(m, portid, eth_hdr, qconf);
+		send_single_packet(qconf, m, dst_port);
+	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
+		dst_port = l3fwd_em_handle_ipv6(m, portid, eth_hdr, qconf);
 		send_single_packet(qconf, m, dst_port);
 	} else {
 		/* Free the mbuf that contains non-IPV4/IPV6 packet */
@@ -79,6 +98,25 @@ l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
 	}
 }
 
+static __rte_always_inline void
+l3fwd_em_simple_process(struct rte_mbuf *m, struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4))
+		m->port = l3fwd_em_handle_ipv4(m, m->port, eth_hdr, qconf);
+	else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6))
+		m->port = l3fwd_em_handle_ipv6(m, m->port, eth_hdr, qconf);
+	else
+		m->port = BAD_PORT;
+}
+
 /*
  * Buffer non-optimized handling of packets, invoked
  * from main_loop.
@@ -108,4 +146,33 @@ l3fwd_em_no_opt_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 		l3fwd_em_simple_forward(pkts_burst[j], portid, qconf);
 }
 
+/*
+ * Buffer non-optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_no_opt_process_events(int nb_rx, struct rte_event **events,
+			       struct lcore_conf *qconf)
+{
+	int32_t j;
+
+	/* Prefetch first packets */
+	for (j = 0; j < PREFETCH_OFFSET && j < nb_rx; j++)
+		rte_prefetch0(rte_pktmbuf_mtod(events[j]->mbuf, void *));
+
+	/*
+	 * Prefetch and forward already prefetched
+	 * packets.
+	 */
+	for (j = 0; j < (nb_rx - PREFETCH_OFFSET); j++) {
+		rte_prefetch0(rte_pktmbuf_mtod(events[
+				j + PREFETCH_OFFSET]->mbuf, void *));
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+	}
+
+	/* Forward remaining prefetched packets */
+	for (; j < nb_rx; j++)
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+}
+
 #endif /* __L3FWD_EM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index ad8b9ce87..79812716c 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -75,6 +75,60 @@ em_get_dst_port_ipv6xN(struct lcore_conf *qconf, struct rte_mbuf *m[],
 	}
 }
 
+static __rte_always_inline void
+em_get_dst_port_ipv4xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv4_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv4_5tuple(m[i], mask0.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv4_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv4_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
+static __rte_always_inline void
+em_get_dst_port_ipv6xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv6_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv6_5tuple(m[i], mask1.x, mask2.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv6_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv6_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
 static __rte_always_inline uint16_t
 em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		uint16_t portid)
@@ -187,4 +241,81 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **ev,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j, pos;
+	uint16_t dst_port[MAX_PKT_BURST];
+	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+
+	/*
+	 * Send nb_rx - nb_rx % EM_HASH_LOOKUP_COUNT packets
+	 * in groups of EM_HASH_LOOKUP_COUNT.
+	 */
+	int32_t n = RTE_ALIGN_FLOOR(nb_rx, EM_HASH_LOOKUP_COUNT);
+
+	for (j = 0; j < EM_HASH_LOOKUP_COUNT && j < nb_rx; j++) {
+		pkts_burst[j] = ev[j]->mbuf;
+		rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j],
+					       struct rte_ether_hdr *) + 1);
+	}
+
+	for (j = 0; j < n; j += EM_HASH_LOOKUP_COUNT) {
+
+		uint32_t pkt_type = RTE_PTYPE_L3_MASK |
+				    RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP;
+		uint32_t l3_type, tcp_or_udp;
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkt_type &= pkts_burst[j + i]->packet_type;
+
+		l3_type = pkt_type & RTE_PTYPE_L3_MASK;
+		tcp_or_udp = pkt_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+
+		for (i = 0, pos = j + EM_HASH_LOOKUP_COUNT;
+		     i < EM_HASH_LOOKUP_COUNT && pos < nb_rx; i++, pos++) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					pkts_burst[pos],
+					struct rte_ether_hdr *) + 1);
+		}
+
+		if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV4)) {
+
+			em_get_dst_port_ipv4xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV6)) {
+
+			em_get_dst_port_ipv6xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else {
+			for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+				pkts_burst[j + i]->port = em_get_dst_port(qconf,
+						pkts_burst[j + i],
+						pkts_burst[j + i]->port);
+				process_packet(pkts_burst[j + i],
+						&pkts_burst[j + i]->port);
+			}
+			continue;
+		}
+		processx4_step3(&pkts_burst[j], &dst_port[j]);
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkts_burst[j + i]->port = dst_port[j + i];
+
+	}
+
+	for (; j < nb_rx; j++) {
+		pkts_burst[j]->port = em_get_dst_port(qconf, pkts_burst[j],
+						      pkts_burst[j]->port);
+		process_packet(pkts_burst[j], &pkts_burst[j]->port);
+	}
+}
 #endif /* __L3FWD_EM_HLM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_sequential.h b/examples/l3fwd/l3fwd_em_sequential.h
index 23fe9dec8..b231b9994 100644
--- a/examples/l3fwd/l3fwd_em_sequential.h
+++ b/examples/l3fwd/l3fwd_em_sequential.h
@@ -95,4 +95,30 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **events,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j;
+
+	rte_prefetch0(rte_pktmbuf_mtod(events[0]->mbuf,
+		      struct rte_ether_hdr *) + 1);
+
+	for (i = 1, j = 0; j < nb_rx; i++, j++) {
+		struct rte_mbuf *mbuf = events[j]->mbuf;
+
+		if (i < nb_rx) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					events[i]->mbuf,
+					struct rte_ether_hdr *) + 1);
+		}
+		mbuf->port = em_get_dst_port(qconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+	}
+}
 #endif /* __L3FWD_EM_SEQUENTIAL_H__ */
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index ebddd8f97..43c47eade 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -217,6 +217,12 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 		[1][0] = lpm_event_main_loop_tx_q,
 		[1][1] = lpm_event_main_loop_tx_q_burst,
 	};
+	const event_loop_cb em_event_loop[2][2] = {
+		[0][0] = em_event_main_loop_tx_d,
+		[0][1] = em_event_main_loop_tx_d_burst,
+		[1][0] = em_event_main_loop_tx_q,
+		[1][1] = em_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -251,4 +257,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
 						       [evt_rsrc->has_burst];
+
+	evt_rsrc->ops.em_event_loop = em_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 5ef6576bc..c811b0db9 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1204,7 +1204,10 @@ main(int argc, char **argv)
 	/* Configure eventdev parameters if user has requested */
 	if (evt_rsrc->enabled) {
 		l3fwd_event_resource_setup(&port_conf);
-		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
+		if (l3fwd_em_on)
+			l3fwd_lkp.main_loop = evt_rsrc->ops.em_event_loop;
+		else
+			l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		l3fwd_event_service_setup();
 	} else
 		l3fwd_poll_resource_setup();
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (8 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 09/11] examples/l3fwd: add event em " pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
                     ` (2 subsequent siblings)
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add graceful teardown that addresses both event mode and poll mode.
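
The ordering is what makes the teardown graceful: adapters are stopped first
so no new events are injected, the ethdevs are stopped, the worker lcores are
joined, and only then are the ports and the event device closed. A condensed
sketch of the event-mode path (port-mask filtering and error checks elided):

	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
		rte_event_eth_rx_adapter_stop(evt_rsrc->rx_adptr.rx_adptr[i]);
	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
		rte_event_eth_tx_adapter_stop(evt_rsrc->tx_adptr.tx_adptr[i]);

	RTE_ETH_FOREACH_DEV(portid)
		rte_eth_dev_stop(portid);

	rte_eal_mp_wait_lcore();

	RTE_ETH_FOREACH_DEV(portid)
		rte_eth_dev_close(portid);

	rte_event_dev_stop(evt_rsrc->event_d_id);
	rte_event_dev_close(evt_rsrc->event_d_id);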

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/main.c | 50 +++++++++++++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 16 deletions(-)

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index c811b0db9..ee7b3b4b3 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1070,8 +1070,6 @@ l3fwd_poll_resource_setup(void)
 				ret, portid);
 		}
 	}
-
-
 }
 
 static inline int
@@ -1169,7 +1167,7 @@ main(int argc, char **argv)
 	uint16_t queueid, portid;
 	unsigned int lcore_id;
 	uint8_t queue;
-	int ret;
+	int i, ret;
 
 	/* init EAL */
 	ret = rte_eal_init(argc, argv);
@@ -1258,21 +1256,41 @@ main(int argc, char **argv)
 	ret = 0;
 	/* launch per-lcore init on every lcore */
 	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
-		if (rte_eal_wait_lcore(lcore_id) < 0) {
-			ret = -1;
-			break;
+	if (evt_rsrc->enabled) {
+		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
+			rte_event_eth_rx_adapter_stop(
+					evt_rsrc->rx_adptr.rx_adptr[i]);
+		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
+			rte_event_eth_tx_adapter_stop(
+					evt_rsrc->tx_adptr.tx_adptr[i]);
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_stop(portid);
 		}
-	}
 
-	/* stop ports */
-	RTE_ETH_FOREACH_DEV(portid) {
-		if ((enabled_port_mask & (1 << portid)) == 0)
-			continue;
-		printf("Closing port %d...", portid);
-		rte_eth_dev_stop(portid);
-		rte_eth_dev_close(portid);
-		printf(" Done\n");
+		rte_eal_mp_wait_lcore();
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_close(portid);
+		}
+
+		rte_event_dev_stop(evt_rsrc->event_d_id);
+		rte_event_dev_close(evt_rsrc->event_d_id);
+
+	} else {
+		rte_eal_mp_wait_lcore();
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			printf("Closing port %d...", portid);
+			rte_eth_dev_stop(portid);
+			rte_eth_dev_close(portid);
+			printf(" Done\n");
+		}
 	}
 	printf("Bye...\n");
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v3 11/11] doc: update l3fwd user guide to support eventdev
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (9 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
@ 2020-01-11 13:47   ` pbhagavatula
  2020-01-13 12:02   ` [dpdk-dev] [PATCH v3 00/11] example/l3fwd: introduce event device support Nipun Gupta
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
  12 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-11 13:47 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh, John McNamara
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Update l3fwd user guide to include event device related information.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/sample_app_ug/l3_forward.rst | 76 +++++++++++++++++++++++--
 1 file changed, 70 insertions(+), 6 deletions(-)

diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 4cb4b18da..e86c4dfd5 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -4,16 +4,23 @@
 L3 Forwarding Sample Application
 ================================
 
-The L3 Forwarding application is a simple example of packet processing using the DPDK.
+The L3 Forwarding application is a simple example of packet processing using
+DPDK to demonstrate the usage of the poll and event mode packet I/O mechanisms.
 The application performs L3 forwarding.
 
 Overview
 --------
 
-The application demonstrates the use of the hash and LPM libraries in the DPDK to implement packet forwarding.
-The initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
-The main difference from the L2 Forwarding sample application is that the forwarding decision
-is made based on information read from the input packet.
+The application demonstrates the use of the hash and LPM libraries in the DPDK
+to implement packet forwarding using poll or event mode PMDs for packet I/O.
+The initialization and run-time paths are very similar to those of the
+:doc:`l2_forward_real_virtual` and :doc:`l2_forward_event_real_virtual`.
+The main difference from the L2 Forwarding sample application is that packets
+can optionally be received from and transmitted to an eventdev instead of a
+port directly, and the forwarding decision is based on the input packet headers.
+
+Eventdev can use either a S/W or a H/W (if supported by the platform)
+scheduler implementation for packet I/O, selected via run time parameters.
 
 The lookup method is either hash-based or LPM-based and is selected at run time. When the selected lookup method is hash-based,
 a hash object is used to emulate the flow classification stage.
@@ -56,6 +63,8 @@ The application has a number of command line options::
                              [--ipv6]
                              [--parse-ptype]
                              [--per-port-pool]
+                             [--mode]
+                             [--eventq-sched]
 
 Where,
 
@@ -86,6 +95,11 @@ Where,
 
 * ``--per-port-pool:`` Optional, set to use independent buffer pools per port. Without this option, single buffer pool is used for all ports.
 
+* ``--mode:`` Optional, packet transfer mode for I/O: poll or eventdev.
+
+* ``--eventq-sched:`` Optional, event queue synchronization method: ordered, atomic or parallel. Only valid if --mode=eventdev.
+
+
 For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7 and 16-23 appear on socket 0,
 while cores 8-15 and 24-31 appear on socket 1.
 
@@ -116,6 +130,51 @@ In this command:
 |          |           |           |                                     |
 +----------+-----------+-----------+-------------------------------------+
 
+To use eventdev mode with the sync method **ordered** on the above mentioned
+environment, following is a sample command:
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+
+or
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+
+In this command:
+
+*   The -w option whitelists the event device supported by the platform. The way to pass this device may vary based on the platform.
+
+*   The --mode option defines the PMD to be used for packet I/O.
+
+*   The --eventq-sched option selects the synchronization method of the event queue so that packets will be scheduled accordingly.
+
+If the application uses the S/W scheduler, it uses the following DPDK services:
+
+*   Software scheduler
+*   Rx adapter service function
+*   Tx adapter service function
+
+The application needs service cores to run the above mentioned services. Service
+cores must be provided as EAL parameters along with --vdev=event_sw0 to enable
+the S/W scheduler. Following is a sample command:
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -p 0x3 --mode=eventdev --eventq-sched=ordered
+
+In case of eventdev mode, the *--config* option is not used for ethernet port
+configuration. Instead, each ethernet port is configured with the following
+setup:
+
+*   Single Rx/Tx queue
+
+*   Each Rx queue is connected to an event queue via an Rx adapter.
+
+*   Each Tx queue is connected to the event device via a Tx adapter.
+
 Refer to the *DPDK Getting Started Guide* for general information on running applications and
 the Environment Abstraction Layer (EAL) options.
 
@@ -125,7 +184,7 @@ Explanation
 -----------
 
 The following sections provide some explanation of the sample application code. As mentioned in the overview section,
-the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
+the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual` and :doc:`l2_forward_event_real_virtual`.
 The following sections describe aspects that are specific to the L3 Forwarding sample application.
 
 Hash Initialization
@@ -315,3 +374,8 @@ for LPM-based lookups is done by the get_ipv4_dst_port() function below:
 
         return ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct, rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)? next_hop : portid);
     }
+
+Eventdev Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Eventdev driver initialization is the same as in the L2 forwarding eventdev
+application. Refer to :doc:`l2_forward_event_real_virtual` for more details.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v3 00/11] example/l3fwd: introduce event device support
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (10 preceding siblings ...)
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
@ 2020-01-13 12:02   ` Nipun Gupta
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
  12 siblings, 0 replies; 78+ messages in thread
From: Nipun Gupta @ 2020-01-13 12:02 UTC (permalink / raw)
  To: pbhagavatula, jerinj, konstantin.ananyev; +Cc: dev

Series Acked-by: Nipun Gupta <nipun.gupta@nxp.com>

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of
> pbhagavatula@marvell.com
> Sent: Saturday, January 11, 2020 7:17 PM
> To: jerinj@marvell.com; konstantin.ananyev@intel.com
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Subject: [dpdk-dev] [PATCH v3 00/11] example/l3fwd: introduce event
> device support
> 
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> This patchset adds support for event mode in l3fwd.
> The poll mode is still the default mode of operation.
> 
> The following new command line parameters are added:
>  --mode: Dictates the mode of operation either poll or event. If unset then
>          by default poll mode is used.
>  --eventq_sched: Dictates event synchronization mode i.e. either ordered,
> 		 atomic or parallel.
>  --event-eth-rxqs: Number of ethernet RX queues per device.
> 
> Based on event device capability the configuration is done as follows:
>     - A single event device is enabled.
>     - The number of event ports is equal to the number of worker
>       cores enabled in the core mask. Additional event ports might
>       be configured based on Rx/Tx adapter capability.
>     - The number of event queues is equal to the number of ethernet
>       ports. If Tx adapter doesn't have internal port capability then
>       an additional single link event queue is used to enqueue events
>       to Tx adapter.
>     - Each event port is linked to all existing event queues.
>     - Dedicated Rx/Tx adapters for each Ethernet port.
> 
> v3 Changes:
> ----------
> - Unify argument parsing.
> - Segregate poll mode and event mode initialization.
> - Simplify event resource creation and accesses.
> - Integrate
> http://patches.dpdk.org/project/dpdk/list/?series=8002 series.
> - Reduce code duplication in lpm.
> 
> v2 Changes:
> ----------
> - Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
> - Use rte_panic instead of rte_exit.
> - Rebase on top of Tx adapter change
> http://patches.dpdk.org/patch/60971.
> - Add parallel mode support.
> - Fix missing eventdev args parsing.
> 
> Pavan Nikhilesh (7):
>   examples/l3fwd: add event device configuration
>   examples/l3fwd: add event eth Rx/Tx adapter setup
>   examples/l3fwd: add service core setup based on caps
>   examples/l3fwd: add event lpm main loop
>   examples/l3fwd: add event em main loop
>   examples/l3fwd: add graceful teardown for eventdevice
>   doc: update l3fwd user guide to support eventdev
> 
> Sunil Kumar Kori (4):
>   examples/l3fwd: add framework for event device
>   examples/l3fwd: split pipelines based on capability
>   examples/l3fwd: add ethdev setup based on eventdev
>   examples/l3fwd: add event port and queue setup
> 
>  doc/guides/sample_app_ug/l3_forward.rst    |  76 ++++-
>  examples/l3fwd/Makefile                    |   3 +-
>  examples/l3fwd/l3fwd.h                     |  30 ++
>  examples/l3fwd/l3fwd_em.c                  | 177 ++++++++++++
>  examples/l3fwd/l3fwd_em.h                  | 159 ++++++++---
>  examples/l3fwd/l3fwd_em_hlm.h              | 131 +++++++++
>  examples/l3fwd/l3fwd_em_sequential.h       |  26 ++
>  examples/l3fwd/l3fwd_event.c               | 263 +++++++++++++++++
>  examples/l3fwd/l3fwd_event.h               |  86 ++++++
>  examples/l3fwd/l3fwd_event_generic.c       | 302 ++++++++++++++++++++
>  examples/l3fwd/l3fwd_event_internal_port.c | 281 ++++++++++++++++++
>  examples/l3fwd/l3fwd_lpm.c                 | 231 +++++++++++++++
>  examples/l3fwd/main.c                      | 316 +++++++++++++++++----
>  examples/l3fwd/meson.build                 |   5 +-
>  14 files changed, 1971 insertions(+), 115 deletions(-)
>  create mode 100644 examples/l3fwd/l3fwd_event.c
>  create mode 100644 examples/l3fwd/l3fwd_event.h
>  create mode 100644 examples/l3fwd/l3fwd_event_generic.c
>  create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v3 01/11] examples/l3fwd: add framework for event device
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 01/11] examples/l3fwd: add framework for event device pbhagavatula
@ 2020-01-21  8:44     ` Jerin Jacob
  0 siblings, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2020-01-21  8:44 UTC (permalink / raw)
  To: Pavan Nikhilesh
  Cc: Jerin Jacob, Ananyev, Konstantin, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, dpdk-dev, Nipun Gupta

On Sat, Jan 11, 2020 at 7:17 PM <pbhagavatula@marvell.com> wrote:
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add framework to enable event device as a producer of packets.
> To switch between event mode and poll mode the following options
> have been added:
>         `--mode="eventdev"` or `--mode="poll"`
> Also, allow the user to select the schedule type to be either
> RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_PARALLEL
> through:
>         `--eventq-sched="ordered"` or `--eventq-sched="atomic"` or
>                 `--eventq-sched="parallel"`

1) We need to throw an error when these options are selected with poll
mode and vice versa
2) Update the example documentation for the newly added "event-eth-rxqs" option.

Other than that, it looks good to me.

>
> Poll mode is still the default operation mode.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
> Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v3 02/11] examples/l3fwd: split pipelines based on capability
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
@ 2020-01-21  8:46     ` Jerin Jacob
  0 siblings, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2020-01-21  8:46 UTC (permalink / raw)
  To: Pavan Nikhilesh
  Cc: Jerin Jacob, Ananyev, Konstantin, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, dpdk-dev

On Sat, Jan 11, 2020 at 7:17 PM <pbhagavatula@marvell.com> wrote:
>
> From: Sunil Kumar Kori <skori@marvell.com>
>
> Add infra to split eventdev framework based on event Tx adapter
> capability.
> If event Tx adapter has internal port capability then we use
> `rte_event_eth_tx_adapter_enqueue` to transmitting packets else
> we use a SINGLE_LINK event queue to enqueue packets to a service
> core which is responsible for transmitting packets.
>
> Signed-off-by: Sunil Kumar Kori <skori@marvell.com>

Acked-by: Jerin Jacob <jerinj@marvell.com>


> ---
>  examples/l3fwd/Makefile                    |  1 +
>  examples/l3fwd/l3fwd_event.c               | 31 ++++++++++++++++++++++
>  examples/l3fwd/l3fwd_event.h               | 20 ++++++++++++++
>  examples/l3fwd/l3fwd_event_generic.c       | 14 ++++++++++
>  examples/l3fwd/l3fwd_event_internal_port.c | 14 ++++++++++
>  examples/l3fwd/meson.build                 |  3 ++-
>  6 files changed, 82 insertions(+), 1 deletion(-)
>  create mode 100644 examples/l3fwd/l3fwd_event_generic.c
>  create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c
>
> diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
> index c892b867b..59a110d12 100644
> --- a/examples/l3fwd/Makefile
> +++ b/examples/l3fwd/Makefile
> @@ -6,6 +6,7 @@ APP = l3fwd
>
>  # all source are stored in SRCS-y
>  SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
> +SRCS-y += l3fwd_event_generic.c l3fwd_event_internal_port.c
>
>  # Build using pkg-config variables if possible
>  ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
> diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
> index 1040da4ea..62218f3ca 100644
> --- a/examples/l3fwd/l3fwd_event.c
> +++ b/examples/l3fwd/l3fwd_event.c
> @@ -30,6 +30,31 @@ l3fwd_get_eventdev_rsrc(void)
>         return NULL;
>  }
>
> +static void
> +l3fwd_event_capability_setup(void)
> +{
> +       struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
> +       uint32_t caps = 0;
> +       uint16_t i;
> +       int ret;
> +
> +       RTE_ETH_FOREACH_DEV(i) {
> +               ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
> +               if (ret)
> +                       rte_exit(EXIT_FAILURE,
> +                                "Invalid capability for Tx adptr port %d\n",
> +                                i);
> +
> +               evt_rsrc->tx_mode_q |= !(caps &
> +                                  RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
> +       }
> +
> +       if (evt_rsrc->tx_mode_q)
> +               l3fwd_event_set_generic_ops(&evt_rsrc->ops);
> +       else
> +               l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
> +}
> +
>  void
>  l3fwd_event_resource_setup(void)
>  {
> @@ -37,4 +62,10 @@ l3fwd_event_resource_setup(void)
>
>         if (!evt_rsrc->enabled)
>                 return;
> +
> +       if (!rte_event_dev_count())
> +               rte_exit(EXIT_FAILURE, "No Eventdev found");
> +
> +       /* Setup eventdev capability callbacks */
> +       l3fwd_event_capability_setup();
>  }
> diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
> index 4c23c4e1a..d25c8d222 100644
> --- a/examples/l3fwd/l3fwd_event.h
> +++ b/examples/l3fwd/l3fwd_event.h
> @@ -7,17 +7,37 @@
>
>  #include <rte_common.h>
>  #include <rte_eventdev.h>
> +#include <rte_event_eth_tx_adapter.h>
>  #include <rte_spinlock.h>
>
>  #include "l3fwd.h"
>
> +typedef uint32_t (*event_device_setup_cb)(void);
> +typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
> +typedef void (*event_port_setup_cb)(void);
> +typedef void (*adapter_setup_cb)(void);
> +typedef int (*event_loop_cb)(void *);
> +
> +struct l3fwd_event_setup_ops {
> +       event_device_setup_cb event_device_setup;
> +       event_queue_setup_cb event_queue_setup;
> +       event_port_setup_cb event_port_setup;
> +       adapter_setup_cb adapter_setup;
> +       event_loop_cb lpm_event_loop;
> +       event_loop_cb em_event_loop;
> +};
> +
>  struct l3fwd_event_resources {
> +       struct l3fwd_event_setup_ops ops;
>         uint8_t sched_type;
> +       uint8_t tx_mode_q;
>         uint8_t enabled;
>         uint8_t eth_rx_queues;
>  };
>
>  struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
>  void l3fwd_event_resource_setup(void);
> +void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
> +void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
>
>  #endif /* __L3FWD_EVENTDEV_H__ */
> diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
> new file mode 100644
> index 000000000..7fff850e5
> --- /dev/null
> +++ b/examples/l3fwd/l3fwd_event_generic.c
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2019 Marvell International Ltd.
> + */
> +
> +#include <stdbool.h>
> +
> +#include "l3fwd.h"
> +#include "l3fwd_event.h"
> +
> +void
> +l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
> +{
> +       RTE_SET_USED(ops);
> +}
> diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
> new file mode 100644
> index 000000000..085e9c825
> --- /dev/null
> +++ b/examples/l3fwd/l3fwd_event_internal_port.c
> @@ -0,0 +1,14 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(C) 2019 Marvell International Ltd.
> + */
> +
> +#include <stdbool.h>
> +
> +#include "l3fwd.h"
> +#include "l3fwd_event.h"
> +
> +void
> +l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
> +{
> +       RTE_SET_USED(ops);
> +}
> diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
> index 864327c7b..ebed3b518 100644
> --- a/examples/l3fwd/meson.build
> +++ b/examples/l3fwd/meson.build
> @@ -8,5 +8,6 @@
>
>  deps += ['hash', 'lpm', 'eventdev']
>  sources = files(
> -       'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
> +       'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c',
> +       'l3fwd_event_internal_port.c', 'l3fwd_event_generic.c', 'main.c'
>  )
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v3 03/11] examples/l3fwd: add event device configuration
  2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 03/11] examples/l3fwd: add event device configuration pbhagavatula
@ 2020-01-21  8:51     ` Jerin Jacob
  0 siblings, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2020-01-21  8:51 UTC (permalink / raw)
  To: Pavan Nikhilesh
  Cc: Jerin Jacob, Ananyev, Konstantin, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, dpdk-dev

On Sat, Jan 11, 2020 at 7:18 PM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Add event device configuration based on the capabilities of the
> probed event device.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

Looks like it has a bug similar to the one[1] found in l2fwd-event.
If so, please fix it.

[1]
commit 345a22d5ec1ab70e9d965ae50558049e6c0f2f8d
Author: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date:   Fri Dec 20 19:57:41 2019 +0530

    examples/l2fwd-event: fix event device config

    Always enable implicit release since we don't support explicit release
    in datapath.
    Master lcore is used only for printing stats so don't allocate event
    port for it.
    Fix service launch for event device without distributed scheduling.

    Fixes: bcb6f841d42a ("examples/l2fwd-event: setup service core")
    Cc: stable@dpdk.org

    Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  examples/l3fwd/l3fwd_event.c               |  3 +
>  examples/l3fwd/l3fwd_event.h               | 36 ++++++++++
>  examples/l3fwd/l3fwd_event_generic.c       | 75 ++++++++++++++++++++-
>  examples/l3fwd/l3fwd_event_internal_port.c | 78 +++++++++++++++++++++-
>  4 files changed, 190 insertions(+), 2 deletions(-)
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 00/11] example/l3fwd: introduce event device support
  2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
                     ` (11 preceding siblings ...)
  2020-01-13 12:02   ` [dpdk-dev] [PATCH v3 00/11] example/l3fwd: introduce event device support Nipun Gupta
@ 2020-01-22 18:28   ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 01/11] examples/l3fwd: add framework for event device pbhagavatula
                       ` (11 more replies)
  12 siblings, 12 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

This patchset adds support for event mode in l3fwd.
The poll mode is still the default mode of operation.

The following new command line parameters are added:
 --mode: Dictates the mode of operation either poll or event. If unset then
         by default poll mode is used.
 --eventq_sched: Dictates event synchronization mode i.e. either ordered,
		 atomic or parallel.
 --event-eth-rxqs: Number of ethernet RX queues per device.

Based on event device capability the configuration is done as follows:
    - A single event device is enabled.
    - The number of event ports is equal to the number of worker
      cores enabled in the core mask. Additional event ports might
      be configured based on Rx/Tx adapter capability.
    - The number of event queues is equal to the number of ethernet
      ports. If Tx adapter doesn't have internal port capability then
      an additional single link event queue is used to enqueue events
      to Tx adapter.
    - Each event port is linked to all existing event queues (see the
      sketch after this list).
    - Dedicated Rx/Tx adapters for each Ethernet port.
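
As a minimal illustration of the port-to-queue linking above (not taken
from the patches; the function name is hypothetical), each worker's event
port can be linked to every configured event queue by passing a NULL
queue list:

    #include <rte_debug.h>
    #include <rte_eventdev.h>

    static void
    sketch_link_ports_to_all_queues(uint8_t event_d_id, uint8_t nb_ports)
    {
            uint8_t port;

            for (port = 0; port < nb_ports; port++) {
                    /* A NULL queue list links this port to every event
                     * queue of the device with default priority.
                     */
                    if (rte_event_port_link(event_d_id, port,
                                            NULL, NULL, 0) < 0)
                            rte_panic("Failed to link event port %u\n",
                                      port);
            }
    }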

v4 Changes:
----------
- Update documentation about --event-eth-rxqs. (Jerin)
- Add validation for command line options that are specific to event/poll mode.
- Fix event device service initialization.

v3 Changes:
----------
- Unify argument parsing.
- Segregate poll mode and event mode initialization.
- Simplify event resource creation and accesses.
- Integrate http://patches.dpdk.org/project/dpdk/list/?series=8002 series.
- Reduce code duplication in lpm.

v2 Changes:
----------
- Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
- Use rte_panic instead of rte_exit.
- Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
- Add parallel mode support.
- Fix missing eventdev args parsing.

Pavan Nikhilesh (7):
  examples/l3fwd: add event device configuration
  examples/l3fwd: add event eth Rx/Tx adapter setup
  examples/l3fwd: add service core setup based on caps
  examples/l3fwd: add event lpm main loop
  examples/l3fwd: add event em main loop
  examples/l3fwd: add graceful teardown for eventdevice
  doc: update l3fwd user guide to support eventdev

Sunil Kumar Kori (4):
  examples/l3fwd: add framework for event device
  examples/l3fwd: split pipelines based on capability
  examples/l3fwd: add ethdev setup based on eventdev
  examples/l3fwd: add event port and queue setup

 doc/guides/sample_app_ug/l3_forward.rst    |  79 ++++-
 examples/l3fwd/Makefile                    |   3 +-
 examples/l3fwd/l3fwd.h                     |  30 ++
 examples/l3fwd/l3fwd_em.c                  | 177 +++++++++++
 examples/l3fwd/l3fwd_em.h                  | 159 +++++++---
 examples/l3fwd/l3fwd_em_hlm.h              | 131 ++++++++
 examples/l3fwd/l3fwd_em_sequential.h       |  26 ++
 examples/l3fwd/l3fwd_event.c               | 263 ++++++++++++++++
 examples/l3fwd/l3fwd_event.h               |  86 ++++++
 examples/l3fwd/l3fwd_event_generic.c       | 303 ++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 279 +++++++++++++++++
 examples/l3fwd/l3fwd_lpm.c                 | 203 ++++++++++++
 examples/l3fwd/main.c                      | 341 +++++++++++++++++----
 examples/l3fwd/meson.build                 |   5 +-
 14 files changed, 1970 insertions(+), 115 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

--
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 01/11] examples/l3fwd: add framework for event device
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
                       ` (10 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, Nipun Gupta

From: Sunil Kumar Kori <skori@marvell.com>

Add framework to enable event device as a producer of packets.
To switch between event mode and poll mode the following options
have been added:
	`--mode="eventdev"` or `--mode="poll"`
Allow the user to select the schedule type to be either
RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_PARALLEL
through:
	`--eventq-sched="ordered"` or `--eventq-sched="atomic"` or
		`--eventq-sched="parallel"`
Allow the user to specify the number of Rx queues to be connected to
the event queue using:
	`--event-eth-rxqs`

Poll mode is still the default operation mode.
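
For illustration only, a minimal sketch (not part of this patch) of what
the `--mode` switch amounts to once the event main loops land later in
the series; `sketch_launch_workers`, `poll_loop` and `event_loop` are
hypothetical names:

    #include <rte_launch.h>

    static void
    sketch_launch_workers(int event_mode_enabled,
                          lcore_function_t *poll_loop,
                          lcore_function_t *event_loop)
    {
            /* Run the selected main loop on every lcore, master included. */
            rte_eal_mp_remote_launch(event_mode_enabled ?
                                     event_loop : poll_loop,
                                     NULL, CALL_MASTER);
            rte_eal_mp_wait_lcore();
    }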

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 examples/l3fwd/Makefile      |   2 +-
 examples/l3fwd/l3fwd.h       |   1 +
 examples/l3fwd/l3fwd_event.c |  40 +++++++++++++
 examples/l3fwd/l3fwd_event.h |  23 ++++++++
 examples/l3fwd/main.c        | 110 ++++++++++++++++++++++++++++++++++-
 examples/l3fwd/meson.build   |   4 +-
 6 files changed, 174 insertions(+), 6 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index b2dbf2607..c892b867b 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -5,7 +5,7 @@
 APP = l3fwd

 # all source are stored in SRCS-y
-SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c
+SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c

 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index 293fb1fa2..e2399d76b 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -5,6 +5,7 @@
 #ifndef __L3_FWD_H__
 #define __L3_FWD_H__

+#include <rte_ethdev.h>
 #include <rte_vect.h>

 #define DO_RFC_1812_CHECKS
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
new file mode 100644
index 000000000..1040da4ea
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.c
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_malloc.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+struct l3fwd_event_resources *
+l3fwd_get_eventdev_rsrc(void)
+{
+	static struct l3fwd_event_resources *rsrc;
+
+	if (rsrc != NULL)
+		return rsrc;
+
+	rsrc = rte_zmalloc("l3fwd", sizeof(struct l3fwd_event_resources), 0);
+	if (rsrc != NULL) {
+		rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+		rsrc->eth_rx_queues = 1;
+		return rsrc;
+	}
+
+	rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
+
+	return NULL;
+}
+
+void
+l3fwd_event_resource_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!evt_rsrc->enabled)
+		return;
+}
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
new file mode 100644
index 000000000..4c23c4e1a
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L3FWD_EVENTDEV_H__
+#define __L3FWD_EVENTDEV_H__
+
+#include <rte_common.h>
+#include <rte_eventdev.h>
+#include <rte_spinlock.h>
+
+#include "l3fwd.h"
+
+struct l3fwd_event_resources {
+	uint8_t sched_type;
+	uint8_t enabled;
+	uint8_t eth_rx_queues;
+};
+
+struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
+void l3fwd_event_resource_setup(void);
+
+#endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4dea12a65..8a5cdb52e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -19,6 +19,7 @@
 #include <rte_vect.h>
 #include <rte_byteorder.h>
 #include <rte_log.h>
+#include <rte_malloc.h>
 #include <rte_memory.h>
 #include <rte_memcpy.h>
 #include <rte_eal.h>
@@ -33,7 +34,6 @@
 #include <rte_random.h>
 #include <rte_debug.h>
 #include <rte_ether.h>
-#include <rte_ethdev.h>
 #include <rte_mempool.h>
 #include <rte_mbuf.h>
 #include <rte_ip.h>
@@ -46,6 +46,7 @@
 #include <cmdline_parse_etheraddr.h>

 #include "l3fwd.h"
+#include "l3fwd_event.h"

 /*
  * Configurable number of RX/TX ring descriptors
@@ -289,7 +290,9 @@ print_usage(const char *prgname)
 		" [--hash-entry-num]"
 		" [--ipv6]"
 		" [--parse-ptype]"
-		" [--per-port-pool]\n\n"
+		" [--per-port-pool]"
+		" [--mode]"
+		" [--eventq-sched]\n\n"

 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -304,7 +307,16 @@ print_usage(const char *prgname)
 		"  --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
 		"  --ipv6: Set if running ipv6 packets\n"
 		"  --parse-ptype: Set to use software to analyze packet type\n"
-		"  --per-port-pool: Use separate buffer pool per port\n\n",
+		"  --per-port-pool: Use separate buffer pool per port\n"
+		"  --mode: Packet transfer mode for I/O, poll or eventdev\n"
+		"          Default mode = poll\n"
+		"  --eventq-sched: Event queue synchronization method\n"
+		"                  ordered, atomic or parallel.\n"
+		"                  Default: atomic\n"
+		"                  Valid only if --mode=eventdev\n"
+		"  --event-eth-rxqs: Number of ethernet RX queues per device.\n"
+		"                    Default: 1\n"
+		"                    Valid only if --mode=eventdev\n\n",
 		prgname);
 }

@@ -440,6 +452,48 @@ parse_eth_dest(const char *optarg)
 	*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
 }

+static void
+parse_mode(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strcmp(optarg, "poll"))
+		evt_rsrc->enabled = false;
+	else if (!strcmp(optarg, "eventdev"))
+		evt_rsrc->enabled = true;
+}
+
+static void
+parse_eventq_sched(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strcmp(optarg, "ordered"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
+	if (!strcmp(optarg, "atomic"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+	if (!strcmp(optarg, "parallel"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
+}
+
+static void
+parse_event_eth_rx_queues(const char *eth_rx_queues)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	char *end = NULL;
+	uint8_t num_eth_rx_queues;
+
+	/* parse decimal string */
+	num_eth_rx_queues = strtoul(eth_rx_queues, &end, 10);
+	if ((eth_rx_queues[0] == '\0') || (end == NULL) || (*end != '\0'))
+		return;
+
+	if (num_eth_rx_queues == 0)
+		return;
+
+	evt_rsrc->eth_rx_queues = num_eth_rx_queues;
+}
+
 #define MAX_JUMBO_PKT_LEN  9600
 #define MEMPOOL_CACHE_SIZE 256

@@ -458,6 +512,9 @@ static const char short_options[] =
 #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
 #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
 #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
+#define CMD_LINE_OPT_EVENT_ETH_RX_QUEUES "event-eth-rxqs"
 enum {
 	/* long options mapped to a short option */

@@ -472,6 +529,9 @@ enum {
 	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
 	CMD_LINE_OPT_PARSE_PTYPE_NUM,
 	CMD_LINE_OPT_PARSE_PER_PORT_POOL,
+	CMD_LINE_OPT_MODE_NUM,
+	CMD_LINE_OPT_EVENTQ_SYNC_NUM,
+	CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM,
 };

 static const struct option lgopts[] = {
@@ -483,6 +543,10 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
 	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
 	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
+	{CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
+	{CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
+	{CMD_LINE_OPT_EVENT_ETH_RX_QUEUES, 1, 0,
+					CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM},
 	{NULL, 0, 0, 0}
 };

@@ -508,6 +572,10 @@ parse_args(int argc, char **argv)
 	char **argvopt;
 	int option_index;
 	char *prgname = argv[0];
+	uint8_t lcore_params = 0;
+	uint8_t eventq_sched = 0;
+	uint8_t eth_rx_q = 0;
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();

 	argvopt = argv;

@@ -546,6 +614,7 @@ parse_args(int argc, char **argv)
 				print_usage(prgname);
 				return -1;
 			}
+			lcore_params = 1;
 			break;

 		case CMD_LINE_OPT_ETH_DEST_NUM:
@@ -607,6 +676,20 @@ parse_args(int argc, char **argv)
 			per_port_pool = 1;
 			break;

+		case CMD_LINE_OPT_MODE_NUM:
+			parse_mode(optarg);
+			break;
+
+		case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
+			parse_eventq_sched(optarg);
+			eventq_sched = 1;
+			break;
+
+		case CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM:
+			parse_event_eth_rx_queues(optarg);
+			eth_rx_q = 1;
+			break;
+
 		default:
 			print_usage(prgname);
 			return -1;
@@ -619,6 +702,21 @@ parse_args(int argc, char **argv)
 		return -1;
 	}

+	if (evt_rsrc->enabled && lcore_params) {
+		fprintf(stderr, "lcore config is not valid when event mode is selected\n");
+		return -1;
+	}
+
+	if (!evt_rsrc->enabled && eth_rx_q) {
+		fprintf(stderr, "eth_rx_queues is valid only when event mode is selected\n");
+		return -1;
+	}
+
+	if (!evt_rsrc->enabled && eventq_sched) {
+		fprintf(stderr, "eventq_sched is valid only when event mode is selected\n");
+		return -1;
+	}
+
 	/*
 	 * Nothing is selected, pick longest-prefix match
 	 * as default match.
@@ -811,6 +909,7 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 int
 main(int argc, char **argv)
 {
+	struct l3fwd_event_resources *evt_rsrc;
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf *txconf;
@@ -839,11 +938,16 @@ main(int argc, char **argv)
 		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
 	}

+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");

+	/* Configure eventdev parameters if user has requested */
+	l3fwd_event_resource_setup();
+
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");

diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 6dd4b9022..864327c7b 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -6,7 +6,7 @@
 # To build this example as a standalone application with an already-installed
 # DPDK instance, use 'make'

-deps += ['hash', 'lpm']
+deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
 )
--
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 02/11] examples/l3fwd: split pipelines based on capability
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 01/11] examples/l3fwd: add framework for event device pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 03/11] examples/l3fwd: add event device configuration pbhagavatula
                       ` (9 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add infra to split eventdev framework based on event Tx adapter
capability.
If event Tx adapter has internal port capability then we use
`rte_event_eth_tx_adapter_enqueue` to transmit packets; otherwise
we use a SINGLE_LINK event queue to enqueue packets to a service
core, which is responsible for transmitting them.
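
For context, a minimal sketch (not part of this patch; the real worker
loops arrive later in the series and the names below are illustrative) of
how the two transmit paths differ:

    #include <stdbool.h>

    #include <rte_eventdev.h>
    #include <rte_event_eth_tx_adapter.h>

    static inline void
    sketch_transmit_event(uint8_t event_d_id, uint8_t event_p_id,
                          struct rte_event *ev, bool tx_internal_port,
                          uint8_t tx_single_link_queue)
    {
            if (tx_internal_port) {
                    /* The Tx adapter has an internal port: enqueue the
                     * packet to it directly from the worker.
                     */
                    rte_event_eth_tx_adapter_enqueue(event_d_id, event_p_id,
                                                     ev, 1, 0);
            } else {
                    /* No internal port: forward the event to the
                     * SINGLE_LINK queue; a service core drains that queue
                     * and transmits the packet.
                     */
                    ev->queue_id = tx_single_link_queue;
                    ev->op = RTE_EVENT_OP_FORWARD;
                    rte_event_enqueue_burst(event_d_id, event_p_id, ev, 1);
            }
    }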

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 examples/l3fwd/Makefile                    |  1 +
 examples/l3fwd/l3fwd_event.c               | 31 ++++++++++++++++++++++
 examples/l3fwd/l3fwd_event.h               | 20 ++++++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 14 ++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 14 ++++++++++
 examples/l3fwd/meson.build                 |  3 ++-
 6 files changed, 82 insertions(+), 1 deletion(-)
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index c892b867b..59a110d12 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -6,6 +6,7 @@ APP = l3fwd
 
 # all source are stored in SRCS-y
 SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
+SRCS-y += l3fwd_event_generic.c l3fwd_event_internal_port.c
 
 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 1040da4ea..62218f3ca 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -30,6 +30,31 @@ l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
+static void
+l3fwd_event_capability_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t caps = 0;
+	uint16_t i;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(i) {
+		ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+				 "Invalid capability for Tx adptr port %d\n",
+				 i);
+
+		evt_rsrc->tx_mode_q |= !(caps &
+				   RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+	}
+
+	if (evt_rsrc->tx_mode_q)
+		l3fwd_event_set_generic_ops(&evt_rsrc->ops);
+	else
+		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
+}
+
 void
 l3fwd_event_resource_setup(void)
 {
@@ -37,4 +62,10 @@ l3fwd_event_resource_setup(void)
 
 	if (!evt_rsrc->enabled)
 		return;
+
+	if (!rte_event_dev_count())
+		rte_exit(EXIT_FAILURE, "No Eventdev found");
+
+	/* Setup eventdev capability callbacks */
+	l3fwd_event_capability_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 4c23c4e1a..d25c8d222 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,17 +7,37 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_tx_adapter.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
 
+typedef uint32_t (*event_device_setup_cb)(void);
+typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
+typedef void (*event_port_setup_cb)(void);
+typedef void (*adapter_setup_cb)(void);
+typedef int (*event_loop_cb)(void *);
+
+struct l3fwd_event_setup_ops {
+	event_device_setup_cb event_device_setup;
+	event_queue_setup_cb event_queue_setup;
+	event_port_setup_cb event_port_setup;
+	adapter_setup_cb adapter_setup;
+	event_loop_cb lpm_event_loop;
+	event_loop_cb em_event_loop;
+};
+
 struct l3fwd_event_resources {
+	struct l3fwd_event_setup_ops ops;
 	uint8_t sched_type;
+	uint8_t tx_mode_q;
 	uint8_t enabled;
 	uint8_t eth_rx_queues;
 };
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
 void l3fwd_event_resource_setup(void);
+void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
+void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
 #endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
new file mode 100644
index 000000000..7fff850e5
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
new file mode 100644
index 000000000..085e9c825
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 864327c7b..ebed3b518 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -8,5 +8,6 @@
 
 deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c',
+	'l3fwd_event_internal_port.c', 'l3fwd_event_generic.c', 'main.c'
 )
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 03/11] examples/l3fwd: add event device configuration
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 01/11] examples/l3fwd: add framework for event device pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
                       ` (8 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event device configuration based on the capabilities of the
probed event device.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  3 +
 examples/l3fwd/l3fwd_event.h               | 36 ++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 76 +++++++++++++++++++++-
 examples/l3fwd/l3fwd_event_internal_port.c | 76 +++++++++++++++++++++-
 4 files changed, 189 insertions(+), 2 deletions(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 62218f3ca..9cb46c711 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -68,4 +68,7 @@ l3fwd_event_resource_setup(void)
 
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
+
+	/* Event device configuration */
+	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index d25c8d222..fc3862ec1 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -8,6 +8,7 @@
 #include <rte_common.h>
 #include <rte_eventdev.h>
 #include <rte_event_eth_tx_adapter.h>
+#include <rte_service.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
@@ -18,6 +19,29 @@ typedef void (*event_port_setup_cb)(void);
 typedef void (*adapter_setup_cb)(void);
 typedef int (*event_loop_cb)(void *);
 
+struct l3fwd_event_queues {
+	uint8_t *event_q_id;
+	uint8_t	nb_queues;
+};
+
+struct l3fwd_event_ports {
+	uint8_t *event_p_id;
+	uint8_t	nb_ports;
+	rte_spinlock_t lock;
+};
+
+struct l3fwd_event_rx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_rx_adptr;
+	uint8_t *rx_adptr;
+};
+
+struct l3fwd_event_tx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_tx_adptr;
+	uint8_t *tx_adptr;
+};
+
 struct l3fwd_event_setup_ops {
 	event_device_setup_cb event_device_setup;
 	event_queue_setup_cb event_queue_setup;
@@ -28,9 +52,21 @@ struct l3fwd_event_setup_ops {
 };
 
 struct l3fwd_event_resources {
+	struct rte_event_port_conf def_p_conf;
+	struct l3fwd_event_rx_adptr rx_adptr;
+	struct l3fwd_event_tx_adptr tx_adptr;
+	uint8_t disable_implicit_release;
 	struct l3fwd_event_setup_ops ops;
+	struct rte_mempool * (*pkt_pool)[NB_SOCKETS];
+	struct l3fwd_event_queues evq;
+	struct l3fwd_event_ports evp;
+	uint32_t port_mask;
+	uint8_t per_port_pool;
+	uint8_t event_d_id;
 	uint8_t sched_type;
 	uint8_t tx_mode_q;
+	uint8_t deq_depth;
+	uint8_t has_burst;
 	uint8_t enabled;
 	uint8_t eth_rx_queues;
 };
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 7fff850e5..c831cfd66 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -7,8 +7,82 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+	/* Enable implicit release */
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
+		evt_rsrc->disable_implicit_release = 0;
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	/* One queue for each ethdev port + one Tx adapter Single link queue. */
+	event_d_conf.nb_event_queues = ethdev_count + 1;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count() - rte_service_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 085e9c825..f3e1ab3a2 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -7,8 +7,82 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+
+	/* Enable implicit release */
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
+		evt_rsrc->disable_implicit_release = 0;
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	event_d_conf.nb_event_queues = ethdev_count;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (2 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 03/11] examples/l3fwd: add event device configuration pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
                       ` (7 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, Nipun Gupta

From: Sunil Kumar Kori <skori@marvell.com>

Add Ethernet port Rx/Tx queue setup for the event device; these queues
are later used when setting up the event eth Rx/Tx adapters.
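
As a rough illustration (not part of this patch; the adapter setup itself
comes later in the series and the names below are hypothetical), the Rx
queues configured here are eventually fed into an event queue through the
Rx adapter along these lines:

    #include <string.h>

    #include <rte_debug.h>
    #include <rte_eventdev.h>
    #include <rte_event_eth_rx_adapter.h>

    static void
    sketch_rx_adapter_connect(uint8_t event_d_id, uint16_t eth_port_id,
                              uint8_t event_q_id,
                              struct rte_event_port_conf *def_p_conf)
    {
            struct rte_event_eth_rx_adapter_queue_conf queue_conf;
            const uint8_t rx_adptr_id = eth_port_id; /* one adapter per port */

            memset(&queue_conf, 0, sizeof(queue_conf));
            queue_conf.ev.queue_id = event_q_id;
            queue_conf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;

            if (rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
                                                def_p_conf) < 0)
                    rte_panic("Failed to create Rx adapter\n");

            /* An rx_queue_id of -1 adds every Rx queue of the port. */
            if (rte_event_eth_rx_adapter_queue_add(rx_adptr_id, eth_port_id,
                                                   -1, &queue_conf) < 0)
                    rte_panic("Failed to add Rx queues to the adapter\n");
    }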

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 examples/l3fwd/l3fwd.h       |  10 +++
 examples/l3fwd/l3fwd_event.c | 138 ++++++++++++++++++++++++++++++++++-
 examples/l3fwd/l3fwd_event.h |   2 +-
 examples/l3fwd/main.c        |  15 ++--
 4 files changed, 153 insertions(+), 12 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index e2399d76b..ed5ba5f3b 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -16,9 +16,16 @@
 #define NO_HASH_MULTI_LOOKUP 1
 #endif
 
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
 #define MAX_PKT_BURST     32
 #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
 
+#define MEMPOOL_CACHE_SIZE 256
 #define MAX_RX_QUEUE_PER_LCORE 16
 
 /*
@@ -170,6 +177,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)
 }
 #endif /* DO_RFC_1812_CHECKS */
 
+int
+init_mem(uint16_t portid, unsigned int nb_mbuf);
+
 /* Function pointers for LPM or EM functionality. */
 void
 setup_lpm(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 9cb46c711..f9491ecc6 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -10,6 +10,14 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+	char buf[RTE_ETHER_ADDR_FMT_SIZE];
+	rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+	printf("%s%s", name, buf);
+}
+
 struct l3fwd_event_resources *
 l3fwd_get_eventdev_rsrc(void)
 {
@@ -30,6 +38,131 @@ l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
+static void
+l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint16_t nb_ports = rte_eth_dev_count_avail();
+	uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+	uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+	unsigned int nb_lcores = rte_lcore_count();
+	struct rte_eth_conf local_port_conf;
+	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf txconf;
+	struct rte_eth_rxconf rxconf;
+	unsigned int nb_mbuf;
+	uint16_t port_id;
+	uint8_t eth_qid;
+	int32_t ret;
+
+	/* initialize all ports */
+	RTE_ETH_FOREACH_DEV(port_id) {
+		local_port_conf = *port_conf;
+		/* skip ports that are not enabled */
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0) {
+			printf("\nSkipping disabled port %d\n", port_id);
+			continue;
+		}
+
+		/* init port */
+		printf("Initializing port %d ... ", port_id);
+		fflush(stdout);
+		printf("Creating queues: nb_rxq=%d nb_txq=1...\n",
+		       evt_rsrc->eth_rx_queues);
+
+		rte_eth_dev_info_get(port_id, &dev_info);
+		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+			local_port_conf.txmode.offloads |=
+						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+						dev_info.flow_type_rss_offloads;
+		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+				port_conf->rx_adv_conf.rss_conf.rss_hf) {
+			printf("Port %u modified RSS hash function "
+			       "based on hardware support,"
+			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+			       port_id,
+			       port_conf->rx_adv_conf.rss_conf.rss_hf,
+			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+		}
+
+		ret = rte_eth_dev_configure(port_id, evt_rsrc->eth_rx_queues,
+					    1, &local_port_conf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot configure device: err=%d, port=%d\n",
+				 ret, port_id);
+
+		ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+						       &nb_txd);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot adjust number of descriptors: err=%d, "
+				 "port=%d\n", ret, port_id);
+
+		rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+		print_ethaddr(" Address:", &ports_eth_addr[port_id]);
+		printf(", ");
+		print_ethaddr("Destination:",
+			(const struct rte_ether_addr *)&dest_eth_addr[port_id]);
+		printf(", ");
+
+		/* prepare source MAC for each port. */
+		rte_ether_addr_copy(&ports_eth_addr[port_id],
+			(struct rte_ether_addr *)(val_eth + port_id) + 1);
+
+		/* init memory */
+		if (!evt_rsrc->per_port_pool) {
+			/* port_id = 0; this is *not* signifying the first port,
+			 * rather, it signifies that port_id is ignored.
+			 */
+			nb_mbuf = RTE_MAX(nb_ports * nb_rxd +
+					  nb_ports * nb_txd +
+					  nb_ports * nb_lcores *
+							MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(0, nb_mbuf);
+		} else {
+			nb_mbuf = RTE_MAX(nb_rxd + nb_rxd +
+					  nb_lcores * MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(port_id, nb_mbuf);
+		}
+		/* init Rx queues per port */
+		rxconf = dev_info.default_rxconf;
+		rxconf.offloads = local_port_conf.rxmode.offloads;
+
+		for (eth_qid = 0; eth_qid < evt_rsrc->eth_rx_queues;
+		     eth_qid++) {
+			if (!evt_rsrc->per_port_pool)
+				ret = rte_eth_rx_queue_setup(port_id, eth_qid,
+					nb_rxd, 0, &rxconf,
+					evt_rsrc->pkt_pool[0][0]);
+			else
+				ret = rte_eth_rx_queue_setup(port_id, eth_qid,
+					nb_rxd, 0, &rxconf,
+					evt_rsrc->pkt_pool[port_id][0]);
+			if (ret < 0)
+				rte_exit(EXIT_FAILURE,
+					 "rte_eth_rx_queue_setup: err=%d, "
+					 "port=%d, eth_qid: %d\n",
+					 ret, port_id, eth_qid);
+		}
+
+		/* init one Tx queue per port */
+		txconf = dev_info.default_txconf;
+		txconf.offloads = local_port_conf.txmode.offloads;
+		ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "rte_eth_tx_queue_setup: err=%d, "
+				 "port=%d\n", ret, port_id);
+	}
+}
+
 static void
 l3fwd_event_capability_setup(void)
 {
@@ -56,7 +189,7 @@ l3fwd_event_capability_setup(void)
 }
 
 void
-l3fwd_event_resource_setup(void)
+l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 
@@ -69,6 +202,9 @@ l3fwd_event_resource_setup(void)
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
 
+	/* Ethernet device configuration */
+	l3fwd_eth_dev_port_setup(port_conf);
+
 	/* Event device configuration */
 	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index fc3862ec1..53feea069 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -72,7 +72,7 @@ struct l3fwd_event_resources {
 };
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
-void l3fwd_event_resource_setup(void);
+void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 8a5cdb52e..04526343f 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -48,12 +48,6 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
-/*
- * Configurable number of RX/TX ring descriptors
- */
-#define RTE_TEST_RX_DESC_DEFAULT 1024
-#define RTE_TEST_TX_DESC_DEFAULT 1024
-
 #define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
 #define MAX_RX_QUEUE_PER_PORT 128
 
@@ -495,7 +489,6 @@ parse_event_eth_rx_queues(const char *eth_rx_queues)
 }
 
 #define MAX_JUMBO_PKT_LEN  9600
-#define MEMPOOL_CACHE_SIZE 256
 
 static const char short_options[] =
 	"p:"  /* portmask */
@@ -752,7 +745,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
 	printf("%s%s", name, buf);
 }
 
-static int
+int
 init_mem(uint16_t portid, unsigned int nb_mbuf)
 {
 	struct lcore_conf *qconf;
@@ -939,14 +932,16 @@ main(int argc, char **argv)
 	}
 
 	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
 	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup();
+	l3fwd_event_resource_setup(&port_conf);
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 05/11] examples/l3fwd: add event port and queue setup
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (3 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
                       ` (6 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add event device queue and port setup based on event eth Tx adapter
capabilities.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  28 +++++-
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 103 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  98 ++++++++++++++++++++
 4 files changed, 229 insertions(+), 1 deletion(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index f9491ecc6..b58f9b79a 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -188,10 +188,30 @@ l3fwd_event_capability_setup(void)
 		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
 }
 
+int
+l3fwd_get_free_event_port(struct l3fwd_event_resources *evt_rsrc)
+{
+	static int index;
+	int port_id;
+
+	rte_spinlock_lock(&evt_rsrc->evp.lock);
+	if (index >= evt_rsrc->evp.nb_ports) {
+		printf("No free event port is available\n");
+		return -1;
+	}
+
+	port_id = evt_rsrc->evp.event_p_id[index];
+	index++;
+	rte_spinlock_unlock(&evt_rsrc->evp.lock);
+
+	return port_id;
+}
+
 void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t event_queue_cfg;
 
 	if (!evt_rsrc->enabled)
 		return;
@@ -206,5 +226,11 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	l3fwd_eth_dev_port_setup(port_conf);
 
 	/* Event device configuration */
-	evt_rsrc->ops.event_device_setup();
+	event_queue_cfg = evt_rsrc->ops.event_device_setup();
+
+	/* Event queue configuration */
+	evt_rsrc->ops.event_queue_setup(event_queue_cfg);
+
+	/* Event port configuration */
+	evt_rsrc->ops.event_port_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 53feea069..4bceca920 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -73,6 +73,7 @@ struct l3fwd_event_resources {
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
 void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
+int l3fwd_get_free_event_port(struct l3fwd_event_resources *eventdev_rsrc);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index c831cfd66..2532839bf 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -81,8 +81,111 @@ l3fwd_event_device_setup_generic(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("No space is available\n");
+
+	memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+			def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id,
+					  evt_rsrc->evq.event_q_id,
+					  NULL,
+					  evt_rsrc->evq.nb_queues - 1);
+		if (ret != (evt_rsrc->evq.nb_queues - 1))
+			rte_panic("Error in linking event port %d to queues\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+	}
+	/* init spinlock */
+	rte_spinlock_init(&evt_rsrc->evp.lock);
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id;
+	int32_t ret;
+
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	for (event_q_id = 0; event_q_id < (evt_rsrc->evq.nb_queues - 1);
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+
+	event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
+	ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event queue for Tx adapter\n");
+	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
+	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
+	ops->event_port_setup = l3fwd_event_port_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index f3e1ab3a2..59b7d35b7 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -80,9 +80,107 @@ l3fwd_event_device_setup_internal_port(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("Failed to allocate memory for Event Ports\n");
+
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+						def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+					  NULL, 0);
+		if (ret < 0)
+			rte_panic("Error in linking event port %d to queue\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+		/* init spinlock */
+		rte_spinlock_init(&evt_rsrc->evp.lock);
+	}
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id = 0;
+	int32_t ret;
+
+	rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	if (def_q_conf.nb_atomic_order_sequences <
+					event_q_conf.nb_atomic_order_sequences)
+		event_q_conf.nb_atomic_order_sequences =
+					def_q_conf.nb_atomic_order_sequences;
+
+	event_q_conf.event_queue_cfg = event_queue_cfg;
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	for (event_q_id = 0; event_q_id < evt_rsrc->evq.nb_queues;
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+}
 
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
+	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
+	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (4 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
                       ` (5 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |   3 +
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 112 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  93 +++++++++++++++++
 4 files changed, 209 insertions(+)
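
As a reading aid, the adapter wiring done below boils down to the following
minimal sketch for a single ethernet port; adapter id 0, the helper name and
its parameters are illustrative and not taken from the patch (the patch
derives them from the earlier event device, queue and port setup):

#include <string.h>
#include <rte_debug.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>

/* Wire one ethdev to the event device through Rx/Tx adapters with id 0.
 * 'ev_dev', 'eth_port', 'ev_qid' and 'p_conf' are assumed to come from the
 * earlier event device, queue and port setup steps. */
static void
adapter_setup_sketch(uint8_t ev_dev, uint16_t eth_port, uint8_t ev_qid,
		     struct rte_event_port_conf *p_conf)
{
	struct rte_event_eth_rx_adapter_queue_conf qconf;

	memset(&qconf, 0, sizeof(qconf));
	qconf.ev.queue_id = ev_qid;
	qconf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	qconf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;

	if (rte_event_eth_rx_adapter_create(0, ev_dev, p_conf))
		rte_panic("Rx adapter create failed\n");
	/* -1 adds every Rx queue of the ethernet device */
	if (rte_event_eth_rx_adapter_queue_add(0, eth_port, -1, &qconf))
		rte_panic("Rx adapter queue add failed\n");
	if (rte_event_eth_rx_adapter_start(0))
		rte_panic("Rx adapter start failed\n");

	if (rte_event_eth_tx_adapter_create(0, ev_dev, p_conf))
		rte_panic("Tx adapter create failed\n");
	if (rte_event_eth_tx_adapter_queue_add(0, eth_port, -1))
		rte_panic("Tx adapter queue add failed\n");
	if (rte_event_eth_tx_adapter_start(0))
		rte_panic("Tx adapter start failed\n");
}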

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index b58f9b79a..0a75e39ee 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -233,4 +233,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Event port configuration */
 	evt_rsrc->ops.event_port_setup();
+
+	/* Rx/Tx adapters configuration */
+	evt_rsrc->ops.adapter_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 4bceca920..b1ae970d4 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,6 +7,7 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
 #include <rte_event_eth_tx_adapter.h>
 #include <rte_service.h>
 #include <rte_spinlock.h>
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 2532839bf..1d5d0438d 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -182,10 +182,122 @@ l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
 	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint8_t rx_adptr_id = 0;
+	uint8_t tx_adptr_id = 0;
+	uint8_t tx_port_id = 0;
+	uint16_t port_id;
+	uint32_t service_id;
+	int32_t ret, i = 0;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	/* Rx adapter setup */
+	evt_rsrc->rx_adptr.nb_rx_adptr = 1;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create rx adapter\n");
+
+	/* Configure user requested sched type */
+	eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[i];
+		ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+		if (i < evt_rsrc->evq.nb_queues)
+			i++;
+	}
+
+	ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Error getting the service ID for rx adptr\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->rx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+	if (ret)
+		rte_panic("Rx adapter[%d] start Failed\n", rx_adptr_id);
+
+	evt_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+	/* Tx adapter setup */
+	evt_rsrc->tx_adptr.nb_tx_adptr = 1;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create tx adapter\n");
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Failed to get Tx adapter service ID\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->tx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+	if (ret)
+		rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+	ret = rte_event_port_link(event_d_id, tx_port_id,
+				  &evt_rsrc->evq.event_q_id[
+					evt_rsrc->evq.nb_queues - 1],
+				  NULL, 1);
+	if (ret != 1)
+		rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+			 ret);
+
+	ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+	if (ret)
+		rte_panic("Tx adapter[%d] start Failed\n", tx_adptr_id);
+
+	evt_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
 	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
 	ops->event_port_setup = l3fwd_event_port_setup_generic;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 59b7d35b7..56d28399a 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -177,10 +177,103 @@ l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
 	}
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint16_t adapter_id = 0;
+	uint16_t nb_adapter = 0;
+	uint16_t port_id;
+	uint8_t q_id = 0;
+	int ret;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		nb_adapter++;
+	}
+
+	evt_rsrc->rx_adptr.nb_rx_adptr = nb_adapter;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create rx adapter[%d]\n",
+				  adapter_id);
+
+		/* Configure user requested sched type */
+		eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+		ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+
+		ret = rte_event_eth_rx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+		if (q_id < evt_rsrc->evq.nb_queues)
+			q_id++;
+	}
+
+	evt_rsrc->tx_adptr.nb_tx_adptr = nb_adapter;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memery for Rx adapter\n");
+	}
+
+	adapter_id = 0;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create tx adapter[%d]\n",
+				  adapter_id);
+
+		ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+
+		ret = rte_event_eth_tx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+	}
+}
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
 	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 07/11] examples/l3fwd: add service core setup based on caps
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (5 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
                       ` (4 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add service core setup when eventdev and Rx/Tx adapter don't have
internal port capability.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c |   6 ++
 examples/l3fwd/main.c        | 188 +++++++++++++++++++++++++++--------
 2 files changed, 150 insertions(+), 44 deletions(-)
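
The service-core plumbing added below can be summarised by this sketch; the
adapter id and service lcore are placeholders, and 'slcore' is assumed to
already be a service core (e.g. added via the -s EAL option). The eventdev
scheduler and Tx adapter services are mapped in the same way:

#include <errno.h>
#include <rte_service.h>
#include <rte_event_eth_rx_adapter.h>

/* Put the Rx adapter's service on a service lcore and let the application
 * control its run state. 'adptr_id' and 'slcore' are illustrative. */
static int
map_adapter_service_sketch(uint8_t adptr_id, uint32_t slcore)
{
	uint32_t service_id;
	int ret;

	ret = rte_event_eth_rx_adapter_service_id_get(adptr_id, &service_id);
	if (ret == -ESRCH)	/* adapter has an internal port, no service */
		return 0;
	if (ret)
		return ret;

	rte_service_runstate_set(service_id, 1);
	rte_service_set_runstate_mapped_check(service_id, 0);
	if (rte_service_map_lcore_set(service_id, slcore, 1))
		return -ENOENT;

	return rte_service_lcore_start(slcore);
}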

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 0a75e39ee..1ed42c3ab 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -212,6 +212,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 	uint32_t event_queue_cfg;
+	int ret;
 
 	if (!evt_rsrc->enabled)
 		return;
@@ -236,4 +237,9 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Rx/Tx adapters configuration */
 	evt_rsrc->ops.adapter_setup();
+
+	/* Start event device */
+	ret = rte_event_dev_start(evt_rsrc->event_d_id);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 04526343f..4bb00a48b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -899,49 +899,18 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 	return 0;
 }
 
-int
-main(int argc, char **argv)
+static void
+l3fwd_poll_resource_setup(void)
 {
-	struct l3fwd_event_resources *evt_rsrc;
-	struct lcore_conf *qconf;
+	uint8_t nb_rx_queue, queue, socketid;
 	struct rte_eth_dev_info dev_info;
+	uint32_t n_tx_queue, nb_lcores;
 	struct rte_eth_txconf *txconf;
-	int ret;
-	unsigned nb_ports;
+	struct lcore_conf *qconf;
 	uint16_t queueid, portid;
-	unsigned lcore_id;
-	uint32_t n_tx_queue, nb_lcores;
-	uint8_t nb_rx_queue, queue, socketid;
-
-	/* init EAL */
-	ret = rte_eal_init(argc, argv);
-	if (ret < 0)
-		rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
-	argc -= ret;
-	argv += ret;
-
-	force_quit = false;
-	signal(SIGINT, signal_handler);
-	signal(SIGTERM, signal_handler);
-
-	/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */
-	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
-		dest_eth_addr[portid] =
-			RTE_ETHER_LOCAL_ADMIN_ADDR + ((uint64_t)portid << 40);
-		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
-	}
-
-	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	/* parse application arguments (after the EAL ones) */
-	ret = parse_args(argc, argv);
-	if (ret < 0)
-		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
-
-	evt_rsrc->per_port_pool = per_port_pool;
-	evt_rsrc->pkt_pool = pktmbuf_pool;
-	evt_rsrc->port_mask = enabled_port_mask;
-	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup(&port_conf);
+	unsigned int nb_ports;
+	unsigned int lcore_id;
+	int ret;
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
@@ -957,9 +926,6 @@ main(int argc, char **argv)
 
 	nb_lcores = rte_lcore_count();
 
-	/* Setup function pointers for lookup method. */
-	setup_l3fwd_lookup_tables();
-
 	/* initialize all ports */
 	RTE_ETH_FOREACH_DEV(portid) {
 		struct rte_eth_conf local_port_conf = port_conf;
@@ -1127,7 +1093,142 @@ main(int argc, char **argv)
 		}
 	}
 
-	printf("\n");
+
+}
+
+static inline int
+l3fwd_service_enable(uint32_t service_id)
+{
+	uint8_t min_service_count = UINT8_MAX;
+	uint32_t slcore_array[RTE_MAX_LCORE];
+	unsigned int slcore = 0;
+	uint8_t service_count;
+	int32_t slcore_count;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+	if (slcore_count < 0)
+		return -ENOENT;
+	/* Get the core which has least number of services running. */
+	while (slcore_count--) {
+		/* Reset default mapping */
+		rte_service_map_lcore_set(service_id,
+				slcore_array[slcore_count], 0);
+		service_count = rte_service_lcore_count_services(
+				slcore_array[slcore_count]);
+		if (service_count < min_service_count) {
+			slcore = slcore_array[slcore_count];
+			min_service_count = service_count;
+		}
+	}
+	if (rte_service_map_lcore_set(service_id, slcore, 1))
+		return -ENOENT;
+	rte_service_lcore_start(slcore);
+
+	return 0;
+}
+
+static void
+l3fwd_event_service_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_info evdev_info;
+	uint32_t service_id, caps;
+	int ret, i;
+
+	rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+	if (!(evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
+		ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting eventdev service\n");
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+		ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Rx adapter[%d] caps\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		ret = rte_event_eth_rx_adapter_service_id_get(
+				evt_rsrc->rx_adptr.rx_adptr[i],
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Rx adapter[%d] service\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+		ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Rx adapter[%d] caps\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		ret = rte_event_eth_tx_adapter_service_id_get(
+				evt_rsrc->tx_adptr.tx_adptr[i],
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Rx adapter[%d] service\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+}
+
+int
+main(int argc, char **argv)
+{
+	struct l3fwd_event_resources *evt_rsrc;
+	struct lcore_conf *qconf;
+	uint16_t queueid, portid;
+	unsigned int lcore_id;
+	uint8_t queue;
+	int ret;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
+	argc -= ret;
+	argv += ret;
+
+	force_quit = false;
+	signal(SIGINT, signal_handler);
+	signal(SIGTERM, signal_handler);
+
+	/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		dest_eth_addr[portid] =
+			RTE_ETHER_LOCAL_ADMIN_ADDR + ((uint64_t)portid << 40);
+		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
+	}
+
+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	/* parse application arguments (after the EAL ones) */
+	ret = parse_args(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
+
+	/* Setup function pointers for lookup method. */
+	setup_l3fwd_lookup_tables();
+
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
+	/* Configure eventdev parameters if user has requested */
+	if (evt_rsrc->enabled) {
+		l3fwd_event_resource_setup(&port_conf);
+		l3fwd_event_service_setup();
+	} else
+		l3fwd_poll_resource_setup();
 
 	/* start ports */
 	RTE_ETH_FOREACH_DEV(portid) {
@@ -1170,7 +1271,6 @@ main(int argc, char **argv)
 		}
 	}
 
-
 	check_all_ports_link_status(enabled_port_mask);
 
 	ret = 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 08/11] examples/l3fwd: add event lpm main loop
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (6 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 09/11] examples/l3fwd: add event em " pbhagavatula
                       ` (3 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add lpm main loop for handling events based on capabilities of the
event device.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h       |   9 ++
 examples/l3fwd/l3fwd_event.c |   9 ++
 examples/l3fwd/l3fwd_event.h |   5 +
 examples/l3fwd/l3fwd_lpm.c   | 203 +++++++++++++++++++++++++++++++++++
 examples/l3fwd/main.c        |   1 +
 5 files changed, 227 insertions(+)
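
For orientation, the single-event Tx-adapter (direct) path added below reduces
to a loop like the following sketch; the helper name, the lookup callback and
UINT16_MAX (standing in for the application's BAD_PORT) are illustrative:

#include <stdbool.h>
#include <stdint.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_mbuf.h>

/* Dequeue one event at a time, rewrite mbuf->port via a routing lookup and
 * hand the event to the Tx adapter. 'lookup' stands in for the patch's
 * lpm_process_event_pkt() and 'stop' mirrors force_quit. */
static void
event_fwd_loop_sketch(uint8_t dev_id, uint8_t port_id, volatile bool *stop,
		      uint16_t (*lookup)(struct rte_mbuf *))
{
	struct rte_event ev;

	while (!*stop) {
		if (!rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0))
			continue;

		ev.mbuf->port = lookup(ev.mbuf);
		if (ev.mbuf->port == UINT16_MAX) {
			rte_pktmbuf_free(ev.mbuf);	/* drop on bad lookup */
			continue;
		}

		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
		while (!rte_event_eth_tx_adapter_enqueue(dev_id, port_id,
							 &ev, 1, 0) && !*stop)
			;
	}
}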

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index ed5ba5f3b..b3cdcd496 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -207,6 +207,15 @@ em_main_loop(__attribute__((unused)) void *dummy);
 int
 lpm_main_loop(__attribute__((unused)) void *dummy);
 
+int
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 1ed42c3ab..ebddd8f97 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -211,6 +211,12 @@ void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	const event_loop_cb lpm_event_loop[2][2] = {
+		[0][0] = lpm_event_main_loop_tx_d,
+		[0][1] = lpm_event_main_loop_tx_d_burst,
+		[1][0] = lpm_event_main_loop_tx_q,
+		[1][1] = lpm_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -242,4 +248,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	ret = rte_event_dev_start(evt_rsrc->event_d_id);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index b1ae970d4..0e4616417 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -14,6 +14,11 @@
 
 #include "l3fwd.h"
 
+#define L3FWD_EVENT_SINGLE     0x1
+#define L3FWD_EVENT_BURST      0x2
+#define L3FWD_EVENT_TX_DIRECT  0x4
+#define L3FWD_EVENT_TX_ENQ     0x8
+
 typedef uint32_t (*event_device_setup_cb)(void);
 typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
 typedef void (*event_port_setup_cb)(void);
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index 349de2703..058a60b1d 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -28,6 +28,7 @@
 #include <rte_lpm6.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 struct ipv4_l3fwd_lpm_route {
 	uint32_t ip;
@@ -254,6 +255,208 @@ lpm_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline uint16_t
+lpm_process_event_pkt(const struct lcore_conf *lconf, struct rte_mbuf *mbuf)
+{
+	mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
+	|| defined RTE_ARCH_PPC_64
+	process_packet(mbuf, &mbuf->port);
+#else
+
+	struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
+			struct rte_ether_hdr *);
+#ifdef DO_RFC_1812_CHECKS
+	struct rte_ipv4_hdr *ipv4_hdr;
+	if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
+		/* Handle IPv4 headers.*/
+		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
+				struct rte_ipv4_hdr *,
+				sizeof(struct rte_ether_hdr));
+
+		if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
+				< 0) {
+			mbuf->port = BAD_PORT;
+			return BAD_PORT;
+		}
+		/* Update time to live and header checksum */
+		--(ipv4_hdr->time_to_live);
+		++(ipv4_hdr->hdr_checksum);
+	}
+#endif
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
+
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
+			&eth_hdr->s_addr);
+#endif
+	return mbuf->port;
+}
+
+static __rte_always_inline void
+lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		if (lpm_process_event_pkt(lconf, ev.mbuf) == BAD_PORT) {
+			rte_pktmbuf_free(ev.mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_deq; i++) {
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf,
+								 0);
+
+			lpm_process_event_pkt(lconf, events[i].mbuf);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		lpm_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		lpm_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 void
 setup_lpm(const int socketid)
 {
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4bb00a48b..b8bd19b41 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1226,6 +1226,7 @@ main(int argc, char **argv)
 	/* Configure eventdev parameters if user has requested */
 	if (evt_rsrc->enabled) {
 		l3fwd_event_resource_setup(&port_conf);
+		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		l3fwd_event_service_setup();
 	} else
 		l3fwd_poll_resource_setup();
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 09/11] examples/l3fwd: add event em main loop
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (7 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
                       ` (2 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add em main loop for handling events based on capabilities of the
event device.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h               |  10 ++
 examples/l3fwd/l3fwd_em.c            | 177 +++++++++++++++++++++++++++
 examples/l3fwd/l3fwd_em.h            | 159 +++++++++++++++++-------
 examples/l3fwd/l3fwd_em_hlm.h        | 131 ++++++++++++++++++++
 examples/l3fwd/l3fwd_em_sequential.h |  26 ++++
 examples/l3fwd/l3fwd_event.c         |   9 ++
 examples/l3fwd/main.c                |   5 +-
 7 files changed, 470 insertions(+), 47 deletions(-)
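
The burst Tx-enqueue variant added below follows the shape of this sketch; the
burst size, ids and 'stop' flag are illustrative and the exact match lookup
itself is elided:

#include <stdbool.h>
#include <stdint.h>
#include <rte_eventdev.h>
#include <rte_pause.h>

#define SKETCH_BURST 32	/* illustrative burst size */

/* Dequeue a burst, retarget each event to the single-link Tx event queue and
 * forward it back into the event device, retrying partial enqueues. */
static void
event_fwd_burst_loop_sketch(uint8_t dev_id, uint8_t port_id, uint8_t tx_q_id,
			    volatile bool *stop)
{
	struct rte_event ev[SKETCH_BURST];
	uint16_t nb_deq, nb_enq, i;

	while (!*stop) {
		nb_deq = rte_event_dequeue_burst(dev_id, port_id, ev,
						 SKETCH_BURST, 0);
		if (!nb_deq) {
			rte_pause();
			continue;
		}

		for (i = 0; i < nb_deq; i++) {
			/* ... exact match lookup on ev[i].mbuf goes here ... */
			ev[i].queue_id = tx_q_id;
			ev[i].op = RTE_EVENT_OP_FORWARD;
		}

		nb_enq = rte_event_enqueue_burst(dev_id, port_id, ev, nb_deq);
		while (nb_enq < nb_deq && !*stop)
			nb_enq += rte_event_enqueue_burst(dev_id, port_id,
							  ev + nb_enq,
							  nb_deq - nb_enq);
	}
}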

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index b3cdcd496..c786f9062 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -216,6 +216,16 @@ lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
 int
 lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
 
+int
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c
index 74a7c8fa4..1a8bc9168 100644
--- a/examples/l3fwd/l3fwd_em.c
+++ b/examples/l3fwd/l3fwd_em.c
@@ -26,6 +26,7 @@
 #include <rte_hash.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 #if defined(RTE_ARCH_X86) || defined(RTE_MACHINE_CPUFLAG_CRC32)
 #define EM_HASH_CRC 1
@@ -699,6 +700,182 @@ em_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline void
+em_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		struct rte_mbuf *mbuf = ev.mbuf;
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		mbuf->port = em_get_dst_port(lconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+#else
+		l3fwd_em_simple_process(mbuf, lconf);
+#endif
+		if (mbuf->port == BAD_PORT) {
+			rte_pktmbuf_free(mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		l3fwd_em_process_events(nb_deq, (struct rte_event **)&events,
+					lconf);
+#else
+		l3fwd_em_no_opt_process_events(nb_deq,
+					       (struct rte_event **)&events,
+					       lconf);
+#endif
+		for (i = 0; i < nb_deq; i++) {
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf,
+								 0);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		em_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		em_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 /*
  * Initialize exact match (hash) parameters.
  */
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index 090c1b448..b992a21da 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -5,73 +5,92 @@
 #ifndef __L3FWD_EM_H__
 #define __L3FWD_EM_H__
 
-static __rte_always_inline void
-l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
-		struct lcore_conf *qconf)
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid,
+		     struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
 {
-	struct rte_ether_hdr *eth_hdr;
 	struct rte_ipv4_hdr *ipv4_hdr;
 	uint16_t dst_port;
-	uint32_t tcp_or_udp;
-	uint32_t l3_ptypes;
-
-	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
-	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
 
-	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
-		/* Handle IPv4 headers.*/
-		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-						sizeof(struct rte_ether_hdr));
+	/* Handle IPv4 headers.*/
+	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
+			sizeof(struct rte_ether_hdr));
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Check to make sure the packet is valid (RFC1812) */
-		if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
-			rte_pktmbuf_free(m);
-			return;
-		}
+	/* Check to make sure the packet is valid (RFC1812) */
+	if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
+		rte_pktmbuf_free(m);
+		return BAD_PORT;
+	}
 #endif
-		dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
-						qconf->ipv4_lookup_struct);
+	dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
+			qconf->ipv4_lookup_struct);
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Update time to live and header checksum */
-		--(ipv4_hdr->time_to_live);
-		++(ipv4_hdr->hdr_checksum);
+	/* Update time to live and header checksum */
+	--(ipv4_hdr->time_to_live);
+	++(ipv4_hdr->hdr_checksum);
 #endif
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		send_single_packet(qconf, m, dst_port);
-	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
-		/* Handle IPv6 headers.*/
-		struct rte_ipv6_hdr *ipv6_hdr;
+	return dst_port;
+}
 
-		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
-						sizeof(struct rte_ether_hdr));
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid,
+		struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
+{
+	/* Handle IPv6 headers.*/
+	struct rte_ipv6_hdr *ipv6_hdr;
+	uint16_t dst_port;
 
-		dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
-					qconf->ipv6_lookup_struct);
+	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
+			sizeof(struct rte_ether_hdr));
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
+			qconf->ipv6_lookup_struct);
+
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
+
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	return dst_port;
+}
 
+static __rte_always_inline void
+l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
+		struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint16_t dst_port;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
+		dst_port = l3fwd_em_handle_ipv4(m, portid, eth_hdr, qconf);
+		send_single_packet(qconf, m, dst_port);
+	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
+		dst_port = l3fwd_em_handle_ipv6(m, portid, eth_hdr, qconf);
 		send_single_packet(qconf, m, dst_port);
 	} else {
 		/* Free the mbuf that contains non-IPV4/IPV6 packet */
@@ -79,6 +98,25 @@ l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
 	}
 }
 
+static __rte_always_inline void
+l3fwd_em_simple_process(struct rte_mbuf *m, struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4))
+		m->port = l3fwd_em_handle_ipv4(m, m->port, eth_hdr, qconf);
+	else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6))
+		m->port = l3fwd_em_handle_ipv6(m, m->port, eth_hdr, qconf);
+	else
+		m->port = BAD_PORT;
+}
+
 /*
  * Buffer non-optimized handling of packets, invoked
  * from main_loop.
@@ -108,4 +146,33 @@ l3fwd_em_no_opt_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 		l3fwd_em_simple_forward(pkts_burst[j], portid, qconf);
 }
 
+/*
+ * Buffer non-optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_no_opt_process_events(int nb_rx, struct rte_event **events,
+			       struct lcore_conf *qconf)
+{
+	int32_t j;
+
+	/* Prefetch first packets */
+	for (j = 0; j < PREFETCH_OFFSET && j < nb_rx; j++)
+		rte_prefetch0(rte_pktmbuf_mtod(events[j]->mbuf, void *));
+
+	/*
+	 * Prefetch and forward already prefetched
+	 * packets.
+	 */
+	for (j = 0; j < (nb_rx - PREFETCH_OFFSET); j++) {
+		rte_prefetch0(rte_pktmbuf_mtod(events[
+				j + PREFETCH_OFFSET]->mbuf, void *));
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+	}
+
+	/* Forward remaining prefetched packets */
+	for (; j < nb_rx; j++)
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+}
+
 #endif /* __L3FWD_EM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index ad8b9ce87..79812716c 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -75,6 +75,60 @@ em_get_dst_port_ipv6xN(struct lcore_conf *qconf, struct rte_mbuf *m[],
 	}
 }
 
+static __rte_always_inline void
+em_get_dst_port_ipv4xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv4_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv4_5tuple(m[i], mask0.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv4_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv4_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
+static __rte_always_inline void
+em_get_dst_port_ipv6xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv6_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv6_5tuple(m[i], mask1.x, mask2.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv6_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv6_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
 static __rte_always_inline uint16_t
 em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		uint16_t portid)
@@ -187,4 +241,81 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **ev,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j, pos;
+	uint16_t dst_port[MAX_PKT_BURST];
+	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+
+	/*
+	 * Send nb_rx - nb_rx % EM_HASH_LOOKUP_COUNT packets
+	 * in groups of EM_HASH_LOOKUP_COUNT.
+	 */
+	int32_t n = RTE_ALIGN_FLOOR(nb_rx, EM_HASH_LOOKUP_COUNT);
+
+	for (j = 0; j < EM_HASH_LOOKUP_COUNT && j < nb_rx; j++) {
+		pkts_burst[j] = ev[j]->mbuf;
+		rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j],
+					       struct rte_ether_hdr *) + 1);
+	}
+
+	for (j = 0; j < n; j += EM_HASH_LOOKUP_COUNT) {
+
+		uint32_t pkt_type = RTE_PTYPE_L3_MASK |
+				    RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP;
+		uint32_t l3_type, tcp_or_udp;
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkt_type &= pkts_burst[j + i]->packet_type;
+
+		l3_type = pkt_type & RTE_PTYPE_L3_MASK;
+		tcp_or_udp = pkt_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+
+		for (i = 0, pos = j + EM_HASH_LOOKUP_COUNT;
+		     i < EM_HASH_LOOKUP_COUNT && pos < nb_rx; i++, pos++) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					pkts_burst[pos],
+					struct rte_ether_hdr *) + 1);
+		}
+
+		if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV4)) {
+
+			em_get_dst_port_ipv4xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV6)) {
+
+			em_get_dst_port_ipv6xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else {
+			for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+				pkts_burst[j + i]->port = em_get_dst_port(qconf,
+						pkts_burst[j + i],
+						pkts_burst[j + i]->port);
+				process_packet(pkts_burst[j + i],
+						&pkts_burst[j + i]->port);
+			}
+			continue;
+		}
+		processx4_step3(&pkts_burst[j], &dst_port[j]);
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkts_burst[j + i]->port = dst_port[j + i];
+
+	}
+
+	for (; j < nb_rx; j++) {
+		pkts_burst[j]->port = em_get_dst_port(qconf, pkts_burst[j],
+						      pkts_burst[j]->port);
+		process_packet(pkts_burst[j], &pkts_burst[j]->port);
+	}
+}
 #endif /* __L3FWD_EM_HLM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_sequential.h b/examples/l3fwd/l3fwd_em_sequential.h
index 23fe9dec8..b231b9994 100644
--- a/examples/l3fwd/l3fwd_em_sequential.h
+++ b/examples/l3fwd/l3fwd_em_sequential.h
@@ -95,4 +95,30 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **events,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j;
+
+	rte_prefetch0(rte_pktmbuf_mtod(events[0]->mbuf,
+		      struct rte_ether_hdr *) + 1);
+
+	for (i = 1, j = 0; j < nb_rx; i++, j++) {
+		struct rte_mbuf *mbuf = events[j]->mbuf;
+
+		if (i < nb_rx) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					events[i]->mbuf,
+					struct rte_ether_hdr *) + 1);
+		}
+		mbuf->port = em_get_dst_port(qconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+	}
+}
 #endif /* __L3FWD_EM_SEQUENTIAL_H__ */
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index ebddd8f97..43c47eade 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -217,6 +217,12 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 		[1][0] = lpm_event_main_loop_tx_q,
 		[1][1] = lpm_event_main_loop_tx_q_burst,
 	};
+	const event_loop_cb em_event_loop[2][2] = {
+		[0][0] = em_event_main_loop_tx_d,
+		[0][1] = em_event_main_loop_tx_d_burst,
+		[1][0] = em_event_main_loop_tx_q,
+		[1][1] = em_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -251,4 +257,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
 						       [evt_rsrc->has_burst];
+
+	evt_rsrc->ops.em_event_loop = em_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index b8bd19b41..c95b1655e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1226,7 +1226,10 @@ main(int argc, char **argv)
 	/* Configure eventdev parameters if user has requested */
 	if (evt_rsrc->enabled) {
 		l3fwd_event_resource_setup(&port_conf);
-		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
+		if (l3fwd_em_on)
+			l3fwd_lkp.main_loop = evt_rsrc->ops.em_event_loop;
+		else
+			l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		l3fwd_event_service_setup();
 	} else
 		l3fwd_poll_resource_setup();
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (8 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 09/11] examples/l3fwd: add event em " pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add graceful teardown that addresses both event mode and poll mode.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/main.c | 50 +++++++++++++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 16 deletions(-)
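
The event-mode teardown ordering introduced below is, in sketch form, the
following; all ids are illustrative and the worker lcores are assumed to be
joined between stopping and closing the ports:

#include <rte_ethdev.h>
#include <rte_eventdev.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>

/* Ordered teardown for one Rx/Tx adapter pair, one ethdev and the event
 * device. */
static void
event_teardown_sketch(uint8_t rx_adptr, uint8_t tx_adptr, uint16_t eth_port,
		      uint8_t ev_dev)
{
	rte_event_eth_rx_adapter_stop(rx_adptr);	/* stop injecting events */
	rte_event_eth_tx_adapter_stop(tx_adptr);
	rte_eth_dev_stop(eth_port);			/* quiesce the NIC */
	/* ... rte_eal_mp_wait_lcore() in the caller ... */
	rte_eth_dev_close(eth_port);
	rte_event_dev_stop(ev_dev);
	rte_event_dev_close(ev_dev);
}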

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index c95b1655e..dda430d68 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1092,8 +1092,6 @@ l3fwd_poll_resource_setup(void)
 				ret, portid);
 		}
 	}
-
-
 }
 
 static inline int
@@ -1191,7 +1189,7 @@ main(int argc, char **argv)
 	uint16_t queueid, portid;
 	unsigned int lcore_id;
 	uint8_t queue;
-	int ret;
+	int i, ret;
 
 	/* init EAL */
 	ret = rte_eal_init(argc, argv);
@@ -1280,21 +1278,41 @@ main(int argc, char **argv)
 	ret = 0;
 	/* launch per-lcore init on every lcore */
 	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
-		if (rte_eal_wait_lcore(lcore_id) < 0) {
-			ret = -1;
-			break;
+	if (evt_rsrc->enabled) {
+		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
+			rte_event_eth_rx_adapter_stop(
+					evt_rsrc->rx_adptr.rx_adptr[i]);
+		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
+			rte_event_eth_tx_adapter_stop(
+					evt_rsrc->tx_adptr.tx_adptr[i]);
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_stop(portid);
 		}
-	}
 
-	/* stop ports */
-	RTE_ETH_FOREACH_DEV(portid) {
-		if ((enabled_port_mask & (1 << portid)) == 0)
-			continue;
-		printf("Closing port %d...", portid);
-		rte_eth_dev_stop(portid);
-		rte_eth_dev_close(portid);
-		printf(" Done\n");
+		rte_eal_mp_wait_lcore();
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_close(portid);
+		}
+
+		rte_event_dev_stop(evt_rsrc->event_d_id);
+		rte_event_dev_close(evt_rsrc->event_d_id);
+
+	} else {
+		rte_eal_mp_wait_lcore();
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			printf("Closing port %d...", portid);
+			rte_eth_dev_stop(portid);
+			rte_eth_dev_close(portid);
+			printf(" Done\n");
+		}
 	}
 	printf("Bye...\n");
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v4 11/11] doc: update l3fwd user guide to support eventdev
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (9 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
@ 2020-01-22 18:28     ` pbhagavatula
  2020-01-23  5:25       ` Jerin Jacob
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
  11 siblings, 1 reply; 78+ messages in thread
From: pbhagavatula @ 2020-01-22 18:28 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh, John McNamara
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Update l3fwd user guide to include event device related information.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/sample_app_ug/l3_forward.rst | 79 +++++++++++++++++++++++--
 1 file changed, 73 insertions(+), 6 deletions(-)

diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 4cb4b18da..5cb0abc32 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -4,16 +4,23 @@
 L3 Forwarding Sample Application
 ================================
 
-The L3 Forwarding application is a simple example of packet processing using the DPDK.
+The L3 Forwarding application is a simple example of packet processing using
+the DPDK to demonstrate the usage of poll and event mode packet I/O mechanisms.
 The application performs L3 forwarding.
 
 Overview
 --------
 
-The application demonstrates the use of the hash and LPM libraries in the DPDK to implement packet forwarding.
-The initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
-The main difference from the L2 Forwarding sample application is that the forwarding decision
-is made based on information read from the input packet.
+The application demonstrates the use of the hash and LPM libraries in the DPDK
+to implement packet forwarding using poll or event mode PMDs for packet I/O.
+The initialization and run-time paths are very similar to those of the
+:doc:`l2_forward_real_virtual` and :doc:`l2_forward_event`.
+The main difference from the L2 Forwarding sample application is that packets
+can optionally be Rx/Tx from/to an eventdev instead of the port directly, and
+the forwarding decision is made based on information read from the input packet.
+
+Eventdev can optionally use S/W or H/W (if supported by the platform) scheduler
+implementations for packet I/O, selected via run time parameters.
 
 The lookup method is either hash-based or LPM-based and is selected at run time. When the selected lookup method is hash-based,
 a hash object is used to emulate the flow classification stage.
@@ -56,6 +63,9 @@ The application has a number of command line options::
                              [--ipv6]
                              [--parse-ptype]
                              [--per-port-pool]
+                             [--mode]
+                             [--eventq-sched]
+                             [--event-eth-rxqs]
 
 Where,
 
@@ -86,6 +96,13 @@ Where,
 
 * ``--per-port-pool:`` Optional, set to use independent buffer pools per port. Without this option, single buffer pool is used for all ports.
 
+* ``--mode:`` Optional, Packet transfer mode for I/O, poll or eventdev.
+
+* ``--eventq-sched:`` Optional, Event queue synchronization method, Ordered, Atomic or Parallel. Only valid if --mode=eventdev.
+
+* ``--event-eth-rxqs:`` Optionala, Number of ethernet RX queues per device. Only valid if --mode=eventdev.
+
+
 For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7 and 16-23 appear on socket 0,
 while cores 8-15 and 24-31 appear on socket 1.
 
@@ -116,6 +133,51 @@ In this command:
 |          |           |           |                                     |
 +----------+-----------+-----------+-------------------------------------+
 
+To use eventdev mode with the sync method **ordered** on the above mentioned
+environment, the following is a sample command:
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+
+or
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+
+In this command:
+
+*   The -w option whitelists the event device supported by the platform. The way to pass this device may vary based on the platform.
+
+*   The --mode option defines PMD to be used for packet I/O.
+
+*   The --eventq-sched option selects the synchronization method of the event queue so that packets will be scheduled accordingly.
+
+If the application uses the S/W scheduler, it uses the following DPDK services:
+
+*   Software scheduler
+*   Rx adapter service function
+*   Tx adapter service function
+
+The application needs service cores to run the above mentioned services. Service
+cores must be provided as EAL parameters along with --vdev=event_sw0 to enable the
+S/W scheduler. Following is a sample command:
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -p 0x3 --mode=eventdev --eventq-sched=ordered
+
+In case of eventdev mode, the *--config* option is not used for ethernet port
+configuration. Instead, each ethernet port will be configured with the following
+setup:
+
+*   Single Rx/Tx queue
+
+*   Each Rx queue will be connected to event queue via Rx adapter.
+
+*   Each Tx queue will be connected via Tx adapter.
+
 Refer to the *DPDK Getting Started Guide* for general information on running applications and
 the Environment Abstraction Layer (EAL) options.
 
@@ -125,7 +187,7 @@ Explanation
 -----------
 
 The following sections provide some explanation of the sample application code. As mentioned in the overview section,
-the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
+the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual` and :doc:`l2_forward_event`.
 The following sections describe aspects that are specific to the L3 Forwarding sample application.
 
 Hash Initialization
@@ -315,3 +377,8 @@ for LPM-based lookups is done by the get_ipv4_dst_port() function below:
 
         return ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct, rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)? next_hop : portid);
     }
+
+Eventdev Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Eventdev driver initialization is the same as in the L2 forwarding eventdev
+application. Refer to :doc:`l2_forward_event` for more details.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread
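The guide text above notes that with the S/W scheduler the application relies on
three DPDK services (the scheduler itself plus the Rx and Tx adapter service
functions). Below is a minimal C sketch, not part of the patch, of how those
services can be mapped onto a service lcore; it assumes event device 0, adapter
id 0 and a service lcore already added via the -s EAL option.

#include <rte_eventdev.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_service.h>

/* Sketch only: run the event_sw scheduler and the adapter services on one
 * service lcore. Error handling trimmed; ids 0 are assumptions. */
static void
map_event_services(uint32_t service_lcore)
{
	uint32_t service_id;

	/* event_sw0 scheduling is itself a service */
	if (rte_event_dev_service_id_get(0, &service_id) == 0) {
		rte_service_runstate_set(service_id, 1);
		rte_service_map_lcore_set(service_id, service_lcore, 1);
	}

	/* Rx adapter service (present when there is no internal port) */
	if (rte_event_eth_rx_adapter_service_id_get(0, &service_id) == 0) {
		rte_service_runstate_set(service_id, 1);
		rte_service_map_lcore_set(service_id, service_lcore, 1);
	}

	/* Tx adapter service */
	if (rte_event_eth_tx_adapter_service_id_get(0, &service_id) == 0) {
		rte_service_runstate_set(service_id, 1);
		rte_service_map_lcore_set(service_id, service_lcore, 1);
	}

	rte_service_lcore_start(service_lcore);
}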

* Re: [dpdk-dev] [PATCH v4 11/11] doc: update l3fwd user guide to support eventdev
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
@ 2020-01-23  5:25       ` Jerin Jacob
  0 siblings, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2020-01-23  5:25 UTC (permalink / raw)
  To: Pavan Nikhilesh
  Cc: Jerin Jacob, Ananyev, Konstantin, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, John McNamara, dpdk-dev

On Thu, Jan 23, 2020 at 12:00 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Update l3fwd user guide to include event device related information.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> ---
>  doc/guides/sample_app_ug/l3_forward.rst | 79 +++++++++++++++++++++++--
>  1 file changed, 73 insertions(+), 6 deletions(-)
>
>  Where,
>
> @@ -86,6 +96,13 @@ Where,
>
>  * ``--per-port-pool:`` Optional, set to use independent buffer pools per port. Without this option, single buffer pool is used for all ports.
>
> +* ``--mode:`` Optional, Packet transfer mode for I/O, poll or eventdev.
> +
> +* ``--eventq-sched:`` Optional, Event queue synchronization method, Ordered, Atomic or Parallel. Only valid if --mode=eventdev.
> +
> +* ``--event-eth-rxqs:`` Optionala, Number of ethernet RX queues per device. Only valid if --mode=eventdev.

Typo: optional

This series looks good to me.
1) Please update the release notes
2) The new files added in example/l3fwd that are eventdev
specific need a maintainer.
I would suggest adding those files under Eventdev section just below
the F: app/test/test_eventdev.c

^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support
  2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
                       ` (10 preceding siblings ...)
  2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
@ 2020-01-24  4:05     ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 01/11] examples/l3fwd: add framework for event device pbhagavatula
                         ` (11 more replies)
  11 siblings, 12 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev; +Cc: dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

This patchset adds support for event mode in l3fwd.
The poll mode is still the default mode of operation.

The following new command line parameters are added:
 --mode: Dictates the mode of operation either poll or event. If unset then
         by default poll mode is used.
 --eventq_sched: Dictates event synchronization mode i.e. either ordered,
		 atomic or parallel.
 --event-eth-rxqs: Number of ethernet RX queues per device.

Based on event device capability the configuration is done as follows:
    - A single event device is enabled.
    - The number of event ports is equal to the number of worker
      cores enabled in the core mask. Additional event ports might
      be configured based on Rx/Tx adapter capability.
    - The number of event queues is equal to the number of ethernet
      ports. If Tx adapter doesn't have internal port capability then
      an additional single link event queue is used to enqueue events
      to Tx adapter.
    - Each event port is linked to all existing event queues.
    - Dedicated Rx/Tx adapters for each Ethernet port.

v5 Changes:
----------
- Update release notes and MAINTAINERS file.
- Fix typo in doc.

v4 Changes:
----------
- Update documentation about --event-eth-rxqs. (Jerin)
- Add validation for command line options that are specific to event/poll mode.
- Fix event device service initialization.

v3 Changes:
----------
- Unify argument parsing.
- Segregate poll mode and event mode initialization.
- Simplify event resource creation and accesses.
- Integrate http://patches.dpdk.org/project/dpdk/list/?series=8002 series.
- Reduce code duplication in lpm.

v2 Changes:
----------
- Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
- Use rte_panic instead of rte_exit.
- Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
- Add parallel mode support.
- Fix missing eventdev args parsing.

Pavan Nikhilesh (7):
  examples/l3fwd: add event device configuration
  examples/l3fwd: add event eth Rx/Tx adapter setup
  examples/l3fwd: add service core setup based on caps
  examples/l3fwd: add event lpm main loop
  examples/l3fwd: add event em main loop
  examples/l3fwd: add graceful teardown for eventdevice
  doc: update l3fwd user guide to support eventdev

Sunil Kumar Kori (4):
  examples/l3fwd: add framework for event device
  examples/l3fwd: split pipelines based on capability
  examples/l3fwd: add ethdev setup based on eventdev
  examples/l3fwd: add event port and queue setup

 MAINTAINERS                                |   1 +
 doc/guides/rel_notes/release_20_02.rst     |   5 +
 doc/guides/sample_app_ug/l3_forward.rst    |  79 ++++-
 examples/l3fwd/Makefile                    |   3 +-
 examples/l3fwd/l3fwd.h                     |  30 ++
 examples/l3fwd/l3fwd_em.c                  | 177 +++++++++++
 examples/l3fwd/l3fwd_em.h                  | 159 +++++++---
 examples/l3fwd/l3fwd_em_hlm.h              | 131 ++++++++
 examples/l3fwd/l3fwd_em_sequential.h       |  26 ++
 examples/l3fwd/l3fwd_event.c               | 263 ++++++++++++++++
 examples/l3fwd/l3fwd_event.h               |  86 ++++++
 examples/l3fwd/l3fwd_event_generic.c       | 303 ++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 279 +++++++++++++++++
 examples/l3fwd/l3fwd_lpm.c                 | 203 ++++++++++++
 examples/l3fwd/main.c                      | 341 +++++++++++++++++----
 examples/l3fwd/meson.build                 |   5 +-
 16 files changed, 1976 insertions(+), 115 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

--
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread
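A rough C sketch, not part of this series, of the capability-driven sizing rules
listed in the cover letter above: one event queue per ethdev plus one extra
SINGLE_LINK queue when the Tx adapter lacks internal port capability, and one
event port per worker lcore. Event device 0 is assumed.

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_lcore.h>
#include <rte_service.h>

/* Sketch only: derive queue/port counts the way the series describes. */
static void
size_eventdev(uint8_t *nb_event_queues, uint8_t *nb_event_ports)
{
	uint16_t ethdev_count = rte_eth_dev_count_avail();
	bool need_single_link = false;
	uint32_t caps = 0;
	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		rte_event_eth_tx_adapter_caps_get(0, port_id, &caps);
		if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
			need_single_link = true;
	}

	/* one queue per ethdev, plus a SINGLE_LINK queue feeding the Tx
	 * adapter service when no internal port is available */
	*nb_event_queues = ethdev_count + (need_single_link ? 1 : 0);

	/* one event port per worker lcore, service lcores excluded */
	*nb_event_ports = (uint8_t)(rte_lcore_count() -
				    rte_service_lcore_count());
}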

* [dpdk-dev] [PATCH v5 01/11] examples/l3fwd: add framework for event device
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
                         ` (10 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, Nipun Gupta

From: Sunil Kumar Kori <skori@marvell.com>

Add framework to enable event device as a producer of packets.
To switch between event mode and poll mode the following options
have been added:
	`--mode="eventdev"` or `--mode="poll"`
Allow the user to select the schedule type to be either
RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_PARALLEL
through:
	`--eventq-sched="ordered"` or `--eventq-sched="atomic"` or
		`--eventq-sched="parallel"`
Allow the user to specify the number of Rx queues to be connected to
event queue using:
	`--event-eth-rxqs`

Poll mode is still the default operation mode.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 examples/l3fwd/Makefile      |   2 +-
 examples/l3fwd/l3fwd.h       |   1 +
 examples/l3fwd/l3fwd_event.c |  40 +++++++++++++
 examples/l3fwd/l3fwd_event.h |  23 ++++++++
 examples/l3fwd/main.c        | 110 ++++++++++++++++++++++++++++++++++-
 examples/l3fwd/meson.build   |   4 +-
 6 files changed, 174 insertions(+), 6 deletions(-)
 create mode 100644 examples/l3fwd/l3fwd_event.c
 create mode 100644 examples/l3fwd/l3fwd_event.h

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index b2dbf2607..c892b867b 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -5,7 +5,7 @@
 APP = l3fwd
 
 # all source are stored in SRCS-y
-SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c
+SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
 
 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index 293fb1fa2..e2399d76b 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -5,6 +5,7 @@
 #ifndef __L3_FWD_H__
 #define __L3_FWD_H__
 
+#include <rte_ethdev.h>
 #include <rte_vect.h>
 
 #define DO_RFC_1812_CHECKS
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
new file mode 100644
index 000000000..1040da4ea
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.c
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+#include <getopt.h>
+
+#include <rte_malloc.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+struct l3fwd_event_resources *
+l3fwd_get_eventdev_rsrc(void)
+{
+	static struct l3fwd_event_resources *rsrc;
+
+	if (rsrc != NULL)
+		return rsrc;
+
+	rsrc = rte_zmalloc("l3fwd", sizeof(struct l3fwd_event_resources), 0);
+	if (rsrc != NULL) {
+		rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+		rsrc->eth_rx_queues = 1;
+		return rsrc;
+	}
+
+	rte_exit(EXIT_FAILURE, "Unable to allocate memory for eventdev cfg\n");
+
+	return NULL;
+}
+
+void
+l3fwd_event_resource_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!evt_rsrc->enabled)
+		return;
+}
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
new file mode 100644
index 000000000..4c23c4e1a
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __L3FWD_EVENTDEV_H__
+#define __L3FWD_EVENTDEV_H__
+
+#include <rte_common.h>
+#include <rte_eventdev.h>
+#include <rte_spinlock.h>
+
+#include "l3fwd.h"
+
+struct l3fwd_event_resources {
+	uint8_t sched_type;
+	uint8_t enabled;
+	uint8_t eth_rx_queues;
+};
+
+struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
+void l3fwd_event_resource_setup(void);
+
+#endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4dea12a65..8a5cdb52e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -19,6 +19,7 @@
 #include <rte_vect.h>
 #include <rte_byteorder.h>
 #include <rte_log.h>
+#include <rte_malloc.h>
 #include <rte_memory.h>
 #include <rte_memcpy.h>
 #include <rte_eal.h>
@@ -33,7 +34,6 @@
 #include <rte_random.h>
 #include <rte_debug.h>
 #include <rte_ether.h>
-#include <rte_ethdev.h>
 #include <rte_mempool.h>
 #include <rte_mbuf.h>
 #include <rte_ip.h>
@@ -46,6 +46,7 @@
 #include <cmdline_parse_etheraddr.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 /*
  * Configurable number of RX/TX ring descriptors
@@ -289,7 +290,9 @@ print_usage(const char *prgname)
 		" [--hash-entry-num]"
 		" [--ipv6]"
 		" [--parse-ptype]"
-		" [--per-port-pool]\n\n"
+		" [--per-port-pool]"
+		" [--mode]"
+		" [--eventq-sched]\n\n"
 
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -304,7 +307,16 @@ print_usage(const char *prgname)
 		"  --hash-entry-num: Specify the hash entry number in hexadecimal to be setup\n"
 		"  --ipv6: Set if running ipv6 packets\n"
 		"  --parse-ptype: Set to use software to analyze packet type\n"
-		"  --per-port-pool: Use separate buffer pool per port\n\n",
+		"  --per-port-pool: Use separate buffer pool per port\n"
+		"  --mode: Packet transfer mode for I/O, poll or eventdev\n"
+		"          Default mode = poll\n"
+		"  --eventq-sched: Event queue synchronization method\n"
+		"                  ordered, atomic or parallel.\n"
+		"                  Default: atomic\n"
+		"                  Valid only if --mode=eventdev\n"
+		"  --event-eth-rxqs: Number of ethernet RX queues per device.\n"
+		"                    Default: 1\n"
+		"                    Valid only if --mode=eventdev\n\n",
 		prgname);
 }
 
@@ -440,6 +452,48 @@ parse_eth_dest(const char *optarg)
 	*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
 }
 
+static void
+parse_mode(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strcmp(optarg, "poll"))
+		evt_rsrc->enabled = false;
+	else if (!strcmp(optarg, "eventdev"))
+		evt_rsrc->enabled = true;
+}
+
+static void
+parse_eventq_sched(const char *optarg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+
+	if (!strcmp(optarg, "ordered"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ORDERED;
+	if (!strcmp(optarg, "atomic"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_ATOMIC;
+	if (!strcmp(optarg, "parallel"))
+		evt_rsrc->sched_type = RTE_SCHED_TYPE_PARALLEL;
+}
+
+static void
+parse_event_eth_rx_queues(const char *eth_rx_queues)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	char *end = NULL;
+	uint8_t num_eth_rx_queues;
+
+	/* parse decimal string */
+	num_eth_rx_queues = strtoul(eth_rx_queues, &end, 10);
+	if ((eth_rx_queues[0] == '\0') || (end == NULL) || (*end != '\0'))
+		return;
+
+	if (num_eth_rx_queues == 0)
+		return;
+
+	evt_rsrc->eth_rx_queues = num_eth_rx_queues;
+}
+
 #define MAX_JUMBO_PKT_LEN  9600
 #define MEMPOOL_CACHE_SIZE 256
 
@@ -458,6 +512,9 @@ static const char short_options[] =
 #define CMD_LINE_OPT_HASH_ENTRY_NUM "hash-entry-num"
 #define CMD_LINE_OPT_PARSE_PTYPE "parse-ptype"
 #define CMD_LINE_OPT_PER_PORT_POOL "per-port-pool"
+#define CMD_LINE_OPT_MODE "mode"
+#define CMD_LINE_OPT_EVENTQ_SYNC "eventq-sched"
+#define CMD_LINE_OPT_EVENT_ETH_RX_QUEUES "event-eth-rxqs"
 enum {
 	/* long options mapped to a short option */
 
@@ -472,6 +529,9 @@ enum {
 	CMD_LINE_OPT_HASH_ENTRY_NUM_NUM,
 	CMD_LINE_OPT_PARSE_PTYPE_NUM,
 	CMD_LINE_OPT_PARSE_PER_PORT_POOL,
+	CMD_LINE_OPT_MODE_NUM,
+	CMD_LINE_OPT_EVENTQ_SYNC_NUM,
+	CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM,
 };
 
 static const struct option lgopts[] = {
@@ -483,6 +543,10 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_HASH_ENTRY_NUM, 1, 0, CMD_LINE_OPT_HASH_ENTRY_NUM_NUM},
 	{CMD_LINE_OPT_PARSE_PTYPE, 0, 0, CMD_LINE_OPT_PARSE_PTYPE_NUM},
 	{CMD_LINE_OPT_PER_PORT_POOL, 0, 0, CMD_LINE_OPT_PARSE_PER_PORT_POOL},
+	{CMD_LINE_OPT_MODE, 1, 0, CMD_LINE_OPT_MODE_NUM},
+	{CMD_LINE_OPT_EVENTQ_SYNC, 1, 0, CMD_LINE_OPT_EVENTQ_SYNC_NUM},
+	{CMD_LINE_OPT_EVENT_ETH_RX_QUEUES, 1, 0,
+					CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -508,6 +572,10 @@ parse_args(int argc, char **argv)
 	char **argvopt;
 	int option_index;
 	char *prgname = argv[0];
+	uint8_t lcore_params = 0;
+	uint8_t eventq_sched = 0;
+	uint8_t eth_rx_q = 0;
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 
 	argvopt = argv;
 
@@ -546,6 +614,7 @@ parse_args(int argc, char **argv)
 				print_usage(prgname);
 				return -1;
 			}
+			lcore_params = 1;
 			break;
 
 		case CMD_LINE_OPT_ETH_DEST_NUM:
@@ -607,6 +676,20 @@ parse_args(int argc, char **argv)
 			per_port_pool = 1;
 			break;
 
+		case CMD_LINE_OPT_MODE_NUM:
+			parse_mode(optarg);
+			break;
+
+		case CMD_LINE_OPT_EVENTQ_SYNC_NUM:
+			parse_eventq_sched(optarg);
+			eventq_sched = 1;
+			break;
+
+		case CMD_LINE_OPT_EVENT_ETH_RX_QUEUES_NUM:
+			parse_event_eth_rx_queues(optarg);
+			eth_rx_q = 1;
+			break;
+
 		default:
 			print_usage(prgname);
 			return -1;
@@ -619,6 +702,21 @@ parse_args(int argc, char **argv)
 		return -1;
 	}
 
+	if (evt_rsrc->enabled && lcore_params) {
+		fprintf(stderr, "lcore config is not valid when event mode is selected\n");
+		return -1;
+	}
+
+	if (!evt_rsrc->enabled && eth_rx_q) {
+		fprintf(stderr, "eth_rx_queues is valid only when event mode is selected\n");
+		return -1;
+	}
+
+	if (!evt_rsrc->enabled && eventq_sched) {
+		fprintf(stderr, "eventq_sched is valid only when event mode is selected\n");
+		return -1;
+	}
+
 	/*
 	 * Nothing is selected, pick longest-prefix match
 	 * as default match.
@@ -811,6 +909,7 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 int
 main(int argc, char **argv)
 {
+	struct l3fwd_event_resources *evt_rsrc;
 	struct lcore_conf *qconf;
 	struct rte_eth_dev_info dev_info;
 	struct rte_eth_txconf *txconf;
@@ -839,11 +938,16 @@ main(int argc, char **argv)
 		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
 	}
 
+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	/* Configure eventdev parameters if user has requested */
+	l3fwd_event_resource_setup();
+
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
 
diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 6dd4b9022..864327c7b 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -6,7 +6,7 @@
 # To build this example as a standalone application with an already-installed
 # DPDK instance, use 'make'
 
-deps += ['hash', 'lpm']
+deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
 )
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread
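A minimal usage sketch, not part of the patch, of the singleton accessor added
above: any file that includes l3fwd_event.h can fetch the same
struct l3fwd_event_resources and branch on the mode selected on the command
line. The launch_datapath() wrapper below is hypothetical.

#include "l3fwd.h"
#include "l3fwd_event.h"

/* Hypothetical wrapper, for illustration only. */
static void
launch_datapath(void)
{
	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();

	if (evt_rsrc->enabled) {
		/* --mode=eventdev: event main loops are added by the
		 * later patches in this series */
	} else {
		/* default --mode=poll path of the existing application */
	}
}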

* [dpdk-dev] [PATCH v5 02/11] examples/l3fwd: split pipelines based on capability
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 01/11] examples/l3fwd: add framework for event device pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 03/11] examples/l3fwd: add event device configuration pbhagavatula
                         ` (9 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add infra to split eventdev framework based on event Tx adapter
capability.
If the event Tx adapter has internal port capability then we use
`rte_event_eth_tx_adapter_enqueue` to transmit packets; otherwise
we use a SINGLE_LINK event queue to enqueue packets to a service
core which is responsible for transmitting packets.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 examples/l3fwd/Makefile                    |  1 +
 examples/l3fwd/l3fwd_event.c               | 31 ++++++++++++++++++++++
 examples/l3fwd/l3fwd_event.h               | 20 ++++++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 14 ++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c | 14 ++++++++++
 examples/l3fwd/meson.build                 |  3 ++-
 6 files changed, 82 insertions(+), 1 deletion(-)
 create mode 100644 examples/l3fwd/l3fwd_event_generic.c
 create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c

diff --git a/examples/l3fwd/Makefile b/examples/l3fwd/Makefile
index c892b867b..59a110d12 100644
--- a/examples/l3fwd/Makefile
+++ b/examples/l3fwd/Makefile
@@ -6,6 +6,7 @@ APP = l3fwd
 
 # all source are stored in SRCS-y
 SRCS-y := main.c l3fwd_lpm.c l3fwd_em.c l3fwd_event.c
+SRCS-y += l3fwd_event_generic.c l3fwd_event_internal_port.c
 
 # Build using pkg-config variables if possible
 ifeq ($(shell pkg-config --exists libdpdk && echo 0),0)
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 1040da4ea..62218f3ca 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -30,6 +30,31 @@ l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
+static void
+l3fwd_event_capability_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t caps = 0;
+	uint16_t i;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(i) {
+		ret = rte_event_eth_tx_adapter_caps_get(0, i, &caps);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+				 "Invalid capability for Tx adptr port %d\n",
+				 i);
+
+		evt_rsrc->tx_mode_q |= !(caps &
+				   RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
+	}
+
+	if (evt_rsrc->tx_mode_q)
+		l3fwd_event_set_generic_ops(&evt_rsrc->ops);
+	else
+		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
+}
+
 void
 l3fwd_event_resource_setup(void)
 {
@@ -37,4 +62,10 @@ l3fwd_event_resource_setup(void)
 
 	if (!evt_rsrc->enabled)
 		return;
+
+	if (!rte_event_dev_count())
+		rte_exit(EXIT_FAILURE, "No Eventdev found\n");
+
+	/* Setup eventdev capability callbacks */
+	l3fwd_event_capability_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 4c23c4e1a..d25c8d222 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,17 +7,37 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_tx_adapter.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
 
+typedef uint32_t (*event_device_setup_cb)(void);
+typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
+typedef void (*event_port_setup_cb)(void);
+typedef void (*adapter_setup_cb)(void);
+typedef int (*event_loop_cb)(void *);
+
+struct l3fwd_event_setup_ops {
+	event_device_setup_cb event_device_setup;
+	event_queue_setup_cb event_queue_setup;
+	event_port_setup_cb event_port_setup;
+	adapter_setup_cb adapter_setup;
+	event_loop_cb lpm_event_loop;
+	event_loop_cb em_event_loop;
+};
+
 struct l3fwd_event_resources {
+	struct l3fwd_event_setup_ops ops;
 	uint8_t sched_type;
+	uint8_t tx_mode_q;
 	uint8_t enabled;
 	uint8_t eth_rx_queues;
 };
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
 void l3fwd_event_resource_setup(void);
+void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
+void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
 #endif /* __L3FWD_EVENTDEV_H__ */
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
new file mode 100644
index 000000000..7fff850e5
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
new file mode 100644
index 000000000..085e9c825
--- /dev/null
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <stdbool.h>
+
+#include "l3fwd.h"
+#include "l3fwd_event.h"
+
+void
+l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
+{
+	RTE_SET_USED(ops);
+}
diff --git a/examples/l3fwd/meson.build b/examples/l3fwd/meson.build
index 864327c7b..ebed3b518 100644
--- a/examples/l3fwd/meson.build
+++ b/examples/l3fwd/meson.build
@@ -8,5 +8,6 @@
 
 deps += ['hash', 'lpm', 'eventdev']
 sources = files(
-	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c', 'main.c'
+	'l3fwd_em.c', 'l3fwd_lpm.c', 'l3fwd_event.c',
+	'l3fwd_event_internal_port.c', 'l3fwd_event_generic.c', 'main.c'
 )
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread
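A hedged sketch, not part of the patch, of the two transmit paths this
capability split selects between: with INTERNAL_PORT capability a worker hands
the event straight to rte_event_eth_tx_adapter_enqueue(), otherwise it forwards
the event to the SINGLE_LINK Tx queue consumed by the Tx adapter service core.
The queue id and the internal_port flag passed in are placeholders.

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Sketch only: transmit one event depending on Tx adapter capability.
 * dev_id/port_id are the worker's event device and event port. */
static void
send_event(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
	   bool internal_port, uint8_t tx_queue_id)
{
	if (internal_port) {
		/* the driver transmits directly, no service core involved */
		rte_event_eth_tx_adapter_enqueue(dev_id, port_id, ev, 1, 0);
	} else {
		/* mark the mbuf's Tx queue and route the event to the
		 * SINGLE_LINK queue drained by the Tx adapter service */
		rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);
		ev->queue_id = tx_queue_id;
		ev->op = RTE_EVENT_OP_FORWARD;
		rte_event_enqueue_burst(dev_id, port_id, ev, 1);
	}
}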

* [dpdk-dev] [PATCH v5 03/11] examples/l3fwd: add event device configuration
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 01/11] examples/l3fwd: add framework for event device pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
                         ` (8 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event device configuration based on the capabilities of the
probed event device.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  3 +
 examples/l3fwd/l3fwd_event.h               | 36 ++++++++++
 examples/l3fwd/l3fwd_event_generic.c       | 76 +++++++++++++++++++++-
 examples/l3fwd/l3fwd_event_internal_port.c | 76 +++++++++++++++++++++-
 4 files changed, 189 insertions(+), 2 deletions(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 62218f3ca..9cb46c711 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -68,4 +68,7 @@ l3fwd_event_resource_setup(void)
 
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
+
+	/* Event device configuration */
+	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index d25c8d222..fc3862ec1 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -8,6 +8,7 @@
 #include <rte_common.h>
 #include <rte_eventdev.h>
 #include <rte_event_eth_tx_adapter.h>
+#include <rte_service.h>
 #include <rte_spinlock.h>
 
 #include "l3fwd.h"
@@ -18,6 +19,29 @@ typedef void (*event_port_setup_cb)(void);
 typedef void (*adapter_setup_cb)(void);
 typedef int (*event_loop_cb)(void *);
 
+struct l3fwd_event_queues {
+	uint8_t *event_q_id;
+	uint8_t	nb_queues;
+};
+
+struct l3fwd_event_ports {
+	uint8_t *event_p_id;
+	uint8_t	nb_ports;
+	rte_spinlock_t lock;
+};
+
+struct l3fwd_event_rx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_rx_adptr;
+	uint8_t *rx_adptr;
+};
+
+struct l3fwd_event_tx_adptr {
+	uint32_t service_id;
+	uint8_t	nb_tx_adptr;
+	uint8_t *tx_adptr;
+};
+
 struct l3fwd_event_setup_ops {
 	event_device_setup_cb event_device_setup;
 	event_queue_setup_cb event_queue_setup;
@@ -28,9 +52,21 @@ struct l3fwd_event_setup_ops {
 };
 
 struct l3fwd_event_resources {
+	struct rte_event_port_conf def_p_conf;
+	struct l3fwd_event_rx_adptr rx_adptr;
+	struct l3fwd_event_tx_adptr tx_adptr;
+	uint8_t disable_implicit_release;
 	struct l3fwd_event_setup_ops ops;
+	struct rte_mempool * (*pkt_pool)[NB_SOCKETS];
+	struct l3fwd_event_queues evq;
+	struct l3fwd_event_ports evp;
+	uint32_t port_mask;
+	uint8_t per_port_pool;
+	uint8_t event_d_id;
 	uint8_t sched_type;
 	uint8_t tx_mode_q;
+	uint8_t deq_depth;
+	uint8_t has_burst;
 	uint8_t enabled;
 	uint8_t eth_rx_queues;
 };
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 7fff850e5..c831cfd66 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -7,8 +7,82 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+	/* Enable implicit release */
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
+		evt_rsrc->disable_implicit_release = 0;
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	/* One queue for each ethdev port + one Tx adapter Single link queue. */
+	event_d_conf.nb_event_queues = ethdev_count + 1;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count() - rte_service_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 085e9c825..f3e1ab3a2 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -7,8 +7,82 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static uint32_t
+l3fwd_event_device_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_config event_d_conf = {
+		.nb_events_limit  = 4096,
+		.nb_event_queue_flows = 1024,
+		.nb_event_port_dequeue_depth = 128,
+		.nb_event_port_enqueue_depth = 128
+	};
+	struct rte_event_dev_info dev_info;
+	const uint8_t event_d_id = 0; /* Always use first event device only */
+	uint32_t event_queue_cfg = 0;
+	uint16_t ethdev_count = 0;
+	uint16_t num_workers = 0;
+	uint16_t port_id;
+	int ret;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ethdev_count++;
+	}
+
+	/* Event device configuration */
+	rte_event_dev_info_get(event_d_id, &dev_info);
+
+	/* Enable implicit release */
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE)
+		evt_rsrc->disable_implicit_release = 0;
+
+	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES)
+		event_queue_cfg |= RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+
+	event_d_conf.nb_event_queues = ethdev_count;
+	if (dev_info.max_event_queues < event_d_conf.nb_event_queues)
+		event_d_conf.nb_event_queues = dev_info.max_event_queues;
+
+	if (dev_info.max_num_events < event_d_conf.nb_events_limit)
+		event_d_conf.nb_events_limit = dev_info.max_num_events;
+
+	if (dev_info.max_event_queue_flows < event_d_conf.nb_event_queue_flows)
+		event_d_conf.nb_event_queue_flows =
+						dev_info.max_event_queue_flows;
+
+	if (dev_info.max_event_port_dequeue_depth <
+				event_d_conf.nb_event_port_dequeue_depth)
+		event_d_conf.nb_event_port_dequeue_depth =
+				dev_info.max_event_port_dequeue_depth;
+
+	if (dev_info.max_event_port_enqueue_depth <
+				event_d_conf.nb_event_port_enqueue_depth)
+		event_d_conf.nb_event_port_enqueue_depth =
+				dev_info.max_event_port_enqueue_depth;
+
+	num_workers = rte_lcore_count();
+	if (dev_info.max_event_ports < num_workers)
+		num_workers = dev_info.max_event_ports;
+
+	event_d_conf.nb_event_ports = num_workers;
+	evt_rsrc->evp.nb_ports = num_workers;
+	evt_rsrc->evq.nb_queues = event_d_conf.nb_event_queues;
+	evt_rsrc->has_burst = !!(dev_info.event_dev_cap &
+				    RTE_EVENT_DEV_CAP_BURST_MODE);
+
+	ret = rte_event_dev_configure(event_d_id, &event_d_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event device\n");
+
+	evt_rsrc->event_d_id = event_d_id;
+	return event_queue_cfg;
+}
+
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
-	RTE_SET_USED(ops);
+	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread
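If the defaults in the patch above (4096 inflight events, 1024 flows, depth 128)
look surprising for a given PMD, the limits they get clamped against can be
inspected directly. A small sketch, assuming event device 0:

#include <stdio.h>
#include <rte_eventdev.h>

/* Sketch only: print the limits l3fwd_event_device_setup_*() clamps to. */
static void
dump_eventdev_limits(void)
{
	struct rte_event_dev_info info;

	rte_event_dev_info_get(0, &info);
	printf("max queues: %u, max ports: %u\n",
	       info.max_event_queues, info.max_event_ports);
	printf("max inflight events: %d, max queue flows: %u\n",
	       info.max_num_events, info.max_event_queue_flows);
	printf("max dequeue/enqueue depth: %u/%u\n",
	       info.max_event_port_dequeue_depth,
	       info.max_event_port_enqueue_depth);
}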

* [dpdk-dev] [PATCH v5 04/11] examples/l3fwd: add ethdev setup based on eventdev
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (2 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 03/11] examples/l3fwd: add event device configuration pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
                         ` (7 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev, Nipun Gupta

From: Sunil Kumar Kori <skori@marvell.com>

Add ethernet port Rx/Tx queue setup for the event device; these queues are
later used for setting up the event eth Rx/Tx adapters.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Nipun Gupta <nipun.gupta@nxp.com>
---
 examples/l3fwd/l3fwd.h       |  10 +++
 examples/l3fwd/l3fwd_event.c | 138 ++++++++++++++++++++++++++++++++++-
 examples/l3fwd/l3fwd_event.h |   2 +-
 examples/l3fwd/main.c        |  15 ++--
 4 files changed, 153 insertions(+), 12 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index e2399d76b..ed5ba5f3b 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -16,9 +16,16 @@
 #define NO_HASH_MULTI_LOOKUP 1
 #endif
 
+/*
+ * Configurable number of RX/TX ring descriptors
+ */
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+
 #define MAX_PKT_BURST     32
 #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */
 
+#define MEMPOOL_CACHE_SIZE 256
 #define MAX_RX_QUEUE_PER_LCORE 16
 
 /*
@@ -170,6 +177,9 @@ is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)
 }
 #endif /* DO_RFC_1812_CHECKS */
 
+int
+init_mem(uint16_t portid, unsigned int nb_mbuf);
+
 /* Function pointers for LPM or EM functionality. */
 void
 setup_lpm(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 9cb46c711..f9491ecc6 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -10,6 +10,14 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+	char buf[RTE_ETHER_ADDR_FMT_SIZE];
+	rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+	printf("%s%s", name, buf);
+}
+
 struct l3fwd_event_resources *
 l3fwd_get_eventdev_rsrc(void)
 {
@@ -30,6 +38,131 @@ l3fwd_get_eventdev_rsrc(void)
 	return NULL;
 }
 
+static void
+l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint16_t nb_ports = rte_eth_dev_count_avail();
+	uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+	uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+	unsigned int nb_lcores = rte_lcore_count();
+	struct rte_eth_conf local_port_conf;
+	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf txconf;
+	struct rte_eth_rxconf rxconf;
+	unsigned int nb_mbuf;
+	uint16_t port_id;
+	uint8_t eth_qid;
+	int32_t ret;
+
+	/* initialize all ports */
+	RTE_ETH_FOREACH_DEV(port_id) {
+		local_port_conf = *port_conf;
+		/* skip ports that are not enabled */
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0) {
+			printf("\nSkipping disabled port %d\n", port_id);
+			continue;
+		}
+
+		/* init port */
+		printf("Initializing port %d ... ", port_id);
+		fflush(stdout);
+		printf("Creating queues: nb_rxq=%d nb_txq=1...\n",
+		       evt_rsrc->eth_rx_queues);
+
+		rte_eth_dev_info_get(port_id, &dev_info);
+		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+			local_port_conf.txmode.offloads |=
+						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
+						dev_info.flow_type_rss_offloads;
+		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
+				port_conf->rx_adv_conf.rss_conf.rss_hf) {
+			printf("Port %u modified RSS hash function "
+			       "based on hardware support,"
+			       "requested:%#"PRIx64" configured:%#"PRIx64"\n",
+			       port_id,
+			       port_conf->rx_adv_conf.rss_conf.rss_hf,
+			       local_port_conf.rx_adv_conf.rss_conf.rss_hf);
+		}
+
+		ret = rte_eth_dev_configure(port_id, evt_rsrc->eth_rx_queues,
+					    1, &local_port_conf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot configure device: err=%d, port=%d\n",
+				 ret, port_id);
+
+		ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd,
+						       &nb_txd);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Cannot adjust number of descriptors: err=%d, "
+				 "port=%d\n", ret, port_id);
+
+		rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+		print_ethaddr(" Address:", &ports_eth_addr[port_id]);
+		printf(", ");
+		print_ethaddr("Destination:",
+			(const struct rte_ether_addr *)&dest_eth_addr[port_id]);
+		printf(", ");
+
+		/* prepare source MAC for each port. */
+		rte_ether_addr_copy(&ports_eth_addr[port_id],
+			(struct rte_ether_addr *)(val_eth + port_id) + 1);
+
+		/* init memory */
+		if (!evt_rsrc->per_port_pool) {
+			/* port_id = 0; this is *not* signifying the first port,
+			 * rather, it signifies that port_id is ignored.
+			 */
+			nb_mbuf = RTE_MAX(nb_ports * nb_rxd +
+					  nb_ports * nb_txd +
+					  nb_ports * nb_lcores *
+							MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(0, nb_mbuf);
+		} else {
+			nb_mbuf = RTE_MAX(nb_rxd + nb_rxd +
+					  nb_lcores * MAX_PKT_BURST +
+					  nb_lcores * MEMPOOL_CACHE_SIZE,
+					  8192u);
+			ret = init_mem(port_id, nb_mbuf);
+		}
+		/* init Rx queues per port */
+		rxconf = dev_info.default_rxconf;
+		rxconf.offloads = local_port_conf.rxmode.offloads;
+
+		for (eth_qid = 0; eth_qid < evt_rsrc->eth_rx_queues;
+		     eth_qid++) {
+			if (!evt_rsrc->per_port_pool)
+				ret = rte_eth_rx_queue_setup(port_id, eth_qid,
+					nb_rxd, 0, &rxconf,
+					evt_rsrc->pkt_pool[0][0]);
+			else
+				ret = rte_eth_rx_queue_setup(port_id, eth_qid,
+					nb_rxd, 0, &rxconf,
+					evt_rsrc->pkt_pool[port_id][0]);
+			if (ret < 0)
+				rte_exit(EXIT_FAILURE,
+					 "rte_eth_rx_queue_setup: err=%d, "
+					 "port=%d, eth_qid: %d\n",
+					 ret, port_id, eth_qid);
+		}
+
+		/* init one Tx queue per port */
+		txconf = dev_info.default_txconf;
+		txconf.offloads = local_port_conf.txmode.offloads;
+		ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, 0, &txconf);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "rte_eth_tx_queue_setup: err=%d, "
+				 "port=%d\n", ret, port_id);
+	}
+}
+
 static void
 l3fwd_event_capability_setup(void)
 {
@@ -56,7 +189,7 @@ l3fwd_event_capability_setup(void)
 }
 
 void
-l3fwd_event_resource_setup(void)
+l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 
@@ -69,6 +202,9 @@ l3fwd_event_resource_setup(void)
 	/* Setup eventdev capability callbacks */
 	l3fwd_event_capability_setup();
 
+	/* Ethernet device configuration */
+	l3fwd_eth_dev_port_setup(port_conf);
+
 	/* Event device configuration */
 	evt_rsrc->ops.event_device_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index fc3862ec1..53feea069 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -72,7 +72,7 @@ struct l3fwd_event_resources {
 };
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
-void l3fwd_event_resource_setup(void);
+void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 8a5cdb52e..04526343f 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -48,12 +48,6 @@
 #include "l3fwd.h"
 #include "l3fwd_event.h"
 
-/*
- * Configurable number of RX/TX ring descriptors
- */
-#define RTE_TEST_RX_DESC_DEFAULT 1024
-#define RTE_TEST_TX_DESC_DEFAULT 1024
-
 #define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
 #define MAX_RX_QUEUE_PER_PORT 128
 
@@ -495,7 +489,6 @@ parse_event_eth_rx_queues(const char *eth_rx_queues)
 }
 
 #define MAX_JUMBO_PKT_LEN  9600
-#define MEMPOOL_CACHE_SIZE 256
 
 static const char short_options[] =
 	"p:"  /* portmask */
@@ -752,7 +745,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
 	printf("%s%s", name, buf);
 }
 
-static int
+int
 init_mem(uint16_t portid, unsigned int nb_mbuf)
 {
 	struct lcore_conf *qconf;
@@ -939,14 +932,16 @@ main(int argc, char **argv)
 	}
 
 	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	RTE_SET_USED(evt_rsrc);
 	/* parse application arguments (after the EAL ones) */
 	ret = parse_args(argc, argv);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
 
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
 	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup();
+	l3fwd_event_resource_setup(&port_conf);
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread
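As a quick sanity check, not part of the patch, of the shared-pool sizing used
in l3fwd_eth_dev_port_setup() above, plugging a small configuration into the
same formula shows the 8192 floor taking over:

#include <stdio.h>

/* Sketch only: evaluate the nb_mbuf formula from the patch for 2 ports,
 * 4 lcores, 1024 Rx/Tx descriptors, burst 32, mempool cache 256. */
int
main(void)
{
	unsigned int nb_ports = 2, nb_lcores = 4;
	unsigned int nb_rxd = 1024, nb_txd = 1024;
	unsigned int burst = 32, cache = 256;
	unsigned int need = nb_ports * nb_rxd + nb_ports * nb_txd +
			    nb_ports * nb_lcores * burst +
			    nb_lcores * cache;
	unsigned int nb_mbuf = need > 8192u ? need : 8192u;

	/* 2048 + 2048 + 256 + 1024 = 5376, so the pool is rounded up */
	printf("need=%u -> nb_mbuf=%u\n", need, nb_mbuf);
	return 0;
}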

* [dpdk-dev] [PATCH v5 05/11] examples/l3fwd: add event port and queue setup
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (3 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
                         ` (6 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Sunil Kumar Kori <skori@marvell.com>

Add event device queue and port setup based on event eth Tx adapter
capabilities.

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |  28 +++++-
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 103 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  98 ++++++++++++++++++++
 4 files changed, 229 insertions(+), 1 deletion(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index f9491ecc6..b58f9b79a 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -188,10 +188,30 @@ l3fwd_event_capability_setup(void)
 		l3fwd_event_set_internal_port_ops(&evt_rsrc->ops);
 }
 
+int
+l3fwd_get_free_event_port(struct l3fwd_event_resources *evt_rsrc)
+{
+	static int index;
+	int port_id;
+
+	rte_spinlock_lock(&evt_rsrc->evp.lock);
+	if (index >= evt_rsrc->evp.nb_ports) {
+		printf("No free event port is available\n");
+		printf("No free event port is available\n");
+		rte_spinlock_unlock(&evt_rsrc->evp.lock);
+		return -1;
+
+	port_id = evt_rsrc->evp.event_p_id[index];
+	index++;
+	rte_spinlock_unlock(&evt_rsrc->evp.lock);
+
+	return port_id;
+}
+
 void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint32_t event_queue_cfg;
 
 	if (!evt_rsrc->enabled)
 		return;
@@ -206,5 +226,11 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	l3fwd_eth_dev_port_setup(port_conf);
 
 	/* Event device configuration */
-	evt_rsrc->ops.event_device_setup();
+	event_queue_cfg = evt_rsrc->ops.event_device_setup();
+
+	/* Event queue configuration */
+	evt_rsrc->ops.event_queue_setup(event_queue_cfg);
+
+	/* Event port configuration */
+	evt_rsrc->ops.event_port_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 53feea069..4bceca920 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -73,6 +73,7 @@ struct l3fwd_event_resources {
 
 struct l3fwd_event_resources *l3fwd_get_eventdev_rsrc(void);
 void l3fwd_event_resource_setup(struct rte_eth_conf *port_conf);
+int l3fwd_get_free_event_port(struct l3fwd_event_resources *eventdev_rsrc);
 void l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops);
 void l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops);
 
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index c831cfd66..2532839bf 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -81,8 +81,111 @@ l3fwd_event_device_setup_generic(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("No space is available\n");
+
+	memset(&def_p_conf, 0, sizeof(struct rte_event_port_conf));
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+			def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+	evt_rsrc->deq_depth = def_p_conf.dequeue_depth;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id,
+					  evt_rsrc->evq.event_q_id,
+					  NULL,
+					  evt_rsrc->evq.nb_queues - 1);
+		if (ret != (evt_rsrc->evq.nb_queues - 1))
+			rte_panic("Error in linking event port %d to queues\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+	}
+	/* init spinlock */
+	rte_spinlock_init(&evt_rsrc->evp.lock);
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id;
+	int32_t ret;
+
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	rte_event_queue_default_conf_get(event_d_id, 0, &def_q_conf);
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	for (event_q_id = 0; event_q_id < (evt_rsrc->evq.nb_queues - 1);
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+
+	event_q_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
+	event_q_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	ret = rte_event_queue_setup(event_d_id, event_q_id, &event_q_conf);
+	if (ret < 0)
+		rte_panic("Error in configuring event queue for Tx adapter\n");
+	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
+	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
+	ops->event_port_setup = l3fwd_event_port_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index f3e1ab3a2..59b7d35b7 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -80,9 +80,107 @@ l3fwd_event_device_setup_internal_port(void)
 	return event_queue_cfg;
 }
 
+static void
+l3fwd_event_port_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_port_conf event_p_conf = {
+		.dequeue_depth = 32,
+		.enqueue_depth = 32,
+		.new_event_threshold = 4096
+	};
+	struct rte_event_port_conf def_p_conf;
+	uint8_t event_p_id;
+	int32_t ret;
+
+	evt_rsrc->evp.event_p_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evp.nb_ports);
+	if (!evt_rsrc->evp.event_p_id)
+		rte_panic("Failed to allocate memory for Event Ports\n");
+
+	rte_event_port_default_conf_get(event_d_id, 0, &def_p_conf);
+	if (def_p_conf.new_event_threshold < event_p_conf.new_event_threshold)
+		event_p_conf.new_event_threshold =
+						def_p_conf.new_event_threshold;
+
+	if (def_p_conf.dequeue_depth < event_p_conf.dequeue_depth)
+		event_p_conf.dequeue_depth = def_p_conf.dequeue_depth;
+
+	if (def_p_conf.enqueue_depth < event_p_conf.enqueue_depth)
+		event_p_conf.enqueue_depth = def_p_conf.enqueue_depth;
+
+	event_p_conf.disable_implicit_release =
+		evt_rsrc->disable_implicit_release;
+
+	for (event_p_id = 0; event_p_id < evt_rsrc->evp.nb_ports;
+								event_p_id++) {
+		ret = rte_event_port_setup(event_d_id, event_p_id,
+					   &event_p_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event port %d\n",
+				  event_p_id);
+
+		ret = rte_event_port_link(event_d_id, event_p_id, NULL,
+					  NULL, 0);
+		if (ret < 0)
+			rte_panic("Error in linking event port %d to queue\n",
+				  event_p_id);
+		evt_rsrc->evp.event_p_id[event_p_id] = event_p_id;
+
+		/* init spinlock */
+		rte_spinlock_init(&evt_rsrc->evp.lock);
+	}
+
+	evt_rsrc->def_p_conf = event_p_conf;
+}
+
+static void
+l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct rte_event_queue_conf event_q_conf = {
+		.nb_atomic_flows = 1024,
+		.nb_atomic_order_sequences = 1024,
+		.event_queue_cfg = event_queue_cfg,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL
+	};
+	struct rte_event_queue_conf def_q_conf;
+	uint8_t event_q_id = 0;
+	int32_t ret;
+
+	rte_event_queue_default_conf_get(event_d_id, event_q_id, &def_q_conf);
+
+	if (def_q_conf.nb_atomic_flows < event_q_conf.nb_atomic_flows)
+		event_q_conf.nb_atomic_flows = def_q_conf.nb_atomic_flows;
+
+	if (def_q_conf.nb_atomic_order_sequences <
+					event_q_conf.nb_atomic_order_sequences)
+		event_q_conf.nb_atomic_order_sequences =
+					def_q_conf.nb_atomic_order_sequences;
+
+	event_q_conf.event_queue_cfg = event_queue_cfg;
+	event_q_conf.schedule_type = evt_rsrc->sched_type;
+	evt_rsrc->evq.event_q_id = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->evq.nb_queues);
+	if (!evt_rsrc->evq.event_q_id)
+		rte_panic("Memory allocation failure\n");
+
+	for (event_q_id = 0; event_q_id < evt_rsrc->evq.nb_queues;
+								event_q_id++) {
+		ret = rte_event_queue_setup(event_d_id, event_q_id,
+					    &event_q_conf);
+		if (ret < 0)
+			rte_panic("Error in configuring event queue\n");
+		evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
+	}
+}
 
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
+	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
+	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread
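To show how the port bookkeeping added above is meant to be consumed, here is a
minimal worker sketch, not part of the patch (the real LPM/EM event loops arrive
later in the series). The keep_running flag is a placeholder for whatever
termination condition the application uses.

#include <stdbool.h>
#include <rte_common.h>
#include <rte_eventdev.h>

#include "l3fwd.h"
#include "l3fwd_event.h"

static volatile bool keep_running = true; /* placeholder flag */

/* Sketch only: each worker lcore claims one of the event ports set up
 * above and dequeues single events from its linked queues. */
static int
event_worker_sketch(void *args __rte_unused)
{
	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
	const int port_id = l3fwd_get_free_event_port(evt_rsrc);
	struct rte_event ev;

	if (port_id < 0)
		return -1;

	while (keep_running) {
		if (!rte_event_dequeue_burst(evt_rsrc->event_d_id, port_id,
					     &ev, 1, 0))
			continue;
		/* lookup and forward ev.mbuf here (later patches) */
	}

	return 0;
}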

* [dpdk-dev] [PATCH v5 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (4 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
                         ` (5 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add event eth Rx/Tx adapter setup for both generic and internal port
event device pipelines.
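
For quick reference, a minimal sketch of the generic (service backed)
adapter setup sequence is shown below. The adapter id, queue ids and the
port configuration are illustrative assumptions; the patch derives them
from the l3fwd event resources at run time and checks every return value.

#include <string.h>

#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>

/*
 * Sketch: one Rx and one Tx adapter on event device 'dev_id', connecting
 * all Rx/Tx queues of ethdev port 0 to event queue 0. Error handling is
 * trimmed for brevity.
 */
static void
adapter_setup_sketch(uint8_t dev_id, struct rte_event_port_conf *p_conf)
{
	struct rte_event_eth_rx_adapter_queue_conf qconf;

	memset(&qconf, 0, sizeof(qconf));
	qconf.ev.queue_id = 0;			/* target event queue */
	qconf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	qconf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;

	/* Rx adapter: -1 adds every Rx queue of ethdev port 0 */
	rte_event_eth_rx_adapter_create(0, dev_id, p_conf);
	rte_event_eth_rx_adapter_queue_add(0, 0, -1, &qconf);
	rte_event_eth_rx_adapter_start(0);

	/* Tx adapter: -1 adds every Tx queue of ethdev port 0 */
	rte_event_eth_tx_adapter_create(0, dev_id, p_conf);
	rte_event_eth_tx_adapter_queue_add(0, 0, -1);
	rte_event_eth_tx_adapter_start(0);
}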

Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c               |   3 +
 examples/l3fwd/l3fwd_event.h               |   1 +
 examples/l3fwd/l3fwd_event_generic.c       | 112 +++++++++++++++++++++
 examples/l3fwd/l3fwd_event_internal_port.c |  93 +++++++++++++++++
 4 files changed, 209 insertions(+)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index b58f9b79a..0a75e39ee 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -233,4 +233,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Event port configuration */
 	evt_rsrc->ops.event_port_setup();
+
+	/* Rx/Tx adapters configuration */
+	evt_rsrc->ops.adapter_setup();
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index 4bceca920..b1ae970d4 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -7,6 +7,7 @@
 
 #include <rte_common.h>
 #include <rte_eventdev.h>
+#include <rte_event_eth_rx_adapter.h>
 #include <rte_event_eth_tx_adapter.h>
 #include <rte_service.h>
 #include <rte_spinlock.h>
diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c
index 2532839bf..1d5d0438d 100644
--- a/examples/l3fwd/l3fwd_event_generic.c
+++ b/examples/l3fwd/l3fwd_event_generic.c
@@ -182,10 +182,122 @@ l3fwd_event_queue_setup_generic(uint32_t event_queue_cfg)
 	evt_rsrc->evq.event_q_id[event_q_id] = event_q_id;
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_generic(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint8_t rx_adptr_id = 0;
+	uint8_t tx_adptr_id = 0;
+	uint8_t tx_port_id = 0;
+	uint16_t port_id;
+	uint32_t service_id;
+	int32_t ret, i = 0;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	/* Rx adapter setup */
+	evt_rsrc->rx_adptr.nb_rx_adptr = 1;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memory for Rx adapter\n");
+	}
+
+	ret = rte_event_eth_rx_adapter_create(rx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create rx adapter\n");
+
+	/* Configure user requested sched type */
+	eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[i];
+		ret = rte_event_eth_rx_adapter_queue_add(rx_adptr_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+		if (i < evt_rsrc->evq.nb_queues)
+			i++;
+	}
+
+	ret = rte_event_eth_rx_adapter_service_id_get(rx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Error getting the service ID for rx adptr\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->rx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_rx_adapter_start(rx_adptr_id);
+	if (ret)
+		rte_panic("Rx adapter[%d] start Failed\n", rx_adptr_id);
+
+	evt_rsrc->rx_adptr.rx_adptr[0] = rx_adptr_id;
+
+	/* Tx adapter setup */
+	evt_rsrc->tx_adptr.nb_tx_adptr = 1;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memory for Tx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_create(tx_adptr_id, event_d_id,
+					      &evt_rsrc->def_p_conf);
+	if (ret)
+		rte_panic("Failed to create tx adapter\n");
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_queue_add(tx_adptr_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+	}
+
+	ret = rte_event_eth_tx_adapter_service_id_get(tx_adptr_id, &service_id);
+	if (ret != -ESRCH && ret != 0)
+		rte_panic("Failed to get Tx adapter service ID\n");
+
+	rte_service_runstate_set(service_id, 1);
+	rte_service_set_runstate_mapped_check(service_id, 0);
+	evt_rsrc->tx_adptr.service_id = service_id;
+
+	ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
+	if (ret)
+		rte_panic("Failed to get Tx adapter port id: %d\n", ret);
+
+	ret = rte_event_port_link(event_d_id, tx_port_id,
+				  &evt_rsrc->evq.event_q_id[
+					evt_rsrc->evq.nb_queues - 1],
+				  NULL, 1);
+	if (ret != 1)
+		rte_panic("Unable to link Tx adapter port to Tx queue:err=%d\n",
+			 ret);
+
+	ret = rte_event_eth_tx_adapter_start(tx_adptr_id);
+	if (ret)
+		rte_panic("Tx adapter[%d] start Failed\n", tx_adptr_id);
+
+	evt_rsrc->tx_adptr.tx_adptr[0] = tx_adptr_id;
+}
+
 void
 l3fwd_event_set_generic_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_generic;
 	ops->event_queue_setup = l3fwd_event_queue_setup_generic;
 	ops->event_port_setup = l3fwd_event_port_setup_generic;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_generic;
 }
diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c
index 59b7d35b7..56d28399a 100644
--- a/examples/l3fwd/l3fwd_event_internal_port.c
+++ b/examples/l3fwd/l3fwd_event_internal_port.c
@@ -177,10 +177,103 @@ l3fwd_event_queue_setup_internal_port(uint32_t event_queue_cfg)
 	}
 }
 
+static void
+l3fwd_rx_tx_adapter_setup_internal_port(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_eth_rx_adapter_queue_conf eth_q_conf;
+	uint8_t event_d_id = evt_rsrc->event_d_id;
+	uint16_t adapter_id = 0;
+	uint16_t nb_adapter = 0;
+	uint16_t port_id;
+	uint8_t q_id = 0;
+	int ret;
+
+	memset(&eth_q_conf, 0, sizeof(eth_q_conf));
+	eth_q_conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		nb_adapter++;
+	}
+
+	evt_rsrc->rx_adptr.nb_rx_adptr = nb_adapter;
+	evt_rsrc->rx_adptr.rx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->rx_adptr.nb_rx_adptr);
+	if (!evt_rsrc->rx_adptr.rx_adptr) {
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memory for Rx adapter\n");
+	}
+
+
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create rx adapter[%d]\n",
+				  adapter_id);
+
+		/* Configure user requested sched type*/
+		eth_q_conf.ev.sched_type = evt_rsrc->sched_type;
+		eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
+		ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
+							 -1, &eth_q_conf);
+		if (ret)
+			rte_panic("Failed to add queues to Rx adapter\n");
+
+		ret = rte_event_eth_rx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Rx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+		if (q_id < evt_rsrc->evq.nb_queues)
+			q_id++;
+	}
+
+	evt_rsrc->tx_adptr.nb_tx_adptr = nb_adapter;
+	evt_rsrc->tx_adptr.tx_adptr = (uint8_t *)malloc(sizeof(uint8_t) *
+					evt_rsrc->tx_adptr.nb_tx_adptr);
+	if (!evt_rsrc->tx_adptr.tx_adptr) {
+		free(evt_rsrc->rx_adptr.rx_adptr);
+		free(evt_rsrc->evp.event_p_id);
+		free(evt_rsrc->evq.event_q_id);
+		rte_panic("Failed to allocate memory for Tx adapter\n");
+	}
+
+	adapter_id = 0;
+	RTE_ETH_FOREACH_DEV(port_id) {
+		if ((evt_rsrc->port_mask & (1 << port_id)) == 0)
+			continue;
+		ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
+						&evt_rsrc->def_p_conf);
+		if (ret)
+			rte_panic("Failed to create tx adapter[%d]\n",
+				  adapter_id);
+
+		ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
+							 -1);
+		if (ret)
+			rte_panic("Failed to add queues to Tx adapter\n");
+
+		ret = rte_event_eth_tx_adapter_start(adapter_id);
+		if (ret)
+			rte_panic("Tx adapter[%d] start Failed\n", adapter_id);
+
+		evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
+		adapter_id++;
+	}
+}
+
 void
 l3fwd_event_set_internal_port_ops(struct l3fwd_event_setup_ops *ops)
 {
 	ops->event_device_setup = l3fwd_event_device_setup_internal_port;
 	ops->event_queue_setup = l3fwd_event_queue_setup_internal_port;
 	ops->event_port_setup = l3fwd_event_port_setup_internal_port;
+	ops->adapter_setup = l3fwd_rx_tx_adapter_setup_internal_port;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v5 07/11] examples/l3fwd: add service core setup based on caps
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (5 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
                         ` (4 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add service core setup when eventdev and Rx/Tx adapter don't have
internal port capability.
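
A minimal sketch of what the service setup boils down to is shown below,
assuming Rx adapter id 0. The patch additionally picks the least loaded
service core and handles the Tx adapter and the eventdev scheduler service
the same way; this sketch simply uses the first service core.

#include <errno.h>

#include <rte_event_eth_rx_adapter.h>
#include <rte_lcore.h>
#include <rte_service.h>

/*
 * If the Rx adapter is backed by a service function (no internal port),
 * map that service to a service lcore and start it.
 */
static int
rx_adapter_service_enable_sketch(void)
{
	uint32_t service_id, lcores[RTE_MAX_LCORE];
	int32_t n;

	if (rte_event_eth_rx_adapter_service_id_get(0, &service_id))
		return 0; /* adapter uses an internal port, nothing to do */

	n = rte_service_lcore_list(lcores, RTE_MAX_LCORE);
	if (n <= 0)
		return -ENOENT; /* no service cores were given to EAL */

	rte_service_map_lcore_set(service_id, lcores[0], 1);
	rte_service_runstate_set(service_id, 1);

	return rte_service_lcore_start(lcores[0]);
}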

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd_event.c |   6 ++
 examples/l3fwd/main.c        | 188 +++++++++++++++++++++++++++--------
 2 files changed, 150 insertions(+), 44 deletions(-)

diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 0a75e39ee..1ed42c3ab 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -212,6 +212,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
 	uint32_t event_queue_cfg;
+	int ret;
 
 	if (!evt_rsrc->enabled)
 		return;
@@ -236,4 +237,9 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	/* Rx/Tx adapters configuration */
 	evt_rsrc->ops.adapter_setup();
+
+	/* Start event device */
+	ret = rte_event_dev_start(evt_rsrc->event_d_id);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 04526343f..4bb00a48b 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -899,49 +899,18 @@ prepare_ptype_parser(uint16_t portid, uint16_t queueid)
 	return 0;
 }
 
-int
-main(int argc, char **argv)
+static void
+l3fwd_poll_resource_setup(void)
 {
-	struct l3fwd_event_resources *evt_rsrc;
-	struct lcore_conf *qconf;
+	uint8_t nb_rx_queue, queue, socketid;
 	struct rte_eth_dev_info dev_info;
+	uint32_t n_tx_queue, nb_lcores;
 	struct rte_eth_txconf *txconf;
-	int ret;
-	unsigned nb_ports;
+	struct lcore_conf *qconf;
 	uint16_t queueid, portid;
-	unsigned lcore_id;
-	uint32_t n_tx_queue, nb_lcores;
-	uint8_t nb_rx_queue, queue, socketid;
-
-	/* init EAL */
-	ret = rte_eal_init(argc, argv);
-	if (ret < 0)
-		rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
-	argc -= ret;
-	argv += ret;
-
-	force_quit = false;
-	signal(SIGINT, signal_handler);
-	signal(SIGTERM, signal_handler);
-
-	/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */
-	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
-		dest_eth_addr[portid] =
-			RTE_ETHER_LOCAL_ADMIN_ADDR + ((uint64_t)portid << 40);
-		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
-	}
-
-	evt_rsrc = l3fwd_get_eventdev_rsrc();
-	/* parse application arguments (after the EAL ones) */
-	ret = parse_args(argc, argv);
-	if (ret < 0)
-		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
-
-	evt_rsrc->per_port_pool = per_port_pool;
-	evt_rsrc->pkt_pool = pktmbuf_pool;
-	evt_rsrc->port_mask = enabled_port_mask;
-	/* Configure eventdev parameters if user has requested */
-	l3fwd_event_resource_setup(&port_conf);
+	unsigned int nb_ports;
+	unsigned int lcore_id;
+	int ret;
 
 	if (check_lcore_params() < 0)
 		rte_exit(EXIT_FAILURE, "check_lcore_params failed\n");
@@ -957,9 +926,6 @@ main(int argc, char **argv)
 
 	nb_lcores = rte_lcore_count();
 
-	/* Setup function pointers for lookup method. */
-	setup_l3fwd_lookup_tables();
-
 	/* initialize all ports */
 	RTE_ETH_FOREACH_DEV(portid) {
 		struct rte_eth_conf local_port_conf = port_conf;
@@ -1127,7 +1093,142 @@ main(int argc, char **argv)
 		}
 	}
 
-	printf("\n");
+
+}
+
+static inline int
+l3fwd_service_enable(uint32_t service_id)
+{
+	uint8_t min_service_count = UINT8_MAX;
+	uint32_t slcore_array[RTE_MAX_LCORE];
+	unsigned int slcore = 0;
+	uint8_t service_count;
+	int32_t slcore_count;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	slcore_count = rte_service_lcore_list(slcore_array, RTE_MAX_LCORE);
+	if (slcore_count < 0)
+		return -ENOENT;
+	/* Get the core which has least number of services running. */
+	while (slcore_count--) {
+		/* Reset default mapping */
+		rte_service_map_lcore_set(service_id,
+				slcore_array[slcore_count], 0);
+		service_count = rte_service_lcore_count_services(
+				slcore_array[slcore_count]);
+		if (service_count < min_service_count) {
+			slcore = slcore_array[slcore_count];
+			min_service_count = service_count;
+		}
+	}
+	if (rte_service_map_lcore_set(service_id, slcore, 1))
+		return -ENOENT;
+	rte_service_lcore_start(slcore);
+
+	return 0;
+}
+
+static void
+l3fwd_event_service_setup(void)
+{
+	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	struct rte_event_dev_info evdev_info;
+	uint32_t service_id, caps;
+	int ret, i;
+
+	rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
+	if (!(evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
+		ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting eventdev service\n");
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
+		ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->rx_adptr.rx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Rx adapter[%d] caps\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		ret = rte_event_eth_rx_adapter_service_id_get(
+				evt_rsrc->rx_adptr.rx_adptr[i],
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Rx adapter[%d] service\n",
+				 evt_rsrc->rx_adptr.rx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+
+	for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
+		ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
+				evt_rsrc->tx_adptr.tx_adptr[i], &caps);
+		if (ret < 0)
+			rte_exit(EXIT_FAILURE,
+				 "Failed to get Tx adapter[%d] caps\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		ret = rte_event_eth_tx_adapter_service_id_get(
+				evt_rsrc->tx_adptr.tx_adptr[i],
+				&service_id);
+		if (ret != -ESRCH && ret != 0)
+			rte_exit(EXIT_FAILURE,
+				 "Error in starting Tx adapter[%d] service\n",
+				 evt_rsrc->tx_adptr.tx_adptr[i]);
+		l3fwd_service_enable(service_id);
+	}
+}
+
+int
+main(int argc, char **argv)
+{
+	struct l3fwd_event_resources *evt_rsrc;
+	struct lcore_conf *qconf;
+	uint16_t queueid, portid;
+	unsigned int lcore_id;
+	uint8_t queue;
+	int ret;
+
+	/* init EAL */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid EAL parameters\n");
+	argc -= ret;
+	argv += ret;
+
+	force_quit = false;
+	signal(SIGINT, signal_handler);
+	signal(SIGTERM, signal_handler);
+
+	/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */
+	for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
+		dest_eth_addr[portid] =
+			RTE_ETHER_LOCAL_ADMIN_ADDR + ((uint64_t)portid << 40);
+		*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];
+	}
+
+	evt_rsrc = l3fwd_get_eventdev_rsrc();
+	/* parse application arguments (after the EAL ones) */
+	ret = parse_args(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Invalid L3FWD parameters\n");
+
+	/* Setup function pointers for lookup method. */
+	setup_l3fwd_lookup_tables();
+
+	evt_rsrc->per_port_pool = per_port_pool;
+	evt_rsrc->pkt_pool = pktmbuf_pool;
+	evt_rsrc->port_mask = enabled_port_mask;
+	/* Configure eventdev parameters if user has requested */
+	if (evt_rsrc->enabled) {
+		l3fwd_event_resource_setup(&port_conf);
+		l3fwd_event_service_setup();
+	} else
+		l3fwd_poll_resource_setup();
 
 	/* start ports */
 	RTE_ETH_FOREACH_DEV(portid) {
@@ -1170,7 +1271,6 @@ main(int argc, char **argv)
 		}
 	}
 
-
 	check_all_ports_link_status(enabled_port_mask);
 
 	ret = 0;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v5 08/11] examples/l3fwd: add event lpm main loop
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (6 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 09/11] examples/l3fwd: add event em " pbhagavatula
                         ` (3 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add lpm main loop for handling events based on capabilities of the
event device.
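
A minimal sketch of the single-event worker added here is shown below,
assuming a Tx adapter with internal port capability (direct enqueue).
The device, port and queue ids are placeholders, and the LPM lookup is
only indicated by a comment.

#include <stdbool.h>

#include <rte_event_eth_tx_adapter.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

static void
lpm_event_worker_sketch(uint8_t dev_id, uint8_t ev_port, volatile bool *quit)
{
	struct rte_event ev;

	while (!*quit) {
		/* non-blocking dequeue of a single event */
		if (!rte_event_dequeue_burst(dev_id, ev_port, &ev, 1, 0))
			continue;

		/* the real loop sets ev.mbuf->port from an LPM lookup here */

		/* hand the packet to the Tx adapter, Tx queue 0 */
		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
		while (!rte_event_eth_tx_adapter_enqueue(dev_id, ev_port,
							 &ev, 1, 0) && !*quit)
			;
	}
}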

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h       |   9 ++
 examples/l3fwd/l3fwd_event.c |   9 ++
 examples/l3fwd/l3fwd_event.h |   5 +
 examples/l3fwd/l3fwd_lpm.c   | 203 +++++++++++++++++++++++++++++++++++
 examples/l3fwd/main.c        |   1 +
 5 files changed, 227 insertions(+)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index ed5ba5f3b..b3cdcd496 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -207,6 +207,15 @@ em_main_loop(__attribute__((unused)) void *dummy);
 int
 lpm_main_loop(__attribute__((unused)) void *dummy);
 
+int
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 1ed42c3ab..ebddd8f97 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -211,6 +211,12 @@ void
 l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 {
 	struct l3fwd_event_resources *evt_rsrc = l3fwd_get_eventdev_rsrc();
+	const event_loop_cb lpm_event_loop[2][2] = {
+		[0][0] = lpm_event_main_loop_tx_d,
+		[0][1] = lpm_event_main_loop_tx_d_burst,
+		[1][0] = lpm_event_main_loop_tx_q,
+		[1][1] = lpm_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -242,4 +248,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 	ret = rte_event_dev_start(evt_rsrc->event_d_id);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Error in starting eventdev");
+
+	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/l3fwd_event.h b/examples/l3fwd/l3fwd_event.h
index b1ae970d4..0e4616417 100644
--- a/examples/l3fwd/l3fwd_event.h
+++ b/examples/l3fwd/l3fwd_event.h
@@ -14,6 +14,11 @@
 
 #include "l3fwd.h"
 
+#define L3FWD_EVENT_SINGLE     0x1
+#define L3FWD_EVENT_BURST      0x2
+#define L3FWD_EVENT_TX_DIRECT  0x4
+#define L3FWD_EVENT_TX_ENQ     0x8
+
 typedef uint32_t (*event_device_setup_cb)(void);
 typedef void (*event_queue_setup_cb)(uint32_t event_queue_cfg);
 typedef void (*event_port_setup_cb)(void);
diff --git a/examples/l3fwd/l3fwd_lpm.c b/examples/l3fwd/l3fwd_lpm.c
index 349de2703..058a60b1d 100644
--- a/examples/l3fwd/l3fwd_lpm.c
+++ b/examples/l3fwd/l3fwd_lpm.c
@@ -28,6 +28,7 @@
 #include <rte_lpm6.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 struct ipv4_l3fwd_lpm_route {
 	uint32_t ip;
@@ -254,6 +255,208 @@ lpm_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline uint16_t
+lpm_process_event_pkt(const struct lcore_conf *lconf, struct rte_mbuf *mbuf)
+{
+	mbuf->port = lpm_get_dst_port(lconf, mbuf, mbuf->port);
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON \
+	|| defined RTE_ARCH_PPC_64
+	process_packet(mbuf, &mbuf->port);
+#else
+
+	struct rte_ether_hdr *eth_hdr = rte_pktmbuf_mtod(mbuf,
+			struct rte_ether_hdr *);
+#ifdef DO_RFC_1812_CHECKS
+	struct rte_ipv4_hdr *ipv4_hdr;
+	if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type)) {
+		/* Handle IPv4 headers.*/
+		ipv4_hdr = rte_pktmbuf_mtod_offset(mbuf,
+				struct rte_ipv4_hdr *,
+				sizeof(struct rte_ether_hdr));
+
+		if (is_valid_ipv4_pkt(ipv4_hdr, mbuf->pkt_len)
+				< 0) {
+			mbuf->port = BAD_PORT;
+			return BAD_PORT;
+		}
+		/* Update time to live and header checksum */
+		--(ipv4_hdr->time_to_live);
+		++(ipv4_hdr->hdr_checksum);
+	}
+#endif
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[mbuf->port];
+
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[mbuf->port],
+			&eth_hdr->s_addr);
+#endif
+	return mbuf->port;
+}
+
+static __rte_always_inline void
+lpm_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		if (lpm_process_event_pkt(lconf, ev.mbuf) == BAD_PORT) {
+			rte_pktmbuf_free(ev.mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_deq; i++) {
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf,
+								 0);
+
+			lpm_process_event_pkt(lconf, events[i].mbuf);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+lpm_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		lpm_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		lpm_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	lpm_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 void
 setup_lpm(const int socketid)
 {
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 4bb00a48b..b8bd19b41 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1226,6 +1226,7 @@ main(int argc, char **argv)
 	/* Configure eventdev parameters if user has requested */
 	if (evt_rsrc->enabled) {
 		l3fwd_event_resource_setup(&port_conf);
+		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		l3fwd_event_service_setup();
 	} else
 		l3fwd_poll_resource_setup();
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v5 09/11] examples/l3fwd: add event em main loop
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (7 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
                         ` (2 subsequent siblings)
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add em main loop for handling events based on capabilities of the
event device.
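
As with the LPM loops, the variant that runs is picked from a small
callback matrix in l3fwd_event_resource_setup(). A simplified sketch of
that selection is shown below; the loop names are stand-ins, not the
functions added by this patch.

#include <stdint.h>

typedef int (*event_loop_cb)(void *args);

/* illustrative stand-ins for the four loop flavours */
static int loop_tx_direct(void *args)       { (void)args; return 0; }
static int loop_tx_direct_burst(void *args) { (void)args; return 0; }
static int loop_tx_enq(void *args)          { (void)args; return 0; }
static int loop_tx_enq_burst(void *args)    { (void)args; return 0; }

/*
 * Row selects direct Tx adapter enqueue vs. forwarding to a Tx event
 * queue, column selects single-event vs. burst dequeue, mirroring the
 * [tx_mode_q][has_burst] lookup in the patch.
 */
static event_loop_cb
pick_event_loop_sketch(uint8_t tx_mode_q, uint8_t has_burst)
{
	static const event_loop_cb loops[2][2] = {
		[0][0] = loop_tx_direct,
		[0][1] = loop_tx_direct_burst,
		[1][0] = loop_tx_enq,
		[1][1] = loop_tx_enq_burst,
	};

	return loops[!!tx_mode_q][!!has_burst];
}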

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/l3fwd.h               |  10 ++
 examples/l3fwd/l3fwd_em.c            | 177 +++++++++++++++++++++++++++
 examples/l3fwd/l3fwd_em.h            | 159 +++++++++++++++++-------
 examples/l3fwd/l3fwd_em_hlm.h        | 131 ++++++++++++++++++++
 examples/l3fwd/l3fwd_em_sequential.h |  26 ++++
 examples/l3fwd/l3fwd_event.c         |   9 ++
 examples/l3fwd/main.c                |   5 +-
 7 files changed, 470 insertions(+), 47 deletions(-)

diff --git a/examples/l3fwd/l3fwd.h b/examples/l3fwd/l3fwd.h
index b3cdcd496..c786f9062 100644
--- a/examples/l3fwd/l3fwd.h
+++ b/examples/l3fwd/l3fwd.h
@@ -216,6 +216,16 @@ lpm_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
 int
 lpm_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
 
+int
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy);
+int
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy);
+
+
 /* Return ipv4/ipv6 fwd lookup struct for LPM or EM. */
 void *
 em_get_ipv4_l3fwd_lookup_struct(const int socketid);
diff --git a/examples/l3fwd/l3fwd_em.c b/examples/l3fwd/l3fwd_em.c
index 74a7c8fa4..1a8bc9168 100644
--- a/examples/l3fwd/l3fwd_em.c
+++ b/examples/l3fwd/l3fwd_em.c
@@ -26,6 +26,7 @@
 #include <rte_hash.h>
 
 #include "l3fwd.h"
+#include "l3fwd_event.h"
 
 #if defined(RTE_ARCH_X86) || defined(RTE_MACHINE_CPUFLAG_CRC32)
 #define EM_HASH_CRC 1
@@ -699,6 +700,182 @@ em_main_loop(__attribute__((unused)) void *dummy)
 	return 0;
 }
 
+static __rte_always_inline void
+em_event_loop_single(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	struct rte_event ev;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+	while (!force_quit) {
+		if (!rte_event_dequeue_burst(event_d_id, event_p_id, &ev, 1, 0))
+			continue;
+
+		struct rte_mbuf *mbuf = ev.mbuf;
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		mbuf->port = em_get_dst_port(lconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+#else
+		l3fwd_em_simple_process(mbuf, lconf);
+#endif
+		if (mbuf->port == BAD_PORT) {
+			rte_pktmbuf_free(mbuf);
+			continue;
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			ev.queue_id = tx_q_id;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			while (rte_event_enqueue_burst(event_d_id, event_p_id,
+						&ev, 1) && !force_quit)
+				;
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			rte_event_eth_tx_adapter_txq_set(mbuf, 0);
+			while (!rte_event_eth_tx_adapter_enqueue(event_d_id,
+						event_p_id, &ev, 1, 0) &&
+					!force_quit)
+				;
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop_burst(struct l3fwd_event_resources *evt_rsrc,
+		const uint8_t flags)
+{
+	const int event_p_id = l3fwd_get_free_event_port(evt_rsrc);
+	const uint8_t tx_q_id = evt_rsrc->evq.event_q_id[
+		evt_rsrc->evq.nb_queues - 1];
+	const uint8_t event_d_id = evt_rsrc->event_d_id;
+	const uint16_t deq_len = evt_rsrc->deq_depth;
+	struct rte_event events[MAX_PKT_BURST];
+	struct lcore_conf *lconf;
+	unsigned int lcore_id;
+	int i, nb_enq, nb_deq;
+
+	if (event_p_id < 0)
+		return;
+
+	lcore_id = rte_lcore_id();
+
+	lconf = &lcore_conf[lcore_id];
+
+	RTE_LOG(INFO, L3FWD, "entering %s on lcore %u\n", __func__, lcore_id);
+
+	while (!force_quit) {
+		/* Read events from RX queues */
+		nb_deq = rte_event_dequeue_burst(event_d_id, event_p_id,
+				events, deq_len, 0);
+		if (nb_deq == 0) {
+			rte_pause();
+			continue;
+		}
+
+#if defined RTE_ARCH_X86 || defined RTE_MACHINE_CPUFLAG_NEON
+		l3fwd_em_process_events(nb_deq, (struct rte_event **)&events,
+					lconf);
+#else
+		l3fwd_em_no_opt_process_events(nb_deq,
+					       (struct rte_event **)&events,
+					       lconf);
+#endif
+		for (i = 0; i < nb_deq; i++) {
+			if (flags & L3FWD_EVENT_TX_ENQ) {
+				events[i].queue_id = tx_q_id;
+				events[i].op = RTE_EVENT_OP_FORWARD;
+			}
+
+			if (flags & L3FWD_EVENT_TX_DIRECT)
+				rte_event_eth_tx_adapter_txq_set(events[i].mbuf,
+								 0);
+		}
+
+		if (flags & L3FWD_EVENT_TX_ENQ) {
+			nb_enq = rte_event_enqueue_burst(event_d_id, event_p_id,
+					events, nb_deq);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_enqueue_burst(event_d_id,
+						event_p_id, events + nb_enq,
+						nb_deq - nb_enq);
+		}
+
+		if (flags & L3FWD_EVENT_TX_DIRECT) {
+			nb_enq = rte_event_eth_tx_adapter_enqueue(event_d_id,
+					event_p_id, events, nb_deq, 0);
+			while (nb_enq < nb_deq && !force_quit)
+				nb_enq += rte_event_eth_tx_adapter_enqueue(
+						event_d_id, event_p_id,
+						events + nb_enq,
+						nb_deq - nb_enq, 0);
+		}
+	}
+}
+
+static __rte_always_inline void
+em_event_loop(struct l3fwd_event_resources *evt_rsrc,
+		 const uint8_t flags)
+{
+	if (flags & L3FWD_EVENT_SINGLE)
+		em_event_loop_single(evt_rsrc, flags);
+	if (flags & L3FWD_EVENT_BURST)
+		em_event_loop_burst(evt_rsrc, flags);
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_d_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_DIRECT | L3FWD_EVENT_BURST);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_SINGLE);
+	return 0;
+}
+
+int __rte_noinline
+em_event_main_loop_tx_q_burst(__attribute__((unused)) void *dummy)
+{
+	struct l3fwd_event_resources *evt_rsrc =
+					l3fwd_get_eventdev_rsrc();
+
+	em_event_loop(evt_rsrc, L3FWD_EVENT_TX_ENQ | L3FWD_EVENT_BURST);
+	return 0;
+}
+
 /*
  * Initialize exact match (hash) parameters.
  */
diff --git a/examples/l3fwd/l3fwd_em.h b/examples/l3fwd/l3fwd_em.h
index 090c1b448..b992a21da 100644
--- a/examples/l3fwd/l3fwd_em.h
+++ b/examples/l3fwd/l3fwd_em.h
@@ -5,73 +5,92 @@
 #ifndef __L3FWD_EM_H__
 #define __L3FWD_EM_H__
 
-static __rte_always_inline void
-l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
-		struct lcore_conf *qconf)
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv4(struct rte_mbuf *m, uint16_t portid,
+		     struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
 {
-	struct rte_ether_hdr *eth_hdr;
 	struct rte_ipv4_hdr *ipv4_hdr;
 	uint16_t dst_port;
-	uint32_t tcp_or_udp;
-	uint32_t l3_ptypes;
-
-	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
-	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
-	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
 
-	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
-		/* Handle IPv4 headers.*/
-		ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
-						sizeof(struct rte_ether_hdr));
+	/* Handle IPv4 headers.*/
+	ipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,
+			sizeof(struct rte_ether_hdr));
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Check to make sure the packet is valid (RFC1812) */
-		if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
-			rte_pktmbuf_free(m);
-			return;
-		}
+	/* Check to make sure the packet is valid (RFC1812) */
+	if (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {
+		rte_pktmbuf_free(m);
+		return BAD_PORT;
+	}
 #endif
-		dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
-						qconf->ipv4_lookup_struct);
+	dst_port = em_get_ipv4_dst_port(ipv4_hdr, portid,
+			qconf->ipv4_lookup_struct);
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
 
 #ifdef DO_RFC_1812_CHECKS
-		/* Update time to live and header checksum */
-		--(ipv4_hdr->time_to_live);
-		++(ipv4_hdr->hdr_checksum);
+	/* Update time to live and header checksum */
+	--(ipv4_hdr->time_to_live);
+	++(ipv4_hdr->hdr_checksum);
 #endif
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		send_single_packet(qconf, m, dst_port);
-	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
-		/* Handle IPv6 headers.*/
-		struct rte_ipv6_hdr *ipv6_hdr;
+	return dst_port;
+}
 
-		ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
-						sizeof(struct rte_ether_hdr));
+static __rte_always_inline uint16_t
+l3fwd_em_handle_ipv6(struct rte_mbuf *m, uint16_t portid,
+		struct rte_ether_hdr *eth_hdr, struct lcore_conf *qconf)
+{
+	/* Handle IPv6 headers.*/
+	struct rte_ipv6_hdr *ipv6_hdr;
+	uint16_t dst_port;
 
-		dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
-					qconf->ipv6_lookup_struct);
+	ipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,
+			sizeof(struct rte_ether_hdr));
 
-		if (dst_port >= RTE_MAX_ETHPORTS ||
+	dst_port = em_get_ipv6_dst_port(ipv6_hdr, portid,
+			qconf->ipv6_lookup_struct);
+
+	if (dst_port >= RTE_MAX_ETHPORTS ||
 			(enabled_port_mask & 1 << dst_port) == 0)
-			dst_port = portid;
+		dst_port = portid;
+
+	/* dst addr */
+	*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
 
-		/* dst addr */
-		*(uint64_t *)&eth_hdr->d_addr = dest_eth_addr[dst_port];
+	/* src addr */
+	rte_ether_addr_copy(&ports_eth_addr[dst_port],
+			&eth_hdr->s_addr);
 
-		/* src addr */
-		rte_ether_addr_copy(&ports_eth_addr[dst_port],
-				&eth_hdr->s_addr);
+	return dst_port;
+}
 
+static __rte_always_inline void
+l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
+		struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint16_t dst_port;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4)) {
+		dst_port = l3fwd_em_handle_ipv4(m, portid, eth_hdr, qconf);
+		send_single_packet(qconf, m, dst_port);
+	} else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6)) {
+		dst_port = l3fwd_em_handle_ipv6(m, portid, eth_hdr, qconf);
 		send_single_packet(qconf, m, dst_port);
 	} else {
 		/* Free the mbuf that contains non-IPV4/IPV6 packet */
@@ -79,6 +98,25 @@ l3fwd_em_simple_forward(struct rte_mbuf *m, uint16_t portid,
 	}
 }
 
+static __rte_always_inline void
+l3fwd_em_simple_process(struct rte_mbuf *m, struct lcore_conf *qconf)
+{
+	struct rte_ether_hdr *eth_hdr;
+	uint32_t tcp_or_udp;
+	uint32_t l3_ptypes;
+
+	eth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+	tcp_or_udp = m->packet_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+	l3_ptypes = m->packet_type & RTE_PTYPE_L3_MASK;
+
+	if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV4))
+		m->port = l3fwd_em_handle_ipv4(m, m->port, eth_hdr, qconf);
+	else if (tcp_or_udp && (l3_ptypes == RTE_PTYPE_L3_IPV6))
+		m->port = l3fwd_em_handle_ipv6(m, m->port, eth_hdr, qconf);
+	else
+		m->port = BAD_PORT;
+}
+
 /*
  * Buffer non-optimized handling of packets, invoked
  * from main_loop.
@@ -108,4 +146,33 @@ l3fwd_em_no_opt_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 		l3fwd_em_simple_forward(pkts_burst[j], portid, qconf);
 }
 
+/*
+ * Buffer non-optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_no_opt_process_events(int nb_rx, struct rte_event **events,
+			       struct lcore_conf *qconf)
+{
+	int32_t j;
+
+	/* Prefetch first packets */
+	for (j = 0; j < PREFETCH_OFFSET && j < nb_rx; j++)
+		rte_prefetch0(rte_pktmbuf_mtod(events[j]->mbuf, void *));
+
+	/*
+	 * Prefetch and forward already prefetched
+	 * packets.
+	 */
+	for (j = 0; j < (nb_rx - PREFETCH_OFFSET); j++) {
+		rte_prefetch0(rte_pktmbuf_mtod(events[
+				j + PREFETCH_OFFSET]->mbuf, void *));
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+	}
+
+	/* Forward remaining prefetched packets */
+	for (; j < nb_rx; j++)
+		l3fwd_em_simple_process(events[j]->mbuf, qconf);
+}
+
 #endif /* __L3FWD_EM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index ad8b9ce87..79812716c 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -75,6 +75,60 @@ em_get_dst_port_ipv6xN(struct lcore_conf *qconf, struct rte_mbuf *m[],
 	}
 }
 
+static __rte_always_inline void
+em_get_dst_port_ipv4xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv4_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv4_5tuple(m[i], mask0.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv4_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv4_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
+static __rte_always_inline void
+em_get_dst_port_ipv6xN_events(struct lcore_conf *qconf, struct rte_mbuf *m[],
+			      uint16_t dst_port[])
+{
+	int i;
+	int32_t ret[EM_HASH_LOOKUP_COUNT];
+	union ipv6_5tuple_host key[EM_HASH_LOOKUP_COUNT];
+	const void *key_array[EM_HASH_LOOKUP_COUNT];
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		get_ipv6_5tuple(m[i], mask1.x, mask2.x, &key[i]);
+		key_array[i] = &key[i];
+	}
+
+	rte_hash_lookup_bulk(qconf->ipv6_lookup_struct, &key_array[0],
+			     EM_HASH_LOOKUP_COUNT, ret);
+
+	for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+		dst_port[i] = ((ret[i] < 0) ?
+				m[i]->port : ipv6_l3fwd_out_if[ret[i]]);
+
+		if (dst_port[i] >= RTE_MAX_ETHPORTS ||
+				(enabled_port_mask & 1 << dst_port[i]) == 0)
+			dst_port[i] = m[i]->port;
+	}
+}
+
 static __rte_always_inline uint16_t
 em_get_dst_port(const struct lcore_conf *qconf, struct rte_mbuf *pkt,
 		uint16_t portid)
@@ -187,4 +241,81 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **ev,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j, pos;
+	uint16_t dst_port[MAX_PKT_BURST];
+	struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+
+	/*
+	 * Send nb_rx - nb_rx % EM_HASH_LOOKUP_COUNT packets
+	 * in groups of EM_HASH_LOOKUP_COUNT.
+	 */
+	int32_t n = RTE_ALIGN_FLOOR(nb_rx, EM_HASH_LOOKUP_COUNT);
+
+	for (j = 0; j < EM_HASH_LOOKUP_COUNT && j < nb_rx; j++) {
+		pkts_burst[j] = ev[j]->mbuf;
+		rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j],
+					       struct rte_ether_hdr *) + 1);
+	}
+
+	for (j = 0; j < n; j += EM_HASH_LOOKUP_COUNT) {
+
+		uint32_t pkt_type = RTE_PTYPE_L3_MASK |
+				    RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP;
+		uint32_t l3_type, tcp_or_udp;
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkt_type &= pkts_burst[j + i]->packet_type;
+
+		l3_type = pkt_type & RTE_PTYPE_L3_MASK;
+		tcp_or_udp = pkt_type & (RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP);
+
+		for (i = 0, pos = j + EM_HASH_LOOKUP_COUNT;
+		     i < EM_HASH_LOOKUP_COUNT && pos < nb_rx; i++, pos++) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					pkts_burst[pos],
+					struct rte_ether_hdr *) + 1);
+		}
+
+		if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV4)) {
+
+			em_get_dst_port_ipv4xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else if (tcp_or_udp && (l3_type == RTE_PTYPE_L3_IPV6)) {
+
+			em_get_dst_port_ipv6xN_events(qconf, &pkts_burst[j],
+					       &dst_port[j]);
+
+		} else {
+			for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++) {
+				pkts_burst[j + i]->port = em_get_dst_port(qconf,
+						pkts_burst[j + i],
+						pkts_burst[j + i]->port);
+				process_packet(pkts_burst[j + i],
+						&pkts_burst[j + i]->port);
+			}
+			continue;
+		}
+		processx4_step3(&pkts_burst[j], &dst_port[j]);
+
+		for (i = 0; i < EM_HASH_LOOKUP_COUNT; i++)
+			pkts_burst[j + i]->port = dst_port[j + i];
+
+	}
+
+	for (; j < nb_rx; j++) {
+		pkts_burst[j]->port = em_get_dst_port(qconf, pkts_burst[j],
+						      pkts_burst[j]->port);
+		process_packet(pkts_burst[j], &pkts_burst[j]->port);
+	}
+}
 #endif /* __L3FWD_EM_HLM_H__ */
diff --git a/examples/l3fwd/l3fwd_em_sequential.h b/examples/l3fwd/l3fwd_em_sequential.h
index 23fe9dec8..b231b9994 100644
--- a/examples/l3fwd/l3fwd_em_sequential.h
+++ b/examples/l3fwd/l3fwd_em_sequential.h
@@ -95,4 +95,30 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
 
 	send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
 }
+
+/*
+ * Buffer optimized handling of events, invoked
+ * from main_loop.
+ */
+static inline void
+l3fwd_em_process_events(int nb_rx, struct rte_event **events,
+		     struct lcore_conf *qconf)
+{
+	int32_t i, j;
+
+	rte_prefetch0(rte_pktmbuf_mtod(events[0]->mbuf,
+		      struct rte_ether_hdr *) + 1);
+
+	for (i = 1, j = 0; j < nb_rx; i++, j++) {
+		struct rte_mbuf *mbuf = events[j]->mbuf;
+
+		if (i < nb_rx) {
+			rte_prefetch0(rte_pktmbuf_mtod(
+					events[i]->mbuf,
+					struct rte_ether_hdr *) + 1);
+		}
+		mbuf->port = em_get_dst_port(qconf, mbuf, mbuf->port);
+		process_packet(mbuf, &mbuf->port);
+	}
+}
 #endif /* __L3FWD_EM_SEQUENTIAL_H__ */
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index ebddd8f97..43c47eade 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -217,6 +217,12 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 		[1][0] = lpm_event_main_loop_tx_q,
 		[1][1] = lpm_event_main_loop_tx_q_burst,
 	};
+	const event_loop_cb em_event_loop[2][2] = {
+		[0][0] = em_event_main_loop_tx_d,
+		[0][1] = em_event_main_loop_tx_d_burst,
+		[1][0] = em_event_main_loop_tx_q,
+		[1][1] = em_event_main_loop_tx_q_burst,
+	};
 	uint32_t event_queue_cfg;
 	int ret;
 
@@ -251,4 +257,7 @@ l3fwd_event_resource_setup(struct rte_eth_conf *port_conf)
 
 	evt_rsrc->ops.lpm_event_loop = lpm_event_loop[evt_rsrc->tx_mode_q]
 						       [evt_rsrc->has_burst];
+
+	evt_rsrc->ops.em_event_loop = em_event_loop[evt_rsrc->tx_mode_q]
+						       [evt_rsrc->has_burst];
 }
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index b8bd19b41..c95b1655e 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1226,7 +1226,10 @@ main(int argc, char **argv)
 	/* Configure eventdev parameters if user has requested */
 	if (evt_rsrc->enabled) {
 		l3fwd_event_resource_setup(&port_conf);
-		l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
+		if (l3fwd_em_on)
+			l3fwd_lkp.main_loop = evt_rsrc->ops.em_event_loop;
+		else
+			l3fwd_lkp.main_loop = evt_rsrc->ops.lpm_event_loop;
 		l3fwd_event_service_setup();
 	} else
 		l3fwd_poll_resource_setup();
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v5 10/11] examples/l3fwd: add graceful teardown for eventdevice
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (8 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 09/11] examples/l3fwd: add event em " pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
  2020-01-27 16:17       ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support Jerin Jacob
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Marko Kovacevic, Ori Kam,
	Bruce Richardson, Radu Nicolau, Akhil Goyal, Tomasz Kantecki,
	Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Add graceful teardown that addresses both event mode and poll mode.
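
The teardown order for event mode is summarised by the sketch below,
assuming a single Rx/Tx adapter (id 0) and a single ethdev port (id 0);
the patch iterates over all adapters and enabled ports.

#include <rte_ethdev.h>
#include <rte_event_eth_rx_adapter.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_eventdev.h>
#include <rte_launch.h>

static void
event_mode_teardown_sketch(uint8_t event_dev_id)
{
	/* 1. stop injecting new events into the pipeline */
	rte_event_eth_rx_adapter_stop(0);
	rte_event_eth_tx_adapter_stop(0);

	/* 2. stop the ethdev so the worker loops stop seeing traffic */
	rte_eth_dev_stop(0);

	/* 3. wait for the worker lcores to notice force_quit and return */
	rte_eal_mp_wait_lcore();

	/* 4. release ethdev and event device resources */
	rte_eth_dev_close(0);
	rte_event_dev_stop(event_dev_id);
	rte_event_dev_close(event_dev_id);
}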

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 examples/l3fwd/main.c | 50 +++++++++++++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 16 deletions(-)

diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index c95b1655e..dda430d68 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -1092,8 +1092,6 @@ l3fwd_poll_resource_setup(void)
 				ret, portid);
 		}
 	}
-
-
 }
 
 static inline int
@@ -1191,7 +1189,7 @@ main(int argc, char **argv)
 	uint16_t queueid, portid;
 	unsigned int lcore_id;
 	uint8_t queue;
-	int ret;
+	int i, ret;
 
 	/* init EAL */
 	ret = rte_eal_init(argc, argv);
@@ -1280,21 +1278,41 @@ main(int argc, char **argv)
 	ret = 0;
 	/* launch per-lcore init on every lcore */
 	rte_eal_mp_remote_launch(l3fwd_lkp.main_loop, NULL, CALL_MASTER);
-	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
-		if (rte_eal_wait_lcore(lcore_id) < 0) {
-			ret = -1;
-			break;
+	if (evt_rsrc->enabled) {
+		for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++)
+			rte_event_eth_rx_adapter_stop(
+					evt_rsrc->rx_adptr.rx_adptr[i]);
+		for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++)
+			rte_event_eth_tx_adapter_stop(
+					evt_rsrc->tx_adptr.tx_adptr[i]);
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_stop(portid);
 		}
-	}
 
-	/* stop ports */
-	RTE_ETH_FOREACH_DEV(portid) {
-		if ((enabled_port_mask & (1 << portid)) == 0)
-			continue;
-		printf("Closing port %d...", portid);
-		rte_eth_dev_stop(portid);
-		rte_eth_dev_close(portid);
-		printf(" Done\n");
+		rte_eal_mp_wait_lcore();
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			rte_eth_dev_close(portid);
+		}
+
+		rte_event_dev_stop(evt_rsrc->event_d_id);
+		rte_event_dev_close(evt_rsrc->event_d_id);
+
+	} else {
+		rte_eal_mp_wait_lcore();
+
+		RTE_ETH_FOREACH_DEV(portid) {
+			if ((enabled_port_mask & (1 << portid)) == 0)
+				continue;
+			printf("Closing port %d...", portid);
+			rte_eth_dev_stop(portid);
+			rte_eth_dev_close(portid);
+			printf(" Done\n");
+		}
 	}
 	printf("Bye...\n");
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* [dpdk-dev] [PATCH v5 11/11] doc: update l3fwd user guide to support eventdev
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (9 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
@ 2020-01-24  4:05       ` pbhagavatula
  2020-01-27 16:17       ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support Jerin Jacob
  11 siblings, 0 replies; 78+ messages in thread
From: pbhagavatula @ 2020-01-24  4:05 UTC (permalink / raw)
  To: jerinj, konstantin.ananyev, Thomas Monjalon, John McNamara,
	Marko Kovacevic, Ori Kam, Bruce Richardson, Radu Nicolau,
	Akhil Goyal, Tomasz Kantecki, Sunil Kumar Kori, Pavan Nikhilesh
  Cc: dev

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Update l3fwd user guide to include event device related information.
Update release notes and MAINTAINERS file.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 MAINTAINERS                             |  1 +
 doc/guides/rel_notes/release_20_02.rst  |  5 ++
 doc/guides/sample_app_ug/l3_forward.rst | 79 +++++++++++++++++++++++--
 3 files changed, 79 insertions(+), 6 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index b7c4572d4..14d93d87d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,7 @@ T: git://dpdk.org/next/dpdk-next-eventdev
 F: lib/librte_eventdev/
 F: drivers/event/skeleton/
 F: app/test/test_eventdev.c
+F: examples/l3fwd/l3fwd_event*

 Eventdev Ethdev Rx Adapter API
 M: Nikhil Rao <nikhil.rao@intel.com>
diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst
index 0eaa45a76..abd98dbb1 100644
--- a/doc/guides/rel_notes/release_20_02.rst
+++ b/doc/guides/rel_notes/release_20_02.rst
@@ -56,6 +56,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================

+* **Add event device support for l3fwd sample application.**
+
+  Add event device support for ``l3fwd`` sample application. It demonstrates
+  the use of poll and event mode I/O mechanisms in a single application.
+

 Removed Items
 -------------
diff --git a/doc/guides/sample_app_ug/l3_forward.rst b/doc/guides/sample_app_ug/l3_forward.rst
index 4cb4b18da..5cb0abc32 100644
--- a/doc/guides/sample_app_ug/l3_forward.rst
+++ b/doc/guides/sample_app_ug/l3_forward.rst
@@ -4,16 +4,23 @@
 L3 Forwarding Sample Application
 ================================

-The L3 Forwarding application is a simple example of packet processing using the DPDK.
+The L3 Forwarding application is a simple example of packet processing using
+DPDK that demonstrates the use of poll and event mode packet I/O mechanisms.
 The application performs L3 forwarding.

 Overview
 --------

-The application demonstrates the use of the hash and LPM libraries in the DPDK to implement packet forwarding.
-The initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
-The main difference from the L2 Forwarding sample application is that the forwarding decision
-is made based on information read from the input packet.
+The application demonstrates the use of the hash and LPM libraries in the DPDK
+to implement packet forwarding using poll or event mode PMDs for packet I/O.
+The initialization and run-time paths are very similar to those of the
+:doc:`l2_forward_real_virtual` and :doc:`l2_forward_event`.
+The main difference from the L2 Forwarding sample application is that packets
+can optionally be received from and sent to an eventdev instead of a port
+directly, and the forwarding decision is based on information in the packet.
+
+The eventdev can use either a S/W or a H/W (if supported by the platform)
+scheduler implementation for packet I/O, selected via run time parameters.

 The lookup method is either hash-based or LPM-based and is selected at run time. When the selected lookup method is hash-based,
 a hash object is used to emulate the flow classification stage.
@@ -56,6 +63,9 @@ The application has a number of command line options::
                              [--ipv6]
                              [--parse-ptype]
                              [--per-port-pool]
+                             [--mode]
+                             [--eventq-sched]
+                             [--event-eth-rxqs]

 Where,

@@ -86,6 +96,13 @@ Where,

 * ``--per-port-pool:`` Optional, set to use independent buffer pools per port. Without this option, single buffer pool is used for all ports.

+* ``--mode:`` Optional, Packet transfer mode for I/O, poll or eventdev.
+
+* ``--eventq-sched:`` Optional, Event queue synchronization method, Ordered, Atomic or Parallel. Only valid if --mode=eventdev.
+
+* ``--event-eth-rxqs:`` Optional, Number of ethernet RX queues per device. Only valid if --mode=eventdev.
+
+
 For example, consider a dual processor socket platform with 8 physical cores, where cores 0-7 and 16-23 appear on socket 0,
 while cores 8-15 and 24-31 appear on socket 1.

@@ -116,6 +133,51 @@ In this command:
 |          |           |           |                                     |
 +----------+-----------+-----------+-------------------------------------+

+To use eventdev mode with the **ordered** sync method on the above mentioned
+environment, the following is a sample command:
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x3 --eventq-sched=ordered
+
+or
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-3 -n 4 -w <event device> -- -p 0x03 --mode=eventdev --eventq-sched=ordered
+
+In this command:
+
+*   The -w option whitelists the event device supported by the platform. The way to pass this device may vary based on the platform.
+
+*   The --mode option defines the PMD to be used for packet I/O.
+
+*   The --eventq-sched option selects the synchronization method of the event queue so that packets will be scheduled accordingly (see the sketch below).
+
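+As an illustration (``event_dev_id`` and ``queue_id`` below are placeholders,
+not the exact application code), the chosen method typically maps to the event
+queue configuration as follows:
+
+.. code-block:: c
+
+    /* Illustrative sketch: apply --eventq-sched to an event queue. */
+    struct rte_event_queue_conf queue_conf;
+
+    rte_event_queue_default_conf_get(event_dev_id, queue_id, &queue_conf);
+    /* use RTE_SCHED_TYPE_ATOMIC or RTE_SCHED_TYPE_PARALLEL for the other modes */
+    queue_conf.schedule_type = RTE_SCHED_TYPE_ORDERED;
+    rte_event_queue_setup(event_dev_id, queue_id, &queue_conf);
+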
+If the application uses the S/W scheduler, it uses the following DPDK services:
+
+*   Software scheduler
+*   Rx adapter service function
+*   Tx adapter service function
+
+The application needs service cores to run the above mentioned services. Service
+cores must be provided as EAL parameters along with --vdev=event_sw0 to enable
+the S/W scheduler. The following is a sample command (a sketch of the service
+core mapping is shown after it):
+
+.. code-block:: console
+
+    ./build/l3fwd -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -p 0x3 --mode=eventdev --eventq-sched=ordered
+
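+When the S/W scheduler is used, these services can be mapped to a service core
+roughly as sketched below (``event_dev_id``, the adapter ids and
+``service_lcore`` are placeholders; the actual application code may differ):
+
+.. code-block:: c
+
+    uint32_t service_id;
+
+    /* event_sw0 scheduling runs as a service */
+    if (rte_event_dev_service_id_get(event_dev_id, &service_id) == 0) {
+        rte_service_map_lcore_set(service_id, service_lcore, 1);
+        rte_service_runstate_set(service_id, 1);
+    }
+
+    /* Rx adapter service polls the NIC and enqueues packets as events */
+    if (rte_event_eth_rx_adapter_service_id_get(rx_adapter_id, &service_id) == 0) {
+        rte_service_map_lcore_set(service_id, service_lcore, 1);
+        rte_service_runstate_set(service_id, 1);
+    }
+
+    /* Tx adapter service dequeues events and transmits the packets */
+    if (rte_event_eth_tx_adapter_service_id_get(tx_adapter_id, &service_id) == 0) {
+        rte_service_map_lcore_set(service_id, service_lcore, 1);
+        rte_service_runstate_set(service_id, 1);
+    }
+
+    rte_service_lcore_start(service_lcore);
+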
+In case of eventdev mode, the *--config* option is not used for ethernet port
+configuration. Instead, each ethernet port will be configured with the following
+setup (a sketch of the adapter wiring follows the list):
+
+*   A single Rx/Tx queue per port
+
+*   Each Rx queue will be connected to an event queue via the Rx adapter.
+
+*   Each Tx queue will be connected to the eventdev via the Tx adapter.
+
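+A rough sketch of this wiring is shown below (identifiers and ``port_conf`` are
+placeholders and ``-1`` selects all Rx/Tx queues of the port; the actual
+application code may differ):
+
+.. code-block:: c
+
+    struct rte_event_eth_rx_adapter_queue_conf queue_conf = {
+        .ev.queue_id = ev_queue_id,              /* event queue fed by this port */
+        .ev.sched_type = RTE_SCHED_TYPE_ORDERED, /* from --eventq-sched */
+        .ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+    };
+
+    rte_event_eth_rx_adapter_create(rx_adapter_id, event_dev_id, &port_conf);
+    rte_event_eth_rx_adapter_queue_add(rx_adapter_id, eth_port_id, -1, &queue_conf);
+    rte_event_eth_rx_adapter_start(rx_adapter_id);
+
+    rte_event_eth_tx_adapter_create(tx_adapter_id, event_dev_id, &port_conf);
+    rte_event_eth_tx_adapter_queue_add(tx_adapter_id, eth_port_id, -1);
+    rte_event_eth_tx_adapter_start(tx_adapter_id);
+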
 Refer to the *DPDK Getting Started Guide* for general information on running applications and
 the Environment Abstraction Layer (EAL) options.

@@ -125,7 +187,7 @@ Explanation
 -----------

 The following sections provide some explanation of the sample application code. As mentioned in the overview section,
-the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
+the initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual` and :doc:`l2_forward_event`.
 The following sections describe aspects that are specific to the L3 Forwarding sample application.

 Hash Initialization
@@ -315,3 +377,8 @@ for LPM-based lookups is done by the get_ipv4_dst_port() function below:

         return ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct, rte_be_to_cpu_32(ipv4_hdr->dst_addr), &next_hop) == 0)? next_hop : portid);
     }
+
+Eventdev Driver Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Eventdev driver initialization is the same as in the L2 Forwarding eventdev
+application. Refer to :doc:`l2_forward_event` for more details.
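+
+A minimal configuration sketch, assuming one event queue per ethernet port and
+one event port per worker core (``nb_eth_ports``, ``nb_worker_lcores`` and
+``event_dev_id`` are placeholders):
+
+.. code-block:: c
+
+    struct rte_event_dev_info dev_info;
+    struct rte_event_dev_config dev_conf = {0};
+
+    if (rte_event_dev_count() == 0)
+        rte_panic("No event device found\n");
+
+    rte_event_dev_info_get(event_dev_id, &dev_info);
+
+    dev_conf.nb_event_queues = nb_eth_ports;    /* one queue per ethdev */
+    dev_conf.nb_event_ports = nb_worker_lcores; /* one port per worker  */
+    dev_conf.nb_events_limit = dev_info.max_num_events;
+    dev_conf.nb_event_queue_flows = dev_info.max_event_queue_flows;
+    dev_conf.nb_event_port_dequeue_depth = dev_info.max_event_port_dequeue_depth;
+    dev_conf.nb_event_port_enqueue_depth = dev_info.max_event_port_enqueue_depth;
+    dev_conf.dequeue_timeout_ns = dev_info.min_dequeue_timeout_ns;
+
+    if (rte_event_dev_configure(event_dev_id, &dev_conf) < 0)
+        rte_panic("Failed to configure event device\n");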
--
2.17.1


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support
  2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
                         ` (10 preceding siblings ...)
  2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
@ 2020-01-27 16:17       ` Jerin Jacob
  11 siblings, 0 replies; 78+ messages in thread
From: Jerin Jacob @ 2020-01-27 16:17 UTC (permalink / raw)
  To: Pavan Nikhilesh; +Cc: Jerin Jacob, Ananyev, Konstantin, dpdk-dev

On Fri, Jan 24, 2020 at 9:35 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> This patchset adds support for event mode in l3fwd.
> The poll mode is still the default mode of operation.
>
> The following new command line parameters are added:
>  --mode: Dictates the mode of operation either poll or event. If unset then
>          by default poll mode is used.
>  --eventq_sched: Dictates event synchronization mode i.e. either ordered,
>                  atomic or parallel.
>  --event-eth-rxqs: Number of ethernet RX queues per device.
>
> Based on event device capability the configuration is done as follows:
>     - A single event device is enabled.
>     - The number of event ports is equal to the number of worker
>       cores enabled in the core mask. Additional event ports might
>       be configured based on Rx/Tx adapter capability.
>     - The number of event queues is equal to the number of ethernet
>       ports. If Tx adapter doesn't have internal port capability then
>       an additional single link event queue is used to enqueue events
>       to Tx adapter.
>     - Each event port is linked to all existing event queues.
>     - Dedicated Rx/Tx adapters for each Ethernet port.

This series looks good to me.
I have verified that there is no poll mode per core performance impact
with this change.

I will merge the next version on Tuesday for rc2 if there are no more
comments from the mailing list.

With below changes, Series-Acked-by: Jerin Jacob <jerinj@marvell.com>

# Please rebase the release note to next-eventdev
# Fix the following typos

### examples/l3fwd: add event device configuration

WARNING:TYPO_SPELLING: 'configurtion' may be misspelled - perhaps
'configuration'?
#119: FILE: examples/l3fwd/l3fwd_event_generic.c:34:
+       /* Event device configurtion */

WARNING:TYPO_SPELLING: 'configurtion' may be misspelled - perhaps
'configuration'?
#207: FILE: examples/l3fwd/l3fwd_event_internal_port.c:34:
+       /* Event device configurtion */

total: 0 errors, 2 warnings, 230 lines checked

### examples/l3fwd: add event eth Rx/Tx adapter setup

WARNING:TYPO_SPELLING: 'memery' may be misspelled - perhaps 'memory'?
#67: FILE: examples/l3fwd/l3fwd_event_generic.c:208:
+               rte_panic("Failed to allocate memery for Rx adapter\n");

WARNING:TYPO_SPELLING: 'memery' may be misspelled - perhaps 'memory'?
#111: FILE: examples/l3fwd/l3fwd_event_generic.c:252:
+               rte_panic("Failed to allocate memery for Rx adapter\n");

WARNING:TYPO_SPELLING: 'memery' may be misspelled - perhaps 'memory'?
#198: FILE: examples/l3fwd/l3fwd_event_internal_port.c:207:
+               rte_panic("Failed to allocate memery for Rx adapter\n");

WARNING:TYPO_SPELLING: 'memery' may be misspelled - perhaps 'memory'?
#236: FILE: examples/l3fwd/l3fwd_event_internal_port.c:245:
+               rte_panic("Failed to allocate memery for Rx adapter\n");

total: 0 errors, 4 warnings, 239 lines checked

>
> v5 Changes:
> ----------
> - Update release notes and MAINTAINERS file.
> - Fix typo in doc.
>
> v4 Changes:
> ----------
> - Update documentation about --event-eth-rxqs. (Jerin)
> - Add validation for command line options that are specific to event/poll mode.
> - Fix event device service initialization.
>
> v3 Changes:
> ----------
> - Unify argument parsing.
> - Segregate poll mode and event mode initialization.
> - Simplify event resource creation and accesses.
> - Integrate http://patches.dpdk.org/project/dpdk/list/?series=8002 series.
> - Reduce code duplication in lpm.
>
> v2 Changes:
> ----------
> - Shorten the structure name `s/event_rsrc/evt_rsrc` `s/l2fwd_rsrc/rsrc`.
> - Use rte_panic instead of rte_exit.
> - Rebase on top of Tx adapter change http://patches.dpdk.org/patch/60971.
> - Add parallel mode support.
> - Fix missing eventdev args parsing.
>
> Pavan Nikhilesh (7):
>   examples/l3fwd: add event device configuration
>   examples/l3fwd: add event eth Rx/Tx adapter setup
>   examples/l3fwd: add service core setup based on caps
>   examples/l3fwd: add event lpm main loop
>   examples/l3fwd: add event em main loop
>   examples/l3fwd: add graceful teardown for eventdevice
>   doc: update l3fwd user guide to support eventdev
>
> Sunil Kumar Kori (4):
>   examples/l3fwd: add framework for event device
>   examples/l3fwd: split pipelines based on capability
>   examples/l3fwd: add ethdev setup based on eventdev
>   examples/l3fwd: add event port and queue setup
>
>  MAINTAINERS                                |   1 +
>  doc/guides/rel_notes/release_20_02.rst     |   5 +
>  doc/guides/sample_app_ug/l3_forward.rst    |  79 ++++-
>  examples/l3fwd/Makefile                    |   3 +-
>  examples/l3fwd/l3fwd.h                     |  30 ++
>  examples/l3fwd/l3fwd_em.c                  | 177 +++++++++++
>  examples/l3fwd/l3fwd_em.h                  | 159 +++++++---
>  examples/l3fwd/l3fwd_em_hlm.h              | 131 ++++++++
>  examples/l3fwd/l3fwd_em_sequential.h       |  26 ++
>  examples/l3fwd/l3fwd_event.c               | 263 ++++++++++++++++
>  examples/l3fwd/l3fwd_event.h               |  86 ++++++
>  examples/l3fwd/l3fwd_event_generic.c       | 303 ++++++++++++++++++
>  examples/l3fwd/l3fwd_event_internal_port.c | 279 +++++++++++++++++
>  examples/l3fwd/l3fwd_lpm.c                 | 203 ++++++++++++
>  examples/l3fwd/main.c                      | 341 +++++++++++++++++----
>  examples/l3fwd/meson.build                 |   5 +-
>  16 files changed, 1976 insertions(+), 115 deletions(-)
>  create mode 100644 examples/l3fwd/l3fwd_event.c
>  create mode 100644 examples/l3fwd/l3fwd_event.h
>  create mode 100644 examples/l3fwd/l3fwd_event_generic.c
>  create mode 100644 examples/l3fwd/l3fwd_event_internal_port.c
>
> --
> 2.17.1
>

^ permalink raw reply	[flat|nested] 78+ messages in thread

end of thread, other threads:[~2020-01-27 16:17 UTC | newest]

Thread overview: 78+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-04 14:43 [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support pbhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 01/11] examples/l3fwd: add framework for event device pbhagavatula
2020-01-03 12:49   ` Ananyev, Konstantin
2020-01-06  4:27     ` Pavan Nikhilesh Bhagavatula
2020-01-06 12:32       ` Ananyev, Konstantin
2020-01-07 16:34         ` Pavan Nikhilesh Bhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 03/11] examples/l3fwd: add event device configuration pbhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
2019-12-27 13:33   ` Nipun Gupta
2019-12-29 15:42     ` Pavan Nikhilesh Bhagavatula
2019-12-30  7:40       ` Nipun Gupta
2020-01-02  6:21         ` Pavan Nikhilesh Bhagavatula
2020-01-02  8:49           ` Nipun Gupta
2020-01-02  9:33             ` Jerin Jacob
2020-01-03  9:06               ` Nipun Gupta
2020-01-03  9:09                 ` Jerin Jacob
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
2020-01-03 13:07   ` Ananyev, Konstantin
2020-01-06  4:29     ` Pavan Nikhilesh Bhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
2020-01-03 13:16   ` Ananyev, Konstantin
2020-01-06  4:43     ` Pavan Nikhilesh Bhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 09/11] examples/l3fwd: add event em " pbhagavatula
2020-01-03 13:45   ` Ananyev, Konstantin
2020-01-06  4:44     ` Pavan Nikhilesh Bhagavatula
2020-01-06 12:27       ` Ananyev, Konstantin
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
2020-01-03 13:52   ` Ananyev, Konstantin
2020-01-06  4:50     ` Pavan Nikhilesh Bhagavatula
2020-01-06 12:12       ` Ananyev, Konstantin
2020-01-07 16:28         ` Pavan Nikhilesh Bhagavatula
2019-12-04 14:43 ` [dpdk-dev] [PATCH v2 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
2019-12-04 15:16 ` [dpdk-dev] [PATCH v2 00/11] example/l3fwd: introduce event device support Jerin Jacob
2020-01-11 13:47 ` [dpdk-dev] [PATCH v3 " pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 01/11] examples/l3fwd: add framework for event device pbhagavatula
2020-01-21  8:44     ` Jerin Jacob
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
2020-01-21  8:46     ` Jerin Jacob
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 03/11] examples/l3fwd: add event device configuration pbhagavatula
2020-01-21  8:51     ` Jerin Jacob
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 09/11] examples/l3fwd: add event em " pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
2020-01-11 13:47   ` [dpdk-dev] [PATCH v3 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
2020-01-13 12:02   ` [dpdk-dev] [PATCH v3 00/11] example/l3fwd: introduce event device support Nipun Gupta
2020-01-22 18:28   ` [dpdk-dev] [PATCH v4 " pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 01/11] examples/l3fwd: add framework for event device pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 03/11] examples/l3fwd: add event device configuration pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 09/11] examples/l3fwd: add event em " pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
2020-01-22 18:28     ` [dpdk-dev] [PATCH v4 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
2020-01-23  5:25       ` Jerin Jacob
2020-01-24  4:05     ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 01/11] examples/l3fwd: add framework for event device pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 02/11] examples/l3fwd: split pipelines based on capability pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 03/11] examples/l3fwd: add event device configuration pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 04/11] examples/l3fwd: add ethdev setup based on eventdev pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 05/11] examples/l3fwd: add event port and queue setup pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 06/11] examples/l3fwd: add event eth Rx/Tx adapter setup pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 07/11] examples/l3fwd: add service core setup based on caps pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 08/11] examples/l3fwd: add event lpm main loop pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 09/11] examples/l3fwd: add event em " pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 10/11] examples/l3fwd: add graceful teardown for eventdevice pbhagavatula
2020-01-24  4:05       ` [dpdk-dev] [PATCH v5 11/11] doc: update l3fwd user guide to support eventdev pbhagavatula
2020-01-27 16:17       ` [dpdk-dev] [PATCH v5 00/11] example/l3fwd: introduce event device support Jerin Jacob
