DPDK patches and discussions
* [dpdk-dev] [PATCH] examples/ipsec-secgw: add support for event vector
@ 2021-08-26 10:03 Srujana Challa
  2021-09-14 13:14 ` [dpdk-dev] [PATCH v2] " Srujana Challa
  2021-11-03  3:24 ` [dpdk-dev] [PATCH v3] " Nithin Dabilpuram
  0 siblings, 2 replies; 7+ messages in thread
From: Srujana Challa @ 2021-08-26 10:03 UTC (permalink / raw)
  To: gakhil, radu.nicolau, konstantin.ananyev; +Cc: dev, ndabilpuram, anoobj, jerinj

Adds event vector support to inline protocol offload mode.
By default vector support is disabled; it can be enabled
using the --event-vector option.
Additional options to configure the vector size and vector
timeout are also implemented and can be used by specifying
--vector-size and --vector-tmo.

Depends-on: series-18262 ("security: Improve inline fast path routines")
Depends-on: series-18322 ("eventdev: simplify Rx adapter event vector
config")

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 doc/guides/sample_app_ug/ipsec_secgw.rst |  18 +-
 examples/ipsec-secgw/event_helper.c      |  78 +++++-
 examples/ipsec-secgw/event_helper.h      |   8 +
 examples/ipsec-secgw/ipsec-secgw.c       |  41 ++-
 examples/ipsec-secgw/ipsec-secgw.h       |   2 +
 examples/ipsec-secgw/ipsec_worker.c      | 330 ++++++++++++++++++++++-
 6 files changed, 472 insertions(+), 5 deletions(-)

diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f9..557ca510f7 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -86,6 +86,15 @@ The application supports two modes of operation: poll mode and event mode.
   threads and supports inline protocol only.** It also provides infrastructure for
   non-internal port however does not define any worker threads.
 
+  Event mode also supports event vectorization. Event device and ethernet device
+  pairs which support the capability ``RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR``
+  can aggregate packets based on flow characteristics and generate an
+  ``rte_event`` containing an ``rte_event_vector``.
+  The aggregation size and timeout can be set using the command line options
+  ``--vector-size`` (default 16) and ``--vector-tmo`` (default 102400 ns).
+  Event vectorization is disabled by default and can be enabled using the
+  ``--event-vector`` option.
+
 Additionally the event mode introduces two submodes of processing packets:
 
 * Driver submode: This submode has bare minimum changes in the application to support
@@ -293,7 +302,8 @@ event app mode::
 
     ./<build_dir>/examples/dpdk-ipsec-secgw -c 0x3 -- -P -p 0x3 -u 0x1       \
            -f /path/to/config_file --transfer-mode event \
-           --event-schedule-type parallel                \
+           --event-schedule-type parallel --event-vector --vector-size 32    \
+           --vector-tmo 102400                           \
 
 where each option means:
 
@@ -312,6 +322,12 @@ where each option means:
 
 *   The ``--event-schedule-type`` option selects parallel ordering of event queues.
 
+*   The ``--event-vector`` option enables event vectorization.
+
+*   The ``--vector-size`` option specifies the maximum vector size.
+
+*   The ``--vector-tmo`` option specifies the maximum vectorization timeout in nanoseconds.
+
 
 Refer to the *DPDK Getting Started Guide* for general information on running
 applications and the Environment Abstraction Layer (EAL) options.
diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 8475d542b2..e8600f5e90 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -10,6 +10,10 @@
 #include <stdbool.h>
 
 #include "event_helper.h"
+#include "ipsec-secgw.h"
+
+#define DEFAULT_VECTOR_SIZE  16
+#define DEFAULT_VECTOR_TMO   102400
 
 static volatile bool eth_core_running;
 
@@ -728,6 +732,45 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 	return 0;
 }
 
+static int
+eh_event_vector_limits_validate(struct eventmode_conf *em_conf,
+				uint8_t ev_dev_id, uint8_t ethdev_id)
+{
+	struct rte_event_eth_rx_adapter_vector_limits limits = {0};
+	uint16_t vector_size = em_conf->ext_params.vector_size;
+	int ret;
+
+	ret = rte_event_eth_rx_adapter_vector_limits_get(ev_dev_id, ethdev_id,
+							 &limits);
+	if (ret) {
+		EH_LOG_ERR("failed to get vector limits");
+		return ret;
+	}
+
+	if (vector_size < limits.min_sz || vector_size > limits.max_sz) {
+		EH_LOG_ERR("Vector size [%d] not within limits min[%d] max[%d]",
+			   vector_size, limits.min_sz, limits.max_sz);
+		return -EINVAL;
+	}
+
+	if (limits.log2_sz && !rte_is_power_of_2(vector_size)) {
+		EH_LOG_ERR("Vector size [%d] not power of 2", vector_size);
+		return -EINVAL;
+	}
+
+	if (em_conf->vector_tmo_ns > limits.max_timeout_ns ||
+	    em_conf->vector_tmo_ns < limits.min_timeout_ns) {
+		EH_LOG_ERR("Vector timeout [%" PRIu64
+			   "] not within limits max[%" PRIu64
+			   "] min[%" PRIu64 "]",
+			   em_conf->vector_tmo_ns,
+			   limits.max_timeout_ns,
+			   limits.min_timeout_ns);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int
 eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		struct rx_adapter_conf *adapter)
@@ -736,8 +779,10 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 	struct rte_event_dev_info evdev_default_conf = {0};
 	struct rte_event_port_conf port_conf = {0};
 	struct rx_adapter_connection_info *conn;
+	uint32_t service_id, socket_id, nb_elem;
+	struct rte_mempool *vector_pool = NULL;
+	uint32_t lcore_id = rte_lcore_id();
 	uint8_t eventdev_id;
-	uint32_t service_id;
 	int ret;
 	int j;
 
@@ -751,6 +796,20 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		return ret;
 	}
 
+	if (em_conf->ext_params.event_vector) {
+		socket_id = rte_lcore_to_socket_id(lcore_id);
+		nb_elem = (nb_bufs_in_pool / em_conf->ext_params.vector_size)
+			  + 1;
+
+		vector_pool = rte_event_vector_pool_create(
+			"vector_pool", nb_elem, 0,
+			em_conf->ext_params.vector_size,
+			socket_id);
+		if (vector_pool == NULL) {
+			EH_LOG_ERR("failed to create event vector pool");
+			return -ENOMEM;
+		}
+	}
 	/* Setup port conf */
 	port_conf.new_event_threshold = 1200;
 	port_conf.dequeue_depth =
@@ -776,6 +835,20 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		queue_conf.ev.sched_type = em_conf->ext_params.sched_type;
 		queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV;
 
+		if (em_conf->ext_params.event_vector) {
+			ret = eh_event_vector_limits_validate(em_conf,
+							      eventdev_id,
+							      conn->ethdev_id);
+			if (ret)
+				return ret;
+
+			queue_conf.vector_sz = em_conf->ext_params.vector_size;
+			queue_conf.vector_timeout_ns = em_conf->vector_tmo_ns;
+			queue_conf.vector_mp = vector_pool;
+			queue_conf.rx_queue_flags =
+				RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
+		}
+
 		/* Add queue to the adapter */
 		ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id,
 				conn->ethdev_id, conn->ethdev_rx_qid,
@@ -1475,6 +1548,9 @@ eh_conf_init(void)
 
 	rte_bitmap_set(em_conf->eth_core_mask, eth_core_id);
 
+	em_conf->ext_params.vector_size = DEFAULT_VECTOR_SIZE;
+	em_conf->vector_tmo_ns = DEFAULT_VECTOR_TMO;
+
 	return conf;
 
 free_bitmap:
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index b65b343367..5be6c620cd 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -171,10 +171,18 @@ struct eventmode_conf {
 		 * When enabled, all event queues need to be mapped to
 		 * each event port
 		 */
+			uint64_t event_vector                   : 1;
+		/**<
+		 * Enable event vector; when enabled, the application
+		 * can receive vectors of events.
+		 */
+			uint64_t vector_size                    : 16;
 		};
 		uint64_t u64;
 	} ext_params;
 		/**< 64 bit field to specify extended params */
+	uint64_t vector_tmo_ns;
+		/**< Max vector timeout in nanoseconds */
 };
 
 /**
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985..5b8b94d886 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -115,6 +115,9 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
 #define CMD_LINE_OPT_REASSEMBLE		"reassemble"
 #define CMD_LINE_OPT_MTU		"mtu"
 #define CMD_LINE_OPT_FRAG_TTL		"frag-ttl"
+#define CMD_LINE_OPT_EVENT_VECTOR	"event-vector"
+#define CMD_LINE_OPT_VECTOR_SIZE	"vector-size"
+#define CMD_LINE_OPT_VECTOR_TIMEOUT	"vector-tmo"
 
 #define CMD_LINE_ARG_EVENT	"event"
 #define CMD_LINE_ARG_POLL	"poll"
@@ -139,6 +142,9 @@ enum {
 	CMD_LINE_OPT_REASSEMBLE_NUM,
 	CMD_LINE_OPT_MTU_NUM,
 	CMD_LINE_OPT_FRAG_TTL_NUM,
+	CMD_LINE_OPT_EVENT_VECTOR_NUM,
+	CMD_LINE_OPT_VECTOR_SIZE_NUM,
+	CMD_LINE_OPT_VECTOR_TIMEOUT_NUM,
 };
 
 static const struct option lgopts[] = {
@@ -152,6 +158,9 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM},
 	{CMD_LINE_OPT_MTU, 1, 0, CMD_LINE_OPT_MTU_NUM},
 	{CMD_LINE_OPT_FRAG_TTL, 1, 0, CMD_LINE_OPT_FRAG_TTL_NUM},
+	{CMD_LINE_OPT_EVENT_VECTOR, 0, 0, CMD_LINE_OPT_EVENT_VECTOR_NUM},
+	{CMD_LINE_OPT_VECTOR_SIZE, 1, 0, CMD_LINE_OPT_VECTOR_SIZE_NUM},
+	{CMD_LINE_OPT_VECTOR_TIMEOUT, 1, 0, CMD_LINE_OPT_VECTOR_TIMEOUT_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -164,7 +173,7 @@ static int32_t promiscuous_on = 1;
 static int32_t numa_on = 1; /**< NUMA is enabled by default. */
 static uint32_t nb_lcores;
 static uint32_t single_sa;
-static uint32_t nb_bufs_in_pool;
+uint32_t nb_bufs_in_pool;
 
 /*
  * RX/TX HW offload capabilities to enable/use on ethernet ports.
@@ -1417,6 +1426,9 @@ print_usage(const char *prgname)
 		" [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]"
 		" [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]"
 		" [--" CMD_LINE_OPT_MTU " MTU]"
+		" [--event-vector]"
+		" [--vector-size SIZE]"
+		" [--vector-tmo TIMEOUT in ns]"
 		"\n\n"
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -1470,6 +1482,10 @@ print_usage(const char *prgname)
 		"  --" CMD_LINE_OPT_FRAG_TTL " FRAG_TTL_NS"
 		": fragments lifetime in nanoseconds, default\n"
 		"    and maximum value is 10.000.000.000 ns (10 s)\n"
+		"  --event-vector enables event vectorization\n"
+		"  --vector-size Max vector size (default value: 16)\n"
+		"  --vector-tmo Max vector timeout in nanoseconds"
+		"    (default value: 102400)\n"
 		"\n",
 		prgname);
 }
@@ -1634,6 +1650,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 	int32_t option_index;
 	char *prgname = argv[0];
 	int32_t f_present = 0;
+	struct eventmode_conf *em_conf = NULL;
 
 	argvopt = argv;
 
@@ -1819,6 +1836,28 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 			}
 			frag_ttl_ns = ret;
 			break;
+		case CMD_LINE_OPT_EVENT_VECTOR_NUM:
+			em_conf = eh_conf->mode_params;
+			em_conf->ext_params.event_vector = 1;
+			break;
+		case CMD_LINE_OPT_VECTOR_SIZE_NUM:
+			ret = parse_decimal(optarg);
+
+			if (ret > MAX_PKT_BURST) {
+				printf("Invalid argument for \'%s\': %s\n",
+					CMD_LINE_OPT_VECTOR_SIZE, optarg);
+				print_usage(prgname);
+				return -1;
+			}
+			em_conf = eh_conf->mode_params;
+			em_conf->ext_params.vector_size = ret;
+			break;
+		case CMD_LINE_OPT_VECTOR_TIMEOUT_NUM:
+			ret = parse_decimal(optarg);
+
+			em_conf = eh_conf->mode_params;
+			em_conf->vector_tmo_ns = ret;
+			break;
 		default:
 			print_usage(prgname);
 			return -1;
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index 96e22de45e..73d01d8925 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -106,6 +106,8 @@ extern uint32_t single_sa_idx;
 
 extern volatile bool force_quit;
 
+extern uint32_t nb_bufs_in_pool;
+
 static inline uint8_t
 is_unprotected_port(uint16_t port_id)
 {
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 647e22df59..72132718a3 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -123,6 +123,72 @@ check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx)
 	return 1;
 }
 
+static inline void
+check_sp_bulk(struct sp_ctx *sp, struct traffic_type *ip,
+	      struct traffic_type *ipsec)
+{
+	uint32_t i, j, res;
+	struct rte_mbuf *m;
+
+	if (unlikely(sp == NULL || ip->num == 0))
+		return;
+
+	rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res, ip->num,
+			 DEFAULT_MAX_CATEGORIES);
+
+	j = 0;
+	for (i = 0; i < ip->num; i++) {
+		m = ip->pkts[i];
+		res = ip->res[i];
+		if (unlikely(res == DISCARD))
+			free_pkts(&m, 1);
+		else if (res == BYPASS)
+			ip->pkts[j++] = m;
+		else {
+			ipsec->res[ipsec->num] = res - 1;
+			ipsec->pkts[ipsec->num++] = m;
+		}
+	}
+	ip->num = j;
+}
+
+static inline void
+check_sp_sa_bulk(struct sp_ctx *sp, struct sa_ctx *sa_ctx,
+		 struct traffic_type *ip)
+{
+	struct ipsec_sa *sa;
+	uint32_t i, j, res;
+	struct rte_mbuf *m;
+
+	if (unlikely(sp == NULL || ip->num == 0))
+		return;
+
+	rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res, ip->num,
+			 DEFAULT_MAX_CATEGORIES);
+
+	j = 0;
+	for (i = 0; i < ip->num; i++) {
+		m = ip->pkts[i];
+		res = ip->res[i];
+		if (unlikely(res == DISCARD))
+			free_pkts(&m, 1);
+		else if (res == BYPASS)
+			ip->pkts[j++] = m;
+		else {
+			sa = *(struct ipsec_sa **)rte_security_dynfield(m);
+			/* SPI on the packet should match the one in the SA;
+			 * drop also when the SA pointer is missing */
+			if (sa == NULL ||
+			    unlikely(sa->spi != sa_ctx->sa[res - 1].spi)) {
+				free_pkts(&m, 1);
+				continue;
+			}
+			ip->pkts[j++] = m;
+		}
+	}
+	ip->num = j;
+}
+
 static inline uint16_t
 route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx)
 {
@@ -381,6 +447,247 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	return PKT_DROPPED;
 }
 
+static inline int
+ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
+		    struct ipsec_traffic *t, struct sa_ctx *sa_ctx)
+{
+	struct rte_ipsec_session *sess;
+	uint32_t sa_idx, i, j = 0;
+	uint16_t port_id = 0;
+	struct rte_mbuf *pkt;
+	struct ipsec_sa *sa;
+
+	/* Route IPv4 packets */
+	for (i = 0; i < t->ip4.num; i++) {
+		pkt = t->ip4.pkts[i];
+		port_id = route4_pkt(pkt, rt->rt4_ctx);
+		if (port_id != RTE_MAX_ETHPORTS) {
+			/* Update mac addresses */
+			update_mac_addrs(pkt, port_id);
+			/* Update the event with the dest port */
+			ipsec_event_pre_forward(pkt, port_id);
+			vec->mbufs[j++] = pkt;
+		} else
+			free_pkts(&pkt, 1);
+	}
+
+	/* Route IPv6 packets */
+	for (i = 0; i < t->ip6.num; i++) {
+		pkt = t->ip6.pkts[i];
+		port_id = route6_pkt(pkt, rt->rt6_ctx);
+		if (port_id != RTE_MAX_ETHPORTS) {
+			/* Update mac addresses */
+			update_mac_addrs(pkt, port_id);
+			/* Update the event with the dest port */
+			ipsec_event_pre_forward(pkt, port_id);
+			vec->mbufs[j++] = pkt;
+		} else
+			free_pkts(&pkt, 1);
+	}
+
+	/* Route ESP packets */
+	for (i = 0; i < t->ipsec.num; i++) {
+		/* Validate sa_idx */
+		sa_idx = t->ipsec.res[i];
+		pkt = t->ipsec.pkts[i];
+		if (unlikely(sa_idx >= sa_ctx->nb_sa))
+			free_pkts(&pkt, 1);
+		else {
+			/* Else the packet has to be protected */
+			sa = &(sa_ctx->sa[sa_idx]);
+			/* Get IPsec session */
+			sess = ipsec_get_primary_session(sa);
+			/* Allow only inline protocol for now */
+			if (unlikely(sess->type !=
+				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)) {
+				RTE_LOG(ERR, IPSEC, "SA type not supported\n");
+				free_pkts(&pkt, 1);
+				continue;
+			}
+			rte_security_set_pkt_metadata(sess->security.ctx,
+						sess->security.ses, pkt, NULL);
+
+			pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+			port_id = sa->portid;
+			update_mac_addrs(pkt, port_id);
+			ipsec_event_pre_forward(pkt, port_id);
+			vec->mbufs[j++] = pkt;
+		}
+	}
+
+	return j;
+}
+
+static inline void
+classify_pkt(struct rte_mbuf *pkt, struct ipsec_traffic *t)
+{
+	enum pkt_type type;
+	uint8_t *nlp;
+
+	/* Check the packet type */
+	type = process_ipsec_get_pkt_type(pkt, &nlp);
+
+	switch (type) {
+	case PKT_TYPE_PLAIN_IPV4:
+		t->ip4.data[t->ip4.num] = nlp;
+		t->ip4.pkts[(t->ip4.num)++] = pkt;
+		break;
+	case PKT_TYPE_PLAIN_IPV6:
+		t->ip6.data[t->ip6.num] = nlp;
+		t->ip6.pkts[(t->ip6.num)++] = pkt;
+		break;
+	default:
+		RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type);
+		free_pkts(&pkt, 1);
+		break;
+	}
+}
+
+static inline int
+process_ipsec_ev_inbound_vector(struct ipsec_ctx *ctx, struct route_table *rt,
+				struct rte_event_vector *vec)
+{
+	struct ipsec_traffic t;
+	struct rte_mbuf *pkt;
+	int i;
+
+	t.ip4.num = 0;
+	t.ip6.num = 0;
+	t.ipsec.num = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		/* Get pkt from event */
+		pkt = vec->mbufs[i];
+
+		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+			if (unlikely(pkt->ol_flags &
+				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				RTE_LOG(ERR, IPSEC,
+					"Inbound security offload failed\n");
+				free_pkts(&pkt, 1);
+				continue;
+			}
+		}
+
+		classify_pkt(pkt, &t);
+	}
+
+	check_sp_sa_bulk(ctx->sp4_ctx, ctx->sa_ctx, &t.ip4);
+	check_sp_sa_bulk(ctx->sp6_ctx, ctx->sa_ctx, &t.ip6);
+
+	return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx);
+}
+
+static inline int
+process_ipsec_ev_outbound_vector(struct ipsec_ctx *ctx, struct route_table *rt,
+				 struct rte_event_vector *vec)
+{
+	struct ipsec_traffic t;
+	struct rte_mbuf *pkt;
+	uint32_t i;
+
+	t.ip4.num = 0;
+	t.ip6.num = 0;
+	t.ipsec.num = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		/* Get pkt from event */
+		pkt = vec->mbufs[i];
+
+		classify_pkt(pkt, &t);
+
+		/* Provide L2 len for Outbound processing */
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+	}
+
+	check_sp_bulk(ctx->sp4_ctx, &t.ip4, &t.ipsec);
+	check_sp_bulk(ctx->sp6_ctx, &t.ip6, &t.ipsec);
+
+	return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx);
+}
+
+static inline int
+process_ipsec_ev_drv_mode_outbound_vector(struct rte_event_vector *vec,
+					  struct port_drv_mode_data *data)
+{
+	struct rte_mbuf *pkt;
+	int16_t port_id;
+	uint32_t i;
+	int j = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		pkt = vec->mbufs[i];
+		port_id = pkt->port;
+
+		if (unlikely(!data[port_id].sess)) {
+			free_pkts(&pkt, 1);
+			continue;
+		}
+		ipsec_event_pre_forward(pkt, port_id);
+		/* Save security session */
+		rte_security_set_pkt_metadata(data[port_id].ctx,
+					      data[port_id].sess, pkt,
+					      NULL);
+
+		/* Mark the packet for Tx security offload */
+		pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+
+		/* Provide L2 len for Outbound processing */
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+		vec->mbufs[j++] = pkt;
+	}
+
+	return j;
+}
+
+static inline void
+ipsec_ev_vector_process(struct lcore_conf_ev_tx_int_port_wrkr *lconf,
+			struct eh_event_link_info *links,
+			struct rte_event *ev)
+{
+	struct rte_event_vector *vec = ev->vec;
+	struct rte_mbuf *pkt;
+	int ret;
+
+	pkt = vec->mbufs[0];
+
+	if (is_unprotected_port(pkt->port))
+		ret = process_ipsec_ev_inbound_vector(&lconf->inbound,
+						      &lconf->rt, vec);
+	else
+		ret = process_ipsec_ev_outbound_vector(&lconf->outbound,
+						       &lconf->rt, vec);
+
+	if (ret > 0) {
+		vec->nb_elem = ret;
+		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+						 links[0].event_port_id,
+						 ev, 1, 0);
+	}
+}
+
+static inline void
+ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
+				 struct rte_event *ev,
+				 struct port_drv_mode_data *data)
+{
+	struct rte_event_vector *vec = ev->vec;
+	struct rte_mbuf *pkt;
+	int ret;
+
+	pkt = vec->mbufs[0];
+
+	if (!is_unprotected_port(pkt->port)) {
+		ret = process_ipsec_ev_drv_mode_outbound_vector(vec, data);
+		vec->nb_elem = ret;
+	}
+
+	if (vec->nb_elem > 0)
+		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+						 links[0].event_port_id,
+						 ev, 1, 0);
+}
+
 /*
  * Event mode exposes various operating modes depending on the
  * capabilities of the event device and the operating mode
@@ -450,6 +757,19 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		if (nb_rx == 0)
 			continue;
 
+		switch (ev.event_type) {
+		case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR:
+		case RTE_EVENT_TYPE_ETHDEV_VECTOR:
+			ipsec_ev_vector_drv_mode_process(links, &ev, data);
+			continue;
+		case RTE_EVENT_TYPE_ETHDEV:
+			break;
+		default:
+			RTE_LOG(ERR, IPSEC, "Invalid event type %u",
+				ev.event_type);
+			continue;
+		}
+
 		pkt = ev.mbuf;
 		port_id = pkt->port;
 
@@ -557,10 +877,16 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		if (nb_rx == 0)
 			continue;
 
-		if (unlikely(ev.event_type != RTE_EVENT_TYPE_ETHDEV)) {
+		switch (ev.event_type) {
+		case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR:
+		case RTE_EVENT_TYPE_ETHDEV_VECTOR:
+			ipsec_ev_vector_process(&lconf, links, &ev);
+			continue;
+		case RTE_EVENT_TYPE_ETHDEV:
+			break;
+		default:
 			RTE_LOG(ERR, IPSEC, "Invalid event type %u",
 				ev.event_type);
-
 			continue;
 		}
 
-- 
2.25.1

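The worker changes above reduce to a single dispatch pattern: dequeue one
event, branch on its event_type, and treat vector events as a burst of mbufs.
A minimal standalone sketch of that pattern (assuming only the public DPDK
eventdev API; handle_pkt() and handle_vector() are illustrative stand-ins for
the inbound/outbound processing in the diff, not functions from the patch):

    #include <rte_eventdev.h>
    #include <rte_mbuf.h>

    void handle_pkt(struct rte_mbuf *m);
    void handle_vector(struct rte_event_vector *vec);

    static void
    worker_loop(uint8_t evdev_id, uint8_t ev_port_id)
    {
        struct rte_event ev;

        for (;;) {
            /* Dequeue a single event; 0 means nothing arrived */
            if (rte_event_dequeue_burst(evdev_id, ev_port_id,
                                        &ev, 1, 0) == 0)
                continue;

            switch (ev.event_type) {
            case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR:
            case RTE_EVENT_TYPE_ETHDEV_VECTOR:
                /* ev.vec->mbufs[0..nb_elem-1] were aggregated
                 * by the Rx adapter */
                handle_vector(ev.vec);
                break;
            case RTE_EVENT_TYPE_ETHDEV:
                handle_pkt(ev.mbuf);
                break;
            default:
                break;
            }
        }
    }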


* [dpdk-dev] [PATCH v2] examples/ipsec-secgw: add support for event vector
  2021-08-26 10:03 [dpdk-dev] [PATCH] examples/ipsec-secgw: add support for event vector Srujana Challa
@ 2021-09-14 13:14 ` Srujana Challa
  2021-10-31 13:22   ` Akhil Goyal
  2021-11-03  3:24 ` [dpdk-dev] [PATCH v3] " Nithin Dabilpuram
  1 sibling, 1 reply; 7+ messages in thread
From: Srujana Challa @ 2021-09-14 13:14 UTC (permalink / raw)
  To: gakhil, radu.nicolau, konstantin.ananyev
  Cc: dev, ndabilpuram, anoobj, jerinj, schalla

Adds event vector support to inline protocol offload mode.
By default vector support is disabled; it can be enabled
using the --event-vector option.
Additional options to configure the vector size and vector
timeout are also implemented and can be used by specifying
--vector-size and --vector-tmo.

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
Depends-on: series-18262 ("security: Improve inline fast path routines")
Depends-on: series-18322 ("eventdev: simplify Rx adapter event vector
config")

v2:
* Set rte_event_vector::attr_valid if all packets in the vector use the
same port.

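The attr_valid logic summarized in the changelog can be restated as a short
sketch. This mirrors the ev_vector_attr_init()/ev_vector_attr_update()
helpers in the diff below, with the port sentinel made explicit; it is an
illustration, not additional patch content:

    #include <stdint.h>
    #include <rte_eventdev.h>

    /* Start optimistic: assume the whole vector shares one port. */
    static inline void
    vec_attr_init(struct rte_event_vector *vec)
    {
        vec->attr_valid = 1;
        vec->port = UINT16_MAX; /* sentinel: no port recorded yet */
        vec->queue = 0;
    }

    /* The first packet records its port; a packet from any other port
     * clears attr_valid, so consumers such as the Tx adapter fall back
     * to reading the port from each mbuf. */
    static inline void
    vec_attr_update(struct rte_event_vector *vec, uint16_t pkt_port)
    {
        if (vec->port == UINT16_MAX)
            vec->port = pkt_port;
        else if (vec->attr_valid && vec->port != pkt_port)
            vec->attr_valid = 0;
    }
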
 doc/guides/sample_app_ug/ipsec_secgw.rst |  18 +-
 examples/ipsec-secgw/event_helper.c      |  78 ++++-
 examples/ipsec-secgw/event_helper.h      |   8 +
 examples/ipsec-secgw/ipsec-secgw.c       |  41 ++-
 examples/ipsec-secgw/ipsec-secgw.h       |   2 +
 examples/ipsec-secgw/ipsec_worker.c      | 350 ++++++++++++++++++++++-
 6 files changed, 492 insertions(+), 5 deletions(-)

diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f9..557ca510f7 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -86,6 +86,15 @@ The application supports two modes of operation: poll mode and event mode.
   threads and supports inline protocol only.** It also provides infrastructure for
   non-internal port however does not define any worker threads.
 
+  Event mode also supports event vectorization. Event device and ethernet device
+  pairs which support the capability ``RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR``
+  can aggregate packets based on flow characteristics and generate an
+  ``rte_event`` containing an ``rte_event_vector``.
+  The aggregation size and timeout can be set using the command line options
+  ``--vector-size`` (default 16) and ``--vector-tmo`` (default 102400 ns).
+  Event vectorization is disabled by default and can be enabled using the
+  ``--event-vector`` option.
+
 Additionally the event mode introduces two submodes of processing packets:
 
 * Driver submode: This submode has bare minimum changes in the application to support
@@ -293,7 +302,8 @@ event app mode::
 
     ./<build_dir>/examples/dpdk-ipsec-secgw -c 0x3 -- -P -p 0x3 -u 0x1       \
            -f /path/to/config_file --transfer-mode event \
-           --event-schedule-type parallel                \
+           --event-schedule-type parallel --event-vector --vector-size 32    \
+           --vector-tmo 102400                           \
 
 where each option means:
 
@@ -312,6 +322,12 @@ where each option means:
 
 *   The ``--event-schedule-type`` option selects parallel ordering of event queues.
 
+*   The ``--event-vector`` option enables event vectorization.
+
+*   The ``--vector-size`` option specifies the maximum vector size.
+
+*   The ``--vector-tmo`` option specifies the maximum vectorization timeout in nanoseconds.
+
 
 Refer to the *DPDK Getting Started Guide* for general information on running
 applications and the Environment Abstraction Layer (EAL) options.
diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 8475d542b2..e8600f5e90 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -10,6 +10,10 @@
 #include <stdbool.h>
 
 #include "event_helper.h"
+#include "ipsec-secgw.h"
+
+#define DEFAULT_VECTOR_SIZE  16
+#define DEFAULT_VECTOR_TMO   102400
 
 static volatile bool eth_core_running;
 
@@ -728,6 +732,45 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 	return 0;
 }
 
+static int
+eh_event_vector_limits_validate(struct eventmode_conf *em_conf,
+				uint8_t ev_dev_id, uint8_t ethdev_id)
+{
+	struct rte_event_eth_rx_adapter_vector_limits limits = {0};
+	uint16_t vector_size = em_conf->ext_params.vector_size;
+	int ret;
+
+	ret = rte_event_eth_rx_adapter_vector_limits_get(ev_dev_id, ethdev_id,
+							 &limits);
+	if (ret) {
+		EH_LOG_ERR("failed to get vector limits");
+		return ret;
+	}
+
+	if (vector_size < limits.min_sz || vector_size > limits.max_sz) {
+		EH_LOG_ERR("Vector size [%d] not within limits min[%d] max[%d]",
+			   vector_size, limits.min_sz, limits.max_sz);
+		return -EINVAL;
+	}
+
+	if (limits.log2_sz && !rte_is_power_of_2(vector_size)) {
+		EH_LOG_ERR("Vector size [%d] not power of 2", vector_size);
+		return -EINVAL;
+	}
+
+	if (em_conf->vector_tmo_ns > limits.max_timeout_ns ||
+	    em_conf->vector_tmo_ns < limits.min_timeout_ns) {
+		EH_LOG_ERR("Vector timeout [%" PRIu64
+			   "] not within limits max[%" PRIu64
+			   "] min[%" PRIu64 "]",
+			   em_conf->vector_tmo_ns,
+			   limits.max_timeout_ns,
+			   limits.min_timeout_ns);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 static int
 eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		struct rx_adapter_conf *adapter)
@@ -736,8 +779,10 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 	struct rte_event_dev_info evdev_default_conf = {0};
 	struct rte_event_port_conf port_conf = {0};
 	struct rx_adapter_connection_info *conn;
+	uint32_t service_id, socket_id, nb_elem;
+	struct rte_mempool *vector_pool = NULL;
+	uint32_t lcore_id = rte_lcore_id();
 	uint8_t eventdev_id;
-	uint32_t service_id;
 	int ret;
 	int j;
 
@@ -751,6 +796,20 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		return ret;
 	}
 
+	if (em_conf->ext_params.event_vector) {
+		socket_id = rte_lcore_to_socket_id(lcore_id);
+		nb_elem = (nb_bufs_in_pool / em_conf->ext_params.vector_size)
+			  + 1;
+
+		vector_pool = rte_event_vector_pool_create(
+			"vector_pool", nb_elem, 0,
+			em_conf->ext_params.vector_size,
+			socket_id);
+		if (vector_pool == NULL) {
+			EH_LOG_ERR("failed to create event vector pool");
+			return -ENOMEM;
+		}
+	}
 	/* Setup port conf */
 	port_conf.new_event_threshold = 1200;
 	port_conf.dequeue_depth =
@@ -776,6 +835,20 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		queue_conf.ev.sched_type = em_conf->ext_params.sched_type;
 		queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV;
 
+		if (em_conf->ext_params.event_vector) {
+			ret = eh_event_vector_limits_validate(em_conf,
+							      eventdev_id,
+							      conn->ethdev_id);
+			if (ret)
+				return ret;
+
+			queue_conf.vector_sz = em_conf->ext_params.vector_size;
+			queue_conf.vector_timeout_ns = em_conf->vector_tmo_ns;
+			queue_conf.vector_mp = vector_pool;
+			queue_conf.rx_queue_flags =
+				RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
+		}
+
 		/* Add queue to the adapter */
 		ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id,
 				conn->ethdev_id, conn->ethdev_rx_qid,
@@ -1475,6 +1548,9 @@ eh_conf_init(void)
 
 	rte_bitmap_set(em_conf->eth_core_mask, eth_core_id);
 
+	em_conf->ext_params.vector_size = DEFAULT_VECTOR_SIZE;
+	em_conf->vector_tmo_ns = DEFAULT_VECTOR_TMO;
+
 	return conf;
 
 free_bitmap:
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index b65b343367..5be6c620cd 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -171,10 +171,18 @@ struct eventmode_conf {
 		 * When enabled, all event queues need to be mapped to
 		 * each event port
 		 */
+			uint64_t event_vector                   : 1;
+		/**<
+		 * Enable event vector; when enabled, the application
+		 * can receive vectors of events.
+		 */
+			uint64_t vector_size                    : 16;
 		};
 		uint64_t u64;
 	} ext_params;
 		/**< 64 bit field to specify extended params */
+	uint64_t vector_tmo_ns;
+		/**< Max vector timeout in nanoseconds */
 };
 
 /**
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985..5b8b94d886 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -115,6 +115,9 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
 #define CMD_LINE_OPT_REASSEMBLE		"reassemble"
 #define CMD_LINE_OPT_MTU		"mtu"
 #define CMD_LINE_OPT_FRAG_TTL		"frag-ttl"
+#define CMD_LINE_OPT_EVENT_VECTOR	"event-vector"
+#define CMD_LINE_OPT_VECTOR_SIZE	"vector-size"
+#define CMD_LINE_OPT_VECTOR_TIMEOUT	"vector-tmo"
 
 #define CMD_LINE_ARG_EVENT	"event"
 #define CMD_LINE_ARG_POLL	"poll"
@@ -139,6 +142,9 @@ enum {
 	CMD_LINE_OPT_REASSEMBLE_NUM,
 	CMD_LINE_OPT_MTU_NUM,
 	CMD_LINE_OPT_FRAG_TTL_NUM,
+	CMD_LINE_OPT_EVENT_VECTOR_NUM,
+	CMD_LINE_OPT_VECTOR_SIZE_NUM,
+	CMD_LINE_OPT_VECTOR_TIMEOUT_NUM,
 };
 
 static const struct option lgopts[] = {
@@ -152,6 +158,9 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM},
 	{CMD_LINE_OPT_MTU, 1, 0, CMD_LINE_OPT_MTU_NUM},
 	{CMD_LINE_OPT_FRAG_TTL, 1, 0, CMD_LINE_OPT_FRAG_TTL_NUM},
+	{CMD_LINE_OPT_EVENT_VECTOR, 0, 0, CMD_LINE_OPT_EVENT_VECTOR_NUM},
+	{CMD_LINE_OPT_VECTOR_SIZE, 1, 0, CMD_LINE_OPT_VECTOR_SIZE_NUM},
+	{CMD_LINE_OPT_VECTOR_TIMEOUT, 1, 0, CMD_LINE_OPT_VECTOR_TIMEOUT_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -164,7 +173,7 @@ static int32_t promiscuous_on = 1;
 static int32_t numa_on = 1; /**< NUMA is enabled by default. */
 static uint32_t nb_lcores;
 static uint32_t single_sa;
-static uint32_t nb_bufs_in_pool;
+uint32_t nb_bufs_in_pool;
 
 /*
  * RX/TX HW offload capabilities to enable/use on ethernet ports.
@@ -1417,6 +1426,9 @@ print_usage(const char *prgname)
 		" [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]"
 		" [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]"
 		" [--" CMD_LINE_OPT_MTU " MTU]"
+		" [--event-vector]"
+		" [--vector-size SIZE]"
+		" [--vector-tmo TIMEOUT in ns]"
 		"\n\n"
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -1470,6 +1482,10 @@ print_usage(const char *prgname)
 		"  --" CMD_LINE_OPT_FRAG_TTL " FRAG_TTL_NS"
 		": fragments lifetime in nanoseconds, default\n"
 		"    and maximum value is 10.000.000.000 ns (10 s)\n"
+		"  --event-vector enables event vectorization\n"
+		"  --vector-size Max vector size (default value: 16)\n"
+		"  --vector-tmo Max vector timeout in nanoseconds"
+		"    (default value: 102400)\n"
 		"\n",
 		prgname);
 }
@@ -1634,6 +1650,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 	int32_t option_index;
 	char *prgname = argv[0];
 	int32_t f_present = 0;
+	struct eventmode_conf *em_conf = NULL;
 
 	argvopt = argv;
 
@@ -1819,6 +1836,28 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 			}
 			frag_ttl_ns = ret;
 			break;
+		case CMD_LINE_OPT_EVENT_VECTOR_NUM:
+			em_conf = eh_conf->mode_params;
+			em_conf->ext_params.event_vector = 1;
+			break;
+		case CMD_LINE_OPT_VECTOR_SIZE_NUM:
+			ret = parse_decimal(optarg);
+
+			if (ret > MAX_PKT_BURST) {
+				printf("Invalid argument for \'%s\': %s\n",
+					CMD_LINE_OPT_VECTOR_SIZE, optarg);
+				print_usage(prgname);
+				return -1;
+			}
+			em_conf = eh_conf->mode_params;
+			em_conf->ext_params.vector_size = ret;
+			break;
+		case CMD_LINE_OPT_VECTOR_TIMEOUT_NUM:
+			ret = parse_decimal(optarg);
+
+			em_conf = eh_conf->mode_params;
+			em_conf->vector_tmo_ns = ret;
+			break;
 		default:
 			print_usage(prgname);
 			return -1;
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index 96e22de45e..73d01d8925 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -106,6 +106,8 @@ extern uint32_t single_sa_idx;
 
 extern volatile bool force_quit;
 
+extern uint32_t nb_bufs_in_pool;
+
 static inline uint8_t
 is_unprotected_port(uint16_t port_id)
 {
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 647e22df59..d420934fb4 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -58,6 +58,25 @@ ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id)
 	rte_event_eth_tx_adapter_txq_set(m, 0);
 }
 
+static inline void
+ev_vector_attr_init(struct rte_event_vector *vec)
+{
+	vec->attr_valid = 1;
+	vec->port = 0xFFFF;
+	vec->queue = 0;
+}
+
+static inline void
+ev_vector_attr_update(struct rte_event_vector *vec, struct rte_mbuf *pkt)
+{
+	if (vec->port == 0xFFFF) {
+		vec->port = pkt->port;
+		return;
+	}
+	if (vec->attr_valid && (vec->port != pkt->port))
+		vec->attr_valid = 0;
+}
+
 static inline void
 prepare_out_sessions_tbl(struct sa_ctx *sa_out,
 		struct rte_security_session **sess_tbl, uint16_t size)
@@ -123,6 +142,72 @@ check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx)
 	return 1;
 }
 
+static inline void
+check_sp_bulk(struct sp_ctx *sp, struct traffic_type *ip,
+	      struct traffic_type *ipsec)
+{
+	uint32_t i, j, res;
+	struct rte_mbuf *m;
+
+	if (unlikely(sp == NULL || ip->num == 0))
+		return;
+
+	rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res, ip->num,
+			 DEFAULT_MAX_CATEGORIES);
+
+	j = 0;
+	for (i = 0; i < ip->num; i++) {
+		m = ip->pkts[i];
+		res = ip->res[i];
+		if (unlikely(res == DISCARD))
+			free_pkts(&m, 1);
+		else if (res == BYPASS)
+			ip->pkts[j++] = m;
+		else {
+			ipsec->res[ipsec->num] = res - 1;
+			ipsec->pkts[ipsec->num++] = m;
+		}
+	}
+	ip->num = j;
+}
+
+static inline void
+check_sp_sa_bulk(struct sp_ctx *sp, struct sa_ctx *sa_ctx,
+		 struct traffic_type *ip)
+{
+	struct ipsec_sa *sa;
+	uint32_t i, j, res;
+	struct rte_mbuf *m;
+
+	if (unlikely(sp == NULL || ip->num == 0))
+		return;
+
+	rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res, ip->num,
+			 DEFAULT_MAX_CATEGORIES);
+
+	j = 0;
+	for (i = 0; i < ip->num; i++) {
+		m = ip->pkts[i];
+		res = ip->res[i];
+		if (unlikely(res == DISCARD))
+			free_pkts(&m, 1);
+		else if (res == BYPASS)
+			ip->pkts[j++] = m;
+		else {
+			sa = *(struct ipsec_sa **)rte_security_dynfield(m);
+			/* SPI on the packet should match the one in the SA;
+			 * drop also when the SA pointer is missing */
+			if (sa == NULL ||
+			    unlikely(sa->spi != sa_ctx->sa[res - 1].spi)) {
+				free_pkts(&m, 1);
+				continue;
+			}
+			ip->pkts[j++] = m;
+		}
+	}
+	ip->num = j;
+}
+
 static inline uint16_t
 route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx)
 {
@@ -381,6 +466,248 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	return PKT_DROPPED;
 }
 
+static inline int
+ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
+		    struct ipsec_traffic *t, struct sa_ctx *sa_ctx)
+{
+	struct rte_ipsec_session *sess;
+	uint32_t sa_idx, i, j = 0;
+	uint16_t port_id = 0;
+	struct rte_mbuf *pkt;
+	struct ipsec_sa *sa;
+
+	/* Route IPv4 packets */
+	for (i = 0; i < t->ip4.num; i++) {
+		pkt = t->ip4.pkts[i];
+		port_id = route4_pkt(pkt, rt->rt4_ctx);
+		if (port_id != RTE_MAX_ETHPORTS) {
+			/* Update mac addresses */
+			update_mac_addrs(pkt, port_id);
+			/* Update the event with the dest port */
+			ipsec_event_pre_forward(pkt, port_id);
+			ev_vector_attr_update(vec, pkt);
+			vec->mbufs[j++] = pkt;
+		} else
+			free_pkts(&pkt, 1);
+	}
+
+	/* Route IPv6 packets */
+	for (i = 0; i < t->ip6.num; i++) {
+		pkt = t->ip6.pkts[i];
+		port_id = route6_pkt(pkt, rt->rt6_ctx);
+		if (port_id != RTE_MAX_ETHPORTS) {
+			/* Update mac addresses */
+			update_mac_addrs(pkt, port_id);
+			/* Update the event with the dest port */
+			ipsec_event_pre_forward(pkt, port_id);
+			ev_vector_attr_update(vec, pkt);
+			vec->mbufs[j++] = pkt;
+		} else
+			free_pkts(&pkt, 1);
+	}
+
+	/* Route ESP packets */
+	for (i = 0; i < t->ipsec.num; i++) {
+		/* Validate sa_idx */
+		sa_idx = t->ipsec.res[i];
+		pkt = t->ipsec.pkts[i];
+		if (unlikely(sa_idx >= sa_ctx->nb_sa))
+			free_pkts(&pkt, 1);
+		else {
+			/* Else the packet has to be protected */
+			sa = &(sa_ctx->sa[sa_idx]);
+			/* Get IPsec session */
+			sess = ipsec_get_primary_session(sa);
+			/* Allow only inline protocol for now */
+			if (unlikely(sess->type !=
+				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)) {
+				RTE_LOG(ERR, IPSEC, "SA type not supported\n");
+				free_pkts(&pkt, 1);
+				continue;
+			}
+			rte_security_set_pkt_metadata(sess->security.ctx,
+						sess->security.ses, pkt, NULL);
+
+			pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+			port_id = sa->portid;
+			update_mac_addrs(pkt, port_id);
+			ipsec_event_pre_forward(pkt, port_id);
+			ev_vector_attr_update(vec, pkt);
+			vec->mbufs[j++] = pkt;
+		}
+	}
+
+	return j;
+}
+
+static inline void
+classify_pkt(struct rte_mbuf *pkt, struct ipsec_traffic *t)
+{
+	enum pkt_type type;
+	uint8_t *nlp;
+
+	/* Check the packet type */
+	type = process_ipsec_get_pkt_type(pkt, &nlp);
+
+	switch (type) {
+	case PKT_TYPE_PLAIN_IPV4:
+		t->ip4.data[t->ip4.num] = nlp;
+		t->ip4.pkts[(t->ip4.num)++] = pkt;
+		break;
+	case PKT_TYPE_PLAIN_IPV6:
+		t->ip6.data[t->ip6.num] = nlp;
+		t->ip6.pkts[(t->ip6.num)++] = pkt;
+		break;
+	default:
+		RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type);
+		free_pkts(&pkt, 1);
+		break;
+	}
+}
+
+static inline int
+process_ipsec_ev_inbound_vector(struct ipsec_ctx *ctx, struct route_table *rt,
+				struct rte_event_vector *vec)
+{
+	struct ipsec_traffic t;
+	struct rte_mbuf *pkt;
+	int i;
+
+	t.ip4.num = 0;
+	t.ip6.num = 0;
+	t.ipsec.num = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		/* Get pkt from event */
+		pkt = vec->mbufs[i];
+
+		if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) {
+			if (unlikely(pkt->ol_flags &
+				     PKT_RX_SEC_OFFLOAD_FAILED)) {
+				RTE_LOG(ERR, IPSEC,
+					"Inbound security offload failed\n");
+				free_pkts(&pkt, 1);
+				continue;
+			}
+		}
+
+		classify_pkt(pkt, &t);
+	}
+
+	check_sp_sa_bulk(ctx->sp4_ctx, ctx->sa_ctx, &t.ip4);
+	check_sp_sa_bulk(ctx->sp6_ctx, ctx->sa_ctx, &t.ip6);
+
+	return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx);
+}
+
+static inline int
+process_ipsec_ev_outbound_vector(struct ipsec_ctx *ctx, struct route_table *rt,
+				 struct rte_event_vector *vec)
+{
+	struct ipsec_traffic t;
+	struct rte_mbuf *pkt;
+	uint32_t i;
+
+	t.ip4.num = 0;
+	t.ip6.num = 0;
+	t.ipsec.num = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		/* Get pkt from event */
+		pkt = vec->mbufs[i];
+
+		classify_pkt(pkt, &t);
+
+		/* Provide L2 len for Outbound processing */
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+	}
+
+	check_sp_bulk(ctx->sp4_ctx, &t.ip4, &t.ipsec);
+	check_sp_bulk(ctx->sp6_ctx, &t.ip6, &t.ipsec);
+
+	return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx);
+}
+
+static inline int
+process_ipsec_ev_drv_mode_outbound_vector(struct rte_event_vector *vec,
+					  struct port_drv_mode_data *data)
+{
+	struct rte_mbuf *pkt;
+	int16_t port_id;
+	uint32_t i;
+	int j = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		pkt = vec->mbufs[i];
+		port_id = pkt->port;
+
+		if (unlikely(!data[port_id].sess)) {
+			free_pkts(&pkt, 1);
+			continue;
+		}
+		ipsec_event_pre_forward(pkt, port_id);
+		/* Save security session */
+		rte_security_set_pkt_metadata(data[port_id].ctx,
+					      data[port_id].sess, pkt,
+					      NULL);
+
+		/* Mark the packet for Tx security offload */
+		pkt->ol_flags |= PKT_TX_SEC_OFFLOAD;
+
+		/* Provide L2 len for Outbound processing */
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+		vec->mbufs[j++] = pkt;
+	}
+
+	return j;
+}
+
+static inline void
+ipsec_ev_vector_process(struct lcore_conf_ev_tx_int_port_wrkr *lconf,
+			struct eh_event_link_info *links,
+			struct rte_event *ev)
+{
+	struct rte_event_vector *vec = ev->vec;
+	struct rte_mbuf *pkt;
+	int ret;
+
+	pkt = vec->mbufs[0];
+
+	ev_vector_attr_init(vec);
+	if (is_unprotected_port(pkt->port))
+		ret = process_ipsec_ev_inbound_vector(&lconf->inbound,
+						      &lconf->rt, vec);
+	else
+		ret = process_ipsec_ev_outbound_vector(&lconf->outbound,
+						       &lconf->rt, vec);
+
+	if (ret > 0) {
+		vec->nb_elem = ret;
+		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+						 links[0].event_port_id,
+						 ev, 1, 0);
+	}
+}
+
+static inline void
+ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
+				 struct rte_event *ev,
+				 struct port_drv_mode_data *data)
+{
+	struct rte_event_vector *vec = ev->vec;
+	struct rte_mbuf *pkt;
+
+	pkt = vec->mbufs[0];
+
+	if (!is_unprotected_port(pkt->port))
+		vec->nb_elem = process_ipsec_ev_drv_mode_outbound_vector(vec,
+									 data);
+	if (vec->nb_elem > 0)
+		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+						 links[0].event_port_id,
+						 ev, 1, 0);
+}
+
 /*
  * Event mode exposes various operating modes depending on the
  * capabilities of the event device and the operating mode
@@ -450,6 +777,19 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		if (nb_rx == 0)
 			continue;
 
+		switch (ev.event_type) {
+		case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR:
+		case RTE_EVENT_TYPE_ETHDEV_VECTOR:
+			ipsec_ev_vector_drv_mode_process(links, &ev, data);
+			continue;
+		case RTE_EVENT_TYPE_ETHDEV:
+			break;
+		default:
+			RTE_LOG(ERR, IPSEC, "Invalid event type %u",
+				ev.event_type);
+			continue;
+		}
+
 		pkt = ev.mbuf;
 		port_id = pkt->port;
 
@@ -557,10 +897,16 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		if (nb_rx == 0)
 			continue;
 
-		if (unlikely(ev.event_type != RTE_EVENT_TYPE_ETHDEV)) {
+		switch (ev.event_type) {
+		case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR:
+		case RTE_EVENT_TYPE_ETHDEV_VECTOR:
+			ipsec_ev_vector_process(&lconf, links, &ev);
+			continue;
+		case RTE_EVENT_TYPE_ETHDEV:
+			break;
+		default:
 			RTE_LOG(ERR, IPSEC, "Invalid event type %u",
 				ev.event_type);
-
 			continue;
 		}
 
-- 
2.25.1

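One sizing detail worth calling out from the diff above: the event vector
pool created in eh_rx_adapter_configure() is dimensioned from the mbuf pool,
since in the worst case every in-flight mbuf sits in some vector. A
standalone sketch of that computation (function and parameter names are
illustrative, not from the patch):

    #include <rte_eventdev.h>

    static struct rte_mempool *
    create_vec_pool(uint32_t nb_mbufs, uint16_t vec_size, int socket_id)
    {
        /* ceil(nb_mbufs / vec_size): one vector per vec_size mbufs,
         * plus one for any remainder */
        unsigned int nb_vec = nb_mbufs / vec_size + 1;

        return rte_event_vector_pool_create("vector_pool", nb_vec,
                                            0 /* no per-core cache */,
                                            vec_size, socket_id);
    }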


* Re: [dpdk-dev] [PATCH v2] examples/ipsec-secgw: add support for event vector
  2021-09-14 13:14 ` [dpdk-dev] [PATCH v2] " Srujana Challa
@ 2021-10-31 13:22   ` Akhil Goyal
  2021-11-01 10:39     ` Ananyev, Konstantin
  0 siblings, 1 reply; 7+ messages in thread
From: Akhil Goyal @ 2021-10-31 13:22 UTC (permalink / raw)
  To: Srujana Challa, radu.nicolau, konstantin.ananyev
  Cc: dev, Nithin Kumar Dabilpuram, Anoob Joseph,
	Jerin Jacob Kollanukkaran, Srujana Challa

Hi Konstantin/Radu,

> Adds event vector support to inline protocol offload mode.
> By default vector support is disabled; it can be enabled
> using the --event-vector option.
> Additional options to configure the vector size and vector
> timeout are also implemented and can be used by specifying
> --vector-size and --vector-tmo.
> 
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
> Depends-on: series-18262 ("security: Improve inline fast path routines")
> Depends-on: series-18322 ("eventdev: simplify Rx adapter event vector
> config")
> 
> v2:
> * Set rte_event_vector::attr_valid if all packets in the vector use the
> same port.
Any comments on this patch? It has been in patchwork for quite some time.

Regards,
Akhil


* Re: [dpdk-dev] [PATCH v2] examples/ipsec-secgw: add support for event vector
  2021-10-31 13:22   ` Akhil Goyal
@ 2021-11-01 10:39     ` Ananyev, Konstantin
  2021-11-01 14:03       ` Nicolau, Radu
  0 siblings, 1 reply; 7+ messages in thread
From: Ananyev, Konstantin @ 2021-11-01 10:39 UTC (permalink / raw)
  To: Akhil Goyal, Srujana Challa, Nicolau, Radu
  Cc: dev, Nithin Kumar Dabilpuram, Anoob Joseph,
	Jerin Jacob Kollanukkaran, Srujana Challa



Hi Akhil,

> Hi Konstantin/Radu,
> 
> > Adds event vector support to inline protocol offload mode.
> > By default vector support is disabled; it can be enabled
> > using the --event-vector option.
> > Additional options to configure the vector size and vector
> > timeout are also implemented and can be used by specifying
> > --vector-size and --vector-tmo.
> >
> > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > ---
> > Depends-on: series-18262 ("security: Improve inline fast path routines")
> > Depends-on: series-18322 ("eventdev: simplify Rx adapter event vector
> > config")
> >
> > v2:
> > * Set rte_event_vector::attr_valid if all packets in the vector use the
> > same port.
> Any comments on this patch? It has been in patchwork for quite some time.
> 

As I understand it, this affects only the event mode of ipsec-secgw, right?
I am not really familiar with that part, so no comments from me.
Thanks
Konstantin




* Re: [dpdk-dev] [PATCH v2] examples/ipsec-secgw: add support for event vector
  2021-11-01 10:39     ` Ananyev, Konstantin
@ 2021-11-01 14:03       ` Nicolau, Radu
  0 siblings, 0 replies; 7+ messages in thread
From: Nicolau, Radu @ 2021-11-01 14:03 UTC (permalink / raw)
  To: Ananyev, Konstantin, Akhil Goyal, Srujana Challa
  Cc: dev, Nithin Kumar Dabilpuram, Anoob Joseph, Jerin Jacob Kollanukkaran

Hi Akhil, I'm even less familiar with this section, so no objection from
me either.

On 11/1/2021 10:39 AM, Ananyev, Konstantin wrote:
>
> Hi Akhil,
>
>> Hi Konstantin/Radu,
>>
>>> Adds event vector support to inline protocol offload mode.
>>> By default vector support is disabled; it can be enabled
>>> using the --event-vector option.
>>> Additional options to configure the vector size and vector
>>> timeout are also implemented and can be used by specifying
>>> --vector-size and --vector-tmo.
>>>
>>> Signed-off-by: Srujana Challa <schalla@marvell.com>
>>> ---
>>> Depends-on: series-18262 ("security: Improve inline fast path routines")
>>> Depends-on: series-18322 ("eventdev: simplify Rx adapter event vector
>>> config")
>>>
>>> v2:
>>> * Set rte_event_vector::attr_valid if all packets in the vector use the
>>> same port.
>> Any comments on this patch? It has been in patchwork for quite some time.
>>
> As I understand it, this affects only the event mode of ipsec-secgw, right?
> I am not really familiar with that part, so no comments from me.
> Thanks
> Konstantin
>
>


* [dpdk-dev] [PATCH v3] examples/ipsec-secgw: add support for event vector
  2021-08-26 10:03 [dpdk-dev] [PATCH] examples/ipsec-secgw: add support for event vector Srujana Challa
  2021-09-14 13:14 ` [dpdk-dev] [PATCH v2] " Srujana Challa
@ 2021-11-03  3:24 ` Nithin Dabilpuram
  2021-11-03 16:07   ` Akhil Goyal
  1 sibling, 1 reply; 7+ messages in thread
From: Nithin Dabilpuram @ 2021-11-03  3:24 UTC (permalink / raw)
  To: gakhil, Radu Nicolau; +Cc: dev, jerinj, schalla

From: Srujana Challa <schalla@marvell.com>

Adds event vector support to inline protocol offload mode.
By default vector support is disabled; it can be enabled
using the --event-vector option.
Additional options to configure the vector size and vector
timeout are also implemented and can be used by specifying
--vector-size and --vector-tmo.

Signed-off-by: Srujana Challa <schalla@marvell.com>
---

v3:
* Rebased and updated PKT_* mbuf flags to RTE_MBUF_F_*.

v2:
* Set rte_event_vector::attr_valid if all packets in the vector use the
same port.

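The v3 flag rename tracks the mbuf namespace cleanup in DPDK 21.11, where the
PKT_* offload flags gained an RTE_MBUF_F_ prefix. A small sketch of the
inbound check as it reads with the new names (rx_security_ok() is an
illustrative helper, not from the patch):

    #include <rte_mbuf.h>

    /* PKT_RX_SEC_OFFLOAD        -> RTE_MBUF_F_RX_SEC_OFFLOAD
     * PKT_RX_SEC_OFFLOAD_FAILED -> RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED
     * PKT_TX_SEC_OFFLOAD        -> RTE_MBUF_F_TX_SEC_OFFLOAD */
    static inline int
    rx_security_ok(const struct rte_mbuf *pkt)
    {
        if (!(pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD))
            return 0; /* not security-processed by the NIC */
        return !(pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED);
    }
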
 doc/guides/sample_app_ug/ipsec_secgw.rst |  18 +-
 examples/ipsec-secgw/event_helper.c      |  78 ++++++-
 examples/ipsec-secgw/event_helper.h      |   8 +
 examples/ipsec-secgw/ipsec-secgw.c       |  41 +++-
 examples/ipsec-secgw/ipsec-secgw.h       |   2 +
 examples/ipsec-secgw/ipsec_worker.c      | 350 ++++++++++++++++++++++++++++++-
 6 files changed, 492 insertions(+), 5 deletions(-)

diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 639d309..f9f4324 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -86,6 +86,15 @@ The application supports two modes of operation: poll mode and event mode.
   threads and supports inline protocol only.** It also provides infrastructure for
   non-internal port however does not define any worker threads.
 
+  Event mode also supports event vectorization. Event device and ethernet device
+  pairs which support the capability ``RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR``
+  can aggregate packets based on flow characteristics and generate an
+  ``rte_event`` containing an ``rte_event_vector``.
+  The aggregation size and timeout can be set using the command line options
+  ``--vector-size`` (default 16) and ``--vector-tmo`` (default 102400 ns).
+  Event vectorization is disabled by default and can be enabled using the
+  ``--event-vector`` option.
+
 Additionally the event mode introduces two submodes of processing packets:
 
 * Driver submode: This submode has bare minimum changes in the application to support
@@ -293,7 +302,8 @@ event app mode::
 
     ./<build_dir>/examples/dpdk-ipsec-secgw -c 0x3 -- -P -p 0x3 -u 0x1       \
            -f /path/to/config_file --transfer-mode event \
-           --event-schedule-type parallel                \
+           --event-schedule-type parallel --event-vector --vector-size 32    \
+           --vector-tmo 102400                           \
 
 where each option means:
 
@@ -312,6 +322,12 @@ where each option means:
 
 *   The ``--event-schedule-type`` option selects parallel ordering of event queues.
 
+*   The ``--event-vector`` option enables event vectorization.
+
+*   The ``--vector-size`` option specifies the maximum vector size.
+
+*   The ``--vector-tmo`` option specifies the maximum vectorization timeout in nanoseconds.
+
 
 Refer to the *DPDK Getting Started Guide* for general information on running
 applications and the Environment Abstraction Layer (EAL) options.
diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c
index 8475d54..e8600f5 100644
--- a/examples/ipsec-secgw/event_helper.c
+++ b/examples/ipsec-secgw/event_helper.c
@@ -10,6 +10,10 @@
 #include <stdbool.h>
 
 #include "event_helper.h"
+#include "ipsec-secgw.h"
+
+#define DEFAULT_VECTOR_SIZE  16
+#define DEFAULT_VECTOR_TMO   102400
 
 static volatile bool eth_core_running;
 
@@ -729,6 +733,45 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf)
 }
 
 static int
+eh_event_vector_limits_validate(struct eventmode_conf *em_conf,
+				uint8_t ev_dev_id, uint8_t ethdev_id)
+{
+	struct rte_event_eth_rx_adapter_vector_limits limits = {0};
+	uint16_t vector_size = em_conf->ext_params.vector_size;
+	int ret;
+
+	ret = rte_event_eth_rx_adapter_vector_limits_get(ev_dev_id, ethdev_id,
+							 &limits);
+	if (ret) {
+		EH_LOG_ERR("failed to get vector limits");
+		return ret;
+	}
+
+	if (vector_size < limits.min_sz || vector_size > limits.max_sz) {
+		EH_LOG_ERR("Vector size [%d] not within limits min[%d] max[%d]",
+			   vector_size, limits.min_sz, limits.max_sz);
+		return -EINVAL;
+	}
+
+	if (limits.log2_sz && !rte_is_power_of_2(vector_size)) {
+		EH_LOG_ERR("Vector size [%d] not power of 2", vector_size);
+		return -EINVAL;
+	}
+
+	if (em_conf->vector_tmo_ns > limits.max_timeout_ns ||
+	    em_conf->vector_tmo_ns < limits.min_timeout_ns) {
+		EH_LOG_ERR("Vector timeout [%" PRIu64
+			   "] not within limits max[%" PRIu64
+			   "] min[%" PRIu64 "]",
+			   em_conf->vector_tmo_ns,
+			   limits.max_timeout_ns,
+			   limits.min_timeout_ns);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int
 eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		struct rx_adapter_conf *adapter)
 {
@@ -736,8 +779,10 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 	struct rte_event_dev_info evdev_default_conf = {0};
 	struct rte_event_port_conf port_conf = {0};
 	struct rx_adapter_connection_info *conn;
+	uint32_t service_id, socket_id, nb_elem;
+	struct rte_mempool *vector_pool = NULL;
+	uint32_t lcore_id = rte_lcore_id();
 	uint8_t eventdev_id;
-	uint32_t service_id;
 	int ret;
 	int j;
 
@@ -751,6 +796,20 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		return ret;
 	}
 
+	if (em_conf->ext_params.event_vector) {
+		socket_id = rte_lcore_to_socket_id(lcore_id);
+		nb_elem = (nb_bufs_in_pool / em_conf->ext_params.vector_size)
+			  + 1;
+
+		vector_pool = rte_event_vector_pool_create(
+			"vector_pool", nb_elem, 0,
+			em_conf->ext_params.vector_size,
+			socket_id);
+		if (vector_pool == NULL) {
+			EH_LOG_ERR("failed to create event vector pool");
+			return -ENOMEM;
+		}
+	}
 	/* Setup port conf */
 	port_conf.new_event_threshold = 1200;
 	port_conf.dequeue_depth =
@@ -776,6 +835,20 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf,
 		queue_conf.ev.sched_type = em_conf->ext_params.sched_type;
 		queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV;
 
+		if (em_conf->ext_params.event_vector) {
+			ret = eh_event_vector_limits_validate(em_conf,
+							      eventdev_id,
+							      conn->ethdev_id);
+			if (ret)
+				return ret;
+
+			queue_conf.vector_sz = em_conf->ext_params.vector_size;
+			queue_conf.vector_timeout_ns = em_conf->vector_tmo_ns;
+			queue_conf.vector_mp = vector_pool;
+			queue_conf.rx_queue_flags =
+				RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR;
+		}
+
 		/* Add queue to the adapter */
 		ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id,
 				conn->ethdev_id, conn->ethdev_rx_qid,
@@ -1475,6 +1548,9 @@ eh_conf_init(void)
 
 	rte_bitmap_set(em_conf->eth_core_mask, eth_core_id);
 
+	em_conf->ext_params.vector_size = DEFAULT_VECTOR_SIZE;
+	em_conf->vector_tmo_ns = DEFAULT_VECTOR_TMO;
+
 	return conf;
 
 free_bitmap:
diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h
index b65b343..5be6c62 100644
--- a/examples/ipsec-secgw/event_helper.h
+++ b/examples/ipsec-secgw/event_helper.h
@@ -171,10 +171,18 @@ struct eventmode_conf {
 		 * When enabled, all event queues need to be mapped to
 		 * each event port
 		 */
+			uint64_t event_vector                   : 1;
+		/**<
+		 * Enable event vector; when enabled, the application
+		 * can receive vectors of events.
+		 */
+			uint64_t vector_size                    : 16;
 		};
 		uint64_t u64;
 	} ext_params;
 		/**< 64 bit field to specify extended params */
+	uint64_t vector_tmo_ns;
+		/**< Max vector timeout in nanoseconds */
 };
 
 /**
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 5fcf424..ad92ab5 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -114,6 +114,9 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
 #define CMD_LINE_OPT_REASSEMBLE		"reassemble"
 #define CMD_LINE_OPT_MTU		"mtu"
 #define CMD_LINE_OPT_FRAG_TTL		"frag-ttl"
+#define CMD_LINE_OPT_EVENT_VECTOR	"event-vector"
+#define CMD_LINE_OPT_VECTOR_SIZE	"vector-size"
+#define CMD_LINE_OPT_VECTOR_TIMEOUT	"vector-tmo"
 
 #define CMD_LINE_ARG_EVENT	"event"
 #define CMD_LINE_ARG_POLL	"poll"
@@ -138,6 +141,9 @@ enum {
 	CMD_LINE_OPT_REASSEMBLE_NUM,
 	CMD_LINE_OPT_MTU_NUM,
 	CMD_LINE_OPT_FRAG_TTL_NUM,
+	CMD_LINE_OPT_EVENT_VECTOR_NUM,
+	CMD_LINE_OPT_VECTOR_SIZE_NUM,
+	CMD_LINE_OPT_VECTOR_TIMEOUT_NUM,
 };
 
 static const struct option lgopts[] = {
@@ -151,6 +157,9 @@ static const struct option lgopts[] = {
 	{CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM},
 	{CMD_LINE_OPT_MTU, 1, 0, CMD_LINE_OPT_MTU_NUM},
 	{CMD_LINE_OPT_FRAG_TTL, 1, 0, CMD_LINE_OPT_FRAG_TTL_NUM},
+	{CMD_LINE_OPT_EVENT_VECTOR, 0, 0, CMD_LINE_OPT_EVENT_VECTOR_NUM},
+	{CMD_LINE_OPT_VECTOR_SIZE, 1, 0, CMD_LINE_OPT_VECTOR_SIZE_NUM},
+	{CMD_LINE_OPT_VECTOR_TIMEOUT, 1, 0, CMD_LINE_OPT_VECTOR_TIMEOUT_NUM},
 	{NULL, 0, 0, 0}
 };
 
@@ -163,7 +172,7 @@ static int32_t promiscuous_on = 1;
 static int32_t numa_on = 1; /**< NUMA is enabled by default. */
 static uint32_t nb_lcores;
 static uint32_t single_sa;
-static uint32_t nb_bufs_in_pool;
+uint32_t nb_bufs_in_pool;
 
 /*
  * RX/TX HW offload capabilities to enable/use on ethernet ports.
@@ -1409,6 +1418,9 @@ print_usage(const char *prgname)
 		" [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]"
 		" [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]"
 		" [--" CMD_LINE_OPT_MTU " MTU]"
+		" [--event-vector]"
+		" [--vector-size SIZE]"
+		" [--vector-tmo TIMEOUT in ns]"
 		"\n\n"
 		"  -p PORTMASK: Hexadecimal bitmask of ports to configure\n"
 		"  -P : Enable promiscuous mode\n"
@@ -1462,6 +1474,10 @@ print_usage(const char *prgname)
 		"  --" CMD_LINE_OPT_FRAG_TTL " FRAG_TTL_NS"
 		": fragments lifetime in nanoseconds, default\n"
 		"    and maximum value is 10.000.000.000 ns (10 s)\n"
+		"  --event-vector enables event vectorization\n"
+		"  --vector-size Max vector size (default value: 16)\n"
+		"  --vector-tmo Max vector timeout in nanoseconds\n"
+		"    (default value: 102400)\n"
 		"\n",
 		prgname);
 }
@@ -1628,6 +1644,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 	int32_t option_index;
 	char *prgname = argv[0];
 	int32_t f_present = 0;
+	struct eventmode_conf *em_conf = NULL;
 
 	argvopt = argv;
 
@@ -1813,6 +1830,28 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf)
 			}
 			frag_ttl_ns = ret;
 			break;
+		case CMD_LINE_OPT_EVENT_VECTOR_NUM:
+			em_conf = eh_conf->mode_params;
+			em_conf->ext_params.event_vector = 1;
+			break;
+		case CMD_LINE_OPT_VECTOR_SIZE_NUM:
+			ret = parse_decimal(optarg);
+
+			if (ret == -1 || ret > MAX_PKT_BURST) {
+				printf("Invalid argument for \'%s\': %s\n",
+					CMD_LINE_OPT_VECTOR_SIZE, optarg);
+				print_usage(prgname);
+				return -1;
+			}
+			em_conf = eh_conf->mode_params;
+			em_conf->ext_params.vector_size = ret;
+			break;
+		case CMD_LINE_OPT_VECTOR_TIMEOUT_NUM:
+			ret = parse_decimal(optarg);
+
+			if (ret == -1) {
+				printf("Invalid argument for \'%s\': %s\n",
+					CMD_LINE_OPT_VECTOR_TIMEOUT, optarg);
+				print_usage(prgname);
+				return -1;
+			}
+			em_conf = eh_conf->mode_params;
+			em_conf->vector_tmo_ns = ret;
+			break;
 		default:
 			print_usage(prgname);
 			return -1;
diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h
index 96e22de..73d01d8 100644
--- a/examples/ipsec-secgw/ipsec-secgw.h
+++ b/examples/ipsec-secgw/ipsec-secgw.h
@@ -106,6 +106,8 @@ extern uint32_t single_sa_idx;
 
 extern volatile bool force_quit;
 
+extern uint32_t nb_bufs_in_pool;
+
 static inline uint8_t
 is_unprotected_port(uint16_t port_id)
 {
diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c
index 6d3f72a..7419e85 100644
--- a/examples/ipsec-secgw/ipsec_worker.c
+++ b/examples/ipsec-secgw/ipsec_worker.c
@@ -67,6 +67,25 @@ ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id)
 }
 
 static inline void
+ev_vector_attr_init(struct rte_event_vector *vec)
+{
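+	/* 0xFFFF marks the port as "not yet set"; the first packet
+	 * processed will assign the real Tx port.
+	 */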
+	vec->attr_valid = 1;
+	vec->port = 0xFFFF;
+	vec->queue = 0;
+}
+
+static inline void
+ev_vector_attr_update(struct rte_event_vector *vec, struct rte_mbuf *pkt)
+{
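+	/* Keep the per-vector port attribute only while every packet
+	 * shares the same port; otherwise mark the attributes invalid
+	 * so the Tx adapter reads the port from each mbuf.
+	 */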
+	if (vec->port == 0xFFFF) {
+		vec->port = pkt->port;
+		return;
+	}
+	if (vec->attr_valid && (vec->port != pkt->port))
+		vec->attr_valid = 0;
+}
+
+static inline void
 prepare_out_sessions_tbl(struct sa_ctx *sa_out,
 			 struct port_drv_mode_data *data,
 			 uint16_t size)
@@ -133,6 +152,72 @@ check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx)
 	return 1;
 }
 
+static inline void
+check_sp_bulk(struct sp_ctx *sp, struct traffic_type *ip,
+	      struct traffic_type *ipsec)
+{
+	uint32_t i, j, res;
+	struct rte_mbuf *m;
+
+	if (unlikely(sp == NULL || ip->num == 0))
+		return;
+
+	rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res, ip->num,
+			 DEFAULT_MAX_CATEGORIES);
+
+	j = 0;
+	for (i = 0; i < ip->num; i++) {
+		m = ip->pkts[i];
+		res = ip->res[i];
+		if (unlikely(res == DISCARD))
+			free_pkts(&m, 1);
+		else if (res == BYPASS)
+			ip->pkts[j++] = m;
+		else {
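+			/* ACL match result encodes SA index + 1; store
+			 * the SA index for IPsec processing.
+			 */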
+			ipsec->res[ipsec->num] = res - 1;
+			ipsec->pkts[ipsec->num++] = m;
+		}
+	}
+	ip->num = j;
+}
+
+static inline void
+check_sp_sa_bulk(struct sp_ctx *sp, struct sa_ctx *sa_ctx,
+		 struct traffic_type *ip)
+{
+	struct ipsec_sa *sa;
+	uint32_t i, j, res;
+	struct rte_mbuf *m;
+
+	if (unlikely(sp == NULL || ip->num == 0))
+		return;
+
+	rte_acl_classify((struct rte_acl_ctx *)sp, ip->data, ip->res, ip->num,
+			 DEFAULT_MAX_CATEGORIES);
+
+	j = 0;
+	for (i = 0; i < ip->num; i++) {
+		m = ip->pkts[i];
+		res = ip->res[i];
+		if (unlikely(res == DISCARD))
+			free_pkts(&m, 1);
+		else if (res == BYPASS)
+			ip->pkts[j++] = m;
+		else {
+			sa = *(struct ipsec_sa **)rte_security_dynfield(m);
+			if (sa == NULL) {
+				free_pkts(&m, 1);
+				continue;
+			}
+
+			/* SPI on the packet should match with the one in SA */
+			if (unlikely(sa->spi != sa_ctx->sa[res - 1].spi)) {
+				free_pkts(&m, 1);
+				continue;
+			}
+
+			ip->pkts[j++] = m;
+		}
+	}
+	ip->num = j;
+}
+
 static inline uint16_t
 route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx)
 {
@@ -393,6 +478,248 @@ process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt,
 	return PKT_DROPPED;
 }
 
+static inline int
+ipsec_ev_route_pkts(struct rte_event_vector *vec, struct route_table *rt,
+		    struct ipsec_traffic *t, struct sa_ctx *sa_ctx)
+{
+	struct rte_ipsec_session *sess;
+	uint32_t sa_idx, i, j = 0;
+	uint16_t port_id = 0;
+	struct rte_mbuf *pkt;
+	struct ipsec_sa *sa;
+
+	/* Route IPv4 packets */
+	for (i = 0; i < t->ip4.num; i++) {
+		pkt = t->ip4.pkts[i];
+		port_id = route4_pkt(pkt, rt->rt4_ctx);
+		if (port_id != RTE_MAX_ETHPORTS) {
+			/* Update mac addresses */
+			update_mac_addrs(pkt, port_id);
+			/* Update the event with the dest port */
+			ipsec_event_pre_forward(pkt, port_id);
+			ev_vector_attr_update(vec, pkt);
+			vec->mbufs[j++] = pkt;
+		} else
+			free_pkts(&pkt, 1);
+	}
+
+	/* Route IPv6 packets */
+	for (i = 0; i < t->ip6.num; i++) {
+		pkt = t->ip6.pkts[i];
+		port_id = route6_pkt(pkt, rt->rt6_ctx);
+		if (port_id != RTE_MAX_ETHPORTS) {
+			/* Update mac addresses */
+			update_mac_addrs(pkt, port_id);
+			/* Update the event with the dest port */
+			ipsec_event_pre_forward(pkt, port_id);
+			ev_vector_attr_update(vec, pkt);
+			vec->mbufs[j++] = pkt;
+		} else
+			free_pkts(&pkt, 1);
+	}
+
+	/* Route ESP packets */
+	for (i = 0; i < t->ipsec.num; i++) {
+		/* Validate sa_idx */
+		sa_idx = t->ipsec.res[i];
+		pkt = t->ipsec.pkts[i];
+		if (unlikely(sa_idx >= sa_ctx->nb_sa))
+			free_pkts(&pkt, 1);
+		else {
+			/* Else the packet has to be protected */
+			sa = &(sa_ctx->sa[sa_idx]);
+			/* Get IPsec session */
+			sess = ipsec_get_primary_session(sa);
+			/* Allow only inline protocol for now */
+			if (unlikely(sess->type !=
+				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)) {
+				RTE_LOG(ERR, IPSEC, "SA type not supported\n");
+				free_pkts(&pkt, 1);
+				continue;
+			}
+			rte_security_set_pkt_metadata(sess->security.ctx,
+						sess->security.ses, pkt, NULL);
+
+			pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+			port_id = sa->portid;
+			update_mac_addrs(pkt, port_id);
+			ipsec_event_pre_forward(pkt, port_id);
+			ev_vector_attr_update(vec, pkt);
+			vec->mbufs[j++] = pkt;
+		}
+	}
+
+	return j;
+}
+
+static inline void
+classify_pkt(struct rte_mbuf *pkt, struct ipsec_traffic *t)
+{
+	enum pkt_type type;
+	uint8_t *nlp;
+
+	/* Check the packet type */
+	type = process_ipsec_get_pkt_type(pkt, &nlp);
+
+	switch (type) {
+	case PKT_TYPE_PLAIN_IPV4:
+		t->ip4.data[t->ip4.num] = nlp;
+		t->ip4.pkts[(t->ip4.num)++] = pkt;
+		break;
+	case PKT_TYPE_PLAIN_IPV6:
+		t->ip6.data[t->ip6.num] = nlp;
+		t->ip6.pkts[(t->ip6.num)++] = pkt;
+		break;
+	default:
+		RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type);
+		free_pkts(&pkt, 1);
+		break;
+	}
+}
+
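+/*
+ * Process an inbound vector: drop inline-offload failures, classify
+ * the rest and verify them against the inbound SP/SA databases.
+ */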
+static inline int
+process_ipsec_ev_inbound_vector(struct ipsec_ctx *ctx, struct route_table *rt,
+				struct rte_event_vector *vec)
+{
+	struct ipsec_traffic t;
+	struct rte_mbuf *pkt;
+	uint32_t i;
+
+	t.ip4.num = 0;
+	t.ip6.num = 0;
+	t.ipsec.num = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		/* Get pkt from event */
+		pkt = vec->mbufs[i];
+
+		if (pkt->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD) {
+			if (unlikely(pkt->ol_flags &
+				     RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED)) {
+				RTE_LOG(ERR, IPSEC,
+					"Inbound security offload failed\n");
+				free_pkts(&pkt, 1);
+				continue;
+			}
+		}
+
+		classify_pkt(pkt, &t);
+	}
+
+	check_sp_sa_bulk(ctx->sp4_ctx, ctx->sa_ctx, &t.ip4);
+	check_sp_sa_bulk(ctx->sp6_ctx, ctx->sa_ctx, &t.ip6);
+
+	return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx);
+}
+
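+/*
+ * Process an outbound vector: classify the packets and match them
+ * against the outbound SP database before routing.
+ */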
+static inline int
+process_ipsec_ev_outbound_vector(struct ipsec_ctx *ctx, struct route_table *rt,
+				 struct rte_event_vector *vec)
+{
+	struct ipsec_traffic t;
+	struct rte_mbuf *pkt;
+	uint32_t i;
+
+	t.ip4.num = 0;
+	t.ip6.num = 0;
+	t.ipsec.num = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		/* Get pkt from event */
+		pkt = vec->mbufs[i];
+
+		classify_pkt(pkt, &t);
+
+		/* Provide L2 len for Outbound processing */
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+	}
+
+	check_sp_bulk(ctx->sp4_ctx, &t.ip4, &t.ipsec);
+	check_sp_bulk(ctx->sp6_ctx, &t.ip6, &t.ipsec);
+
+	return ipsec_ev_route_pkts(vec, rt, &t, ctx->sa_ctx);
+}
+
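+/*
+ * Driver submode Tx path: attach the pre-created per-port security
+ * session to every packet and mark it for inline Tx offload.
+ */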
+static inline int
+process_ipsec_ev_drv_mode_outbound_vector(struct rte_event_vector *vec,
+					  struct port_drv_mode_data *data)
+{
+	struct rte_mbuf *pkt;
+	int16_t port_id;
+	uint32_t i;
+	int j = 0;
+
+	for (i = 0; i < vec->nb_elem; i++) {
+		pkt = vec->mbufs[i];
+		port_id = pkt->port;
+
+		if (unlikely(!data[port_id].sess)) {
+			free_pkts(&pkt, 1);
+			continue;
+		}
+		ipsec_event_pre_forward(pkt, port_id);
+		/* Save security session */
+		rte_security_set_pkt_metadata(data[port_id].ctx,
+					      data[port_id].sess, pkt,
+					      NULL);
+
+		/* Mark the packet for Tx security offload */
+		pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+
+		/* Provide L2 len for Outbound processing */
+		pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+		vec->mbufs[j++] = pkt;
+	}
+
+	return j;
+}
+
+static inline void
+ipsec_ev_vector_process(struct lcore_conf_ev_tx_int_port_wrkr *lconf,
+			struct eh_event_link_info *links,
+			struct rte_event *ev)
+{
+	struct rte_event_vector *vec = ev->vec;
+	struct rte_mbuf *pkt;
+	int ret;
+
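+	/* Vectors are aggregated per Rx queue, so all mbufs share the
+	 * same input port; the first mbuf decides the direction.
+	 */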
+	pkt = vec->mbufs[0];
+
+	ev_vector_attr_init(vec);
+	if (is_unprotected_port(pkt->port))
+		ret = process_ipsec_ev_inbound_vector(&lconf->inbound,
+						      &lconf->rt, vec);
+	else
+		ret = process_ipsec_ev_outbound_vector(&lconf->outbound,
+						       &lconf->rt, vec);
+
+	if (likely(ret > 0)) {
+		vec->nb_elem = ret;
+		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+						 links[0].event_port_id,
+						 ev, 1, 0);
+	} else {
+		/* All packets dropped; return the vector to its mempool */
+		rte_mempool_put(rte_mempool_from_obj(vec), vec);
+	}
+}
+
+static inline void
+ipsec_ev_vector_drv_mode_process(struct eh_event_link_info *links,
+				 struct rte_event *ev,
+				 struct port_drv_mode_data *data)
+{
+	struct rte_event_vector *vec = ev->vec;
+	struct rte_mbuf *pkt;
+
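+	/* Only outbound (protected port) vectors need session metadata
+	 * attached; inbound vectors are passed through unchanged.
+	 */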
+	pkt = vec->mbufs[0];
+
+	if (!is_unprotected_port(pkt->port))
+		vec->nb_elem = process_ipsec_ev_drv_mode_outbound_vector(vec,
+									 data);
+	if (vec->nb_elem > 0)
+		rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id,
+						 links[0].event_port_id,
+						 ev, 1, 0);
+	else
+		/* All packets dropped; return the vector to its mempool */
+		rte_mempool_put(rte_mempool_from_obj(vec), vec);
+}
+
 /*
  * Event mode exposes various operating modes depending on the
  * capabilities of the event device and the operating mode
@@ -464,6 +791,19 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links,
 		if (nb_rx == 0)
 			continue;
 
+		switch (ev.event_type) {
+		case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR:
+		case RTE_EVENT_TYPE_ETHDEV_VECTOR:
+			ipsec_ev_vector_drv_mode_process(links, &ev, data);
+			continue;
+		case RTE_EVENT_TYPE_ETHDEV:
+			break;
+		default:
+			RTE_LOG(ERR, IPSEC, "Invalid event type %u\n",
+				ev.event_type);
+			continue;
+		}
+
 		pkt = ev.mbuf;
 		port_id = pkt->port;
 
@@ -573,10 +913,16 @@ ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links,
 		if (nb_rx == 0)
 			continue;
 
-		if (unlikely(ev.event_type != RTE_EVENT_TYPE_ETHDEV)) {
+		switch (ev.event_type) {
+		case RTE_EVENT_TYPE_ETH_RX_ADAPTER_VECTOR:
+		case RTE_EVENT_TYPE_ETHDEV_VECTOR:
+			ipsec_ev_vector_process(&lconf, links, &ev);
+			continue;
+		case RTE_EVENT_TYPE_ETHDEV:
+			break;
+		default:
 			RTE_LOG(ERR, IPSEC, "Invalid event type %u",
 				ev.event_type);
-
 			continue;
 		}
 
-- 
2.8.4



* Re: [dpdk-dev] [PATCH v3] examples/ipsec-secgw: add support for event vector
  2021-11-03  3:24 ` [dpdk-dev] [PATCH v3] " Nithin Dabilpuram
@ 2021-11-03 16:07   ` Akhil Goyal
  0 siblings, 0 replies; 7+ messages in thread
From: Akhil Goyal @ 2021-11-03 16:07 UTC (permalink / raw)
  To: Nithin Kumar Dabilpuram, Radu Nicolau
  Cc: dev, Jerin Jacob Kollanukkaran, Srujana Challa

> From: Srujana Challa <schalla@marvell.com>
> 
> Adds event vector support to inline protocol offload mode.
> By default vector support is disabled, it can be enabled by
> using the option --event-vector.
> Additional options to configure vector size and vector timeout are
> also implemented and can be used by specifying --vector-size and
> --vector-tmo.
> 
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
Acked-by: Akhil Goyal <gakhil@marvell.com>

Added release notes.
Applied to dpdk-next-crypto

Thanks.

