DPDK patches and discussions
 help / color / mirror / Atom feed
* [RFC PATCH 1/2] security: add fallback security processing and Rx inject
@ 2023-08-11 11:45 Anoob Joseph
  2023-08-11 11:45 ` [RFC PATCH 2/2] test/cryptodev: add Rx inject test Anoob Joseph
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-08-11 11:45 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Hemant Agrawal, dev, Vidya Sagar Velumuri, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

Add alternate datapath API for security processing which would do Rx
injection (similar to loopback) after successful security processing.

With inline protocol offload, variable part of the session context
(AR windows, lifetime etc in case of IPsec), is not accessible to the
application. If packets are not getting processed in the inline path
due to non security reasons (such as outer fragmentation or rte_flow
packet steering limitations), then the packet cannot be security
processed as the session context is private to the PMD and security
library doesn't provide alternate APIs to make use of the same session.

Introduce new API and Rx injection as fallback mechanism to security
processing failures due to non-security reasons. For example, when there
is outer fragmentation and PMD doesn't support reassembly of outer
fragments, application would receive fragments which it can then
reassemble. Post successful reassembly, packet can be submitted for
security processing and Rx inject. The packets can be then received in
the application as normal inline protocol processed packets.

Same API can be leveraged in lookaside protocol offload mode to inject
packet to Rx. This would help in using rte_flow based packet parsing
after security processing. For example, with IPsec, this will help in
flow splitting after IPsec processing is done.

In both inline protocol capable ethdevs and lookaside protocol capable
cryptodevs, the packet would be received back in eth port & queue based
on rte_flow rules and packet parsing after security processing. The API
would behave like a loopback but with the additional security
processing.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
 doc/guides/cryptodevs/features/default.ini |  1 +
 doc/guides/nics/features.rst               | 18 +++++
 doc/guides/nics/features/default.ini       |  1 +
 lib/cryptodev/rte_cryptodev.h              |  2 +
 lib/ethdev/rte_ethdev.c                    |  1 +
 lib/ethdev/rte_ethdev.h                    |  2 +
 lib/security/rte_security.h                | 87 ++++++++++++++++++++++
 lib/security/version.map                   |  1 +
 8 files changed, 113 insertions(+)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 6f637fa7e2..f411d4bab7 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -34,6 +34,7 @@ Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key     =
 Inner checksum         =
+Rx inject              =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 1a1dc16c1e..48e3184ad8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -443,6 +443,24 @@ protocol operations. See security library and PMD documentation for more details
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
 
 
+.. _nic_features_rx_inject_doc:
+
+Rx inject
+---------
+
+Supports Rx inject to handle packets that failed inline protocol offload
+processing but need to be handled with same security session. The NIC is
+capable of processing the packet same way as regular inline protocol processed
+packets and would be received on ethdev queue based on rte_flow rules
+configured.
+
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SEC_RX_INJECT``,
+* **[uses]       mbuf**: ``mbuf.l2_len``.
+* **[implements] rte_security_ctx**: ``inb_pkt_rx_inject``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SEC_RX_INJECT``.
+* **[related]    API**: ``rte_security_inb_pkt_rx_inject``.
+
+
 .. _nic_features_crc_offload:
 
 CRC offload
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 2011e97127..0a1f8dc54b 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -44,6 +44,7 @@ Rate limitation      =
 Congestion management =
 Inline crypto        =
 Inline protocol      =
+Rx inject            =
 CRC offload          =
 VLAN offload         =
 QinQ offload         =
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index ba730373fb..c3306b12b4 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -536,6 +536,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support wrapped key in cipher xform  */
 #define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
 /**< Support inner checksum computation/verification */
+#define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
+/**< Support Rx injection after security processing */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 0840d2b594..ae1c7619d1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -106,6 +106,7 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
 	RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_RX_OFFLOAD_BIT2STR(SEC_RX_INJECT),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04a2564f22..7054323c86 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1517,6 +1517,8 @@ struct rte_eth_conf {
 #define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  RTE_BIT64(18)
 #define RTE_ETH_RX_OFFLOAD_RSS_HASH         RTE_BIT64(19)
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT     RTE_BIT64(20)
+#define RTE_ETH_RX_OFFLOAD_SEC_RX_INJECT    RTE_BIT64(21)
+#define DEV_RX_OFFLOAD_SEC_RX_INJECT        RTE_ETH_RX_OFFLOAD_SEC_RX_INJECT
 
 #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
 				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 3b2df526ba..9c1b89cc3a 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -55,6 +55,31 @@ enum rte_security_ipsec_tunnel_type {
  */
 #define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR     0x1
 #define RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR 0x2
+extern struct rte_security_session **sess;
+
+/**
+ * Perform security processing of the packet and do an Rx inject after the
+ * packet is processed.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing.
+ *
+ * @param	device		Crypto/eth device pointer
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				*rte_security_session* structures corresponding
+ *				to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ */
+typedef uint16_t (*security_inb_pkt_rx_inject)(void *device,
+		struct rte_mbuf **pkts, struct rte_security_session **sess,
+		uint16_t nb_pkts);
 
 /**
  * Security context for crypto/eth devices
@@ -78,6 +103,8 @@ struct rte_security_ctx {
 	/**< Number of MACsec SA attached to this context */
 	uint32_t flags;
 	/**< Flags for security context */
+	security_inb_pkt_rx_inject inb_pkt_rx_inject;
+	/**< Perform security processing and do Rx inject */
 };
 
 #define RTE_SEC_CTX_F_FAST_SET_MDATA 0x00000001
@@ -969,6 +996,66 @@ rte_security_attach_session(struct rte_crypto_op *op,
 	return __rte_security_attach_session(op->sym, sess);
 }
 
+/**
+ * Perform security processing of packets and do Rx inject after processing.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing. In case of ethdev loopback, application would be
+ * submitting packets to ethdev Tx queues and would be received as is from
+ * ethdev Rx queues. With Rx inject, packets would be received after security
+ * processing from ethdev Rx queues.
+ *
+ * With inline protocol offload capable ethdevs, Rx injection can be used to
+ * handle packets which failed the regular security Rx path. This can be due to
+ * cases such as outer fragmentation, in which case applications can reassemble
+ * the fragments and then subsequently submit for inbound processing and Rx
+ * injection, so that packets are received as regular security processed
+ * packets.
+ *
+ * With lookaside protocol offload capable cryptodevs, Rx injection can be used
+ * to perform packet parsing after security processing. This would allow for
+ * re-classification after security protocol processing is done. The ethdev port
+ * on which the packet would be received would be based on rte_flow rules
+ * matching the packet after security processing. Also, since the packet would
+ * be identical to an inline protocol processed packet, eth devices should have
+ * security enabled (`RTE_ETHDEV_RX_SECURITY_F`).
+ *
+ * Since the packet would be received back from ethdev Rx queues, it is expected
+ * that application retains/adds L2 header with the mbuf field 'l2_len'
+ * reflecting the size of L2 header in the packet.
+ *
+ * If `hash.fdir.h` field is set in mbuf, it would be treated as the value for
+ * `MARK` pattern for the subsequent rte_flow parsing.
+ *
+ * @param	ctx		Security ctx
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				*rte_security_session* structures corresponding
+ *				to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ */
+__rte_experimental
+static inline uint16_t
+rte_security_inb_pkt_rx_inject(struct rte_security_ctx *ctx,
+			       struct rte_mbuf **pkts,
+			       struct rte_security_session **sess,
+			       uint16_t nb_pkts)
+{
+#ifdef RTE_DEBUG
+	RTE_PTR_OR_ERR_RET(ctx, 0);
+	RTE_PTR_OR_ERR_RET(ctx->ops, 0);
+	RTE_FUNC_PTR_OR_ERR_RET(ctx->inb_pkt_rx_inject, 0);
+#endif
+	return ctx->inb_pkt_rx_inject(ctx->device, pkts, sess, nb_pkts);
+}
+
+
 struct rte_security_macsec_secy_stats {
 	uint64_t ctl_pkt_bcast_cnt;
 	uint64_t ctl_pkt_mcast_cnt;
diff --git a/lib/security/version.map b/lib/security/version.map
index b2097a969d..99d43dbeef 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -15,6 +15,7 @@ EXPERIMENTAL {
 
 	__rte_security_set_pkt_metadata;
 	rte_security_dynfield_offset;
+	rte_security_inb_pkt_rx_inject;
 	rte_security_macsec_sa_create;
 	rte_security_macsec_sa_destroy;
 	rte_security_macsec_sa_stats_get;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [RFC PATCH 2/2] test/cryptodev: add Rx inject test
  2023-08-11 11:45 [RFC PATCH 1/2] security: add fallback security processing and Rx inject Anoob Joseph
@ 2023-08-11 11:45 ` Anoob Joseph
  2023-08-24  7:55 ` [RFC PATCH 1/2] security: add fallback security processing and Rx inject Akhil Goyal
  2023-09-29  7:16 ` [PATCH v2 " Anoob Joseph
  2 siblings, 0 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-08-11 11:45 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Vidya Sagar Velumuri, Hemant Agrawal, dev, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

From: Vidya Sagar Velumuri <vvelumuri@marvell.com>

Add test to verify Rx inject.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
 app/test/test_cryptodev.c                | 326 +++++++++++++++++++----
 app/test/test_cryptodev_security_ipsec.h |   1 +
 2 files changed, 274 insertions(+), 53 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index fb2af40b99..b74bbb1348 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17,6 +17,7 @@
 
 #include <rte_crypto.h>
 #include <rte_cryptodev.h>
+#include <rte_ethdev.h>
 #include <rte_ip.h>
 #include <rte_string_fns.h>
 #include <rte_tcp.h>
@@ -1426,6 +1427,81 @@ ut_setup_security(void)
 	return dev_configure_and_start(0);
 }
 
+static int
+ut_setup_security_rx_inject(void)
+{
+	struct rte_mempool *mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_eth_conf port_conf = {
+		.rxmode = {
+			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				    RTE_ETH_RX_OFFLOAD_SECURITY,
+		},
+		.txmode = {
+			.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+		},
+		.lpbk_mode = 1,  /* Enable loopback */
+	};
+	struct rte_cryptodev_info dev_info;
+	struct rte_eth_rxconf rx_conf = {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 8,
+		},
+		.rx_free_thresh = 32,
+	};
+	uint16_t nb_ports;
+	int ret;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT)) {
+		RTE_LOG(INFO, USER1, "Feature requirements for IPsec Rx inject test case not met\n"
+		       );
+		return TEST_SKIPPED;
+	}
+
+	nb_ports = rte_eth_dev_count_avail();
+	if (nb_ports == 0)
+		return TEST_SKIPPED;
+
+	ret = rte_eth_dev_configure(0 /* port_id */,
+				    1 /* nb_rx_queue */,
+				    0 /* nb_tx_queue */,
+				    &port_conf);
+	if (ret) {
+		printf("Could not configure ethdev port 0 [err=%d]\n", ret);
+		return TEST_SKIPPED;
+	}
+
+	/* Rx queue setup */
+	ret = rte_eth_rx_queue_setup(0 /* port_id */,
+				     0 /* rx_queue_id */,
+				     1024 /* nb_rx_desc */,
+				     SOCKET_ID_ANY,
+				     &rx_conf,
+				     mbuf_pool);
+	if (ret) {
+		printf("Could not setup eth port 0 queue 0\n");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_dev_start(0);
+	if (ret) {
+		printf("Could not start ethdev");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_promiscuous_enable(0);
+	if (ret) {
+		printf("Could not enable promiscuous mode");
+		return TEST_SKIPPED;
+	}
+
+	/* Configure and start cryptodev with no features disabled */
+	return dev_configure_and_start(0);
+}
+
 void
 ut_teardown(void)
 {
@@ -1478,6 +1554,21 @@ ut_teardown(void)
 	rte_cryptodev_stop(ts_params->valid_devs[0]);
 }
 
+static void
+ut_teardown_rx_inject(void)
+{
+	int ret;
+
+	if  (rte_eth_dev_count_avail() != 0) {
+		ret = rte_eth_dev_reset(0);
+		if (ret)
+			printf("Could not reset eth port 0");
+
+	}
+
+	ut_teardown();
+}
+
 static int
 test_device_configure_invalid_dev_id(void)
 {
@@ -9748,6 +9839,145 @@ test_PDCP_SDAP_PROTO_decap_all(void)
 	return (all_err == TEST_SUCCESS) ? TEST_SUCCESS : TEST_FAILED;
 }
 
+static int
+test_ipsec_proto_crypto_op_enq(struct crypto_testsuite_params *ts_params,
+			       struct crypto_unittest_params *ut_params,
+			       struct rte_security_ipsec_xform *ipsec_xform,
+			       const struct ipsec_test_data *td,
+			       const struct ipsec_test_flags *flags,
+			       int pkt_num)
+{
+	uint8_t dev_id = ts_params->valid_devs[0];
+	enum rte_security_ipsec_sa_direction dir;
+	int ret;
+
+	dir = ipsec_xform->direction;
+
+	/* Generate crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	if (!ut_params->op) {
+		printf("Could not allocate crypto op");
+		return TEST_FAILED;
+	}
+
+	/* Attach session to operation */
+	rte_security_attach_session(ut_params->op, ut_params->sec_session);
+
+	/* Set crypto operation mbufs */
+	ut_params->op->sym->m_src = ut_params->ibuf;
+	ut_params->op->sym->m_dst = NULL;
+
+	/* Copy IV in crypto operation when IV generation is disabled */
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
+	    ipsec_xform->options.iv_gen_disable == 1) {
+		uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
+							uint8_t *,
+							IV_OFFSET);
+		int len;
+
+		if (td->aead)
+			len = td->xform.aead.aead.iv.length;
+		else if (td->aes_gmac)
+			len = td->xform.chain.auth.auth.iv.length;
+		else
+			len = td->xform.chain.cipher.cipher.iv.length;
+
+		memcpy(iv, td->iv.data, len);
+	}
+
+	/* Process crypto operation */
+	process_crypto_request(dev_id, ut_params->op);
+
+	ret = test_ipsec_status_check(td, ut_params->op, flags, dir, pkt_num);
+
+	rte_crypto_op_free(ut_params->op);
+	ut_params->op = NULL;
+
+	return ret;
+}
+
+static int
+test_ipsec_proto_mbuf_enq(struct crypto_testsuite_params *ts_params,
+			  struct crypto_unittest_params *ut_params,
+			  struct rte_security_ctx *ctx)
+{
+	struct rte_security_session **sec_sess;
+	struct rte_security_ctx *eth_sec_ctx;
+	struct rte_ether_hdr *hdr;
+	struct rte_mbuf *m;
+	uint64_t timeout;
+	void *userdata;
+	int ret;
+
+	RTE_SET_USED(ts_params);
+
+	hdr = (void *)rte_pktmbuf_prepend(ut_params->ibuf, sizeof(struct rte_ether_hdr));
+	hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+
+	ut_params->ibuf->l2_len = sizeof(struct rte_ether_hdr);
+
+	sec_sess = (struct rte_security_session **)&ut_params->sec_session;
+	ret = rte_security_inb_pkt_rx_inject(ctx, &ut_params->ibuf, sec_sess, 1);
+
+	if (ret != 1)
+		return TEST_FAILED;
+
+	ut_params->ibuf = NULL;
+
+	/* Add a timeout for 1 s */
+	timeout = rte_get_tsc_cycles() + rte_get_tsc_hz();
+
+	do {
+		/* Get packet from port 0, queue 0 */
+		ret = rte_eth_rx_burst(0, 0, &m, 1);
+	} while ((ret == 0) && (rte_get_tsc_cycles() > timeout));
+
+	if (ret == 0) {
+		printf("Could not receive packets from ethdev\n");
+		return TEST_FAILED;
+	}
+
+	if (m == NULL) {
+		printf("Received mbuf is NULL\n");
+		return TEST_FAILED;
+	}
+
+	ut_params->ibuf = m;
+
+	if (!(m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+		printf("Received packet is not Rx security processed\n");
+		return TEST_FAILED;
+	}
+
+	if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) {
+		printf("Received packet has failed Rx security processing\n");
+		return TEST_FAILED;
+	}
+
+	eth_sec_ctx = rte_eth_dev_get_sec_ctx(0);
+	if (eth_sec_ctx == NULL) {
+		printf("Could not fetch ethdev sec ctx\n");
+		return TEST_FAILED;
+	}
+
+	/*
+	 * 'ut_params' is set as userdata. Verify that the field is returned
+	 * correctly.
+	 */
+
+	userdata = rte_security_dynfield(m);
+	if (userdata != ut_params) {
+		printf("Userdata retrieved not matching expected\n");
+		return TEST_FAILED;
+	}
+
+	/* Trim L2 header */
+	rte_pktmbuf_adj(m, sizeof(struct rte_ether_hdr));
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_ipsec_proto_process(const struct ipsec_test_data td[],
 			 struct ipsec_test_data res_d[],
@@ -9937,6 +10167,9 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		}
 	}
 
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && flags->rx_inject)
+		sess_conf.userdata = ut_params;
+
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
 					ts_params->session_mpool);
@@ -9959,58 +10192,29 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 
 		/* Copy test data before modification */
 		memcpy(input_text, td[i].input_text.data, td[i].input_text.len);
-		if (test_ipsec_pkt_update(input_text, flags))
-			return TEST_FAILED;
+		if (test_ipsec_pkt_update(input_text, flags)) {
+			ret = TEST_FAILED;
+			goto mbuf_free;
+		}
 
 		/* Setup source mbuf payload */
 		ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool, td[i].input_text.len,
 				nb_segs, 0);
 		pktmbuf_write(ut_params->ibuf, 0, td[i].input_text.len, input_text);
 
-		/* Generate crypto op data structure */
-		ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
-					RTE_CRYPTO_OP_TYPE_SYMMETRIC);
-		if (!ut_params->op) {
-			printf("TestCase %s line %d: %s\n",
-				__func__, __LINE__,
-				"failed to allocate crypto op");
-			ret = TEST_FAILED;
-			goto crypto_op_free;
-		}
-
-		/* Attach session to operation */
-		rte_security_attach_session(ut_params->op,
-					    ut_params->sec_session);
-
-		/* Set crypto operation mbufs */
-		ut_params->op->sym->m_src = ut_params->ibuf;
-		ut_params->op->sym->m_dst = NULL;
-
-		/* Copy IV in crypto operation when IV generation is disabled */
-		if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
-		    ipsec_xform.options.iv_gen_disable == 1) {
-			uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
-								uint8_t *,
-								IV_OFFSET);
-			int len;
-
-			if (td[i].aead)
-				len = td[i].xform.aead.aead.iv.length;
-			else if (td[i].aes_gmac)
-				len = td[i].xform.chain.auth.auth.iv.length;
-			else
-				len = td[i].xform.chain.cipher.cipher.iv.length;
-
-			memcpy(iv, td[i].iv.data, len);
-		}
-
-		/* Process crypto operation */
-		process_crypto_request(dev_id, ut_params->op);
+		if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
+		    flags->rx_inject)
+			ret = test_ipsec_proto_mbuf_enq(ts_params, ut_params,
+							ctx);
+		else
+			ret = test_ipsec_proto_crypto_op_enq(ts_params,
+							     ut_params,
+							     &ipsec_xform,
+							     &td[i], flags,
+							     i + 1);
 
-		ret = test_ipsec_status_check(&td[i], ut_params->op, flags, dir,
-					      i + 1);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		if (res_d != NULL)
 			res_d_tmp = &res_d[i];
@@ -10018,24 +10222,18 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		ret = test_ipsec_post_process(ut_params->ibuf, &td[i],
 					      res_d_tmp, silent, flags);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		ret = test_ipsec_stats_verify(ctx, ut_params->sec_session,
 					      flags, dir);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
-
-		rte_crypto_op_free(ut_params->op);
-		ut_params->op = NULL;
+			goto mbuf_free;
 
 		rte_pktmbuf_free(ut_params->ibuf);
 		ut_params->ibuf = NULL;
 	}
 
-crypto_op_free:
-	rte_crypto_op_free(ut_params->op);
-	ut_params->op = NULL;
-
+mbuf_free:
 	rte_pktmbuf_free(ut_params->ibuf);
 	ut_params->ibuf = NULL;
 
@@ -10100,6 +10298,24 @@ test_ipsec_proto_known_vec_fragmented(const void *test_data)
 	return test_ipsec_proto_process(&td_outb, NULL, 1, false, &flags);
 }
 
+static int
+test_ipsec_proto_known_vec_inb_rx_inject(const void *test_data)
+{
+	const struct ipsec_test_data *td = test_data;
+	struct ipsec_test_flags flags;
+	struct ipsec_test_data td_inb;
+
+	memset(&flags, 0, sizeof(flags));
+	flags.rx_inject = true;
+
+	if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		test_ipsec_td_in_from_out(td, &td_inb);
+	else
+		memcpy(&td_inb, td, sizeof(td_inb));
+
+	return test_ipsec_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
 static int
 test_ipsec_proto_all(const struct ipsec_test_flags *flags)
 {
@@ -16133,6 +16349,10 @@ static struct unit_test_suite ipsec_proto_testsuite  = {
 			"Tunnel header IPv6 decrement inner hop limit",
 			ut_setup_security, ut_teardown,
 			test_ipsec_proto_ipv6_hop_limit_decrement),
+		TEST_CASE_NAMED_WITH_DATA(
+			"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128) Rx inject",
+			ut_setup_security_rx_inject, ut_teardown_rx_inject,
+			test_ipsec_proto_known_vec_inb_rx_inject, &pkt_aes_128_gcm),
 		TEST_CASE_NAMED_ST(
 			"Multi-segmented mode",
 			ut_setup_security, ut_teardown,
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 92e641ba0b..29fe0af6c6 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -110,6 +110,7 @@ struct ipsec_test_flags {
 	bool ah;
 	uint32_t plaintext_len;
 	int nb_segs_in_mbuf;
+	bool rx_inject;
 };
 
 struct crypto_param {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [RFC PATCH 1/2] security: add fallback security processing and Rx inject
  2023-08-11 11:45 [RFC PATCH 1/2] security: add fallback security processing and Rx inject Anoob Joseph
  2023-08-11 11:45 ` [RFC PATCH 2/2] test/cryptodev: add Rx inject test Anoob Joseph
@ 2023-08-24  7:55 ` Akhil Goyal
  2023-09-29  7:16 ` [PATCH v2 " Anoob Joseph
  2 siblings, 0 replies; 11+ messages in thread
From: Akhil Goyal @ 2023-08-24  7:55 UTC (permalink / raw)
  To: Anoob Joseph, Jerin Jacob Kollanukkaran, Konstantin Ananyev
  Cc: Hemant Agrawal, dev, Vidya Sagar Velumuri, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power, marcinx.smoczynski

> +/**
> + * Perform security processing of packets and do Rx inject after processing.
> + *
> + * Rx inject would behave similarly to ethdev loopback but with the additional
> + * security processing. In case of ethdev loopback, application would be
> + * submitting packets to ethdev Tx queues and would be received as is from
> + * ethdev Rx queues. With Rx inject, packets would be received after security
> + * processing from ethdev Rx queues.
> + *
> + * With inline protocol offload capable ethdevs, Rx injection can be used to
> + * handle packets which failed the regular security Rx path. This can be due to
> + * cases such as outer fragmentation, in which case applications can
> reassemble
> + * the fragments and then subsequently submit for inbound processing and Rx
> + * injection, so that packets are received as regular security processed
> + * packets.
> + *
> + * With lookaside protocol offload capable cryptodevs, Rx injection can be
> used
> + * to perform packet parsing after security processing. This would allow for
> + * re-classification after security protocol processing is done. The ethdev port
> + * on which the packet would be received would be based on rte_flow rules
> + * matching the packet after security processing. Also, since the packet would
> + * be identical to an inline protocol processed packet, eth devices should have
> + * security enabled (`RTE_ETHDEV_RX_SECURITY_F`).
> + *
> + * Since the packet would be received back from ethdev Rx queues, it is
> expected
> + * that application retains/adds L2 header with the mbuf field 'l2_len'
> + * reflecting the size of L2 header in the packet.
> + *
> + * If `hash.fdir.h` field is set in mbuf, it would be treated as the value for
> + * `MARK` pattern for the subsequent rte_flow parsing.
> + *
> + * @param	ctx		Security ctx
> + * @param	pkts		The address of an array of *nb_pkts* pointers
> to
> + *				*rte_mbuf* structures which contain the
> packets.
> + * @param	sess		The address of an array of *nb_pkts* pointers
> to
> + *				*rte_security_session* structures
> corresponding
> + *				to each packet.
> + * @param	nb_pkts		The maximum number of packets to process.
> + *
> + * @return
> + *   The number of packets successfully injected to ethdev Rx. The return
> + *   value can be less than the value of the *nb_pkts* parameter when the
> + *   PMD internal queues have been filled up.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_security_inb_pkt_rx_inject(struct rte_security_ctx *ctx,
> +			       struct rte_mbuf **pkts,
> +			       struct rte_security_session **sess,
> +			       uint16_t nb_pkts)

rte_security_session is internal to library and not exposed.
Also security_ctx is planned to be made internal.
Can we make this as non-inline function and add as part of rte_security_op?
I believe this is a fallback flow, which means not very performance intensive.

> +{
> +#ifdef RTE_DEBUG
> +	RTE_PTR_OR_ERR_RET(ctx, 0);
> +	RTE_PTR_OR_ERR_RET(ctx->ops, 0);
> +	RTE_FUNC_PTR_OR_ERR_RET(ctx->inb_pkt_rx_inject, 0);
> +#endif
> +	return ctx->inb_pkt_rx_inject(ctx->device, pkts, sess, nb_pkts);
> +}
> +
> +
>  struct rte_security_macsec_secy_stats {
>  	uint64_t ctl_pkt_bcast_cnt;
>  	uint64_t ctl_pkt_mcast_cnt;
> diff --git a/lib/security/version.map b/lib/security/version.map
> index b2097a969d..99d43dbeef 100644
> --- a/lib/security/version.map
> +++ b/lib/security/version.map
> @@ -15,6 +15,7 @@ EXPERIMENTAL {
> 
>  	__rte_security_set_pkt_metadata;
>  	rte_security_dynfield_offset;
> +	rte_security_inb_pkt_rx_inject;
>  	rte_security_macsec_sa_create;
>  	rte_security_macsec_sa_destroy;
>  	rte_security_macsec_sa_stats_get;
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 1/2] security: add fallback security processing and Rx inject
  2023-08-11 11:45 [RFC PATCH 1/2] security: add fallback security processing and Rx inject Anoob Joseph
  2023-08-11 11:45 ` [RFC PATCH 2/2] test/cryptodev: add Rx inject test Anoob Joseph
  2023-08-24  7:55 ` [RFC PATCH 1/2] security: add fallback security processing and Rx inject Akhil Goyal
@ 2023-09-29  7:16 ` Anoob Joseph
  2023-09-29  7:16   ` [PATCH v2 2/2] test/cryptodev: add Rx inject test Anoob Joseph
  2023-09-29 15:39   ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Anoob Joseph
  2 siblings, 2 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-09-29  7:16 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Hemant Agrawal, dev, Vidya Sagar Velumuri, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

Add alternate datapath API for security processing which would do Rx
injection (similar to loopback) after successful security processing.

With inline protocol offload, variable part of the session context
(AR windows, lifetime etc in case of IPsec), is not accessible to the
application. If packets are not getting processed in the inline path
due to non security reasons (such as outer fragmentation or rte_flow
packet steering limitations), then the packet cannot be security
processed as the session context is private to the PMD and security
library doesn't provide alternate APIs to make use of the same session.

Introduce new API and Rx injection as fallback mechanism to security
processing failures due to non-security reasons. For example, when there
is outer fragmentation and PMD doesn't support reassembly of outer
fragments, application would receive fragments which it can then
reassemble. Post successful reassembly, packet can be submitted for
security processing and Rx inject. The packets can be then received in
the application as normal inline protocol processed packets.

Same API can be leveraged in lookaside protocol offload mode to inject
packet to Rx. This would help in using rte_flow based packet parsing
after security processing. For example, with IPsec, this will help in
inner parsing and flow splitting after IPsec processing is done.

In both inline protocol capable ethdevs and lookaside protocol capable
cryptodevs, the packet would be received back in eth port & queue based
on rte_flow rules and packet parsing after security processing. The API
would behave like a loopback but with the additional security
processing.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
v2:
* Added a new API for configuring security device to do Rx inject to a specific
  ethdev port
* Rebased

 doc/guides/cryptodevs/features/default.ini |  1 +
 lib/cryptodev/rte_cryptodev.h              |  2 +
 lib/security/rte_security.c                | 22 ++++++
 lib/security/rte_security.h                | 85 ++++++++++++++++++++++
 lib/security/rte_security_driver.h         | 44 +++++++++++
 lib/security/version.map                   |  3 +
 6 files changed, 157 insertions(+)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 6f637fa7e2..f411d4bab7 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -34,6 +34,7 @@ Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key     =
 Inner checksum         =
+Rx inject              =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 9f07e1ed2c..05aabb6526 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -534,6 +534,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support wrapped key in cipher xform  */
 #define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
 /**< Support inner checksum computation/verification */
+#define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
+/**< Support Rx injection after security processing */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index ab44bbe0f0..fa8d2bb7ce 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -321,6 +321,28 @@ rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
 	return NULL;
 }
 
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
+{
+	struct rte_security_ctx *instance = ctx;
+
+	RTE_PTR_OR_ERR_RET(instance, -EINVAL);
+	RTE_PTR_OR_ERR_RET(instance->ops, -ENOTSUP);
+	RTE_PTR_OR_ERR_RET(instance->ops->rx_inject_configure, -ENOTSUP);
+
+	return instance->ops->rx_inject_configure(instance->device, port_id, enable);
+}
+
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+			       uint16_t nb_pkts)
+{
+	struct rte_security_ctx *instance = ctx;
+
+	return instance->ops->inb_pkt_rx_inject(instance->device, pkts,
+						(struct rte_security_session **)sess, nb_pkts);
+}
+
 static int
 security_handle_cryptodev_list(const char *cmd __rte_unused,
 			       const char *params __rte_unused,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index c9cc7a45a6..fe8e8e9813 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -1310,6 +1310,91 @@ const struct rte_security_capability *
 rte_security_capability_get(void *instance,
 			    struct rte_security_capability_idx *idx);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Configure security device to inject packets to an ethdev port.
+ *
+ * This API must be called only when both security device and the ethdev is in
+ * stopped state. The security device need to be configured before any packets
+ * are submitted to ``rte_security_inb_pkt_rx_inject`` API.
+ *
+ * @param	ctx		Security ctx
+ * @param	port_id		Port identifier of the ethernet device to which
+ *				packets need to be injected.
+ * @param	enable		Flag to enable and disable connection between a
+ *				security device and an ethdev port.
+ * @return
+ *   - 0 if successful.
+ *   - -EINVAL if context NULL or port_id is invalid.
+ *   - -EBUSY if devices are not in stopped state.
+ *   - -ENOTSUP if security device does not support injecting to the ethdev
+ *      port.
+ *
+ * @see rte_security_inb_pkt_rx_inject
+ */
+__rte_experimental
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Perform security processing of packets and inject the processed packet to
+ * ethdev Rx.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing. In case of ethdev loopback, application would be
+ * submitting packets to ethdev Tx queues and would be received as is from
+ * ethdev Rx queues. With Rx inject, packets would be received after security
+ * processing from ethdev Rx queues.
+ *
+ * With inline protocol offload capable ethdevs, Rx injection can be used to
+ * handle packets which failed the regular security Rx path. This can be due to
+ * cases such as outer fragmentation, in which case applications can reassemble
+ * the fragments and then subsequently submit for inbound processing and Rx
+ * injection, so that packets are received as regular security processed
+ * packets.
+ *
+ * With lookaside protocol offload capable cryptodevs, Rx injection can be used
+ * to perform packet parsing after security processing. This would allow for
+ * re-classification after security protocol processing is done (ie, inner
+ * packet parsing). The ethdev queue on which the packet would be received would
+ * be based on rte_flow rules matching the packet after security processing.
+ *
+ * The security device which is injecting packets to ethdev Rx need to be
+ * configured using ``rte_security_rx_inject_configure`` with enable flag set
+ * to `true` before any packets are submitted.
+ *
+ * If `hash.fdir.h` field is set in mbuf, it would be treated as the value for
+ * `MARK` pattern for the subsequent rte_flow parsing. The packet would appear
+ * as if it is received from `port` field in mbuf.
+ *
+ * Since the packet would be received back from ethdev Rx queues, it is expected
+ * that application retains/adds L2 header with the mbuf field 'l2_len'
+ * reflecting the size of L2 header in the packet.
+ *
+ * @param	ctx		Security ctx
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				security sessions corresponding to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ *
+ * @see rte_security_rx_inject_configure
+ */
+__rte_experimental
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+			       uint16_t nb_pkts);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index e5e1c4cfe8..62664dacdb 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -257,6 +257,46 @@ typedef int (*security_set_pkt_metadata_t)(void *device,
 typedef const struct rte_security_capability *(*security_capabilities_get_t)(
 		void *device);
 
+/**
+ * Configure security device to inject packets to an ethdev port.
+ *
+ * @param	device		Crypto/eth device pointer
+ * @param	port_id		Port identifier of the ethernet device to which packets need to be
+ *				injected.
+ * @param	enable		Flag to enable and disable connection between a security device and
+ *				an ethdev port.
+ * @return
+ *   - 0 if successful.
+ *   - -EINVAL if context NULL or port_id is invalid.
+ *   - -EBUSY if devices are not in stopped state.
+ *   - -ENOTSUP if security device does not support injecting to the ethdev port.
+ */
+typedef int (*security_rx_inject_configure)(void *device, uint16_t port_id, bool enable);
+
+/**
+ * Perform security processing of packets and inject the processed packet to
+ * ethdev Rx.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing.
+ *
+ * @param	device		Crypto/eth device pointer
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				*rte_security_session* structures corresponding
+ *				to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ */
+typedef uint16_t (*security_inb_pkt_rx_inject)(void *device,
+		struct rte_mbuf **pkts, struct rte_security_session **sess,
+		uint16_t nb_pkts);
+
 /** Security operations function pointer table */
 struct rte_security_ops {
 	security_session_create_t session_create;
@@ -285,6 +325,10 @@ struct rte_security_ops {
 	/**< Get MACsec SC statistics. */
 	security_macsec_sa_stats_get_t macsec_sa_stats_get;
 	/**< Get MACsec SA statistics. */
+	security_rx_inject_configure rx_inject_configure;
+	/**< Rx inject configure. */
+	security_inb_pkt_rx_inject inb_pkt_rx_inject;
+	/**< Perform security processing and do Rx inject. */
 };
 
 #ifdef __cplusplus
diff --git a/lib/security/version.map b/lib/security/version.map
index 86f976a302..e07fca33a1 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -24,6 +24,9 @@ EXPERIMENTAL {
 	rte_security_session_stats_get;
 	rte_security_session_update;
 	rte_security_oop_dynfield_offset;
+
+	rte_security_rx_inject_configure;
+	rte_security_inb_pkt_rx_inject;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 2/2] test/cryptodev: add Rx inject test
  2023-09-29  7:16 ` [PATCH v2 " Anoob Joseph
@ 2023-09-29  7:16   ` Anoob Joseph
  2023-09-29 15:39   ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Anoob Joseph
  1 sibling, 0 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-09-29  7:16 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Vidya Sagar Velumuri, Hemant Agrawal, dev, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

From: Vidya Sagar Velumuri <vvelumuri@marvell.com>

Add test to verify Rx inject. The test case added would push a known
vector to cryptodev which would be injected to ethdev Rx. The test
case verifies that the packet is received from ethdev Rx and is
processed successfully. It also verifies that the userdata matches with
the expectation.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
 app/test/test_cryptodev.c                | 341 +++++++++++++++++++----
 app/test/test_cryptodev_security_ipsec.h |   1 +
 2 files changed, 289 insertions(+), 53 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index f2112e181e..420f60553d 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17,6 +17,7 @@
 
 #include <rte_crypto.h>
 #include <rte_cryptodev.h>
+#include <rte_ethdev.h>
 #include <rte_ip.h>
 #include <rte_string_fns.h>
 #include <rte_tcp.h>
@@ -1426,6 +1427,93 @@ ut_setup_security(void)
 	return dev_configure_and_start(0);
 }
 
+static int
+ut_setup_security_rx_inject(void)
+{
+	struct rte_mempool *mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_eth_conf port_conf = {
+		.rxmode = {
+			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				    RTE_ETH_RX_OFFLOAD_SECURITY,
+		},
+		.txmode = {
+			.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+		},
+		.lpbk_mode = 1,  /* Enable loopback */
+	};
+	struct rte_cryptodev_info dev_info;
+	struct rte_eth_rxconf rx_conf = {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 8,
+		},
+		.rx_free_thresh = 32,
+	};
+	uint16_t nb_ports;
+	void *sec_ctx;
+	int ret;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) ||
+	    !(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY)) {
+		RTE_LOG(INFO, USER1, "Feature requirements for IPsec Rx inject test case not met\n"
+		       );
+		return TEST_SKIPPED;
+	}
+
+	sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+	if (sec_ctx == NULL)
+		return TEST_SKIPPED;
+
+	nb_ports = rte_eth_dev_count_avail();
+	if (nb_ports == 0)
+		return TEST_SKIPPED;
+
+	ret = rte_eth_dev_configure(0 /* port_id */,
+				    1 /* nb_rx_queue */,
+				    0 /* nb_tx_queue */,
+				    &port_conf);
+	if (ret) {
+		printf("Could not configure ethdev port 0 [err=%d]\n", ret);
+		return TEST_SKIPPED;
+	}
+
+	/* Rx queue setup */
+	ret = rte_eth_rx_queue_setup(0 /* port_id */,
+				     0 /* rx_queue_id */,
+				     1024 /* nb_rx_desc */,
+				     SOCKET_ID_ANY,
+				     &rx_conf,
+				     mbuf_pool);
+	if (ret) {
+		printf("Could not setup eth port 0 queue 0\n");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_security_rx_inject_configure(sec_ctx, 0, true);
+	if (ret) {
+		printf("Could not enable Rx inject offload");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_dev_start(0);
+	if (ret) {
+		printf("Could not start ethdev");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_promiscuous_enable(0);
+	if (ret) {
+		printf("Could not enable promiscuous mode");
+		return TEST_SKIPPED;
+	}
+
+	/* Configure and start cryptodev with no features disabled */
+	return dev_configure_and_start(0);
+}
+
 void
 ut_teardown(void)
 {
@@ -1478,6 +1566,33 @@ ut_teardown(void)
 	rte_cryptodev_stop(ts_params->valid_devs[0]);
 }
 
+static void
+ut_teardown_rx_inject(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	void *sec_ctx;
+	int ret;
+
+	if  (rte_eth_dev_count_avail() != 0) {
+		ret = rte_eth_dev_reset(0);
+		if (ret)
+			printf("Could not reset eth port 0");
+
+	}
+
+	ut_teardown();
+
+	sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+	if (sec_ctx == NULL)
+		return;
+
+	ret = rte_security_rx_inject_configure(sec_ctx, 0, false);
+	if (ret) {
+		printf("Could not disable Rx inject offload");
+		return;
+	}
+}
+
 static int
 test_device_configure_invalid_dev_id(void)
 {
@@ -9875,6 +9990,137 @@ ext_mbuf_create(struct rte_mempool *mbuf_pool, int pkt_len,
 	return NULL;
 }
 
+static int
+test_ipsec_proto_crypto_op_enq(struct crypto_testsuite_params *ts_params,
+			       struct crypto_unittest_params *ut_params,
+			       struct rte_security_ipsec_xform *ipsec_xform,
+			       const struct ipsec_test_data *td,
+			       const struct ipsec_test_flags *flags,
+			       int pkt_num)
+{
+	uint8_t dev_id = ts_params->valid_devs[0];
+	enum rte_security_ipsec_sa_direction dir;
+	int ret;
+
+	dir = ipsec_xform->direction;
+
+	/* Generate crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	if (!ut_params->op) {
+		printf("Could not allocate crypto op");
+		return TEST_FAILED;
+	}
+
+	/* Attach session to operation */
+	rte_security_attach_session(ut_params->op, ut_params->sec_session);
+
+	/* Set crypto operation mbufs */
+	ut_params->op->sym->m_src = ut_params->ibuf;
+	ut_params->op->sym->m_dst = NULL;
+
+	/* Copy IV in crypto operation when IV generation is disabled */
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
+	    ipsec_xform->options.iv_gen_disable == 1) {
+		uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
+							uint8_t *,
+							IV_OFFSET);
+		int len;
+
+		if (td->aead)
+			len = td->xform.aead.aead.iv.length;
+		else if (td->aes_gmac)
+			len = td->xform.chain.auth.auth.iv.length;
+		else
+			len = td->xform.chain.cipher.cipher.iv.length;
+
+		memcpy(iv, td->iv.data, len);
+	}
+
+	/* Process crypto operation */
+	process_crypto_request(dev_id, ut_params->op);
+
+	ret = test_ipsec_status_check(td, ut_params->op, flags, dir, pkt_num);
+
+	rte_crypto_op_free(ut_params->op);
+	ut_params->op = NULL;
+
+	return ret;
+}
+
+static int
+test_ipsec_proto_mbuf_enq(struct crypto_testsuite_params *ts_params,
+			  struct crypto_unittest_params *ut_params,
+			  void *ctx)
+{
+	struct rte_ether_hdr *hdr;
+	struct rte_mbuf *m;
+	uint64_t timeout;
+	void **sec_sess;
+	void *userdata;
+	int ret;
+
+	RTE_SET_USED(ts_params);
+
+	hdr = (void *)rte_pktmbuf_prepend(ut_params->ibuf, sizeof(struct rte_ether_hdr));
+	hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+
+	ut_params->ibuf->l2_len = sizeof(struct rte_ether_hdr);
+
+	sec_sess = &ut_params->sec_session;
+	ret = rte_security_inb_pkt_rx_inject(ctx, &ut_params->ibuf, sec_sess, 1);
+
+	if (ret != 1)
+		return TEST_FAILED;
+
+	ut_params->ibuf = NULL;
+
+	/* Add a timeout for 1 s */
+	timeout = rte_get_tsc_cycles() + rte_get_tsc_hz();
+
+	do {
+		/* Get packet from port 0, queue 0 */
+		ret = rte_eth_rx_burst(0, 0, &m, 1);
+	} while ((ret == 0) && (rte_get_tsc_cycles() > timeout));
+
+	if (ret == 0) {
+		printf("Could not receive packets from ethdev\n");
+		return TEST_FAILED;
+	}
+
+	if (m == NULL) {
+		printf("Received mbuf is NULL\n");
+		return TEST_FAILED;
+	}
+
+	ut_params->ibuf = m;
+
+	if (!(m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+		printf("Received packet is not Rx security processed\n");
+		return TEST_FAILED;
+	}
+
+	if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) {
+		printf("Received packet has failed Rx security processing\n");
+		return TEST_FAILED;
+	}
+
+	/*
+	 * 'ut_params' is set as userdata. Verify that the field is returned
+	 * correctly.
+	 */
+	userdata = (void *)*rte_security_dynfield(m);
+	if (userdata != ut_params) {
+		printf("Userdata retrieved not matching expected\n");
+		return TEST_FAILED;
+	}
+
+	/* Trim L2 header */
+	rte_pktmbuf_adj(m, sizeof(struct rte_ether_hdr));
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_ipsec_proto_process(const struct ipsec_test_data td[],
 			 struct ipsec_test_data res_d[],
@@ -10064,6 +10310,9 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		}
 	}
 
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && flags->rx_inject)
+		sess_conf.userdata = ut_params;
+
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
 					ts_params->session_mpool);
@@ -10086,8 +10335,10 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 
 		/* Copy test data before modification */
 		memcpy(input_text, td[i].input_text.data, td[i].input_text.len);
-		if (test_ipsec_pkt_update(input_text, flags))
-			return TEST_FAILED;
+		if (test_ipsec_pkt_update(input_text, flags)) {
+			ret = TEST_FAILED;
+			goto mbuf_free;
+		}
 
 		/* Setup source mbuf payload */
 		if (flags->use_ext_mbuf) {
@@ -10099,50 +10350,18 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 			pktmbuf_write(ut_params->ibuf, 0, td[i].input_text.len, input_text);
 		}
 
-		/* Generate crypto op data structure */
-		ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
-					RTE_CRYPTO_OP_TYPE_SYMMETRIC);
-		if (!ut_params->op) {
-			printf("TestCase %s line %d: %s\n",
-				__func__, __LINE__,
-				"failed to allocate crypto op");
-			ret = TEST_FAILED;
-			goto crypto_op_free;
-		}
-
-		/* Attach session to operation */
-		rte_security_attach_session(ut_params->op,
-					    ut_params->sec_session);
-
-		/* Set crypto operation mbufs */
-		ut_params->op->sym->m_src = ut_params->ibuf;
-		ut_params->op->sym->m_dst = NULL;
-
-		/* Copy IV in crypto operation when IV generation is disabled */
-		if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
-		    ipsec_xform.options.iv_gen_disable == 1) {
-			uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
-								uint8_t *,
-								IV_OFFSET);
-			int len;
-
-			if (td[i].aead)
-				len = td[i].xform.aead.aead.iv.length;
-			else if (td[i].aes_gmac)
-				len = td[i].xform.chain.auth.auth.iv.length;
-			else
-				len = td[i].xform.chain.cipher.cipher.iv.length;
-
-			memcpy(iv, td[i].iv.data, len);
-		}
-
-		/* Process crypto operation */
-		process_crypto_request(dev_id, ut_params->op);
+		if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && flags->rx_inject)
+			ret = test_ipsec_proto_mbuf_enq(ts_params, ut_params,
+							ctx);
+		else
+			ret = test_ipsec_proto_crypto_op_enq(ts_params,
+							     ut_params,
+							     &ipsec_xform,
+							     &td[i], flags,
+							     i + 1);
 
-		ret = test_ipsec_status_check(&td[i], ut_params->op, flags, dir,
-					      i + 1);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		if (res_d != NULL)
 			res_d_tmp = &res_d[i];
@@ -10150,24 +10369,18 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		ret = test_ipsec_post_process(ut_params->ibuf, &td[i],
 					      res_d_tmp, silent, flags);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		ret = test_ipsec_stats_verify(ctx, ut_params->sec_session,
 					      flags, dir);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
-
-		rte_crypto_op_free(ut_params->op);
-		ut_params->op = NULL;
+			goto mbuf_free;
 
 		rte_pktmbuf_free(ut_params->ibuf);
 		ut_params->ibuf = NULL;
 	}
 
-crypto_op_free:
-	rte_crypto_op_free(ut_params->op);
-	ut_params->op = NULL;
-
+mbuf_free:
 	if (flags->use_ext_mbuf)
 		ext_mbuf_memzone_free(nb_segs);
 
@@ -10256,6 +10469,24 @@ test_ipsec_proto_known_vec_fragmented(const void *test_data)
 	return test_ipsec_proto_process(&td_outb, NULL, 1, false, &flags);
 }
 
+static int
+test_ipsec_proto_known_vec_inb_rx_inject(const void *test_data)
+{
+	const struct ipsec_test_data *td = test_data;
+	struct ipsec_test_flags flags;
+	struct ipsec_test_data td_inb;
+
+	memset(&flags, 0, sizeof(flags));
+	flags.rx_inject = true;
+
+	if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		test_ipsec_td_in_from_out(td, &td_inb);
+	else
+		memcpy(&td_inb, td, sizeof(td_inb));
+
+	return test_ipsec_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
 static int
 test_ipsec_proto_all(const struct ipsec_test_flags *flags)
 {
@@ -16319,6 +16550,10 @@ static struct unit_test_suite ipsec_proto_testsuite  = {
 			"Multi-segmented external mbuf mode",
 			ut_setup_security, ut_teardown,
 			test_ipsec_proto_sgl_ext_mbuf),
+		TEST_CASE_NAMED_WITH_DATA(
+			"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128) Rx inject",
+			ut_setup_security_rx_inject, ut_teardown_rx_inject,
+			test_ipsec_proto_known_vec_inb_rx_inject, &pkt_aes_128_gcm),
 		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 8587fc4577..d7fc562751 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -112,6 +112,7 @@ struct ipsec_test_flags {
 	uint32_t plaintext_len;
 	int nb_segs_in_mbuf;
 	bool inb_oop;
+	bool rx_inject;
 };
 
 struct crypto_param {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v3 1/2] security: add fallback security processing and Rx inject
  2023-09-29  7:16 ` [PATCH v2 " Anoob Joseph
  2023-09-29  7:16   ` [PATCH v2 2/2] test/cryptodev: add Rx inject test Anoob Joseph
@ 2023-09-29 15:39   ` Anoob Joseph
  2023-09-29 15:39     ` [PATCH v3 2/2] test/cryptodev: add Rx inject test Anoob Joseph
                       ` (2 more replies)
  1 sibling, 3 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-09-29 15:39 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Hemant Agrawal, dev, Vidya Sagar Velumuri, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

Add alternate datapath API for security processing which would do Rx
injection (similar to loopback) after successful security processing.

With inline protocol offload, variable part of the session context
(AR windows, lifetime etc in case of IPsec), is not accessible to the
application. If packets are not getting processed in the inline path
due to non security reasons (such as outer fragmentation or rte_flow
packet steering limitations), then the packet cannot be security
processed as the session context is private to the PMD and security
library doesn't provide alternate APIs to make use of the same session.

Introduce new API and Rx injection as fallback mechanism to security
processing failures due to non-security reasons. For example, when there
is outer fragmentation and PMD doesn't support reassembly of outer
fragments, application would receive fragments which it can then
reassemble. Post successful reassembly, packet can be submitted for
security processing and Rx inject. The packets can be then received in
the application as normal inline protocol processed packets.

Same API can be leveraged in lookaside protocol offload mode to inject
packet to Rx. This would help in using rte_flow based packet parsing
after security processing. For example, with IPsec, this will help in
flow splitting after IPsec processing is done.

In both inline protocol capable ethdevs and lookaside protocol capable
cryptodevs, the packet would be received back in eth port & queue based
on rte_flow rules and packet parsing after security processing. The API
would behave like a loopback but with the additional security
processing.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
v3:
* Resolved compilation error with 32 bit build

v2:
* Added a new API for configuring security device to do Rx inject to a specific
  ethdev port
* Rebased

 doc/guides/cryptodevs/features/default.ini |  1 +
 lib/cryptodev/rte_cryptodev.h              |  2 +
 lib/security/rte_security.c                | 22 ++++++
 lib/security/rte_security.h                | 85 ++++++++++++++++++++++
 lib/security/rte_security_driver.h         | 44 +++++++++++
 lib/security/version.map                   |  3 +
 6 files changed, 157 insertions(+)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 6f637fa7e2..f411d4bab7 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -34,6 +34,7 @@ Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key     =
 Inner checksum         =
+Rx inject              =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 9f07e1ed2c..05aabb6526 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -534,6 +534,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support wrapped key in cipher xform  */
 #define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
 /**< Support inner checksum computation/verification */
+#define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
+/**< Support Rx injection after security processing */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index ab44bbe0f0..fa8d2bb7ce 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -321,6 +321,28 @@ rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
 	return NULL;
 }
 
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
+{
+	struct rte_security_ctx *instance = ctx;
+
+	RTE_PTR_OR_ERR_RET(instance, -EINVAL);
+	RTE_PTR_OR_ERR_RET(instance->ops, -ENOTSUP);
+	RTE_PTR_OR_ERR_RET(instance->ops->rx_inject_configure, -ENOTSUP);
+
+	return instance->ops->rx_inject_configure(instance->device, port_id, enable);
+}
+
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+			       uint16_t nb_pkts)
+{
+	struct rte_security_ctx *instance = ctx;
+
+	return instance->ops->inb_pkt_rx_inject(instance->device, pkts,
+						(struct rte_security_session **)sess, nb_pkts);
+}
+
 static int
 security_handle_cryptodev_list(const char *cmd __rte_unused,
 			       const char *params __rte_unused,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index c9cc7a45a6..fe8e8e9813 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -1310,6 +1310,91 @@ const struct rte_security_capability *
 rte_security_capability_get(void *instance,
 			    struct rte_security_capability_idx *idx);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Configure security device to inject packets to an ethdev port.
+ *
+ * This API must be called only when both security device and the ethdev is in
+ * stopped state. The security device need to be configured before any packets
+ * are submitted to ``rte_security_inb_pkt_rx_inject`` API.
+ *
+ * @param	ctx		Security ctx
+ * @param	port_id		Port identifier of the ethernet device to which
+ *				packets need to be injected.
+ * @param	enable		Flag to enable and disable connection between a
+ *				security device and an ethdev port.
+ * @return
+ *   - 0 if successful.
+ *   - -EINVAL if context NULL or port_id is invalid.
+ *   - -EBUSY if devices are not in stopped state.
+ *   - -ENOTSUP if security device does not support injecting to the ethdev
+ *      port.
+ *
+ * @see rte_security_inb_pkt_rx_inject
+ */
+__rte_experimental
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Perform security processing of packets and inject the processed packet to
+ * ethdev Rx.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing. In case of ethdev loopback, application would be
+ * submitting packets to ethdev Tx queues and would be received as is from
+ * ethdev Rx queues. With Rx inject, packets would be received after security
+ * processing from ethdev Rx queues.
+ *
+ * With inline protocol offload capable ethdevs, Rx injection can be used to
+ * handle packets which failed the regular security Rx path. This can be due to
+ * cases such as outer fragmentation, in which case applications can reassemble
+ * the fragments and then subsequently submit for inbound processing and Rx
+ * injection, so that packets are received as regular security processed
+ * packets.
+ *
+ * With lookaside protocol offload capable cryptodevs, Rx injection can be used
+ * to perform packet parsing after security processing. This would allow for
+ * re-classification after security protocol processing is done (ie, inner
+ * packet parsing). The ethdev queue on which the packet would be received would
+ * be based on rte_flow rules matching the packet after security processing.
+ *
+ * The security device which is injecting packets to ethdev Rx need to be
+ * configured using ``rte_security_rx_inject_configure`` with enable flag set
+ * to `true` before any packets are submitted.
+ *
+ * If `hash.fdir.h` field is set in mbuf, it would be treated as the value for
+ * `MARK` pattern for the subsequent rte_flow parsing. The packet would appear
+ * as if it is received from `port` field in mbuf.
+ *
+ * Since the packet would be received back from ethdev Rx queues, it is expected
+ * that application retains/adds L2 header with the mbuf field 'l2_len'
+ * reflecting the size of L2 header in the packet.
+ *
+ * @param	ctx		Security ctx
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				security sessions corresponding to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ *
+ * @see rte_security_rx_inject_configure
+ */
+__rte_experimental
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+			       uint16_t nb_pkts);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index e5e1c4cfe8..62664dacdb 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -257,6 +257,46 @@ typedef int (*security_set_pkt_metadata_t)(void *device,
 typedef const struct rte_security_capability *(*security_capabilities_get_t)(
 		void *device);
 
+/**
+ * Configure security device to inject packets to an ethdev port.
+ *
+ * @param	device		Crypto/eth device pointer
+ * @param	port_id		Port identifier of the ethernet device to which packets need to be
+ *				injected.
+ * @param	enable		Flag to enable and disable connection between a security device and
+ *				an ethdev port.
+ * @return
+ *   - 0 if successful.
+ *   - -EINVAL if context NULL or port_id is invalid.
+ *   - -EBUSY if devices are not in stopped state.
+ *   - -ENOTSUP if security device does not support injecting to the ethdev port.
+ */
+typedef int (*security_rx_inject_configure)(void *device, uint16_t port_id, bool enable);
+
+/**
+ * Perform security processing of packets and inject the processed packet to
+ * ethdev Rx.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing.
+ *
+ * @param	device		Crypto/eth device pointer
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				*rte_security_session* structures corresponding
+ *				to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ */
+typedef uint16_t (*security_inb_pkt_rx_inject)(void *device,
+		struct rte_mbuf **pkts, struct rte_security_session **sess,
+		uint16_t nb_pkts);
+
 /** Security operations function pointer table */
 struct rte_security_ops {
 	security_session_create_t session_create;
@@ -285,6 +325,10 @@ struct rte_security_ops {
 	/**< Get MACsec SC statistics. */
 	security_macsec_sa_stats_get_t macsec_sa_stats_get;
 	/**< Get MACsec SA statistics. */
+	security_rx_inject_configure rx_inject_configure;
+	/**< Rx inject configure. */
+	security_inb_pkt_rx_inject inb_pkt_rx_inject;
+	/**< Perform security processing and do Rx inject. */
 };
 
 #ifdef __cplusplus
diff --git a/lib/security/version.map b/lib/security/version.map
index 86f976a302..e07fca33a1 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -24,6 +24,9 @@ EXPERIMENTAL {
 	rte_security_session_stats_get;
 	rte_security_session_update;
 	rte_security_oop_dynfield_offset;
+
+	rte_security_rx_inject_configure;
+	rte_security_inb_pkt_rx_inject;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v3 2/2] test/cryptodev: add Rx inject test
  2023-09-29 15:39   ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Anoob Joseph
@ 2023-09-29 15:39     ` Anoob Joseph
  2023-10-09 20:11     ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Akhil Goyal
  2023-10-10 10:32     ` [PATCH v4 " Anoob Joseph
  2 siblings, 0 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-09-29 15:39 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Vidya Sagar Velumuri, Hemant Agrawal, dev, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

From: Vidya Sagar Velumuri <vvelumuri@marvell.com>

Add test to verify Rx inject. The test case added would push a known
vector to cryptodev which would be injected to ethdev Rx. The test
case verifies that the packet is received from ethdev Rx and is
processed successfully. It also verifies that the userdata matches with
the expectation.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
---
 app/test/test_cryptodev.c                | 340 +++++++++++++++++++----
 app/test/test_cryptodev_security_ipsec.h |   1 +
 2 files changed, 288 insertions(+), 53 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index f2112e181e..b645cb32f1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17,6 +17,7 @@
 
 #include <rte_crypto.h>
 #include <rte_cryptodev.h>
+#include <rte_ethdev.h>
 #include <rte_ip.h>
 #include <rte_string_fns.h>
 #include <rte_tcp.h>
@@ -1426,6 +1427,93 @@ ut_setup_security(void)
 	return dev_configure_and_start(0);
 }
 
+static int
+ut_setup_security_rx_inject(void)
+{
+	struct rte_mempool *mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_eth_conf port_conf = {
+		.rxmode = {
+			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				    RTE_ETH_RX_OFFLOAD_SECURITY,
+		},
+		.txmode = {
+			.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+		},
+		.lpbk_mode = 1,  /* Enable loopback */
+	};
+	struct rte_cryptodev_info dev_info;
+	struct rte_eth_rxconf rx_conf = {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 8,
+		},
+		.rx_free_thresh = 32,
+	};
+	uint16_t nb_ports;
+	void *sec_ctx;
+	int ret;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) ||
+	    !(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY)) {
+		RTE_LOG(INFO, USER1,
+			"Feature requirements for IPsec Rx inject test case not met\n");
+		return TEST_SKIPPED;
+	}
+
+	sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+	if (sec_ctx == NULL)
+		return TEST_SKIPPED;
+
+	nb_ports = rte_eth_dev_count_avail();
+	if (nb_ports == 0)
+		return TEST_SKIPPED;
+
+	ret = rte_eth_dev_configure(0 /* port_id */,
+				    1 /* nb_rx_queue */,
+				    0 /* nb_tx_queue */,
+				    &port_conf);
+	if (ret) {
+		printf("Could not configure ethdev port 0 [err=%d]\n", ret);
+		return TEST_SKIPPED;
+	}
+
+	/* Rx queue setup */
+	ret = rte_eth_rx_queue_setup(0 /* port_id */,
+				     0 /* rx_queue_id */,
+				     1024 /* nb_rx_desc */,
+				     SOCKET_ID_ANY,
+				     &rx_conf,
+				     mbuf_pool);
+	if (ret) {
+		printf("Could not setup eth port 0 queue 0\n");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_security_rx_inject_configure(sec_ctx, 0, true);
+	if (ret) {
+		printf("Could not enable Rx inject offload");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_dev_start(0);
+	if (ret) {
+		printf("Could not start ethdev");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_promiscuous_enable(0);
+	if (ret) {
+		printf("Could not enable promiscuous mode");
+		return TEST_SKIPPED;
+	}
+
+	/* Configure and start cryptodev with no features disabled */
+	return dev_configure_and_start(0);
+}
+
 void
 ut_teardown(void)
 {
@@ -1478,6 +1566,33 @@ ut_teardown(void)
 	rte_cryptodev_stop(ts_params->valid_devs[0]);
 }
 
+static void
+ut_teardown_rx_inject(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	void *sec_ctx;
+	int ret;
+
+	if  (rte_eth_dev_count_avail() != 0) {
+		ret = rte_eth_dev_reset(0);
+		if (ret)
+			printf("Could not reset eth port 0");
+
+	}
+
+	ut_teardown();
+
+	sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+	if (sec_ctx == NULL)
+		return;
+
+	ret = rte_security_rx_inject_configure(sec_ctx, 0, false);
+	if (ret) {
+		printf("Could not disable Rx inject offload");
+		return;
+	}
+}
+
 static int
 test_device_configure_invalid_dev_id(void)
 {
@@ -9875,6 +9990,136 @@ ext_mbuf_create(struct rte_mempool *mbuf_pool, int pkt_len,
 	return NULL;
 }
 
+static int
+test_ipsec_proto_crypto_op_enq(struct crypto_testsuite_params *ts_params,
+			       struct crypto_unittest_params *ut_params,
+			       struct rte_security_ipsec_xform *ipsec_xform,
+			       const struct ipsec_test_data *td,
+			       const struct ipsec_test_flags *flags,
+			       int pkt_num)
+{
+	uint8_t dev_id = ts_params->valid_devs[0];
+	enum rte_security_ipsec_sa_direction dir;
+	int ret;
+
+	dir = ipsec_xform->direction;
+
+	/* Generate crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	if (!ut_params->op) {
+		printf("Could not allocate crypto op");
+		return TEST_FAILED;
+	}
+
+	/* Attach session to operation */
+	rte_security_attach_session(ut_params->op, ut_params->sec_session);
+
+	/* Set crypto operation mbufs */
+	ut_params->op->sym->m_src = ut_params->ibuf;
+	ut_params->op->sym->m_dst = NULL;
+
+	/* Copy IV in crypto operation when IV generation is disabled */
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
+	    ipsec_xform->options.iv_gen_disable == 1) {
+		uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
+							uint8_t *,
+							IV_OFFSET);
+		int len;
+
+		if (td->aead)
+			len = td->xform.aead.aead.iv.length;
+		else if (td->aes_gmac)
+			len = td->xform.chain.auth.auth.iv.length;
+		else
+			len = td->xform.chain.cipher.cipher.iv.length;
+
+		memcpy(iv, td->iv.data, len);
+	}
+
+	/* Process crypto operation */
+	process_crypto_request(dev_id, ut_params->op);
+
+	ret = test_ipsec_status_check(td, ut_params->op, flags, dir, pkt_num);
+
+	rte_crypto_op_free(ut_params->op);
+	ut_params->op = NULL;
+
+	return ret;
+}
+
+static int
+test_ipsec_proto_mbuf_enq(struct crypto_testsuite_params *ts_params,
+			  struct crypto_unittest_params *ut_params,
+			  void *ctx)
+{
+	uint64_t timeout, userdata;
+	struct rte_ether_hdr *hdr;
+	struct rte_mbuf *m;
+	void **sec_sess;
+	int ret;
+
+	RTE_SET_USED(ts_params);
+
+	hdr = (void *)rte_pktmbuf_prepend(ut_params->ibuf, sizeof(struct rte_ether_hdr));
+	hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+
+	ut_params->ibuf->l2_len = sizeof(struct rte_ether_hdr);
+
+	sec_sess = &ut_params->sec_session;
+	ret = rte_security_inb_pkt_rx_inject(ctx, &ut_params->ibuf, sec_sess, 1);
+
+	if (ret != 1)
+		return TEST_FAILED;
+
+	ut_params->ibuf = NULL;
+
+	/* Add a timeout for 1 s */
+	timeout = rte_get_tsc_cycles() + rte_get_tsc_hz();
+
+	do {
+		/* Get packet from port 0, queue 0 */
+		ret = rte_eth_rx_burst(0, 0, &m, 1);
+	} while ((ret == 0) && (rte_get_tsc_cycles() > timeout));
+
+	if (ret == 0) {
+		printf("Could not receive packets from ethdev\n");
+		return TEST_FAILED;
+	}
+
+	if (m == NULL) {
+		printf("Received mbuf is NULL\n");
+		return TEST_FAILED;
+	}
+
+	ut_params->ibuf = m;
+
+	if (!(m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+		printf("Received packet is not Rx security processed\n");
+		return TEST_FAILED;
+	}
+
+	if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) {
+		printf("Received packet has failed Rx security processing\n");
+		return TEST_FAILED;
+	}
+
+	/*
+	 * 'ut_params' is set as userdata. Verify that the field is returned
+	 * correctly.
+	 */
+	userdata = *(uint64_t *)rte_security_dynfield(m);
+	if (userdata != (uint64_t)ut_params) {
+		printf("Userdata retrieved not matching expected\n");
+		return TEST_FAILED;
+	}
+
+	/* Trim L2 header */
+	rte_pktmbuf_adj(m, sizeof(struct rte_ether_hdr));
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_ipsec_proto_process(const struct ipsec_test_data td[],
 			 struct ipsec_test_data res_d[],
@@ -10064,6 +10309,9 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		}
 	}
 
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && flags->rx_inject)
+		sess_conf.userdata = ut_params;
+
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
 					ts_params->session_mpool);
@@ -10086,8 +10334,10 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 
 		/* Copy test data before modification */
 		memcpy(input_text, td[i].input_text.data, td[i].input_text.len);
-		if (test_ipsec_pkt_update(input_text, flags))
-			return TEST_FAILED;
+		if (test_ipsec_pkt_update(input_text, flags)) {
+			ret = TEST_FAILED;
+			goto mbuf_free;
+		}
 
 		/* Setup source mbuf payload */
 		if (flags->use_ext_mbuf) {
@@ -10099,50 +10349,18 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 			pktmbuf_write(ut_params->ibuf, 0, td[i].input_text.len, input_text);
 		}
 
-		/* Generate crypto op data structure */
-		ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
-					RTE_CRYPTO_OP_TYPE_SYMMETRIC);
-		if (!ut_params->op) {
-			printf("TestCase %s line %d: %s\n",
-				__func__, __LINE__,
-				"failed to allocate crypto op");
-			ret = TEST_FAILED;
-			goto crypto_op_free;
-		}
-
-		/* Attach session to operation */
-		rte_security_attach_session(ut_params->op,
-					    ut_params->sec_session);
-
-		/* Set crypto operation mbufs */
-		ut_params->op->sym->m_src = ut_params->ibuf;
-		ut_params->op->sym->m_dst = NULL;
-
-		/* Copy IV in crypto operation when IV generation is disabled */
-		if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
-		    ipsec_xform.options.iv_gen_disable == 1) {
-			uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
-								uint8_t *,
-								IV_OFFSET);
-			int len;
-
-			if (td[i].aead)
-				len = td[i].xform.aead.aead.iv.length;
-			else if (td[i].aes_gmac)
-				len = td[i].xform.chain.auth.auth.iv.length;
-			else
-				len = td[i].xform.chain.cipher.cipher.iv.length;
-
-			memcpy(iv, td[i].iv.data, len);
-		}
-
-		/* Process crypto operation */
-		process_crypto_request(dev_id, ut_params->op);
+		if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && flags->rx_inject)
+			ret = test_ipsec_proto_mbuf_enq(ts_params, ut_params,
+							ctx);
+		else
+			ret = test_ipsec_proto_crypto_op_enq(ts_params,
+							     ut_params,
+							     &ipsec_xform,
+							     &td[i], flags,
+							     i + 1);
 
-		ret = test_ipsec_status_check(&td[i], ut_params->op, flags, dir,
-					      i + 1);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		if (res_d != NULL)
 			res_d_tmp = &res_d[i];
@@ -10150,24 +10368,18 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		ret = test_ipsec_post_process(ut_params->ibuf, &td[i],
 					      res_d_tmp, silent, flags);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		ret = test_ipsec_stats_verify(ctx, ut_params->sec_session,
 					      flags, dir);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
-
-		rte_crypto_op_free(ut_params->op);
-		ut_params->op = NULL;
+			goto mbuf_free;
 
 		rte_pktmbuf_free(ut_params->ibuf);
 		ut_params->ibuf = NULL;
 	}
 
-crypto_op_free:
-	rte_crypto_op_free(ut_params->op);
-	ut_params->op = NULL;
-
+mbuf_free:
 	if (flags->use_ext_mbuf)
 		ext_mbuf_memzone_free(nb_segs);
 
@@ -10256,6 +10468,24 @@ test_ipsec_proto_known_vec_fragmented(const void *test_data)
 	return test_ipsec_proto_process(&td_outb, NULL, 1, false, &flags);
 }
 
+static int
+test_ipsec_proto_known_vec_inb_rx_inject(const void *test_data)
+{
+	const struct ipsec_test_data *td = test_data;
+	struct ipsec_test_flags flags;
+	struct ipsec_test_data td_inb;
+
+	memset(&flags, 0, sizeof(flags));
+	flags.rx_inject = true;
+
+	if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		test_ipsec_td_in_from_out(td, &td_inb);
+	else
+		memcpy(&td_inb, td, sizeof(td_inb));
+
+	return test_ipsec_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
 static int
 test_ipsec_proto_all(const struct ipsec_test_flags *flags)
 {
@@ -16319,6 +16549,10 @@ static struct unit_test_suite ipsec_proto_testsuite  = {
 			"Multi-segmented external mbuf mode",
 			ut_setup_security, ut_teardown,
 			test_ipsec_proto_sgl_ext_mbuf),
+		TEST_CASE_NAMED_WITH_DATA(
+			"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128) Rx inject",
+			ut_setup_security_rx_inject, ut_teardown_rx_inject,
+			test_ipsec_proto_known_vec_inb_rx_inject, &pkt_aes_128_gcm),
 		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 8587fc4577..d7fc562751 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -112,6 +112,7 @@ struct ipsec_test_flags {
 	uint32_t plaintext_len;
 	int nb_segs_in_mbuf;
 	bool inb_oop;
+	bool rx_inject;
 };
 
 struct crypto_param {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [PATCH v3 1/2] security: add fallback security processing and Rx inject
  2023-09-29 15:39   ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Anoob Joseph
  2023-09-29 15:39     ` [PATCH v3 2/2] test/cryptodev: add Rx inject test Anoob Joseph
@ 2023-10-09 20:11     ` Akhil Goyal
  2023-10-10 10:32     ` [PATCH v4 " Anoob Joseph
  2 siblings, 0 replies; 11+ messages in thread
From: Akhil Goyal @ 2023-10-09 20:11 UTC (permalink / raw)
  To: Anoob Joseph, Jerin Jacob Kollanukkaran, Konstantin Ananyev
  Cc: Hemant Agrawal, dev, Vidya Sagar Velumuri, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

> Subject: [PATCH v3 1/2] security: add fallback security processing and Rx inject
> 
> Add alternate datapath API for security processing which would do Rx
> injection (similar to loopback) after successful security processing.
> 
> With inline protocol offload, variable part of the session context
> (AR windows, lifetime etc in case of IPsec), is not accessible to the
> application. If packets are not getting processed in the inline path
> due to non security reasons (such as outer fragmentation or rte_flow
> packet steering limitations), then the packet cannot be security
> processed as the session context is private to the PMD and security
> library doesn't provide alternate APIs to make use of the same session.
> 
> Introduce new API and Rx injection as fallback mechanism to security
> processing failures due to non-security reasons. For example, when there
> is outer fragmentation and PMD doesn't support reassembly of outer
> fragments, application would receive fragments which it can then
> reassemble. Post successful reassembly, packet can be submitted for
> security processing and Rx inject. The packets can be then received in
> the application as normal inline protocol processed packets.
> 
> Same API can be leveraged in lookaside protocol offload mode to inject
> packet to Rx. This would help in using rte_flow based packet parsing
> after security processing. For example, with IPsec, this will help in
> flow splitting after IPsec processing is done.
> 
> In both inline protocol capable ethdevs and lookaside protocol capable
> cryptodevs, the packet would be received back in eth port & queue based
> on rte_flow rules and packet parsing after security processing. The API
> would behave like a loopback but with the additional security
> processing.
> 
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> ---
> v3:
> * Resolved compilation error with 32 bit build
Series Acked-by: Akhil Goyal <gakhil@marvell.com>

Please add release notes for the new feature.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v4 1/2] security: add fallback security processing and Rx inject
  2023-09-29 15:39   ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Anoob Joseph
  2023-09-29 15:39     ` [PATCH v3 2/2] test/cryptodev: add Rx inject test Anoob Joseph
  2023-10-09 20:11     ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Akhil Goyal
@ 2023-10-10 10:32     ` Anoob Joseph
  2023-10-10 10:32       ` [PATCH v4 2/2] test/cryptodev: add Rx inject test Anoob Joseph
  2023-10-10 16:48       ` [PATCH v4 1/2] security: add fallback security processing and Rx inject Akhil Goyal
  2 siblings, 2 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-10-10 10:32 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Hemant Agrawal, dev, Vidya Sagar Velumuri, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

Add alternate datapath API for security processing which would do Rx
injection (similar to loopback) after successful security processing.

With inline protocol offload, variable part of the session context
(AR windows, lifetime etc in case of IPsec), is not accessible to the
application. If packets are not getting processed in the inline path
due to non security reasons (such as outer fragmentation or rte_flow
packet steering limitations), then the packet cannot be security
processed as the session context is private to the PMD and security
library doesn't provide alternate APIs to make use of the same session.

Introduce new API and Rx injection as fallback mechanism to security
processing failures due to non-security reasons. For example, when there
is outer fragmentation and PMD doesn't support reassembly of outer
fragments, application would receive fragments which it can then
reassemble. Post successful reassembly, packet can be submitted for
security processing and Rx inject. The packets can be then received in
the application as normal inline protocol processed packets.

Same API can be leveraged in lookaside protocol offload mode to inject
packet to Rx. This would help in using rte_flow based packet parsing
after security processing. For example, with IPsec, this will help in
flow splitting after IPsec processing is done.

In both inline protocol capable ethdevs and lookaside protocol capable
cryptodevs, the packet would be received back in eth port & queue based
on rte_flow rules and packet parsing after security processing. The API
would behave like a loopback but with the additional security
processing.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
v4:
Updated release notes

v3:
* Resolved compilation error with 32 bit build

v2:
* Added a new API for configuring security device to do Rx inject to a specific
  ethdev port
* Rebased

 doc/guides/cryptodevs/features/default.ini |  1 +
 doc/guides/rel_notes/release_23_11.rst     | 19 +++++
 lib/cryptodev/rte_cryptodev.h              |  2 +
 lib/security/rte_security.c                | 22 ++++++
 lib/security/rte_security.h                | 85 ++++++++++++++++++++++
 lib/security/rte_security_driver.h         | 44 +++++++++++
 lib/security/version.map                   |  3 +
 7 files changed, 176 insertions(+)

diff --git a/doc/guides/cryptodevs/features/default.ini b/doc/guides/cryptodevs/features/default.ini
index 6f637fa7e2..f411d4bab7 100644
--- a/doc/guides/cryptodevs/features/default.ini
+++ b/doc/guides/cryptodevs/features/default.ini
@@ -34,6 +34,7 @@ Sym raw data path API  =
 Cipher multiple data units =
 Cipher wrapped key     =
 Inner checksum         =
+Rx inject              =
 
 ;
 ; Supported crypto algorithms of a default crypto driver.
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index be51f00dbf..6853c907c9 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -107,6 +107,25 @@ New Features
   enhancements to ``rte_crypto_op`` fields to handle all datapath requirements
   of TLS and DTLS. The support is added for TLS 1.2, TLS 1.3 and DTLS 1.2.
 
+* **Added support for rte_security Rx inject API.**
+
+  Added Rx inject API to allow applications to submit packets for protocol
+  offload and have them injected back to ethdev Rx so that further ethdev Rx
+  actions (IP reassembly, packet parsing and flow lookups) can happen based on
+  inner packet.
+
+  The API when implemented by an ethdev, may be used to process packets that the
+  application wants to process with inline protocol offload enabled rte_security
+  session. These can be packets that are received from other non-inline capable
+  ethdevs or can be packets that failed inline protocol offload (such as
+  receiving fragmented ESP packets in case of inline IPsec offload).
+
+  The API when implemented by a cryptodev, can be used for injecting packets to
+  ethdev Rx after IPsec processing and take advantage of ethdev Rx processing
+  for the inner packet. The API helps application to avail ethdev Rx actions
+  based on inner packet while working with rte_security sessions which cannot
+  be accelerated in inline protocol offload mode.
+
 * **Updated ipsec_mb crypto driver.**
 
   Added support for digest encrypted to AESNI_MB asynchronous crypto driver.
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 6c8f532797..be0698ce9f 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -559,6 +559,8 @@ rte_cryptodev_asym_get_xform_string(enum rte_crypto_asym_xform_type xform_enum);
 /**< Support wrapped key in cipher xform  */
 #define RTE_CRYPTODEV_FF_SECURITY_INNER_CSUM		(1ULL << 27)
 /**< Support inner checksum computation/verification */
+#define RTE_CRYPTODEV_FF_SECURITY_RX_INJECT		(1ULL << 28)
+/**< Support Rx injection after security processing */
 
 /**
  * Get the name of a crypto device feature flag
diff --git a/lib/security/rte_security.c b/lib/security/rte_security.c
index 04872ec1a0..b082a29029 100644
--- a/lib/security/rte_security.c
+++ b/lib/security/rte_security.c
@@ -325,6 +325,28 @@ rte_security_capability_get(void *ctx, struct rte_security_capability_idx *idx)
 	return NULL;
 }
 
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable)
+{
+	struct rte_security_ctx *instance = ctx;
+
+	RTE_PTR_OR_ERR_RET(instance, -EINVAL);
+	RTE_PTR_OR_ERR_RET(instance->ops, -ENOTSUP);
+	RTE_PTR_OR_ERR_RET(instance->ops->rx_inject_configure, -ENOTSUP);
+
+	return instance->ops->rx_inject_configure(instance->device, port_id, enable);
+}
+
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+			       uint16_t nb_pkts)
+{
+	struct rte_security_ctx *instance = ctx;
+
+	return instance->ops->inb_pkt_rx_inject(instance->device, pkts,
+						(struct rte_security_session **)sess, nb_pkts);
+}
+
 static int
 security_handle_cryptodev_list(const char *cmd __rte_unused,
 			       const char *params __rte_unused,
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 8279bed013..181aa28f5e 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -1455,6 +1455,91 @@ const struct rte_security_capability *
 rte_security_capability_get(void *instance,
 			    struct rte_security_capability_idx *idx);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Configure security device to inject packets to an ethdev port.
+ *
+ * This API must be called only when both security device and the ethdev is in
+ * stopped state. The security device need to be configured before any packets
+ * are submitted to ``rte_security_inb_pkt_rx_inject`` API.
+ *
+ * @param	ctx		Security ctx
+ * @param	port_id		Port identifier of the ethernet device to which
+ *				packets need to be injected.
+ * @param	enable		Flag to enable and disable connection between a
+ *				security device and an ethdev port.
+ * @return
+ *   - 0 if successful.
+ *   - -EINVAL if context NULL or port_id is invalid.
+ *   - -EBUSY if devices are not in stopped state.
+ *   - -ENOTSUP if security device does not support injecting to the ethdev
+ *      port.
+ *
+ * @see rte_security_inb_pkt_rx_inject
+ */
+__rte_experimental
+int
+rte_security_rx_inject_configure(void *ctx, uint16_t port_id, bool enable);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Perform security processing of packets and inject the processed packet to
+ * ethdev Rx.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing. In case of ethdev loopback, application would be
+ * submitting packets to ethdev Tx queues and would be received as is from
+ * ethdev Rx queues. With Rx inject, packets would be received after security
+ * processing from ethdev Rx queues.
+ *
+ * With inline protocol offload capable ethdevs, Rx injection can be used to
+ * handle packets which failed the regular security Rx path. This can be due to
+ * cases such as outer fragmentation, in which case applications can reassemble
+ * the fragments and then subsequently submit for inbound processing and Rx
+ * injection, so that packets are received as regular security processed
+ * packets.
+ *
+ * With lookaside protocol offload capable cryptodevs, Rx injection can be used
+ * to perform packet parsing after security processing. This would allow for
+ * re-classification after security protocol processing is done (ie, inner
+ * packet parsing). The ethdev queue on which the packet would be received would
+ * be based on rte_flow rules matching the packet after security processing.
+ *
+ * The security device which is injecting packets to ethdev Rx need to be
+ * configured using ``rte_security_rx_inject_configure`` with enable flag set
+ * to `true` before any packets are submitted.
+ *
+ * If `hash.fdir.h` field is set in mbuf, it would be treated as the value for
+ * `MARK` pattern for the subsequent rte_flow parsing. The packet would appear
+ * as if it is received from `port` field in mbuf.
+ *
+ * Since the packet would be received back from ethdev Rx queues, it is expected
+ * that application retains/adds L2 header with the mbuf field 'l2_len'
+ * reflecting the size of L2 header in the packet.
+ *
+ * @param	ctx		Security ctx
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				security sessions corresponding to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ *
+ * @see rte_security_rx_inject_configure
+ */
+__rte_experimental
+uint16_t
+rte_security_inb_pkt_rx_inject(void *ctx, struct rte_mbuf **pkts, void **sess,
+			       uint16_t nb_pkts);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/security/rte_security_driver.h b/lib/security/rte_security_driver.h
index e5e1c4cfe8..62664dacdb 100644
--- a/lib/security/rte_security_driver.h
+++ b/lib/security/rte_security_driver.h
@@ -257,6 +257,46 @@ typedef int (*security_set_pkt_metadata_t)(void *device,
 typedef const struct rte_security_capability *(*security_capabilities_get_t)(
 		void *device);
 
+/**
+ * Configure security device to inject packets to an ethdev port.
+ *
+ * @param	device		Crypto/eth device pointer
+ * @param	port_id		Port identifier of the ethernet device to which packets need to be
+ *				injected.
+ * @param	enable		Flag to enable and disable connection between a security device and
+ *				an ethdev port.
+ * @return
+ *   - 0 if successful.
+ *   - -EINVAL if context NULL or port_id is invalid.
+ *   - -EBUSY if devices are not in stopped state.
+ *   - -ENOTSUP if security device does not support injecting to the ethdev port.
+ */
+typedef int (*security_rx_inject_configure)(void *device, uint16_t port_id, bool enable);
+
+/**
+ * Perform security processing of packets and inject the processed packet to
+ * ethdev Rx.
+ *
+ * Rx inject would behave similarly to ethdev loopback but with the additional
+ * security processing.
+ *
+ * @param	device		Crypto/eth device pointer
+ * @param	pkts		The address of an array of *nb_pkts* pointers to
+ *				*rte_mbuf* structures which contain the packets.
+ * @param	sess		The address of an array of *nb_pkts* pointers to
+ *				*rte_security_session* structures corresponding
+ *				to each packet.
+ * @param	nb_pkts		The maximum number of packets to process.
+ *
+ * @return
+ *   The number of packets successfully injected to ethdev Rx. The return
+ *   value can be less than the value of the *nb_pkts* parameter when the
+ *   PMD internal queues have been filled up.
+ */
+typedef uint16_t (*security_inb_pkt_rx_inject)(void *device,
+		struct rte_mbuf **pkts, struct rte_security_session **sess,
+		uint16_t nb_pkts);
+
 /** Security operations function pointer table */
 struct rte_security_ops {
 	security_session_create_t session_create;
@@ -285,6 +325,10 @@ struct rte_security_ops {
 	/**< Get MACsec SC statistics. */
 	security_macsec_sa_stats_get_t macsec_sa_stats_get;
 	/**< Get MACsec SA statistics. */
+	security_rx_inject_configure rx_inject_configure;
+	/**< Rx inject configure. */
+	security_inb_pkt_rx_inject inb_pkt_rx_inject;
+	/**< Perform security processing and do Rx inject. */
 };
 
 #ifdef __cplusplus
diff --git a/lib/security/version.map b/lib/security/version.map
index 86f976a302..e07fca33a1 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -24,6 +24,9 @@ EXPERIMENTAL {
 	rte_security_session_stats_get;
 	rte_security_session_update;
 	rte_security_oop_dynfield_offset;
+
+	rte_security_rx_inject_configure;
+	rte_security_inb_pkt_rx_inject;
 };
 
 INTERNAL {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v4 2/2] test/cryptodev: add Rx inject test
  2023-10-10 10:32     ` [PATCH v4 " Anoob Joseph
@ 2023-10-10 10:32       ` Anoob Joseph
  2023-10-10 16:48       ` [PATCH v4 1/2] security: add fallback security processing and Rx inject Akhil Goyal
  1 sibling, 0 replies; 11+ messages in thread
From: Anoob Joseph @ 2023-10-10 10:32 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Konstantin Ananyev
  Cc: Vidya Sagar Velumuri, Hemant Agrawal, dev, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power

From: Vidya Sagar Velumuri <vvelumuri@marvell.com>

Add test to verify Rx inject. The test case added would push a known
vector to cryptodev which would be injected to ethdev Rx. The test
case verifies that the packet is received from ethdev Rx and is
processed successfully. It also verifies that the userdata matches with
the expectation.

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
 app/test/test_cryptodev.c                | 340 +++++++++++++++++++----
 app/test/test_cryptodev_security_ipsec.h |   1 +
 2 files changed, 288 insertions(+), 53 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index f2112e181e..b645cb32f1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -17,6 +17,7 @@
 
 #include <rte_crypto.h>
 #include <rte_cryptodev.h>
+#include <rte_ethdev.h>
 #include <rte_ip.h>
 #include <rte_string_fns.h>
 #include <rte_tcp.h>
@@ -1426,6 +1427,93 @@ ut_setup_security(void)
 	return dev_configure_and_start(0);
 }
 
+static int
+ut_setup_security_rx_inject(void)
+{
+	struct rte_mempool *mbuf_pool = rte_mempool_lookup("CRYPTO_MBUFPOOL");
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_eth_conf port_conf = {
+		.rxmode = {
+			.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				    RTE_ETH_RX_OFFLOAD_SECURITY,
+		},
+		.txmode = {
+			.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+		},
+		.lpbk_mode = 1,  /* Enable loopback */
+	};
+	struct rte_cryptodev_info dev_info;
+	struct rte_eth_rxconf rx_conf = {
+		.rx_thresh = {
+			.pthresh = 8,
+			.hthresh = 8,
+			.wthresh = 8,
+		},
+		.rx_free_thresh = 32,
+	};
+	uint16_t nb_ports;
+	void *sec_ctx;
+	int ret;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	if (!(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY_RX_INJECT) ||
+	    !(dev_info.feature_flags & RTE_CRYPTODEV_FF_SECURITY)) {
+		RTE_LOG(INFO, USER1,
+			"Feature requirements for IPsec Rx inject test case not met\n");
+		return TEST_SKIPPED;
+	}
+
+	sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+	if (sec_ctx == NULL)
+		return TEST_SKIPPED;
+
+	nb_ports = rte_eth_dev_count_avail();
+	if (nb_ports == 0)
+		return TEST_SKIPPED;
+
+	ret = rte_eth_dev_configure(0 /* port_id */,
+				    1 /* nb_rx_queue */,
+				    0 /* nb_tx_queue */,
+				    &port_conf);
+	if (ret) {
+		printf("Could not configure ethdev port 0 [err=%d]\n", ret);
+		return TEST_SKIPPED;
+	}
+
+	/* Rx queue setup */
+	ret = rte_eth_rx_queue_setup(0 /* port_id */,
+				     0 /* rx_queue_id */,
+				     1024 /* nb_rx_desc */,
+				     SOCKET_ID_ANY,
+				     &rx_conf,
+				     mbuf_pool);
+	if (ret) {
+		printf("Could not setup eth port 0 queue 0\n");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_security_rx_inject_configure(sec_ctx, 0, true);
+	if (ret) {
+		printf("Could not enable Rx inject offload");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_dev_start(0);
+	if (ret) {
+		printf("Could not start ethdev");
+		return TEST_SKIPPED;
+	}
+
+	ret = rte_eth_promiscuous_enable(0);
+	if (ret) {
+		printf("Could not enable promiscuous mode");
+		return TEST_SKIPPED;
+	}
+
+	/* Configure and start cryptodev with no features disabled */
+	return dev_configure_and_start(0);
+}
+
 void
 ut_teardown(void)
 {
@@ -1478,6 +1566,33 @@ ut_teardown(void)
 	rte_cryptodev_stop(ts_params->valid_devs[0]);
 }
 
+static void
+ut_teardown_rx_inject(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	void *sec_ctx;
+	int ret;
+
+	if  (rte_eth_dev_count_avail() != 0) {
+		ret = rte_eth_dev_reset(0);
+		if (ret)
+			printf("Could not reset eth port 0");
+
+	}
+
+	ut_teardown();
+
+	sec_ctx = rte_cryptodev_get_sec_ctx(ts_params->valid_devs[0]);
+	if (sec_ctx == NULL)
+		return;
+
+	ret = rte_security_rx_inject_configure(sec_ctx, 0, false);
+	if (ret) {
+		printf("Could not disable Rx inject offload");
+		return;
+	}
+}
+
 static int
 test_device_configure_invalid_dev_id(void)
 {
@@ -9875,6 +9990,136 @@ ext_mbuf_create(struct rte_mempool *mbuf_pool, int pkt_len,
 	return NULL;
 }
 
+static int
+test_ipsec_proto_crypto_op_enq(struct crypto_testsuite_params *ts_params,
+			       struct crypto_unittest_params *ut_params,
+			       struct rte_security_ipsec_xform *ipsec_xform,
+			       const struct ipsec_test_data *td,
+			       const struct ipsec_test_flags *flags,
+			       int pkt_num)
+{
+	uint8_t dev_id = ts_params->valid_devs[0];
+	enum rte_security_ipsec_sa_direction dir;
+	int ret;
+
+	dir = ipsec_xform->direction;
+
+	/* Generate crypto op data structure */
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	if (!ut_params->op) {
+		printf("Could not allocate crypto op");
+		return TEST_FAILED;
+	}
+
+	/* Attach session to operation */
+	rte_security_attach_session(ut_params->op, ut_params->sec_session);
+
+	/* Set crypto operation mbufs */
+	ut_params->op->sym->m_src = ut_params->ibuf;
+	ut_params->op->sym->m_dst = NULL;
+
+	/* Copy IV in crypto operation when IV generation is disabled */
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
+	    ipsec_xform->options.iv_gen_disable == 1) {
+		uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
+							uint8_t *,
+							IV_OFFSET);
+		int len;
+
+		if (td->aead)
+			len = td->xform.aead.aead.iv.length;
+		else if (td->aes_gmac)
+			len = td->xform.chain.auth.auth.iv.length;
+		else
+			len = td->xform.chain.cipher.cipher.iv.length;
+
+		memcpy(iv, td->iv.data, len);
+	}
+
+	/* Process crypto operation */
+	process_crypto_request(dev_id, ut_params->op);
+
+	ret = test_ipsec_status_check(td, ut_params->op, flags, dir, pkt_num);
+
+	rte_crypto_op_free(ut_params->op);
+	ut_params->op = NULL;
+
+	return ret;
+}
+
+static int
+test_ipsec_proto_mbuf_enq(struct crypto_testsuite_params *ts_params,
+			  struct crypto_unittest_params *ut_params,
+			  void *ctx)
+{
+	uint64_t timeout, userdata;
+	struct rte_ether_hdr *hdr;
+	struct rte_mbuf *m;
+	void **sec_sess;
+	int ret;
+
+	RTE_SET_USED(ts_params);
+
+	hdr = (void *)rte_pktmbuf_prepend(ut_params->ibuf, sizeof(struct rte_ether_hdr));
+	hdr->ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+
+	ut_params->ibuf->l2_len = sizeof(struct rte_ether_hdr);
+
+	sec_sess = &ut_params->sec_session;
+	ret = rte_security_inb_pkt_rx_inject(ctx, &ut_params->ibuf, sec_sess, 1);
+
+	if (ret != 1)
+		return TEST_FAILED;
+
+	ut_params->ibuf = NULL;
+
+	/* Add a timeout for 1 s */
+	timeout = rte_get_tsc_cycles() + rte_get_tsc_hz();
+
+	do {
+		/* Get packet from port 0, queue 0 */
+		ret = rte_eth_rx_burst(0, 0, &m, 1);
+	} while ((ret == 0) && (rte_get_tsc_cycles() > timeout));
+
+	if (ret == 0) {
+		printf("Could not receive packets from ethdev\n");
+		return TEST_FAILED;
+	}
+
+	if (m == NULL) {
+		printf("Received mbuf is NULL\n");
+		return TEST_FAILED;
+	}
+
+	ut_params->ibuf = m;
+
+	if (!(m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+		printf("Received packet is not Rx security processed\n");
+		return TEST_FAILED;
+	}
+
+	if (m->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED) {
+		printf("Received packet has failed Rx security processing\n");
+		return TEST_FAILED;
+	}
+
+	/*
+	 * 'ut_params' is set as userdata. Verify that the field is returned
+	 * correctly.
+	 */
+	userdata = *(uint64_t *)rte_security_dynfield(m);
+	if (userdata != (uint64_t)ut_params) {
+		printf("Userdata retrieved not matching expected\n");
+		return TEST_FAILED;
+	}
+
+	/* Trim L2 header */
+	rte_pktmbuf_adj(m, sizeof(struct rte_ether_hdr));
+
+	return TEST_SUCCESS;
+}
+
 static int
 test_ipsec_proto_process(const struct ipsec_test_data td[],
 			 struct ipsec_test_data res_d[],
@@ -10064,6 +10309,9 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		}
 	}
 
+	if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && flags->rx_inject)
+		sess_conf.userdata = ut_params;
+
 	/* Create security session */
 	ut_params->sec_session = rte_security_session_create(ctx, &sess_conf,
 					ts_params->session_mpool);
@@ -10086,8 +10334,10 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 
 		/* Copy test data before modification */
 		memcpy(input_text, td[i].input_text.data, td[i].input_text.len);
-		if (test_ipsec_pkt_update(input_text, flags))
-			return TEST_FAILED;
+		if (test_ipsec_pkt_update(input_text, flags)) {
+			ret = TEST_FAILED;
+			goto mbuf_free;
+		}
 
 		/* Setup source mbuf payload */
 		if (flags->use_ext_mbuf) {
@@ -10099,50 +10349,18 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 			pktmbuf_write(ut_params->ibuf, 0, td[i].input_text.len, input_text);
 		}
 
-		/* Generate crypto op data structure */
-		ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
-					RTE_CRYPTO_OP_TYPE_SYMMETRIC);
-		if (!ut_params->op) {
-			printf("TestCase %s line %d: %s\n",
-				__func__, __LINE__,
-				"failed to allocate crypto op");
-			ret = TEST_FAILED;
-			goto crypto_op_free;
-		}
-
-		/* Attach session to operation */
-		rte_security_attach_session(ut_params->op,
-					    ut_params->sec_session);
-
-		/* Set crypto operation mbufs */
-		ut_params->op->sym->m_src = ut_params->ibuf;
-		ut_params->op->sym->m_dst = NULL;
-
-		/* Copy IV in crypto operation when IV generation is disabled */
-		if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
-		    ipsec_xform.options.iv_gen_disable == 1) {
-			uint8_t *iv = rte_crypto_op_ctod_offset(ut_params->op,
-								uint8_t *,
-								IV_OFFSET);
-			int len;
-
-			if (td[i].aead)
-				len = td[i].xform.aead.aead.iv.length;
-			else if (td[i].aes_gmac)
-				len = td[i].xform.chain.auth.auth.iv.length;
-			else
-				len = td[i].xform.chain.cipher.cipher.iv.length;
-
-			memcpy(iv, td[i].iv.data, len);
-		}
-
-		/* Process crypto operation */
-		process_crypto_request(dev_id, ut_params->op);
+		if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS && flags->rx_inject)
+			ret = test_ipsec_proto_mbuf_enq(ts_params, ut_params,
+							ctx);
+		else
+			ret = test_ipsec_proto_crypto_op_enq(ts_params,
+							     ut_params,
+							     &ipsec_xform,
+							     &td[i], flags,
+							     i + 1);
 
-		ret = test_ipsec_status_check(&td[i], ut_params->op, flags, dir,
-					      i + 1);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		if (res_d != NULL)
 			res_d_tmp = &res_d[i];
@@ -10150,24 +10368,18 @@ test_ipsec_proto_process(const struct ipsec_test_data td[],
 		ret = test_ipsec_post_process(ut_params->ibuf, &td[i],
 					      res_d_tmp, silent, flags);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
+			goto mbuf_free;
 
 		ret = test_ipsec_stats_verify(ctx, ut_params->sec_session,
 					      flags, dir);
 		if (ret != TEST_SUCCESS)
-			goto crypto_op_free;
-
-		rte_crypto_op_free(ut_params->op);
-		ut_params->op = NULL;
+			goto mbuf_free;
 
 		rte_pktmbuf_free(ut_params->ibuf);
 		ut_params->ibuf = NULL;
 	}
 
-crypto_op_free:
-	rte_crypto_op_free(ut_params->op);
-	ut_params->op = NULL;
-
+mbuf_free:
 	if (flags->use_ext_mbuf)
 		ext_mbuf_memzone_free(nb_segs);
 
@@ -10256,6 +10468,24 @@ test_ipsec_proto_known_vec_fragmented(const void *test_data)
 	return test_ipsec_proto_process(&td_outb, NULL, 1, false, &flags);
 }
 
+static int
+test_ipsec_proto_known_vec_inb_rx_inject(const void *test_data)
+{
+	const struct ipsec_test_data *td = test_data;
+	struct ipsec_test_flags flags;
+	struct ipsec_test_data td_inb;
+
+	memset(&flags, 0, sizeof(flags));
+	flags.rx_inject = true;
+
+	if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		test_ipsec_td_in_from_out(td, &td_inb);
+	else
+		memcpy(&td_inb, td, sizeof(td_inb));
+
+	return test_ipsec_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
 static int
 test_ipsec_proto_all(const struct ipsec_test_flags *flags)
 {
@@ -16319,6 +16549,10 @@ static struct unit_test_suite ipsec_proto_testsuite  = {
 			"Multi-segmented external mbuf mode",
 			ut_setup_security, ut_teardown,
 			test_ipsec_proto_sgl_ext_mbuf),
+		TEST_CASE_NAMED_WITH_DATA(
+			"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128) Rx inject",
+			ut_setup_security_rx_inject, ut_teardown_rx_inject,
+			test_ipsec_proto_known_vec_inb_rx_inject, &pkt_aes_128_gcm),
 		TEST_CASES_END() /**< NULL terminate unit test array */
 	}
 };
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 8587fc4577..d7fc562751 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -112,6 +112,7 @@ struct ipsec_test_flags {
 	uint32_t plaintext_len;
 	int nb_segs_in_mbuf;
 	bool inb_oop;
+	bool rx_inject;
 };
 
 struct crypto_param {
-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* RE: [PATCH v4 1/2] security: add fallback security processing and Rx inject
  2023-10-10 10:32     ` [PATCH v4 " Anoob Joseph
  2023-10-10 10:32       ` [PATCH v4 2/2] test/cryptodev: add Rx inject test Anoob Joseph
@ 2023-10-10 16:48       ` Akhil Goyal
  1 sibling, 0 replies; 11+ messages in thread
From: Akhil Goyal @ 2023-10-10 16:48 UTC (permalink / raw)
  To: Anoob Joseph, Jerin Jacob Kollanukkaran, Konstantin Ananyev
  Cc: Hemant Agrawal, dev, Vidya Sagar Velumuri, david.coyle, kai.ji,
	kevin.osullivan, Ciara Power



> -----Original Message-----
> From: Anoob Joseph <anoobj@marvell.com>
> Sent: Tuesday, October 10, 2023 4:02 PM
> To: Akhil Goyal <gakhil@marvell.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
> Cc: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org; Vidya Sagar
> Velumuri <vvelumuri@marvell.com>; david.coyle@intel.com; kai.ji@intel.com;
> kevin.osullivan@intel.com; Ciara Power <ciara.power@intel.com>
> Subject: [PATCH v4 1/2] security: add fallback security processing and Rx inject
> 
> Add alternate datapath API for security processing which would do Rx
> injection (similar to loopback) after successful security processing.
> 
> With inline protocol offload, variable part of the session context
> (AR windows, lifetime etc in case of IPsec), is not accessible to the
> application. If packets are not getting processed in the inline path
> due to non security reasons (such as outer fragmentation or rte_flow
> packet steering limitations), then the packet cannot be security
> processed as the session context is private to the PMD and security
> library doesn't provide alternate APIs to make use of the same session.
> 
> Introduce new API and Rx injection as fallback mechanism to security
> processing failures due to non-security reasons. For example, when there
> is outer fragmentation and PMD doesn't support reassembly of outer
> fragments, application would receive fragments which it can then
> reassemble. Post successful reassembly, packet can be submitted for
> security processing and Rx inject. The packets can be then received in
> the application as normal inline protocol processed packets.
> 
> Same API can be leveraged in lookaside protocol offload mode to inject
> packet to Rx. This would help in using rte_flow based packet parsing
> after security processing. For example, with IPsec, this will help in
> flow splitting after IPsec processing is done.
> 
> In both inline protocol capable ethdevs and lookaside protocol capable
> cryptodevs, the packet would be received back in eth port & queue based
> on rte_flow rules and packet parsing after security processing. The API
> would behave like a loopback but with the additional security
> processing.
> 
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> Signed-off-by: Vidya Sagar Velumuri <vvelumuri@marvell.com>
> Acked-by: Akhil Goyal <gakhil@marvell.com>
> ---
> v4:
> Updated release notes
> 
> v3:
> * Resolved compilation error with 32 bit build
> 
> v2:
> * Added a new API for configuring security device to do Rx inject to a specific
>   ethdev port
> * Rebased
> 
>  doc/guides/cryptodevs/features/default.ini |  1 +
>  doc/guides/rel_notes/release_23_11.rst     | 19 +++++
>  lib/cryptodev/rte_cryptodev.h              |  2 +
>  lib/security/rte_security.c                | 22 ++++++
>  lib/security/rte_security.h                | 85 ++++++++++++++++++++++
>  lib/security/rte_security_driver.h         | 44 +++++++++++
>  lib/security/version.map                   |  3 +
>  7 files changed, 176 insertions(+)
> 
> diff --git a/doc/guides/cryptodevs/features/default.ini
> b/doc/guides/cryptodevs/features/default.ini
> index 6f637fa7e2..f411d4bab7 100644
> --- a/doc/guides/cryptodevs/features/default.ini
> +++ b/doc/guides/cryptodevs/features/default.ini
> @@ -34,6 +34,7 @@ Sym raw data path API  =
>  Cipher multiple data units =
>  Cipher wrapped key     =
>  Inner checksum         =
> +Rx inject              =
> 
>  ;
>  ; Supported crypto algorithms of a default crypto driver.
> diff --git a/doc/guides/rel_notes/release_23_11.rst
> b/doc/guides/rel_notes/release_23_11.rst
> index be51f00dbf..6853c907c9 100644
> --- a/doc/guides/rel_notes/release_23_11.rst
> +++ b/doc/guides/rel_notes/release_23_11.rst
> @@ -107,6 +107,25 @@ New Features
>    enhancements to ``rte_crypto_op`` fields to handle all datapath requirements
>    of TLS and DTLS. The support is added for TLS 1.2, TLS 1.3 and DTLS 1.2.
> 
> +* **Added support for rte_security Rx inject API.**
> +
> +  Added Rx inject API to allow applications to submit packets for protocol
> +  offload and have them injected back to ethdev Rx so that further ethdev Rx
> +  actions (IP reassembly, packet parsing and flow lookups) can happen based on
> +  inner packet.
> +
> +  The API when implemented by an ethdev, may be used to process packets that
> the
> +  application wants to process with inline protocol offload enabled rte_security
> +  session. These can be packets that are received from other non-inline capable
> +  ethdevs or can be packets that failed inline protocol offload (such as
> +  receiving fragmented ESP packets in case of inline IPsec offload).
> +
> +  The API when implemented by a cryptodev, can be used for injecting packets
> to
> +  ethdev Rx after IPsec processing and take advantage of ethdev Rx processing
> +  for the inner packet. The API helps application to avail ethdev Rx actions
> +  based on inner packet while working with rte_security sessions which cannot
> +  be accelerated in inline protocol offload mode.
> +
Reworded the above release notes as below.

* **Added support for rte_security Rx inject API.**

  Added Rx inject API to allow applications to submit packets for protocol
  offload and have them injected back to ethdev Rx so that further ethdev Rx
  actions (IP reassembly, packet parsing and flow lookups) can happen based on
  inner packet.

  The API when implemented by an ethdev, application would be able to process
  packets that are received without/failed inline offload processing
  (such as fragmented ESP packets with inline IPsec offload).
  The API when implemented by a cryptodev, can be used for injecting packets
  to ethdev Rx after IPsec processing and take advantage of ethdev Rx actions
  for the inner packet which cannot be accelerated in inline protocol offload mode.

Applied to dpdk-next-crypto

Thanks.


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2023-10-10 16:48 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-11 11:45 [RFC PATCH 1/2] security: add fallback security processing and Rx inject Anoob Joseph
2023-08-11 11:45 ` [RFC PATCH 2/2] test/cryptodev: add Rx inject test Anoob Joseph
2023-08-24  7:55 ` [RFC PATCH 1/2] security: add fallback security processing and Rx inject Akhil Goyal
2023-09-29  7:16 ` [PATCH v2 " Anoob Joseph
2023-09-29  7:16   ` [PATCH v2 2/2] test/cryptodev: add Rx inject test Anoob Joseph
2023-09-29 15:39   ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Anoob Joseph
2023-09-29 15:39     ` [PATCH v3 2/2] test/cryptodev: add Rx inject test Anoob Joseph
2023-10-09 20:11     ` [PATCH v3 1/2] security: add fallback security processing and Rx inject Akhil Goyal
2023-10-10 10:32     ` [PATCH v4 " Anoob Joseph
2023-10-10 10:32       ` [PATCH v4 2/2] test/cryptodev: add Rx inject test Anoob Joseph
2023-10-10 16:48       ` [PATCH v4 1/2] security: add fallback security processing and Rx inject Akhil Goyal

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).