DPDK patches and discussions
* [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec
@ 2021-09-02  2:14 Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 01/27] common/cnxk: add security support for cn9k fast path Nithin Dabilpuram
                   ` (29 more replies)
  0 siblings, 30 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  Cc: jerinj, schalla, dev, Nithin Dabilpuram

Add support for inline IPsec in CN9K event mode, and in CN10K event mode
and poll mode.

Depends-on: series-18524 ("Crypto adapter support for Marvell CNXK driver")
Depends-on: series-18262 ("security: Improve inline fast path routines")
Depends-on: series-18562 ("add lookaside IPsec additional features")
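
For context, a minimal sketch (editorial, not part of this series) of how an
application consumes the inline IPsec support added here: creating an
inline-protocol IPsec session against a cnxk ethdev through the generic
rte_security API. Transform contents and most error handling are elided, and
the four-argument session-create signature is the rte_security API of this
development cycle; treat it as illustrative rather than authoritative.

#include <rte_ethdev.h>
#include <rte_security.h>

static struct rte_security_session *
create_inline_ipsec_session(uint16_t port_id, struct rte_mempool *sess_mp,
			    struct rte_security_ipsec_xform *ipsec_xfrm,
			    struct rte_crypto_sym_xform *crypto_xfrm)
{
	/* The ethdev exposes a security context when it supports
	 * rte_security offloads, as net/cnxk does after this series.
	 */
	void *sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
	struct rte_security_session_conf conf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = *ipsec_xfrm,
		.crypto_xform = crypto_xfrm,
	};

	if (sec_ctx == NULL)
		return NULL;

	/* Private mempool argument reuses sess_mp for brevity. */
	return rte_security_session_create(sec_ctx, &conf, sess_mp, sess_mp);
}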

Kommula Shiva Shankar (1):
  common/cnxk: add cq enable support in nix Tx path

Nithin Dabilpuram (17):
  common/cnxk: add helper API to dump cpt parse header
  common/cnxk: allow reuse of SSO API for inline dev
  common/cnxk: change nix debug API and queue API interface
  common/cnxk: add nix inline device irq API
  common/cnxk: add nix inline device init and fini
  common/cnxk: add nix inline inbound and outbound support API
  common/cnxk: dump cpt lf registers on error intr
  common/cnxk: align cpt lf enable/disable sequence
  common/cnxk: restore nix sqb pool limit before destroy
  common/cnxk: setup aura bp conf based on nix
  net/cnxk: add inline security support for cn9k
  net/cnxk: add inline security support for cn10k
  net/cnxk: add cn9k Rx support for security offload
  net/cnxk: add cn9k Tx support for security offload
  net/cnxk: add cn10k Rx support for security offload
  net/cnxk: add cn10k Tx support for security offload
  net/cnxk: reflect globally enabled offloads in queue conf

Satheesh Paul (2):
  common/cnxk: add inline IPsec support in rte flow
  net/cnxk: add devargs for configuring channel mask

Srujana Challa (7):
  common/cnxk: add security support for cn9k fast path
  common/cnxk: add anti-replay check implementation for cn9k
  net/cnxk: add cn9k anti replay support for security offload
  net/cnxk: add cn10k IPsec transport mode support
  net/cnxk: update ethertype for mixed IPsec tunnel versions
  net/cnxk: allow zero udp6 checksum for non inline device
  net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1

 doc/guides/nics/cnxk.rst                         |  122 +++
 doc/guides/rel_notes/release_21_11.rst           |    5 +
 drivers/common/cnxk/cnxk_security.c              |  212 +++++
 drivers/common/cnxk/cnxk_security.h              |   12 +
 drivers/common/cnxk/cnxk_security_ar.h           |  184 ++++
 drivers/common/cnxk/hw/cpt.h                     |   19 +
 drivers/common/cnxk/meson.build                  |    3 +
 drivers/common/cnxk/roc_api.h                    |   49 +-
 drivers/common/cnxk/roc_constants.h              |   58 ++
 drivers/common/cnxk/roc_cpt.c                    |   54 +-
 drivers/common/cnxk/roc_cpt.h                    |   10 +
 drivers/common/cnxk/roc_cpt_debug.c              |   63 +-
 drivers/common/cnxk/roc_cpt_priv.h               |    1 +
 drivers/common/cnxk/roc_idev.c                   |    2 +
 drivers/common/cnxk/roc_idev_priv.h              |    3 +
 drivers/common/cnxk/roc_io.h                     |    9 +
 drivers/common/cnxk/roc_io_generic.h             |    3 +-
 drivers/common/cnxk/roc_irq.c                    |    7 +-
 drivers/common/cnxk/roc_nix.c                    |    2 +-
 drivers/common/cnxk/roc_nix.h                    |    7 +
 drivers/common/cnxk/roc_nix_debug.c              |  168 +++-
 drivers/common/cnxk/roc_nix_fc.c                 |   23 +-
 drivers/common/cnxk/roc_nix_inl.c                |  739 ++++++++++++++++
 drivers/common/cnxk/roc_nix_inl.h                |  169 ++++
 drivers/common/cnxk/roc_nix_inl_dev.c            |  547 ++++++++++++
 drivers/common/cnxk/roc_nix_inl_dev_irq.c        |  359 ++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h           |   62 ++
 drivers/common/cnxk/roc_nix_priv.h               |   31 +
 drivers/common/cnxk/roc_nix_queue.c              |   98 ++-
 drivers/common/cnxk/roc_npc.c                    |   27 +-
 drivers/common/cnxk/roc_npc_mcam.c               |   28 +-
 drivers/common/cnxk/roc_platform.h               |   11 +-
 drivers/common/cnxk/roc_priv.h                   |    3 +
 drivers/common/cnxk/roc_sso.c                    |   52 +-
 drivers/common/cnxk/roc_sso_priv.h               |    9 +
 drivers/common/cnxk/version.map                  |   33 +
 drivers/event/cnxk/cn10k_eventdev.c              |   93 +-
 drivers/event/cnxk/cn10k_worker.h                |  147 +++-
 drivers/event/cnxk/cn10k_worker_deq.c            |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_burst.c      |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_ca.c         |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_tmo.c        |    2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq.c         |    2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq_seg.c     |    2 +-
 drivers/event/cnxk/cn9k_eventdev.c               |  180 ++--
 drivers/event/cnxk/cn9k_worker.h                 |  171 +++-
 drivers/event/cnxk/cn9k_worker_deq.c             |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_burst.c       |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_ca.c          |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_tmo.c         |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq.c        |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_burst.c  |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c     |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c    |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |    2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq.c          |    2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |    2 +-
 drivers/event/cnxk/cnxk_eventdev_adptr.c         |   36 +-
 drivers/net/cnxk/cn10k_ethdev.c                  |   36 +-
 drivers/net/cnxk/cn10k_ethdev.h                  |   48 ++
 drivers/net/cnxk/cn10k_ethdev_sec.c              |  492 +++++++++++
 drivers/net/cnxk/cn10k_rx.c                      |   31 +-
 drivers/net/cnxk/cn10k_rx.h                      |  649 +++++++++++---
 drivers/net/cnxk/cn10k_rx_mseg.c                 |    2 +-
 drivers/net/cnxk/cn10k_rx_vec.c                  |    4 +-
 drivers/net/cnxk/cn10k_rx_vec_mseg.c             |    4 +-
 drivers/net/cnxk/cn10k_tx.c                      |   31 +-
 drivers/net/cnxk/cn10k_tx.h                      | 1006 +++++++++++++++++++---
 drivers/net/cnxk/cn10k_tx_mseg.c                 |    2 +-
 drivers/net/cnxk/cn10k_tx_vec.c                  |    2 +-
 drivers/net/cnxk/cn10k_tx_vec_mseg.c             |    2 +-
 drivers/net/cnxk/cn9k_ethdev.c                   |   23 +
 drivers/net/cnxk/cn9k_ethdev.h                   |   64 ++
 drivers/net/cnxk/cn9k_ethdev_sec.c               |  382 ++++++++
 drivers/net/cnxk/cn9k_rx.c                       |   31 +-
 drivers/net/cnxk/cn9k_rx.h                       |  493 +++++++++--
 drivers/net/cnxk/cn9k_rx_mseg.c                  |    2 +-
 drivers/net/cnxk/cn9k_rx_vec.c                   |    2 +-
 drivers/net/cnxk/cn9k_rx_vec_mseg.c              |    2 +-
 drivers/net/cnxk/cn9k_tx.c                       |   29 +-
 drivers/net/cnxk/cn9k_tx.h                       |  393 ++++++---
 drivers/net/cnxk/cn9k_tx_mseg.c                  |    2 +-
 drivers/net/cnxk/cn9k_tx_vec.c                   |    2 +-
 drivers/net/cnxk/cn9k_tx_vec_mseg.c              |    2 +-
 drivers/net/cnxk/cnxk_ethdev.c                   |  221 ++++-
 drivers/net/cnxk/cnxk_ethdev.h                   |  124 ++-
 drivers/net/cnxk/cnxk_ethdev_devargs.c           |   88 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c               |  315 +++++++
 drivers/net/cnxk/cnxk_lookup.c                   |   50 +-
 drivers/net/cnxk/meson.build                     |    3 +
 drivers/net/cnxk/version.map                     |    5 +
 usertools/dpdk-devbind.py                        |    8 +-
 93 files changed, 7501 insertions(+), 896 deletions(-)
 create mode 100644 drivers/common/cnxk/cnxk_security_ar.h
 create mode 100644 drivers/common/cnxk/roc_constants.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h
 create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c

-- 
2.8.4



* [dpdk-dev] [PATCH 01/27] common/cnxk: add security support for cn9k fast path
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 02/27] common/cnxk: add helper API to dump cpt parse header Nithin Dabilpuram
                   ` (28 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Ray Kinsella
  Cc: jerinj, schalla, dev

From: Srujana Challa <schalla@marvell.com>

Add security support to initialize cn9k fast path SA data
for AES-GCM and AES-CBC + HMAC-SHA1.

Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/cnxk_security.c | 211 ++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/cnxk_security.h |  12 ++
 drivers/common/cnxk/version.map     |   4 +
 3 files changed, 227 insertions(+)

diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index 4f7fd1b..c25b3fd 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -383,6 +383,217 @@ cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
 	return !!sa->w2.s.valid;
 }
 
+static inline int
+ipsec_xfrm_verify(struct rte_security_ipsec_xform *ipsec_xfrm,
+		  struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	if (crypto_xfrm->next == NULL)
+		return -EINVAL;
+
+	if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+		if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
+		    crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return -EINVAL;
+	} else {
+		if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+		    crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
+			       uint8_t *cipher_key, uint8_t *hmac_opad_ipad,
+			       struct rte_security_ipsec_xform *ipsec_xfrm,
+			       struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+	int rc, length, auth_key_len;
+	const uint8_t *key = NULL;
+
+	/* Set direction */
+	switch (ipsec_xfrm->direction) {
+	case RTE_SECURITY_IPSEC_SA_DIR_INGRESS:
+		ctl->direction = ROC_IE_SA_DIR_INBOUND;
+		auth_xfrm = crypto_xfrm;
+		cipher_xfrm = crypto_xfrm->next;
+		break;
+	case RTE_SECURITY_IPSEC_SA_DIR_EGRESS:
+		ctl->direction = ROC_IE_SA_DIR_OUTBOUND;
+		cipher_xfrm = crypto_xfrm;
+		auth_xfrm = crypto_xfrm->next;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set protocol - ESP vs AH */
+	switch (ipsec_xfrm->proto) {
+	case RTE_SECURITY_IPSEC_SA_PROTO_ESP:
+		ctl->ipsec_proto = ROC_IE_SA_PROTOCOL_ESP;
+		break;
+	case RTE_SECURITY_IPSEC_SA_PROTO_AH:
+		return -ENOTSUP;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set mode - transport vs tunnel */
+	switch (ipsec_xfrm->mode) {
+	case RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT:
+		ctl->ipsec_mode = ROC_IE_SA_MODE_TRANSPORT;
+		break;
+	case RTE_SECURITY_IPSEC_SA_MODE_TUNNEL:
+		ctl->ipsec_mode = ROC_IE_SA_MODE_TUNNEL;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set encryption algorithm */
+	if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		length = crypto_xfrm->aead.key.length;
+
+		switch (crypto_xfrm->aead.algo) {
+		case RTE_CRYPTO_AEAD_AES_GCM:
+			ctl->enc_type = ROC_IE_ON_SA_ENC_AES_GCM;
+			ctl->auth_type = ROC_IE_ON_SA_AUTH_NULL;
+			memcpy(salt, &ipsec_xfrm->salt, 4);
+			key = crypto_xfrm->aead.key.data;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+
+	} else {
+		rc = ipsec_xfrm_verify(ipsec_xfrm, crypto_xfrm);
+		if (rc)
+			return rc;
+
+		switch (cipher_xfrm->cipher.algo) {
+		case RTE_CRYPTO_CIPHER_AES_CBC:
+			ctl->enc_type = ROC_IE_ON_SA_ENC_AES_CBC;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+
+		switch (auth_xfrm->auth.algo) {
+		case RTE_CRYPTO_AUTH_SHA1_HMAC:
+			ctl->auth_type = ROC_IE_ON_SA_AUTH_SHA1;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+		auth_key_len = auth_xfrm->auth.key.length;
+		if (auth_key_len < 20 || auth_key_len > 64)
+			return -ENOTSUP;
+
+		key = cipher_xfrm->cipher.key.data;
+		length = cipher_xfrm->cipher.key.length;
+
+		ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+	}
+
+	switch (length) {
+	case ROC_CPT_AES128_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_128;
+		break;
+	case ROC_CPT_AES192_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_192;
+		break;
+	case ROC_CPT_AES256_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_256;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	memcpy(cipher_key, key, length);
+
+	if (ipsec_xfrm->options.esn)
+		ctl->esn_en = 1;
+
+	ctl->spi = rte_cpu_to_be_32(ipsec_xfrm->spi);
+	return 0;
+}
+
+int
+cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
+			   struct rte_security_ipsec_xform *ipsec_xfrm,
+			   struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
+	int rc;
+
+	rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
+					    sa->hmac_key, ipsec_xfrm,
+					    crypto_xfrm);
+	if (rc)
+		return rc;
+
+	rte_wmb();
+
+	/* Enable SA */
+	ctl->valid = 1;
+	return 0;
+}
+
+int
+cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
+			    struct rte_security_ipsec_xform *ipsec_xfrm,
+			    struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct rte_security_ipsec_tunnel_param *tunnel = &ipsec_xfrm->tunnel;
+	struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
+	int rc;
+
+	/* Fill common params */
+	rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
+					    sa->hmac_key, ipsec_xfrm,
+					    crypto_xfrm);
+	if (rc)
+		return rc;
+
+	if (ipsec_xfrm->mode != RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
+		goto skip_tunnel_info;
+
+	/* Tunnel header info */
+	switch (tunnel->type) {
+	case RTE_SECURITY_IPSEC_TUNNEL_IPV4:
+		memcpy(&sa->ip_src, &tunnel->ipv4.src_ip,
+		       sizeof(struct in_addr));
+		memcpy(&sa->ip_dst, &tunnel->ipv4.dst_ip,
+		       sizeof(struct in_addr));
+		break;
+	case RTE_SECURITY_IPSEC_TUNNEL_IPV6:
+		return -ENOTSUP;
+	default:
+		return -EINVAL;
+	}
+
+skip_tunnel_info:
+	rte_wmb();
+
+	/* Enable SA */
+	ctl->valid = 1;
+	return 0;
+}
+
+bool
+cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa)
+{
+	return !!sa->ctl.valid;
+}
+
+bool
+cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa)
+{
+	return !!sa->ctl.valid;
+}
+
 uint8_t
 cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
 		     enum rte_crypto_auth_algorithm a_algo,
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 602f583..db97887 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -46,4 +46,16 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
 bool __roc_api cnxk_ot_ipsec_inb_sa_valid(struct roc_ot_ipsec_inb_sa *sa);
 bool __roc_api cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa);
 
+/* [CN9K, CN10K) */
+int __roc_api
+cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
+			   struct rte_security_ipsec_xform *ipsec_xfrm,
+			   struct rte_crypto_sym_xform *crypto_xfrm);
+int __roc_api
+cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
+			    struct rte_security_ipsec_xform *ipsec_xfrm,
+			    struct rte_crypto_sym_xform *crypto_xfrm);
+bool __roc_api cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa);
+bool __roc_api cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa);
+
 #endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 34a844b..7814b60 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -14,6 +14,10 @@ INTERNAL {
 	cnxk_logtype_sso;
 	cnxk_logtype_tim;
 	cnxk_logtype_tm;
+	cnxk_onf_ipsec_inb_sa_fill;
+	cnxk_onf_ipsec_outb_sa_fill;
+	cnxk_onf_ipsec_inb_sa_valid;
+	cnxk_onf_ipsec_outb_sa_valid;
 	cnxk_ot_ipsec_inb_sa_fill;
 	cnxk_ot_ipsec_outb_sa_fill;
 	cnxk_ot_ipsec_inb_sa_valid;
-- 
2.8.4
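
[Editorial note] A hedged usage sketch of the new helpers, not taken from the
series: filling and enabling an inbound cn9k SA. It assumes the caller passes
the application-supplied transforms (for ingress the chain must be AUTH
followed by CIPHER, as ipsec_xfrm_verify() enforces) and that plt_zmalloc()/
plt_free() are the usual common/cnxk platform allocators.

static int
inb_sa_create(struct rte_security_ipsec_xform *ipsec_xfrm,
	      struct rte_crypto_sym_xform *crypto_xfrm,
	      struct roc_onf_ipsec_inb_sa **out)
{
	struct roc_onf_ipsec_inb_sa *sa;
	int rc;

	sa = plt_zmalloc(sizeof(*sa), 0);
	if (sa == NULL)
		return -ENOMEM;

	rc = cnxk_onf_ipsec_inb_sa_fill(sa, ipsec_xfrm, crypto_xfrm);
	if (rc) {
		plt_free(sa);
		return rc;
	}

	/* The fill routine sets ctl.valid after a write barrier, so the
	 * SA is live for hardware once the call returns.
	 */
	if (!cnxk_onf_ipsec_inb_sa_valid(sa)) {
		plt_free(sa);
		return -EIO;
	}

	*out = sa;
	return 0;
}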



* [dpdk-dev] [PATCH 02/27] common/cnxk: add helper API to dump cpt parse header
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 01/27] common/cnxk: add security support for cn9k fast path Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 03/27] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
                   ` (27 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Ray Kinsella
  Cc: jerinj, schalla, dev

Add a helper API to dump the CPT parse header.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_cpt.h       |  2 ++
 drivers/common/cnxk/roc_cpt_debug.c | 31 +++++++++++++++++++++++++++++++
 drivers/common/cnxk/version.map     |  1 +
 3 files changed, 34 insertions(+)

diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index f0f505a..9b55303 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -154,4 +154,6 @@ void __roc_api roc_cpt_iq_enable(struct roc_cpt_lf *lf);
 int __roc_api roc_cpt_lmtline_init(struct roc_cpt *roc_cpt,
 				   struct roc_cpt_lmtline *lmtline, int lf_id);
 
+void __roc_api roc_cpt_parse_hdr_dump(const struct cpt_parse_hdr_s *cpth);
+
 #endif /* _ROC_CPT_H_ */
diff --git a/drivers/common/cnxk/roc_cpt_debug.c b/drivers/common/cnxk/roc_cpt_debug.c
index 9a9dcba..a6c9004 100644
--- a/drivers/common/cnxk/roc_cpt_debug.c
+++ b/drivers/common/cnxk/roc_cpt_debug.c
@@ -5,6 +5,37 @@
 #include "roc_api.h"
 #include "roc_priv.h"
 
+void
+roc_cpt_parse_hdr_dump(const struct cpt_parse_hdr_s *cpth)
+{
+	plt_print("CPT_PARSE \t0x%p:", cpth);
+
+	/* W0 */
+	plt_print("W0: cookie \t0x%x\t\tmatch_id \t0x%04x\t\terr_sum \t%u \t",
+		  cpth->w0.cookie, cpth->w0.match_id, cpth->w0.err_sum);
+	plt_print("W0: reas_sts \t0x%x\t\tet_owr \t%u\t\tpkt_fmt \t%u \t",
+		  cpth->w0.reas_sts, cpth->w0.et_owr, cpth->w0.pkt_fmt);
+	plt_print("W0: pad_len \t%u\t\tnum_frags \t%u\t\tpkt_out \t%u \t",
+		  cpth->w0.pad_len, cpth->w0.num_frags, cpth->w0.pkt_out);
+
+	/* W1 */
+	plt_print("W1: wqe_ptr \t0x%016lx\t", cpth->wqe_ptr);
+
+	/* W2 */
+	plt_print("W2: frag_age \t0x%x\t\torig_pf_func \t0x%04x",
+		  cpth->w2.frag_age, cpth->w2.orig_pf_func);
+	plt_print("W2: il3_off \t0x%x\t\tfi_pad \t0x%x\t\tfi_offset \t0x%x \t",
+		  cpth->w2.il3_off, cpth->w2.fi_pad, cpth->w2.fi_offset);
+
+	/* W3 */
+	plt_print("W3: hw_ccode \t0x%x\t\tuc_ccode \t0x%x\t\tspi \t0x%08x",
+		  cpth->w3.hw_ccode, cpth->w3.uc_ccode, cpth->w3.spi);
+
+	/* W4 */
+	plt_print("W4: esn \t%" PRIx64 " \t OR frag1_wqe_ptr \t0x%" PRIx64,
+		  cpth->esn, cpth->frag1_wqe_ptr);
+}
+
 static int
 cpt_af_reg_read(struct roc_cpt *roc_cpt, uint64_t reg, uint64_t *val)
 {
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 7814b60..5dbb21c 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -66,6 +66,7 @@ INTERNAL {
 	roc_cpt_lf_fini;
 	roc_cpt_lfs_print;
 	roc_cpt_lmtline_init;
+	roc_cpt_parse_hdr_dump;
 	roc_cpt_rxc_time_cfg;
 	roc_error_msg_get;
 	roc_hash_sha1_gen;
-- 
2.8.4
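
A short usage sketch, hedged: where the CPT parse header sits relative to
packet data is defined by the Rx support patches later in this series, so the
mbuf offset below is purely illustrative.

	/* Dump the parse header prepended to an inbound inline-IPsec pkt. */
	const struct cpt_parse_hdr_s *cpth =
		rte_pktmbuf_mtod(mbuf, const struct cpt_parse_hdr_s *);

	roc_cpt_parse_hdr_dump(cpth);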



* [dpdk-dev] [PATCH 03/27] common/cnxk: allow reuse of SSO API for inline dev
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 01/27] common/cnxk: add security support for cn9k fast path Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 02/27] common/cnxk: add helper API to dump cpt parse header Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 04/27] common/cnxk: change nix debug API and queue API interface Nithin Dabilpuram
                   ` (26 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Rework the interface of internal SSO functions so they can be reused for
the nix inline device's SSO LFs.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_sso.c      | 52 ++++++++++++++++++++++++--------------
 drivers/common/cnxk/roc_sso_priv.h |  9 +++++++
 2 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 1ccf262..bdf973f 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -6,11 +6,10 @@
 #include "roc_priv.h"
 
 /* Private functions. */
-static int
-sso_lf_alloc(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf,
+int
+sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	     void **rsp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	int rc = -ENOSPC;
 
 	switch (lf_type) {
@@ -41,10 +40,9 @@ sso_lf_alloc(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf,
 	return 0;
 }
 
-static int
-sso_lf_free(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf)
+int
+sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	int rc = -ENOSPC;
 
 	switch (lf_type) {
@@ -152,7 +150,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 	return 0;
 }
 
-static void
+void
 sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
 		    uint16_t hwgrp[], uint16_t n, uint16_t enable)
 {
@@ -172,8 +170,10 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
 		k = k ? k : 4;
 		for (j = 0; j < k; j++) {
 			mask[j] = hwgrp[i + j] | enable << 14;
-			enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
-				 plt_bitmap_clear(bmp, hwgrp[i + j]);
+			if (bmp) {
+				enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
+					 plt_bitmap_clear(bmp, hwgrp[i + j]);
+			}
 			plt_sso_dbg("HWS %d Linked to HWGRP %d", hws,
 				    hwgrp[i + j]);
 		}
@@ -388,10 +388,8 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 }
 
 int
-roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
-			uint16_t hwgrps)
+sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	struct sso_hw_setconfig *req;
 	int rc = -ENOSPC;
 
@@ -406,9 +404,17 @@ roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 }
 
 int
-roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
+roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
+			uint16_t hwgrps)
 {
 	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+
+	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+}
+
+int
+sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
+{
 	struct sso_hw_xaq_release *req;
 
 	req = mbox_alloc_msg_sso_hw_release_xaq_aura(dev->mbox);
@@ -420,6 +426,14 @@ roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 }
 
 int
+roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
+{
+	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+
+	return sso_hwgrp_release_xaq(dev, hwgrps);
+}
+
+int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
@@ -468,13 +482,13 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto hwgrp_atch_fail;
 	}
 
-	rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWS, nb_hws, NULL);
+	rc = sso_lf_alloc(&sso->dev, SSO_LF_TYPE_HWS, nb_hws, NULL);
 	if (rc < 0) {
 		plt_err("Unable to alloc SSO HWS LFs");
 		goto hws_alloc_fail;
 	}
 
-	rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp,
+	rc = sso_lf_alloc(&sso->dev, SSO_LF_TYPE_HWGRP, nb_hwgrp,
 			  (void **)&rsp_hwgrp);
 	if (rc < 0) {
 		plt_err("Unable to alloc SSO HWGRP Lfs");
@@ -503,9 +517,9 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 
 	return 0;
 sso_msix_fail:
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, nb_hwgrp);
 hwgrp_alloc_fail:
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, nb_hws);
 hws_alloc_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
@@ -523,8 +537,8 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
 				 roc_sso->nb_hws, roc_sso->nb_hwgrp);
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
 
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 5361d4f..8dffa3f 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -39,6 +39,15 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
 	return (struct sso *)&roc_sso->reserved[0];
 }
 
+/* SSO LF ops */
+int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
+		 void **rsp);
+int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
+			 uint16_t hwgrp[], uint16_t n, uint16_t enable);
+int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
+int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
+
 /* SSO IRQ */
 int sso_register_irqs_priv(struct roc_sso *roc_sso,
 			   struct plt_intr_handle *handle, uint16_t nb_hws,
-- 
2.8.4
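
An editorial sketch of why the dev-based interface matters: the nix inline
device (added later in this series) owns a bare struct dev rather than a
roc_sso, and with this rework it can share the same LF allocation path. The
inl_dev object is from patch 06 and is shown only for shape.

	void *rsp = NULL;
	int rc;

	/* Allocate one SSO HWGRP LF against the inline device's own mbox. */
	rc = sso_lf_alloc(&inl_dev->dev, SSO_LF_TYPE_HWGRP, 1, &rsp);
	if (rc) {
		plt_err("Failed to alloc SSO HWGRP LF, rc=%d", rc);
		return rc;
	}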



* [dpdk-dev] [PATCH 04/27] common/cnxk: change nix debug API and queue API interface
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (2 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 03/27] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 05/27] common/cnxk: add nix inline device irq API Nithin Dabilpuram
                   ` (25 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Change the nix debug API and queue API interfaces so they can be used by
internal nix inline device initialization.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix.c       |   2 +-
 drivers/common/cnxk/roc_nix_debug.c | 118 +++++++++++++++++++++++++++---------
 drivers/common/cnxk/roc_nix_priv.h  |  16 +++++
 drivers/common/cnxk/roc_nix_queue.c |  89 +++++++++++++++------------
 4 files changed, 159 insertions(+), 66 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index 23d508b..3ab954e 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -300,7 +300,7 @@ sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix)
 	}
 }
 
-static inline uint64_t
+uint64_t
 nix_get_blkaddr(struct dev *dev)
 {
 	uint64_t reg;
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 6e56513..9539bb9 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -110,17 +110,12 @@ roc_nix_lf_get_reg_count(struct roc_nix *roc_nix)
 }
 
 int
-roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
+nix_lf_gen_reg_dump(uintptr_t nix_lf_base, uint64_t *data)
 {
-	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
-	uintptr_t nix_lf_base = nix->base;
 	bool dump_stdout;
 	uint64_t reg;
 	uint32_t i;
 
-	if (roc_nix == NULL)
-		return NIX_ERR_PARAM;
-
 	dump_stdout = data ? 0 : 1;
 
 	for (i = 0; i < PLT_DIM(nix_lf_reg); i++) {
@@ -131,8 +126,21 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 			*data++ = reg;
 	}
 
+	return i;
+}
+
+int
+nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint8_t lf_tx_stats,
+		     uint8_t lf_rx_stats)
+{
+	uint32_t i, count = 0;
+	bool dump_stdout;
+	uint64_t reg;
+
+	dump_stdout = data ? 0 : 1;
+
 	/* NIX_LF_TX_STATX */
-	for (i = 0; i < nix->lf_tx_stats; i++) {
+	for (i = 0; i < lf_tx_stats; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_TX_STATX(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_TX_STATX", i,
@@ -140,9 +148,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_RX_STATX */
-	for (i = 0; i < nix->lf_rx_stats; i++) {
+	for (i = 0; i < lf_rx_stats; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_RX_STATX(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_RX_STATX", i,
@@ -151,8 +160,21 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 			*data++ = reg;
 	}
 
+	return count + i;
+}
+
+int
+nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints,
+		    uint16_t cints)
+{
+	uint32_t i, count = 0;
+	bool dump_stdout;
+	uint64_t reg;
+
+	dump_stdout = data ? 0 : 1;
+
 	/* NIX_LF_QINTX_CNT*/
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_CNT", i,
@@ -160,9 +182,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_INT */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_INT", i,
@@ -170,9 +193,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_ENA_W1S */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1S",
@@ -180,9 +204,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_ENA_W1C */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1C",
@@ -190,9 +215,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_CNT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_CNT", i,
@@ -200,9 +226,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_WAIT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_WAIT", i,
@@ -210,9 +237,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_INT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT", i,
@@ -220,9 +248,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_INT_W1S */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT_W1S",
@@ -230,9 +259,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_ENA_W1S */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1S",
@@ -240,9 +270,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_ENA_W1C */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1C",
@@ -250,12 +281,40 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+
+	return count + i;
+}
+
+int
+roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	bool dump_stdout = data ? 0 : 1;
+	uintptr_t nix_base;
+	uint32_t i;
+
+	if (roc_nix == NULL)
+		return NIX_ERR_PARAM;
+
+	nix_base = nix->base;
+	/* General registers */
+	i = nix_lf_gen_reg_dump(nix_base, data);
+
+	/* Rx, Tx stat registers */
+	i += nix_lf_stat_reg_dump(nix_base, dump_stdout ? NULL : &data[i],
+				  nix->lf_tx_stats, nix->lf_rx_stats);
+
+	/* Intr registers */
+	i += nix_lf_int_reg_dump(nix_base, dump_stdout ? NULL : &data[i],
+				 nix->qints, nix->cints);
+
 	return 0;
 }
 
-static int
-nix_q_ctx_get(struct mbox *mbox, uint8_t ctype, uint16_t qid, __io void **ctx_p)
+int
+nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid, __io void **ctx_p)
 {
+	struct mbox *mbox = dev->mbox;
 	int rc;
 
 	if (roc_model_is_cn9k()) {
@@ -485,7 +544,7 @@ nix_cn9k_lf_rq_dump(__io struct nix_rq_ctx_s *ctx)
 	nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
 }
 
-static inline void
+void
 nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx)
 {
 	nix_dump("W0: wqe_aura \t\t\t%d\nW0: len_ol3_dis \t\t\t%d",
@@ -595,12 +654,12 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	int rc = -1, q, rq = nix->nb_rx_queues;
-	struct mbox *mbox = (&nix->dev)->mbox;
 	struct npa_aq_enq_rsp *npa_rsp;
 	struct npa_aq_enq_req *npa_aq;
-	volatile void *ctx;
+	struct dev *dev = &nix->dev;
 	int sq = nix->nb_tx_queues;
 	struct npa_lf *npa_lf;
+	volatile void *ctx;
 	uint32_t sqb_aura;
 
 	npa_lf = idev_npa_obj_get();
@@ -608,7 +667,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 		return NPA_ERR_DEVICE_NOT_BOUNDED;
 
 	for (q = 0; q < rq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_CQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_CQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get cq context");
 			goto fail;
@@ -619,7 +678,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 	}
 
 	for (q = 0; q < rq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_RQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get rq context");
 			goto fail;
@@ -633,7 +692,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 	}
 
 	for (q = 0; q < sq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_SQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_SQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get sq context");
 			goto fail;
@@ -686,11 +745,13 @@ roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
+	const uint64_t *sgs = (const uint64_t *)(rx + 1);
+	int i;
 
 	nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
 		 cq->tag, cq->q, cq->node, cq->cqe_type);
 
-	nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d", rx->chan,
+	nix_dump("W0: chan \t0x%x\t\tdesc_sizem1 \t%d", rx->chan,
 		 rx->desc_sizem1);
 	nix_dump("W0: imm_copy \t%d\t\texpress \t%d", rx->imm_copy,
 		 rx->express);
@@ -731,6 +792,9 @@ roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 
 	nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
 		 rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
+
+	for (i = 0; i < (rx->desc_sizem1 + 1) << 1; i++)
+		nix_dump("sg[%u] = %p", i, (void *)sgs[i]);
 }
 
 void
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 9dc0c88..79c15ea 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -348,6 +348,12 @@ int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
 			 bool rr_quantum_only);
 int nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix);
 
+int nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
+		    bool cfg, bool ena);
+int nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
+	       bool ena);
+int nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable);
+
 /*
  * TM priv utils.
  */
@@ -393,4 +399,14 @@ void nix_tm_node_free(struct nix_tm_node *node);
 struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void);
 void nix_tm_shaper_profile_free(struct nix_tm_shaper_profile *profile);
 
+uint64_t nix_get_blkaddr(struct dev *dev);
+void nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx);
+int nix_lf_gen_reg_dump(uintptr_t nix_lf_base, uint64_t *data);
+int nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data,
+			 uint8_t lf_tx_stats, uint8_t lf_rx_stats);
+int nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints,
+			uint16_t cints);
+int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid,
+		  __io void **ctx_p);
+
 #endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 7e2f86e..de63361 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -27,46 +27,54 @@ nix_qsize_clampup(uint32_t val)
 }
 
 int
+nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable)
+{
+	struct mbox *mbox = dev->mbox;
+
+	/* Pkts will be dropped silently if RQ is disabled */
+	if (roc_model_is_cn9k()) {
+		struct nix_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_aq_enq(mbox);
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		aq->rq.ena = enable;
+		aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	} else {
+		struct nix_cn10k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		aq->rq.ena = enable;
+		aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	}
+
+	return mbox_process(mbox);
+}
+
+int
 roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable)
 {
 	struct nix *nix = roc_nix_to_nix_priv(rq->roc_nix);
-	struct mbox *mbox = (&nix->dev)->mbox;
 	int rc;
 
-	/* Pkts will be dropped silently if RQ is disabled */
-	if (roc_model_is_cn9k()) {
-		struct nix_aq_enq_req *aq;
-
-		aq = mbox_alloc_msg_nix_aq_enq(mbox);
-		aq->qidx = rq->qid;
-		aq->ctype = NIX_AQ_CTYPE_RQ;
-		aq->op = NIX_AQ_INSTOP_WRITE;
-
-		aq->rq.ena = enable;
-		aq->rq_mask.ena = ~(aq->rq_mask.ena);
-	} else {
-		struct nix_cn10k_aq_enq_req *aq;
-
-		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
-		aq->qidx = rq->qid;
-		aq->ctype = NIX_AQ_CTYPE_RQ;
-		aq->op = NIX_AQ_INSTOP_WRITE;
-
-		aq->rq.ena = enable;
-		aq->rq_mask.ena = ~(aq->rq_mask.ena);
-	}
-
-	rc = mbox_process(mbox);
+	rc = nix_rq_ena_dis(&nix->dev, rq, enable);
 
 	if (roc_model_is_cn10k())
 		plt_write64(rq->qid, nix->base + NIX_LF_OP_VWQE_FLUSH);
 	return rc;
 }
 
-static int
-rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+int
+nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
+		bool cfg, bool ena)
 {
-	struct mbox *mbox = (&nix->dev)->mbox;
+	struct mbox *mbox = dev->mbox;
 	struct nix_aq_enq_req *aq;
 
 	aq = mbox_alloc_msg_nix_aq_enq(mbox);
@@ -116,7 +124,7 @@ rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
 	aq->rq.rq_int_ena = 0;
 	/* Many to one reduction */
-	aq->rq.qint_idx = rq->qid % nix->qints;
+	aq->rq.qint_idx = rq->qid % qints;
 	aq->rq.xqe_drop_ena = 1;
 
 	/* If RED enabled, then fill enable for all cases */
@@ -177,11 +185,12 @@ rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	return 0;
 }
 
-static int
-rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+int
+nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
+	   bool ena)
 {
-	struct mbox *mbox = (&nix->dev)->mbox;
 	struct nix_cn10k_aq_enq_req *aq;
+	struct mbox *mbox = dev->mbox;
 
 	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
 	aq->qidx = rq->qid;
@@ -218,8 +227,10 @@ rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 		aq->rq.cq = rq->qid;
 	}
 
-	if (rq->ipsech_ena)
+	if (rq->ipsech_ena) {
 		aq->rq.ipsech_ena = 1;
+		aq->rq.ipsecd_drop_en = 1;
+	}
 
 	aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
 
@@ -258,7 +269,7 @@ rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
 	aq->rq.rq_int_ena = 0;
 	/* Many to one reduction */
-	aq->rq.qint_idx = rq->qid % nix->qints;
+	aq->rq.qint_idx = rq->qid % qints;
 	aq->rq.xqe_drop_ena = 1;
 
 	/* If RED enabled, then fill enable for all cases */
@@ -357,6 +368,7 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
 	bool is_cn9k = roc_model_is_cn9k();
+	struct dev *dev = &nix->dev;
 	int rc;
 
 	if (roc_nix == NULL || rq == NULL)
@@ -368,9 +380,9 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	rq->roc_nix = roc_nix;
 
 	if (is_cn9k)
-		rc = rq_cn9k_cfg(nix, rq, false, ena);
+		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, false, ena);
 	else
-		rc = rq_cfg(nix, rq, false, ena);
+		rc = nix_rq_cfg(dev, rq, nix->qints, false, ena);
 
 	if (rc)
 		return rc;
@@ -384,6 +396,7 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
 	bool is_cn9k = roc_model_is_cn9k();
+	struct dev *dev = &nix->dev;
 	int rc;
 
 	if (roc_nix == NULL || rq == NULL)
@@ -395,9 +408,9 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	rq->roc_nix = roc_nix;
 
 	if (is_cn9k)
-		rc = rq_cn9k_cfg(nix, rq, true, ena);
+		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, true, ena);
 	else
-		rc = rq_cfg(nix, rq, true, ena);
+		rc = nix_rq_cfg(dev, rq, nix->qints, true, ena);
 
 	if (rc)
 		return rc;
-- 
2.8.4
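
The split makes each dump helper callable with only a BAR base and counts; a
caller that has no struct nix can now reuse them, which is exactly what the
inline device does in the next patch. A sketch (inl_dev fields come from
patches 05/06):

	uintptr_t nix_base = inl_dev->nix_base;

	/* A NULL data pointer means "dump to stdout". */
	nix_lf_gen_reg_dump(nix_base, NULL);
	nix_lf_stat_reg_dump(nix_base, NULL, inl_dev->lf_tx_stats,
			     inl_dev->lf_rx_stats);
	nix_lf_int_reg_dump(nix_base, NULL, inl_dev->qints, inl_dev->cints);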



* [dpdk-dev] [PATCH 05/27] common/cnxk: add nix inline device irq API
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (3 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 04/27] common/cnxk: change nix debug API and queue API interface Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 06/27] common/cnxk: add nix inline device init and fini Nithin Dabilpuram
                   ` (24 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Add API to set up nix inline device IRQs.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/meson.build           |   1 +
 drivers/common/cnxk/roc_api.h             |   3 +
 drivers/common/cnxk/roc_irq.c             |   7 +-
 drivers/common/cnxk/roc_nix_inl.h         |  10 +
 drivers/common/cnxk/roc_nix_inl_dev_irq.c | 359 ++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h    |  57 +++++
 drivers/common/cnxk/roc_platform.h        |   9 +-
 drivers/common/cnxk/roc_priv.h            |   3 +
 8 files changed, 442 insertions(+), 7 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_nix_inl.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 8a551d1..207ca00 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
         'roc_nix_mcast.c',
         'roc_nix_npc.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 7dec845..c1af95e 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -129,4 +129,7 @@
 /* HASH computation */
 #include "roc_hash.h"
 
+/* NIX Inline dev */
+#include "roc_nix_inl.h"
+
 #endif /* _ROC_API_H_ */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 4c2b4c3..28fe691 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -138,9 +138,10 @@ dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
 		irq_init(intr_handle);
 	}
 
-	if (vec > intr_handle->max_intr) {
-		plt_err("Vector=%d greater than max_intr=%d", vec,
-			intr_handle->max_intr);
+	if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
+		plt_err("Vector=%d greater than max_intr=%d or "
+			"max_efd=%" PRIu64,
+			vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
 		return -EINVAL;
 	}
 
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
new file mode 100644
index 0000000..1ec3dda
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_NIX_INL_H_
+#define _ROC_NIX_INL_H_
+
+/* Inline device SSO Work callback */
+typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
+
+#endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
new file mode 100644
index 0000000..25ed42f
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -0,0 +1,359 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+nix_inl_sso_work_cb(struct nix_inl_dev *inl_dev)
+{
+	uintptr_t getwrk_op = inl_dev->ssow_base + SSOW_LF_GWS_OP_GET_WORK0;
+	uintptr_t tag_wqe_op = inl_dev->ssow_base + SSOW_LF_GWS_WQE0;
+	uint32_t wdata = BIT(16) | 1;
+	union {
+		__uint128_t get_work;
+		uint64_t u64[2];
+	} gw;
+	uint64_t work;
+
+again:
+	/* Try to do get work */
+	gw.get_work = wdata;
+	plt_write64(gw.u64[0], getwrk_op);
+	do {
+		roc_load_pair(gw.u64[0], gw.u64[1], tag_wqe_op);
+	} while (gw.u64[0] & BIT_ULL(63));
+
+	work = gw.u64[1];
+	/* Do we have any work? */
+	if (work) {
+		if (inl_dev->work_cb)
+			inl_dev->work_cb(gw.u64, inl_dev->cb_args);
+		else
+			plt_warn("Undelivered inl dev work gw0: %p gw1: %p",
+				 (void *)gw.u64[0], (void *)gw.u64[1]);
+		goto again;
+	}
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+}
+
+static int
+nix_inl_nix_reg_dump(struct nix_inl_dev *inl_dev)
+{
+	uintptr_t nix_base = inl_dev->nix_base;
+
+	/* General registers */
+	nix_lf_gen_reg_dump(nix_base, NULL);
+
+	/* Rx, Tx stat registers */
+	nix_lf_stat_reg_dump(nix_base, NULL, inl_dev->lf_tx_stats,
+			     inl_dev->lf_rx_stats);
+
+	/* Intr registers */
+	nix_lf_int_reg_dump(nix_base, NULL, inl_dev->qints, inl_dev->cints);
+
+	return 0;
+}
+
+static void
+nix_inl_sso_hwgrp_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint64_t intr;
+
+	intr = plt_read64(sso_base + SSO_LF_GGRP_INT);
+	if (intr == 0)
+		return;
+
+	/* Check for work executable interrupt */
+	if (intr & BIT(1))
+		nix_inl_sso_work_cb(inl_dev);
+
+	if (!(intr & BIT(1)))
+		plt_err("GGRP 0 GGRP_INT=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, sso_base + SSO_LF_GGRP_INT);
+}
+
+static void
+nix_inl_sso_hws_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uint64_t intr;
+
+	intr = plt_read64(ssow_base + SSOW_LF_GWS_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("GWS 0 GWS_INT=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, ssow_base + SSOW_LF_GWS_INT);
+}
+
+int
+nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint16_t sso_msixoff, ssow_msixoff;
+	int rc;
+
+	ssow_msixoff = inl_dev->ssow_msixoff;
+	sso_msixoff = inl_dev->sso_msixoff;
+	if (sso_msixoff == MSIX_VECTOR_INVALID ||
+	    ssow_msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid SSO/SSOW MSIX offsets (0x%x, 0x%x)",
+			sso_msixoff, ssow_msixoff);
+		return -EINVAL;
+	}
+
+	/*
+	 * Setup SSOW interrupt
+	 */
+
+	/* Clear SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1C);
+	/* Register interrupt with vfio */
+	rc = dev_irq_register(handle, nix_inl_sso_hws_irq, inl_dev,
+			      ssow_msixoff + SSOW_LF_INT_VEC_IOP);
+	/* Set SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1S);
+
+	/*
+	 * Setup SSO/HWGRP interrupt
+	 */
+
+	/* Clear SSO interrupt enable */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1C);
+	/* Register IRQ */
+	rc |= dev_irq_register(handle, nix_inl_sso_hwgrp_irq, (void *)inl_dev,
+			       sso_msixoff + SSO_LF_INT_VEC_GRP);
+	/* Enable hw interrupt */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1S);
+
+	/* Setup threshold for work exec interrupt to 1 wqe in IAQ */
+	plt_write64(0x1ull, sso_base + SSO_LF_GGRP_INT_THR);
+
+	return rc;
+}
+
+void
+nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint16_t sso_msixoff, ssow_msixoff;
+
+	ssow_msixoff = inl_dev->ssow_msixoff;
+	sso_msixoff = inl_dev->sso_msixoff;
+
+	/* Clear SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1C);
+	/* Clear SSO/HWGRP interrupt enable */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1C);
+	/* Clear SSO threshold */
+	plt_write64(0, sso_base + SSO_LF_GGRP_INT_THR);
+
+	/* Unregister IRQ */
+	dev_irq_unregister(handle, nix_inl_sso_hws_irq, (void *)inl_dev,
+			   ssow_msixoff + SSOW_LF_INT_VEC_IOP);
+	dev_irq_unregister(handle, nix_inl_sso_hwgrp_irq, (void *)inl_dev,
+			   sso_msixoff + SSO_LF_INT_VEC_GRP);
+}
+
+static void
+nix_inl_nix_q_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t reg, intr;
+	uint8_t irq;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_QINTX_INT(0));
+	if (intr == 0)
+		return;
+
+	plt_err("Queue_intr=0x%" PRIx64 " qintx 0 pf=%d, vf=%d", intr, dev->pf,
+		dev->vf);
+
+	/* Get and clear RQ0 interrupt */
+	reg = roc_atomic64_add_nosync(0,
+				      (int64_t *)(nix_base + NIX_LF_RQ_OP_INT));
+	if (reg & BIT_ULL(42) /* OP_ERR */) {
+		plt_err("Failed to get rq_int");
+		return;
+	}
+	irq = reg & 0xff;
+	plt_write64(0 | irq, nix_base + NIX_LF_RQ_OP_INT);
+
+	if (irq & BIT_ULL(NIX_RQINT_DROP))
+		plt_err("RQ=0 NIX_RQINT_DROP");
+
+	if (irq & BIT_ULL(NIX_RQINT_RED))
+		plt_err("RQ=0 NIX_RQINT_RED");
+
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_QINTX_INT(0));
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+static void
+nix_inl_nix_ras_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t intr;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_RAS);
+	if (intr == 0)
+		return;
+
+	plt_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_RAS);
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+static void
+nix_inl_nix_err_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t intr;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_ERR_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_ERR_INT);
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+int
+nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t nix_base = inl_dev->nix_base;
+	uint16_t msixoff;
+	int rc;
+
+	msixoff = inl_dev->nix_msixoff;
+	if (msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid NIXLF MSIX vector offset: 0x%x", msixoff);
+		return -EINVAL;
+	}
+
+	/* Disable err interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_ERR_INT_ENA_W1C);
+	/* Disable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1C);
+
+	/* Register err irq */
+	rc = dev_irq_register(handle, nix_inl_nix_err_irq, inl_dev,
+			      msixoff + NIX_LF_INT_VEC_ERR_INT);
+	rc |= dev_irq_register(handle, nix_inl_nix_ras_irq, inl_dev,
+			       msixoff + NIX_LF_INT_VEC_POISON);
+
+	/* Enable all nix lf error irqs except RQ_DISABLED and CQ_DISABLED */
+	plt_write64(~(BIT_ULL(11) | BIT_ULL(24)),
+		    nix_base + NIX_LF_ERR_INT_ENA_W1S);
+	/* Enable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1S);
+
+	/* Setup queue irq for RQ 0 */
+
+	/* Clear QINT CNT, interrupt */
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+
+	/* Register queue irq vector */
+	rc |= dev_irq_register(handle, nix_inl_nix_q_irq, inl_dev,
+			       msixoff + NIX_LF_INT_VEC_QINT_START);
+
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
+	/* Enable QINT interrupt */
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1S(0));
+
+	return rc;
+}
+
+void
+nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t nix_base = inl_dev->nix_base;
+	uint16_t msixoff;
+
+	msixoff = inl_dev->nix_msixoff;
+	/* Disable err interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_ERR_INT_ENA_W1C);
+	/* Disable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1C);
+
+	dev_irq_unregister(handle, nix_inl_nix_err_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_ERR_INT);
+	dev_irq_unregister(handle, nix_inl_nix_ras_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_POISON);
+
+	/* Clear QINT CNT */
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
+
+	/* Disable QINT interrupt */
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+
+	/* Unregister queue irq vector */
+	dev_irq_unregister(handle, nix_inl_nix_q_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_QINT_START);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
new file mode 100644
index 0000000..f424009
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_NIX_INL_PRIV_H_
+#define _ROC_NIX_INL_PRIV_H_
+
+struct nix_inl_dev {
+	/* Base device object */
+	struct dev dev;
+
+	/* PCI device */
+	struct plt_pci_device *pci_dev;
+
+	/* LF specific BAR2 regions */
+	uintptr_t nix_base;
+	uintptr_t ssow_base;
+	uintptr_t sso_base;
+
+	/* MSIX vector offsets */
+	uint16_t nix_msixoff;
+	uint16_t ssow_msixoff;
+	uint16_t sso_msixoff;
+
+	/* SSO data */
+	uint32_t xaq_buf_size;
+	uint32_t xae_waes;
+	uint32_t iue;
+	uint64_t xaq_aura;
+	void *xaq_mem;
+	roc_nix_inl_sso_work_cb_t work_cb;
+	void *cb_args;
+
+	/* NIX data */
+	uint8_t lf_tx_stats;
+	uint8_t lf_rx_stats;
+	uint16_t cints;
+	uint16_t qints;
+	struct roc_nix_rq rq;
+	uint16_t rq_refs;
+	bool is_nix1;
+
+	/* NIX/CPT data */
+	void *inb_sa_base;
+	uint16_t inb_sa_sz;
+
+	/* Device arguments */
+	uint8_t selftest;
+	uint16_t ipsec_in_max_spi;
+};
+
+int nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev);
+void nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev);
+
+int nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev);
+void nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev);
+
+#endif /* _ROC_NIX_INL_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b..177db3d 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -113,10 +113,11 @@
 #define plt_write64(val, addr)                                                 \
 	rte_write64_relaxed((val), (volatile void *)(addr))
 
-#define plt_wmb() rte_wmb()
-#define plt_rmb() rte_rmb()
-#define plt_io_wmb() rte_io_wmb()
-#define plt_io_rmb() rte_io_rmb()
+#define plt_wmb()		rte_wmb()
+#define plt_rmb()		rte_rmb()
+#define plt_io_wmb()		rte_io_wmb()
+#define plt_io_rmb()		rte_io_rmb()
+#define plt_atomic_thread_fence rte_atomic_thread_fence
 
 #define plt_mmap       mmap
 #define PLT_PROT_READ  PROT_READ
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index 7494b8d..f72bbd5 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -38,4 +38,7 @@
 /* CPT */
 #include "roc_cpt_priv.h"
 
+/* NIX Inline dev */
+#include "roc_nix_inl_priv.h"
+
 #endif /* _ROC_PRIV_H_ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 06/27] common/cnxk: add nix inline device init and fini
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (4 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 05/27] common/cnxk: add nix inline device irq API Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 07/27] common/cnxk: add nix inline inbound and outbound support API Nithin Dabilpuram
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Ray Kinsella
  Cc: jerinj, schalla, dev

Add support to init and fini the NIX inline device, which owns
a NIX LF, an SSO LF and an SSOW LF used for inline inbound IPsec
on CN10K.
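
A minimal usage sketch of the new API follows (a hedged illustration,
not part of this patch: the PCI device handle and the SPI range below
are placeholders, and error handling is trimmed):

  struct roc_nix_inl_dev inl_dev;
  int rc;

  memset(&inl_dev, 0, sizeof(inl_dev));
  inl_dev.pci_dev = pci_dev;           /* from the driver probe path */
  inl_dev.ipsec_in_max_spi = 4096;     /* illustrative SPI range */
  inl_dev.selftest = false;

  /* Only one inline device may be probed; -EEXIST otherwise */
  rc = roc_nix_inl_dev_init(&inl_dev);
  if (rc)
          return rc;

  roc_nix_inl_dev_dump(&inl_dev);      /* optional debug dump */
  rc = roc_nix_inl_dev_fini(&inl_dev);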

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/meson.build        |   1 +
 drivers/common/cnxk/roc_api.h          |   2 +
 drivers/common/cnxk/roc_cpt.c          |   7 +-
 drivers/common/cnxk/roc_idev.c         |   2 +
 drivers/common/cnxk/roc_idev_priv.h    |   3 +
 drivers/common/cnxk/roc_nix_debug.c    |  35 +++
 drivers/common/cnxk/roc_nix_inl.h      |  55 ++++
 drivers/common/cnxk/roc_nix_inl_dev.c  | 544 +++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h |   2 +
 drivers/common/cnxk/roc_platform.h     |   2 +
 drivers/common/cnxk/version.map        |   3 +
 11 files changed, 653 insertions(+), 3 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 207ca00..e8940d7 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl_dev.c',
         'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
         'roc_nix_mcast.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index c1af95e..53f4e4b 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -53,6 +53,8 @@
 #define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
 #define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
 #define PCI_DEVID_CNXK_BPHY	      0xA089
+#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
+#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
 
 #define PCI_DEVID_CN9K_CGX  0xA059
 #define PCI_DEVID_CN10K_RPM 0xA060
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 5e35d1b..3222b3e 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -381,11 +381,12 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr,
 	if (blkaddr != RVU_BLOCK_ADDR_CPT0 && blkaddr != RVU_BLOCK_ADDR_CPT1)
 		return -EINVAL;
 
-	PLT_SET_USED(inl_dev_sso);
-
 	req = mbox_alloc_msg_cpt_lf_alloc(mbox);
 	req->nix_pf_func = 0;
-	req->sso_pf_func = idev_sso_pffunc_get();
+	if (inl_dev_sso && nix_inl_dev_pffunc_get())
+		req->sso_pf_func = nix_inl_dev_pffunc_get();
+	else
+		req->sso_pf_func = idev_sso_pffunc_get();
 	req->eng_grpmsk = eng_grpmsk;
 	req->blkaddr = blkaddr;
 
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index 1494187..648f37b 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -38,6 +38,8 @@ idev_set_defaults(struct idev_cfg *idev)
 	idev->num_lmtlines = 0;
 	idev->bphy = NULL;
 	idev->cpt = NULL;
+	idev->nix_inl_dev = NULL;
+	plt_spinlock_init(&idev->nix_inl_dev_lock);
 	__atomic_store_n(&idev->npa_refcnt, 0, __ATOMIC_RELEASE);
 }
 
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
index 84e6f1e..2c8309b 100644
--- a/drivers/common/cnxk/roc_idev_priv.h
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -9,6 +9,7 @@
 struct npa_lf;
 struct roc_bphy;
 struct roc_cpt;
+struct nix_inl_dev;
 struct idev_cfg {
 	uint16_t sso_pf_func;
 	uint16_t npa_pf_func;
@@ -20,6 +21,8 @@ struct idev_cfg {
 	uint64_t lmt_base_addr;
 	struct roc_bphy *bphy;
 	struct roc_cpt *cpt;
+	struct nix_inl_dev *nix_inl_dev;
+	plt_spinlock_t nix_inl_dev_lock;
 };
 
 /* Generic */
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 9539bb9..582f5a3 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -1213,3 +1213,38 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  \trss_alg_idx = %d", nix->rss_alg_idx);
 	nix_dump("  \ttx_pause = %d", nix->tx_pause);
 }
+
+void
+roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct nix_inl_dev *inl_dev =
+		(struct nix_inl_dev *)&roc_inl_dev->reserved;
+	struct dev *dev = &inl_dev->dev;
+
+	nix_dump("nix_inl_dev@%p", inl_dev);
+	nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
+	nix_dump("  vf = %d", dev_get_vf(dev->pf_func));
+	nix_dump("  bar2 = 0x%" PRIx64, dev->bar2);
+	nix_dump("  bar4 = 0x%" PRIx64, dev->bar4);
+
+	nix_dump("  \tpci_dev = %p", inl_dev->pci_dev);
+	nix_dump("  \tnix_base = 0x%" PRIxPTR "", inl_dev->nix_base);
+	nix_dump("  \tsso_base = 0x%" PRIxPTR "", inl_dev->sso_base);
+	nix_dump("  \tssow_base = 0x%" PRIxPTR "", inl_dev->ssow_base);
+	nix_dump("  \tnix_msixoff = %d", inl_dev->nix_msixoff);
+	nix_dump("  \tsso_msixoff = %d", inl_dev->sso_msixoff);
+	nix_dump("  \tssow_msixoff = %d", inl_dev->ssow_msixoff);
+	nix_dump("  \tnix_cints = %d", inl_dev->cints);
+	nix_dump("  \tnix_qints = %d", inl_dev->qints);
+	nix_dump("  \trq_refs = %d", inl_dev->rq_refs);
+	nix_dump("  \tinb_sa_base = 0x%p", inl_dev->inb_sa_base);
+	nix_dump("  \tinb_sa_sz = %d", inl_dev->inb_sa_sz);
+	nix_dump("  \txaq_buf_size = %u", inl_dev->xaq_buf_size);
+	nix_dump("  \txae_waes = %u", inl_dev->xae_waes);
+	nix_dump("  \tiue = %u", inl_dev->iue);
+	nix_dump("  \txaq_aura = 0x%" PRIx64, inl_dev->xaq_aura);
+	nix_dump("  \txaq_mem = 0x%p", inl_dev->xaq_mem);
+
+	nix_dump("  \tinl_dev_rq:");
+	roc_nix_rq_dump(&inl_dev->rq);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 1ec3dda..f1fe4a2 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -4,7 +4,62 @@
 #ifndef _ROC_NIX_INL_H_
 #define _ROC_NIX_INL_H_
 
+/* ONF INB HW area */
+#define ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ                                        \
+	PLT_ALIGN(sizeof(struct roc_onf_ipsec_inb_sa), ROC_ALIGN)
+/* ONF INB SW reserved area */
+#define ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD 384
+#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ                                        \
+	(ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD)
+#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2 9
+
+/* ONF OUTB HW area */
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ                                       \
+	PLT_ALIGN(sizeof(struct roc_onf_ipsec_outb_sa), ROC_ALIGN)
+/* ONF OUTB SW reserved area */
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD 128
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ                                       \
+	(ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD)
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2 8
+
+/* OT INB HW area */
+#define ROC_NIX_INL_OT_IPSEC_INB_HW_SZ                                         \
+	PLT_ALIGN(sizeof(struct roc_ot_ipsec_inb_sa), ROC_ALIGN)
+/* OT INB SW reserved area */
+#define ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD 128
+#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ                                         \
+	(ROC_NIX_INL_OT_IPSEC_INB_HW_SZ + ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD)
+#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2 10
+
+/* OT OUTB HW area */
+#define ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ                                        \
+	PLT_ALIGN(sizeof(struct roc_ot_ipsec_outb_sa), ROC_ALIGN)
+/* OT OUTB SW reserved area */
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD 128
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ                                        \
+	(ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD)
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2 9
+
+/* Alignment of SA Base */
+#define ROC_NIX_INL_SA_BASE_ALIGN BIT_ULL(16)
+
 /* Inline device SSO Work callback */
 typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
 
+struct roc_nix_inl_dev {
+	/* Input parameters */
+	struct plt_pci_device *pci_dev;
+	uint16_t ipsec_in_max_spi;
+	bool selftest;
+	/* End of input parameters */
+
+#define ROC_NIX_INL_MEM_SZ (1024)
+	uint8_t reserved[ROC_NIX_INL_MEM_SZ] __plt_cache_aligned;
+} __plt_cache_aligned;
+
+/* NIX Inline Device API */
+int __roc_api roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev);
+int __roc_api roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev);
+void __roc_api roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev);
+
 #endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
new file mode 100644
index 0000000..214f183
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -0,0 +1,544 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define XAQ_CACHE_CNT 0x7
+
+/* Default Rx Config for Inline NIX LF */
+#define NIX_INL_LF_RX_CFG                                                      \
+	(ROC_NIX_LF_RX_CFG_DROP_RE | ROC_NIX_LF_RX_CFG_L2_LEN_ERR |            \
+	 ROC_NIX_LF_RX_CFG_IP6_UDP_OPT | ROC_NIX_LF_RX_CFG_DIS_APAD |          \
+	 ROC_NIX_LF_RX_CFG_CSUM_IL4 | ROC_NIX_LF_RX_CFG_CSUM_OL4 |             \
+	 ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |               \
+	 ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3)
+
+uint16_t
+nix_inl_dev_pffunc_get(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev != NULL) {
+		inl_dev = idev->nix_inl_dev;
+		if (inl_dev)
+			return inl_dev->dev.pf_func;
+	}
+	return 0;
+}
+
+static void
+nix_inl_selftest_work_cb(uint64_t *gw, void *args)
+{
+	uintptr_t work = gw[1];
+
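+	/* Bit 0 of the tag word (gw[0]) selects the slot in the result array */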
+	*((uintptr_t *)args + (gw[0] & 0x1)) = work;
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+}
+
+static int
+nix_inl_selftest(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	roc_nix_inl_sso_work_cb_t save_cb;
+	static uintptr_t work_arr[2];
+	struct nix_inl_dev *inl_dev;
+	void *save_cb_args;
+	uint64_t add_work0;
+	int rc = 0;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inl_dev == NULL)
+		return -ENOTSUP;
+
+	plt_info("Performing nix inl self test");
+
+	/* Save and update cb to test cb */
+	save_cb = inl_dev->work_cb;
+	save_cb_args = inl_dev->cb_args;
+	inl_dev->work_cb = nix_inl_selftest_work_cb;
+	inl_dev->cb_args = work_arr;
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+#define WORK_MAGIC1 0x335577ff0
+#define WORK_MAGIC2 0xdeadbeef0
+
+	/* Add work */
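+	/* Word0 = tag type (bits 32+) | tag value; word1 = work pointer */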
+	add_work0 = ((uint64_t)(SSO_TT_ORDERED) << 32) | 0x0;
+	roc_store_pair(add_work0, WORK_MAGIC1, inl_dev->sso_base);
+	add_work0 = ((uint64_t)(SSO_TT_ORDERED) << 32) | 0x1;
+	roc_store_pair(add_work0, WORK_MAGIC2, inl_dev->sso_base);
+
+	plt_delay_ms(10000);
+
+	/* Check if we got expected work */
+	if (work_arr[0] != WORK_MAGIC1 || work_arr[1] != WORK_MAGIC2) {
+		plt_err("Failed to get expected work, [0]=%p [1]=%p",
+			(void *)work_arr[0], (void *)work_arr[1]);
+		rc = -EFAULT;
+		goto exit;
+	}
+
+	plt_info("Work, [0]=%p [1]=%p", (void *)work_arr[0],
+		 (void *)work_arr[1]);
+
+exit:
+	/* Restore state */
+	inl_dev->work_cb = save_cb;
+	inl_dev->cb_args = save_cb_args;
+	return rc;
+}
+
+static int
+nix_inl_nix_ipsec_cfg(struct nix_inl_dev *inl_dev, bool ena)
+{
+	struct nix_inline_ipsec_lf_cfg *lf_cfg;
+	struct mbox *mbox = (&inl_dev->dev)->mbox;
+	uint32_t sa_w;
+
+	lf_cfg = mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
+	if (lf_cfg == NULL)
+		return -ENOSPC;
+
+	if (ena) {
+		sa_w = plt_align32pow2(inl_dev->ipsec_in_max_spi + 1);
+		sa_w = plt_log2_u32(sa_w);
+
+		lf_cfg->enable = 1;
+		lf_cfg->sa_base_addr = (uintptr_t)inl_dev->inb_sa_base;
+		lf_cfg->ipsec_cfg1.sa_idx_w = sa_w;
+		/* CN9K max HW frame size is different */
+		if (roc_model_is_cn9k())
+			lf_cfg->ipsec_cfg0.lenm1_max = NIX_CN9K_MAX_HW_FRS - 1;
+		else
+			lf_cfg->ipsec_cfg0.lenm1_max = NIX_RPM_MAX_HW_FRS - 1;
+		lf_cfg->ipsec_cfg1.sa_idx_max = inl_dev->ipsec_in_max_spi;
+		lf_cfg->ipsec_cfg0.sa_pow2_size =
+			plt_log2_u32(inl_dev->inb_sa_sz);
+
+		lf_cfg->ipsec_cfg0.tag_const = 0;
+		lf_cfg->ipsec_cfg0.tt = SSO_TT_ORDERED;
+	} else {
+		lf_cfg->enable = 0;
+	}
+
+	return mbox_process(mbox);
+}
+
+static int
+nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
+{
+	struct sso_lf_alloc_rsp *sso_rsp;
+	struct dev *dev = &inl_dev->dev;
+	uint32_t xaq_cnt, count, aura;
+	uint16_t hwgrp[1] = {0};
+	struct npa_pool_s pool;
+	uintptr_t iova;
+	int rc;
+
+	/* Alloc SSOW LF */
+	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWS, 1, NULL);
+	if (rc) {
+		plt_err("Failed to alloc SSO HWS, rc=%d", rc);
+		return rc;
+	}
+
+	/* Alloc HWGRP LF */
+	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWGRP, 1, (void **)&sso_rsp);
+	if (rc) {
+		plt_err("Failed to alloc SSO HWGRP, rc=%d", rc);
+		goto free_ssow;
+	}
+
+	inl_dev->xaq_buf_size = sso_rsp->xaq_buf_size;
+	inl_dev->xae_waes = sso_rsp->xaq_wq_entries;
+	inl_dev->iue = sso_rsp->in_unit_entries;
+
+	/* Create XAQ pool */
+	xaq_cnt = XAQ_CACHE_CNT;
+	xaq_cnt += inl_dev->iue / inl_dev->xae_waes;
+	plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+
+	inl_dev->xaq_mem = plt_zmalloc(inl_dev->xaq_buf_size * xaq_cnt,
+				       inl_dev->xaq_buf_size);
+	if (!inl_dev->xaq_mem) {
+		rc = NIX_ERR_NO_MEM;
+		plt_err("Failed to alloc xaq buf mem");
+		goto free_sso;
+	}
+
+	memset(&pool, 0, sizeof(struct npa_pool_s));
+	pool.nat_align = 1;
+	rc = roc_npa_pool_create(&inl_dev->xaq_aura, inl_dev->xaq_buf_size,
+				 xaq_cnt, NULL, &pool);
+	if (rc) {
+		plt_err("Failed to alloc aura for XAQ, rc=%d", rc);
+		goto free_mem;
+	}
+
+	/* Fill the XAQ buffers */
+	iova = (uint64_t)inl_dev->xaq_mem;
+	for (count = 0; count < xaq_cnt; count++) {
+		roc_npa_aura_op_free(inl_dev->xaq_aura, 0, iova);
+		iova += inl_dev->xaq_buf_size;
+	}
+	roc_npa_aura_op_range_set(inl_dev->xaq_aura, (uint64_t)inl_dev->xaq_mem,
+				  iova);
+
+	aura = roc_npa_aura_handle_to_aura(inl_dev->xaq_aura);
+
+	/* Setup xaq for hwgrps */
+	rc = sso_hwgrp_alloc_xaq(dev, aura, 1);
+	if (rc) {
+		plt_err("Failed to setup hwgrp xaq aura, rc=%d", rc);
+		goto destroy_pool;
+	}
+
+	/* Register SSO, SSOW error and work IRQs */
+	rc = nix_inl_sso_register_irqs(inl_dev);
+	if (rc) {
+		plt_err("Failed to register SSO IRQs, rc=%d", rc);
+		goto release_xaq;
+	}
+
+	/* Setup hwgrp->hws link */
+	sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+
+	/* Enable HWGRP */
+	plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
+
+	return 0;
+
+release_xaq:
+	sso_hwgrp_release_xaq(&inl_dev->dev, 1);
+destroy_pool:
+	roc_npa_pool_destroy(inl_dev->xaq_aura);
+	inl_dev->xaq_aura = 0;
+free_mem:
+	plt_free(inl_dev->xaq_mem);
+	inl_dev->xaq_mem = NULL;
+free_sso:
+	sso_lf_free(dev, SSO_LF_TYPE_HWGRP, 1);
+free_ssow:
+	sso_lf_free(dev, SSO_LF_TYPE_HWS, 1);
+	return rc;
+}
+
+static int
+nix_inl_sso_release(struct nix_inl_dev *inl_dev)
+{
+	uint16_t hwgrp[1] = {0};
+
+	/* Disable HWGRP */
+	plt_write64(0, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
+
+	/* Unregister SSO/SSOW IRQs */
+	nix_inl_sso_unregister_irqs(inl_dev);
+
+	/* Unlink hws */
+	sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+
+	/* Release XAQ aura */
+	sso_hwgrp_release_xaq(&inl_dev->dev, 1);
+
+	/* Free SSO, SSOW LFs */
+	sso_lf_free(&inl_dev->dev, SSO_LF_TYPE_HWS, 1);
+	sso_lf_free(&inl_dev->dev, SSO_LF_TYPE_HWGRP, 1);
+
+	return 0;
+}
+
+static int
+nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
+{
+	uint16_t ipsec_in_max_spi = inl_dev->ipsec_in_max_spi;
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct nix_lf_alloc_rsp *rsp;
+	struct nix_lf_alloc_req *req;
+	size_t inb_sa_sz;
+	int rc = -ENOSPC;
+
+	/* Alloc NIX LF needed for single RQ */
+	req = mbox_alloc_msg_nix_lf_alloc(mbox);
+	if (req == NULL)
+		return rc;
+	req->rq_cnt = 1;
+	req->sq_cnt = 1;
+	req->cq_cnt = 1;
+	/* XQESZ is W16 */
+	req->xqe_sz = NIX_XQESZ_W16;
+	/* RSS size does not matter as this RQ is only for UCAST_IPSEC action */
+	req->rss_sz = ROC_NIX_RSS_RETA_SZ_64;
+	req->rss_grps = ROC_NIX_RSS_GRPS;
+	req->npa_func = idev_npa_pffunc_get();
+	req->sso_func = dev->pf_func;
+	req->rx_cfg = NIX_INL_LF_RX_CFG;
+	req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
+
+	rc = mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		plt_err("Failed to alloc lf, rc=%d", rc);
+		return rc;
+	}
+
+	inl_dev->lf_tx_stats = rsp->lf_tx_stats;
+	inl_dev->lf_rx_stats = rsp->lf_rx_stats;
+	inl_dev->qints = rsp->qints;
+	inl_dev->cints = rsp->cints;
+
+	/* Register nix interrupts */
+	rc = nix_inl_nix_register_irqs(inl_dev);
+	if (rc) {
+		plt_err("Failed to register NIX IRQs, rc=%d", rc);
+		goto lf_free;
+	}
+
+	/* CN9K SA is different */
+	if (roc_model_is_cn9k())
+		inb_sa_sz = ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ;
+	else
+		inb_sa_sz = ROC_NIX_INL_OT_IPSEC_INB_SA_SZ;
+
+	/* Alloc contiguous memory for Inbound SAs */
+	inl_dev->inb_sa_sz = inb_sa_sz;
+	inl_dev->inb_sa_base = plt_zmalloc(inb_sa_sz * ipsec_in_max_spi,
+					   ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!inl_dev->inb_sa_base) {
+		plt_err("Failed to allocate memory for Inbound SA");
+		rc = -ENOMEM;
+		goto unregister_irqs;
+	}
+
+	/* Setup device specific inb SA table */
+	rc = nix_inl_nix_ipsec_cfg(inl_dev, true);
+	if (rc) {
+		plt_err("Failed to setup NIX Inbound SA conf, rc=%d", rc);
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	plt_free(inl_dev->inb_sa_base);
+	inl_dev->inb_sa_base = NULL;
+unregister_irqs:
+	nix_inl_nix_unregister_irqs(inl_dev);
+lf_free:
+	mbox_alloc_msg_nix_lf_free(mbox);
+	rc |= mbox_process(mbox);
+	return rc;
+}
+
+static int
+nix_inl_nix_release(struct nix_inl_dev *inl_dev)
+{
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct nix_lf_free_req *req;
+	struct ndc_sync_op *ndc_req;
+	int rc = -ENOSPC;
+
+	/* Disable Inbound processing */
+	rc = nix_inl_nix_ipsec_cfg(inl_dev, false);
+	if (rc)
+		plt_err("Failed to disable Inbound IPSec, rc=%d", rc);
+
+	/* Sync NDC-NIX for LF */
+	ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+	if (ndc_req == NULL)
+		return rc;
+	ndc_req->nix_lf_rx_sync = 1;
+	rc = mbox_process(mbox);
+	if (rc)
+		plt_err("Error on NDC-NIX-RX LF sync, rc %d", rc);
+
+	/* Unregister IRQs */
+	nix_inl_nix_unregister_irqs(inl_dev);
+
+	/* By default all associated mcam rules are deleted */
+	req = mbox_alloc_msg_nix_lf_free(mbox);
+	if (req == NULL)
+		return -ENOSPC;
+
+	return mbox_process(mbox);
+}
+
+static int
+nix_inl_lf_attach(struct nix_inl_dev *inl_dev)
+{
+	struct msix_offset_rsp *msix_rsp;
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct rsrc_attach_req *req;
+	uint64_t nix_blkaddr;
+	int rc = -ENOSPC;
+
+	req = mbox_alloc_msg_attach_resources(mbox);
+	if (req == NULL)
+		return rc;
+	req->modify = true;
+	/* Attach 1 NIXLF, SSO HWS and SSO HWGRP */
+	req->nixlf = true;
+	req->ssow = 1;
+	req->sso = 1;
+
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		return rc;
+
+	/* Get MSIX vector offsets */
+	mbox_alloc_msg_msix_offset(mbox);
+	rc = mbox_process_msg(dev->mbox, (void **)&msix_rsp);
+	if (rc)
+		return rc;
+
+	inl_dev->nix_msixoff = msix_rsp->nix_msixoff;
+	inl_dev->ssow_msixoff = msix_rsp->ssow_msixoff[0];
+	inl_dev->sso_msixoff = msix_rsp->sso_msixoff[0];
+
+	nix_blkaddr = nix_get_blkaddr(dev);
+	inl_dev->is_nix1 = (nix_blkaddr == RVU_BLOCK_ADDR_NIX1);
+
+	/* Update base addresses for LFs */
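+	/* Each RVU block exposes a 1MB (1 << 20) window in BAR2 */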
+	inl_dev->nix_base = dev->bar2 + (nix_blkaddr << 20);
+	inl_dev->ssow_base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20);
+	inl_dev->sso_base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20);
+	return 0;
+}
+
+static int
+nix_inl_lf_detach(struct nix_inl_dev *inl_dev)
+{
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct rsrc_detach_req *req;
+	int rc = -ENOSPC;
+
+	req = mbox_alloc_msg_detach_resources(mbox);
+	if (req == NULL)
+		return rc;
+	req->partial = true;
+	req->nixlf = true;
+	req->ssow = true;
+	req->sso = true;
+
+	return mbox_process(dev->mbox);
+}
+
+int
+roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct plt_pci_device *pci_dev;
+	struct nix_inl_dev *inl_dev;
+	struct idev_cfg *idev;
+	int rc;
+
+	pci_dev = roc_inl_dev->pci_dev;
+
+	/* Skip probe if already done */
+	idev = idev_get_cfg();
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	if (idev->nix_inl_dev) {
+		plt_info("Skipping device %s, inline device already probed",
+			 pci_dev->name);
+		return -EEXIST;
+	}
+
+	PLT_STATIC_ASSERT(sizeof(struct nix_inl_dev) <= ROC_NIX_INL_MEM_SZ);
+
+	inl_dev = (struct nix_inl_dev *)roc_inl_dev->reserved;
+	memset(inl_dev, 0, sizeof(*inl_dev));
+
+	inl_dev->pci_dev = pci_dev;
+	inl_dev->ipsec_in_max_spi = roc_inl_dev->ipsec_in_max_spi;
+	inl_dev->selftest = roc_inl_dev->selftest;
+
+	/* Initialize base device */
+	rc = dev_init(&inl_dev->dev, pci_dev);
+	if (rc) {
+		plt_err("Failed to init roc device");
+		goto error;
+	}
+
+	/* Attach LF resources */
+	rc = nix_inl_lf_attach(inl_dev);
+	if (rc) {
+		plt_err("Failed to attach LF resources, rc=%d", rc);
+		goto dev_cleanup;
+	}
+
+	/* Setup NIX LF */
+	rc = nix_inl_nix_setup(inl_dev);
+	if (rc)
+		goto lf_detach;
+
+	/* Setup SSO LF */
+	rc = nix_inl_sso_setup(inl_dev);
+	if (rc)
+		goto nix_release;
+
+	idev->nix_inl_dev = inl_dev;
+
+	/* Perform selftest if asked for */
+	if (inl_dev->selftest) {
+		rc = nix_inl_selftest();
+		if (rc)
+			goto nix_release;
+	}
+
+	return 0;
+nix_release:
+	rc |= nix_inl_nix_release(inl_dev);
+lf_detach:
+	rc |= nix_inl_lf_detach(inl_dev);
+dev_cleanup:
+	rc |= dev_fini(&inl_dev->dev, pci_dev);
+error:
+	return rc;
+}
+
+int
+roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct plt_pci_device *pci_dev;
+	struct nix_inl_dev *inl_dev;
+	struct idev_cfg *idev;
+	int rc;
+
+	idev = idev_get_cfg();
+	if (idev == NULL)
+		return 0;
+
+	if (!idev->nix_inl_dev ||
+	    PLT_PTR_DIFF(roc_inl_dev->reserved, idev->nix_inl_dev))
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	pci_dev = inl_dev->pci_dev;
+
+	/* Release SSO */
+	rc = nix_inl_sso_release(inl_dev);
+
+	/* Release NIX */
+	rc |= nix_inl_nix_release(inl_dev);
+
+	/* Detach LFs */
+	rc |= nix_inl_lf_detach(inl_dev);
+
+	/* Cleanup mbox */
+	rc |= dev_fini(&inl_dev->dev, pci_dev);
+	if (rc)
+		return rc;
+
+	idev->nix_inl_dev = NULL;
+	return 0;
+}
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index f424009..ab38062 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -54,4 +54,6 @@ void nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev);
 int nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev);
 void nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev);
 
+uint16_t nix_inl_dev_pffunc_get(void);
+
 #endif /* _ROC_NIX_INL_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 177db3d..241655b 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -37,6 +37,7 @@
 #define PLT_MEMZONE_NAMESIZE	 RTE_MEMZONE_NAMESIZE
 #define PLT_STD_C11		 RTE_STD_C11
 #define PLT_PTR_ADD		 RTE_PTR_ADD
+#define PLT_PTR_DIFF		 RTE_PTR_DIFF
 #define PLT_MAX_RXTX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
 #define PLT_INTR_VEC_RXTX_OFFSET RTE_INTR_VEC_RXTX_OFFSET
 #define PLT_MIN			 RTE_MIN
@@ -77,6 +78,7 @@
 #define plt_cpu_to_be_64 rte_cpu_to_be_64
 #define plt_be_to_cpu_64 rte_be_to_cpu_64
 
+#define plt_align32pow2	    rte_align32pow2
 #define plt_align32prevpow2 rte_align32prevpow2
 
 #define plt_bitmap			rte_bitmap
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 5dbb21c..3a35233 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -100,6 +100,9 @@ INTERNAL {
 	roc_nix_get_pf_func;
 	roc_nix_get_vf;
 	roc_nix_get_vwqe_interval;
+	roc_nix_inl_dev_dump;
+	roc_nix_inl_dev_fini;
+	roc_nix_inl_dev_init;
 	roc_nix_is_lbk;
 	roc_nix_is_pf;
 	roc_nix_is_sdp;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 07/27] common/cnxk: add nix inline inbound and outbound support API
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (5 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 06/27] common/cnxk: add nix inline device init and fini Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 08/27] common/cnxk: dump cpt lf registers on error intr Nithin Dabilpuram
                   ` (22 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Ray Kinsella
  Cc: jerinj, schalla, dev

Add APIs to set up NIX inline inbound and NIX inline
outbound processing.
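
A rough usage sketch (a hedged illustration, not part of this patch:
"roc_nix" is an already-initialized NIX handle, the queue/SA counts
are placeholders, and error handling is trimmed):

  uintptr_t sa;
  int rc;

  /* Inbound: one-time CPT PF config plus inbound SA table */
  roc_nix->ipsec_in_max_spi = 4096;
  rc = roc_nix_inl_inb_init(roc_nix);
  if (rc)
          return rc;

  /* Outbound: CPT LF attach/alloc plus outbound SA memory */
  roc_nix->outb_nb_crypto_qs = 1;
  roc_nix->outb_nb_desc = 2048;
  roc_nix->ipsec_out_max_sa = 128;
  rc = roc_nix_inl_outb_init(roc_nix);
  if (rc)
          return rc;

  /* Linear SPI -> SA lookup on the inbound side */
  sa = roc_nix_inl_inb_sa_get(roc_nix, false, 1 /* spi */);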

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/hw/cpt.h         |   8 +
 drivers/common/cnxk/meson.build      |   1 +
 drivers/common/cnxk/roc_api.h        |  48 +--
 drivers/common/cnxk/roc_constants.h  |  58 +++
 drivers/common/cnxk/roc_io.h         |   9 +
 drivers/common/cnxk/roc_io_generic.h |   3 +-
 drivers/common/cnxk/roc_nix.h        |   5 +
 drivers/common/cnxk/roc_nix_debug.c  |  15 +
 drivers/common/cnxk/roc_nix_inl.c    | 739 +++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl.h    | 100 +++++
 drivers/common/cnxk/roc_nix_priv.h   |  15 +
 drivers/common/cnxk/roc_npc.c        |  27 +-
 drivers/common/cnxk/version.map      |  25 ++
 13 files changed, 996 insertions(+), 57 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_constants.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl.c

diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 84ebf2d..975139f 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -40,6 +40,7 @@
 #define CPT_LF_CTX_ENC_PKT_CNT	(0x540ull)
 #define CPT_LF_CTX_DEC_BYTE_CNT (0x550ull)
 #define CPT_LF_CTX_DEC_PKT_CNT	(0x560ull)
+#define CPT_LF_CTX_RELOAD	(0x570ull)
 
 #define CPT_AF_LFX_CTL(a)  (0x27000ull | (uint64_t)(a) << 3)
 #define CPT_AF_LFX_CTL2(a) (0x29000ull | (uint64_t)(a) << 3)
@@ -68,6 +69,13 @@ union cpt_lf_ctx_flush {
 	} s;
 };
 
+union cpt_lf_ctx_reload {
+	uint64_t u;
+	struct {
+		uint64_t cptr : 46;
+	} s;
+};
+
 union cpt_lf_inprog {
 	uint64_t u;
 	struct cpt_lf_inprog_s {
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index e8940d7..cd19ad2 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl.c',
         'roc_nix_inl_dev.c',
         'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 53f4e4b..b8f3667 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -9,28 +9,21 @@
 #include <stdint.h>
 #include <string.h>
 
-/* Alignment */
-#define ROC_ALIGN 128
-
 /* Bits manipulation */
 #include "roc_bits.h"
 
 /* Bitfields manipulation */
 #include "roc_bitfield.h"
 
+/* ROC Constants */
+#include "roc_constants.h"
+
 /* Constants */
 #define PLT_ETHER_ADDR_LEN 6
 
 /* Platform definition */
 #include "roc_platform.h"
 
-#define ROC_LMT_LINE_SZ		    128
-#define ROC_NUM_LMT_LINES	    2048
-#define ROC_LMT_LINES_PER_CORE_LOG2 5
-#define ROC_LMT_LINE_SIZE_LOG2	    7
-#define ROC_LMT_BASE_PER_CORE_LOG2                                             \
-	(ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
-
 /* IO */
 #if defined(__aarch64__)
 #include "roc_io.h"
@@ -38,41 +31,6 @@
 #include "roc_io_generic.h"
 #endif
 
-/* PCI IDs */
-#define PCI_VENDOR_ID_CAVIUM	      0x177D
-#define PCI_DEVID_CNXK_RVU_PF	      0xA063
-#define PCI_DEVID_CNXK_RVU_VF	      0xA064
-#define PCI_DEVID_CNXK_RVU_AF	      0xA065
-#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9
-#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA
-#define PCI_DEVID_CNXK_RVU_NPA_PF     0xA0FB
-#define PCI_DEVID_CNXK_RVU_NPA_VF     0xA0FC
-#define PCI_DEVID_CNXK_RVU_AF_VF      0xA0f8
-#define PCI_DEVID_CNXK_DPI_VF	      0xA081
-#define PCI_DEVID_CNXK_EP_VF	      0xB203
-#define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
-#define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
-#define PCI_DEVID_CNXK_BPHY	      0xA089
-#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
-#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
-
-#define PCI_DEVID_CN9K_CGX  0xA059
-#define PCI_DEVID_CN10K_RPM 0xA060
-
-#define PCI_DEVID_CN9K_RVU_CPT_PF  0xA0FD
-#define PCI_DEVID_CN9K_RVU_CPT_VF  0xA0FE
-#define PCI_DEVID_CN10K_RVU_CPT_PF 0xA0F2
-#define PCI_DEVID_CN10K_RVU_CPT_VF 0xA0F3
-
-#define PCI_SUBSYSTEM_DEVID_CN10KA  0xB900
-#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900
-
-#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000
-#define PCI_SUBSYSTEM_DEVID_CN9KB 0xb400
-#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200
-#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200
-#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100
-
 /* HW structure definition */
 #include "hw/cpt.h"
 #include "hw/nix.h"
diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h
new file mode 100644
index 0000000..1e6427c
--- /dev/null
+++ b/drivers/common/cnxk/roc_constants.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_CONSTANTS_H_
+#define _ROC_CONSTANTS_H_
+
+/* Alignment */
+#define ROC_ALIGN 128
+
+/* LMTST constants */
+/* [CN10K, .) */
+#define ROC_LMT_LINE_SZ		    128
+#define ROC_NUM_LMT_LINES	    2048
+#define ROC_LMT_LINES_PER_CORE_LOG2 5
+#define ROC_LMT_LINE_SIZE_LOG2	    7
+#define ROC_LMT_BASE_PER_CORE_LOG2                                             \
+	(ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
+#define ROC_LMT_MAX_THREADS		42UL
+#define ROC_LMT_CPT_LINES_PER_CORE_LOG2 4
+#define ROC_LMT_CPT_BASE_ID_OFF                                                \
+	(ROC_LMT_MAX_THREADS << ROC_LMT_LINES_PER_CORE_LOG2)
+
+/* PCI IDs */
+#define PCI_VENDOR_ID_CAVIUM	      0x177D
+#define PCI_DEVID_CNXK_RVU_PF	      0xA063
+#define PCI_DEVID_CNXK_RVU_VF	      0xA064
+#define PCI_DEVID_CNXK_RVU_AF	      0xA065
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA
+#define PCI_DEVID_CNXK_RVU_NPA_PF     0xA0FB
+#define PCI_DEVID_CNXK_RVU_NPA_VF     0xA0FC
+#define PCI_DEVID_CNXK_RVU_AF_VF      0xA0f8
+#define PCI_DEVID_CNXK_DPI_VF	      0xA081
+#define PCI_DEVID_CNXK_EP_VF	      0xB203
+#define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
+#define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
+#define PCI_DEVID_CNXK_BPHY	      0xA089
+#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
+#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
+
+#define PCI_DEVID_CN9K_CGX  0xA059
+#define PCI_DEVID_CN10K_RPM 0xA060
+
+#define PCI_DEVID_CN9K_RVU_CPT_PF  0xA0FD
+#define PCI_DEVID_CN9K_RVU_CPT_VF  0xA0FE
+#define PCI_DEVID_CN10K_RVU_CPT_PF 0xA0F2
+#define PCI_DEVID_CN10K_RVU_CPT_VF 0xA0F3
+
+#define PCI_SUBSYSTEM_DEVID_CN10KA  0xB900
+#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900
+
+#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000
+#define PCI_SUBSYSTEM_DEVID_CN9KB 0xb400
+#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200
+#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200
+#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100
+
+#endif /* _ROC_CONSTANTS_H_ */
diff --git a/drivers/common/cnxk/roc_io.h b/drivers/common/cnxk/roc_io.h
index aee8c7f..fe5f7f4 100644
--- a/drivers/common/cnxk/roc_io.h
+++ b/drivers/common/cnxk/roc_io.h
@@ -13,6 +13,15 @@
 		(lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2);    \
 	} while (0)
 
+#define ROC_LMT_CPT_BASE_ID_GET(lmt_addr, lmt_id)                              \
+	do {                                                                   \
+		/* 16 Lines per core */                                        \
+		lmt_id = ROC_LMT_CPT_BASE_ID_OFF;                              \
+		lmt_id += (plt_lcore_id() << ROC_LMT_CPT_LINES_PER_CORE_LOG2); \
+		/* Each line is of 128B */                                     \
+		(lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2);    \
+	} while (0)
+
 #define roc_load_pair(val0, val1, addr)                                        \
 	({                                                                     \
 		asm volatile("ldp %x[x0], %x[x1], [%x[p1]]"                    \
diff --git a/drivers/common/cnxk/roc_io_generic.h b/drivers/common/cnxk/roc_io_generic.h
index 28cb096..ceaa3a3 100644
--- a/drivers/common/cnxk/roc_io_generic.h
+++ b/drivers/common/cnxk/roc_io_generic.h
@@ -5,7 +5,8 @@
 #ifndef _ROC_IO_GENERIC_H_
 #define _ROC_IO_GENERIC_H_
 
-#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0)
+#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id)	  (lmt_id = 0)
+#define ROC_LMT_CPT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0)
 
 #define roc_load_pair(val0, val1, addr)                                        \
 	do {                                                                   \
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 822c190..ed6e721 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -171,6 +171,7 @@ struct roc_nix_rq {
 	uint8_t spb_red_pass;
 	/* End of Input parameters */
 	struct roc_nix *roc_nix;
+	bool inl_dev_ref;
 };
 
 struct roc_nix_cq {
@@ -254,6 +255,10 @@ struct roc_nix {
 	bool enable_loop;
 	bool hw_vlan_ins;
 	uint8_t lock_rx_ctx;
+	uint32_t outb_nb_desc;
+	uint16_t outb_nb_crypto_qs;
+	uint16_t ipsec_in_max_spi;
+	uint16_t ipsec_out_max_sa;
 	/* End of input parameters */
 	/* LMT line base for "Per Core Tx LMT line" mode*/
 	uintptr_t lmt_base;
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 582f5a3..266935a 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -818,6 +818,7 @@ roc_nix_rq_dump(struct roc_nix_rq *rq)
 	nix_dump("  vwqe_wait_tmo = %ld", rq->vwqe_wait_tmo);
 	nix_dump("  vwqe_aura_handle = %ld", rq->vwqe_aura_handle);
 	nix_dump("  roc_nix = %p", rq->roc_nix);
+	nix_dump("  inl_dev_ref = %d", rq->inl_dev_ref);
 }
 
 void
@@ -1160,6 +1161,7 @@ roc_nix_dump(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct dev *dev = &nix->dev;
+	int i;
 
 	nix_dump("nix@%p", nix);
 	nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
@@ -1169,6 +1171,7 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  port_id = %d", roc_nix->port_id);
 	nix_dump("  rss_tag_as_xor = %d", roc_nix->rss_tag_as_xor);
 	nix_dump("  rss_tag_as_xor = %d", roc_nix->max_sqb_count);
+	nix_dump("  outb_nb_desc = %u", roc_nix->outb_nb_desc);
 
 	nix_dump("  \tpci_dev = %p", nix->pci_dev);
 	nix_dump("  \tbase = 0x%" PRIxPTR "", nix->base);
@@ -1206,12 +1209,24 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  \ttx_link = %d", nix->tx_link);
 	nix_dump("  \tsqb_size = %d", nix->sqb_size);
 	nix_dump("  \tmsixoff = %d", nix->msixoff);
+	for (i = 0; i < nix->nb_cpt_lf; i++)
+		nix_dump("  \tcpt_msixoff[%d] = %d", i, nix->cpt_msixoff[i]);
 	nix_dump("  \tcints = %d", nix->cints);
 	nix_dump("  \tqints = %d", nix->qints);
 	nix_dump("  \tsdp_link = %d", nix->sdp_link);
 	nix_dump("  \tptp_en = %d", nix->ptp_en);
 	nix_dump("  \trss_alg_idx = %d", nix->rss_alg_idx);
 	nix_dump("  \ttx_pause = %d", nix->tx_pause);
+	nix_dump("  \tinl_inb_ena = %d", nix->inl_inb_ena);
+	nix_dump("  \tinl_outb_ena = %d", nix->inl_outb_ena);
+	nix_dump("  \tinb_sa_base = 0x%p", nix->inb_sa_base);
+	nix_dump("  \tinb_sa_sz = %" PRIu64, nix->inb_sa_sz);
+	nix_dump("  \toutb_sa_base = 0x%p", nix->outb_sa_base);
+	nix_dump("  \toutb_sa_sz = %" PRIu64, nix->outb_sa_sz);
+	nix_dump("  \toutb_err_sso_pffunc = 0x%x", nix->outb_err_sso_pffunc);
+	nix_dump("  \tcpt_lf_base = 0x%p", nix->cpt_lf_base);
+	nix_dump("  \tnb_cpt_lf = %d", nix->nb_cpt_lf);
+	nix_dump("  \tinb_inl_dev = %d", nix->inb_inl_dev);
 }
 
 void
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
new file mode 100644
index 0000000..d144b19
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -0,0 +1,739 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ ==
+		  1UL << ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ == 512);
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ ==
+		  1UL << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ ==
+		  1UL << ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ == 1024);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ ==
+		  1UL << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2);
+
+static int
+nix_inl_inb_sa_tbl_setup(struct roc_nix *roc_nix)
+{
+	uint16_t ipsec_in_max_spi = roc_nix->ipsec_in_max_spi;
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_nix_ipsec_cfg cfg;
+	size_t inb_sa_sz;
+	int rc;
+
+	/* CN9K SA size is different */
+	if (roc_model_is_cn9k())
+		inb_sa_sz = ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ;
+	else
+		inb_sa_sz = ROC_NIX_INL_OT_IPSEC_INB_SA_SZ;
+
+	/* Alloc contiguous memory for Inbound SAs */
+	nix->inb_sa_sz = inb_sa_sz;
+	nix->inb_sa_base = plt_zmalloc(inb_sa_sz * ipsec_in_max_spi,
+				       ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!nix->inb_sa_base) {
+		plt_err("Failed to allocate memory for Inbound SA");
+		return -ENOMEM;
+	}
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.sa_size = inb_sa_sz;
+	cfg.iova = (uintptr_t)nix->inb_sa_base;
+	cfg.max_sa = ipsec_in_max_spi + 1;
+	cfg.tt = SSO_TT_ORDERED;
+
+	/* Setup device specific inb SA table */
+	rc = roc_nix_lf_inl_ipsec_cfg(roc_nix, &cfg, true);
+	if (rc) {
+		plt_err("Failed to setup NIX Inbound SA conf, rc=%d", rc);
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	plt_free(nix->inb_sa_base);
+	nix->inb_sa_base = NULL;
+	return rc;
+}
+
+static int
+nix_inl_sa_tbl_release(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	int rc;
+
+	rc = roc_nix_lf_inl_ipsec_cfg(roc_nix, NULL, false);
+	if (rc) {
+		plt_err("Failed to disable Inbound inline ipsec, rc=%d", rc);
+		return rc;
+	}
+
+	plt_free(nix->inb_sa_base);
+	nix->inb_sa_base = NULL;
+	return 0;
+}
+
+struct roc_cpt_lf *
+roc_nix_inl_outb_lf_base_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* NIX Inline config needs to be done */
+	if (!nix->inl_outb_ena || !nix->cpt_lf_base)
+		return NULL;
+
+	return (struct roc_cpt_lf *)nix->cpt_lf_base;
+}
+
+uintptr_t
+roc_nix_inl_outb_sa_base_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return (uintptr_t)nix->outb_sa_base;
+}
+
+uintptr_t
+roc_nix_inl_inb_sa_base_get(struct roc_nix *roc_nix, bool inb_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inb_inl_dev) {
+		/* Return inline dev sa base */
+		if (inl_dev)
+			return (uintptr_t)inl_dev->inb_sa_base;
+		return 0;
+	}
+
+	return (uintptr_t)nix->inb_sa_base;
+}
+
+uint32_t
+roc_nix_inl_inb_sa_max_spi(struct roc_nix *roc_nix, bool inb_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inb_inl_dev) {
+		if (inl_dev)
+			return inl_dev->ipsec_in_max_spi;
+		return 0;
+	}
+
+	return roc_nix->ipsec_in_max_spi;
+}
+
+uint32_t
+roc_nix_inl_inb_sa_sz(struct roc_nix *roc_nix, bool inl_dev_sa)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!inl_dev_sa)
+		return nix->inb_sa_sz;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inl_dev_sa && inl_dev)
+		return inl_dev->inb_sa_sz;
+
+	/* On error */
+	return 0;
+}
+
+uintptr_t
+roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix, bool inb_inl_dev, uint32_t spi)
+{
+	uintptr_t sa_base;
+	uint32_t max_spi;
+	uint64_t sz;
+
+	sa_base = roc_nix_inl_inb_sa_base_get(roc_nix, inb_inl_dev);
+	/* Check if SA base exists */
+	if (!sa_base)
+		return 0;
+
+	/* Check if SPI is in range */
+	max_spi = roc_nix_inl_inb_sa_max_spi(roc_nix, inb_inl_dev);
+	if (spi > max_spi) {
+		plt_err("Inbound SA SPI %u exceeds max %u", spi, max_spi);
+		return 0;
+	}
+
+	/* Get SA size */
+	sz = roc_nix_inl_inb_sa_sz(roc_nix, inb_inl_dev);
+	if (!sz)
+		return 0;
+
+	/* Basic logic of SPI->SA for now */
+	return (sa_base + (spi * sz));
+}
+
+int
+roc_nix_inl_inb_init(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct roc_cpt *roc_cpt;
+	uint16_t param1;
+	int rc;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	/* Unless we have another mechanism to trigger the
+	 * one-time inline config in CPT PF, we cannot support
+	 * this without CPT being probed.
+	 */
+	roc_cpt = idev->cpt;
+	if (!roc_cpt) {
+		plt_err("Cannot support inline inbound, cryptodev not probed");
+		return -ENOTSUP;
+	}
+
+	if (roc_model_is_cn9k()) {
+		param1 = ROC_ONF_IPSEC_INB_MAX_L2_SZ;
+	} else {
+		union roc_ot_ipsec_inb_param1 u;
+
+		u.u16 = 0;
+		u.s.esp_trailer_disable = 1;
+		param1 = u.u16;
+	}
+
+	/* Do one-time inbound inline config in CPT PF */
+	rc = roc_cpt_inline_ipsec_inb_cfg(roc_cpt, param1, 0);
+	if (rc && rc != -EEXIST) {
+		plt_err("Failed to setup inbound lf, rc=%d", rc);
+		return rc;
+	}
+
+	/* Setup Inbound SA table */
+	rc = nix_inl_inb_sa_tbl_setup(roc_nix);
+	if (rc)
+		return rc;
+
+	nix->inl_inb_ena = true;
+	return 0;
+}
+
+int
+roc_nix_inl_inb_fini(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	nix->inl_inb_ena = false;
+
+	/* Disable Inbound SA */
+	return nix_inl_sa_tbl_release(roc_nix);
+}
+
+int
+roc_nix_inl_outb_init(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct roc_cpt_lf *lf_base, *lf;
+	struct dev *dev = &nix->dev;
+	struct msix_offset_rsp *rsp;
+	struct nix_inl_dev *inl_dev;
+	uint16_t sso_pffunc;
+	uint8_t eng_grpmask;
+	uint64_t blkaddr;
+	uint16_t nb_lf;
+	void *sa_base;
+	size_t sa_sz;
+	int i, j, rc;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	nb_lf = roc_nix->outb_nb_crypto_qs;
+	blkaddr = nix->is_nix1 ? RVU_BLOCK_ADDR_CPT1 : RVU_BLOCK_ADDR_CPT0;
+
+	/* Retrieve inline device if present */
+	inl_dev = idev->nix_inl_dev;
+	sso_pffunc = inl_dev ? inl_dev->dev.pf_func : idev_sso_pffunc_get();
+	if (!sso_pffunc) {
+		plt_err("Failed to setup inline outb, need either "
+			"inline device or sso device");
+		return -ENOTSUP;
+	}
+
+	/* Attach CPT LF for outbound */
+	rc = cpt_lfs_attach(dev, blkaddr, true, nb_lf);
+	if (rc) {
+		plt_err("Failed to attach CPT LF for inline outb, rc=%d", rc);
+		return rc;
+	}
+
+	/* Alloc CPT LF */
+	eng_grpmask = (1ULL << ROC_CPT_DFLT_ENG_GRP_SE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
+	rc = cpt_lfs_alloc(dev, eng_grpmask, blkaddr, true);
+	if (rc) {
+		plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
+		goto lf_detach;
+	}
+
+	/* Get msix offsets */
+	rc = cpt_get_msix_offset(dev, &rsp);
+	if (rc) {
+		plt_err("Failed to get CPT LF msix offset, rc=%d", rc);
+		goto lf_free;
+	}
+
+	mbox_memcpy(nix->cpt_msixoff,
+		    nix->is_nix1 ? rsp->cpt1_lf_msixoff : rsp->cptlf_msixoff,
+		    sizeof(nix->cpt_msixoff));
+
+	/* Alloc required num of cpt lfs */
+	lf_base = plt_zmalloc(nb_lf * sizeof(struct roc_cpt_lf), 0);
+	if (!lf_base) {
+		plt_err("Failed to alloc cpt lf memory");
+		rc = -ENOMEM;
+		goto lf_free;
+	}
+
+	/* Initialize CPT LFs */
+	for (i = 0; i < nb_lf; i++) {
+		lf = &lf_base[i];
+
+		lf->lf_id = i;
+		lf->nb_desc = roc_nix->outb_nb_desc;
+		lf->dev = &nix->dev;
+		lf->msixoff = nix->cpt_msixoff[i];
+		lf->pci_dev = nix->pci_dev;
+
+		/* Setup CPT LF instruction queue */
+		rc = cpt_lf_init(lf);
+		if (rc) {
+			plt_err("Failed to initialize CPT LF, rc=%d", rc);
+			goto lf_fini;
+		}
+
+		/* Associate this CPT LF with NIX PFFUNC */
+		rc = cpt_lf_outb_cfg(dev, sso_pffunc, nix->dev.pf_func, i,
+				     true);
+		if (rc) {
+			plt_err("Failed to setup CPT LF->(NIX,SSO) link, rc=%d",
+				rc);
+			goto lf_fini;
+		}
+
+		/* Enable IQ */
+		roc_cpt_iq_enable(lf);
+	}
+
+	if (!roc_nix->ipsec_out_max_sa)
+		goto skip_sa_alloc;
+
+	/* CN9K SA size is different */
+	if (roc_model_is_cn9k())
+		sa_sz = ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ;
+	else
+		sa_sz = ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ;
+	/* Alloc contiguous memory for outbound SAs */
+	sa_base = plt_zmalloc(sa_sz * roc_nix->ipsec_out_max_sa,
+			      ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!sa_base) {
+		plt_err("Outbound SA base alloc failed");
+		rc = -ENOMEM;
+		goto lf_fini;
+	}
+	nix->outb_sa_base = sa_base;
+	nix->outb_sa_sz = sa_sz;
+
+skip_sa_alloc:
+	/* Save CPT LF info even when no outbound SAs are needed */
+	nix->cpt_lf_base = lf_base;
+	nix->nb_cpt_lf = nb_lf;
+	nix->outb_err_sso_pffunc = sso_pffunc;
+	nix->inl_outb_ena = true;
+	return 0;
+lf_fini:
+	for (j = i - 1; j >= 0; j--)
+		cpt_lf_fini(&lf_base[j]);
+	plt_free(lf_base);
+lf_free:
+	rc |= cpt_lfs_free(dev);
+lf_detach:
+	rc |= cpt_lfs_detach(dev);
+	return rc;
+}
+
+int
+roc_nix_inl_outb_fini(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_cpt_lf *lf_base = nix->cpt_lf_base;
+	struct dev *dev = &nix->dev;
+	int i, rc, ret = 0;
+
+	if (!nix->inl_outb_ena)
+		return 0;
+
+	nix->inl_outb_ena = false;
+
+	/* Cleanup CPT LF instruction queues */
+	for (i = 0; i < nix->nb_cpt_lf; i++)
+		cpt_lf_fini(&lf_base[i]);
+
+	/* Free LF resources */
+	rc = cpt_lfs_free(dev);
+	if (rc)
+		plt_err("Failed to free CPT LF resources, rc=%d", rc);
+	ret |= rc;
+
+	/* Detach LF */
+	rc = cpt_lfs_detach(dev);
+	if (rc)
+		plt_err("Failed to detach CPT LF, rc=%d", rc);
+
+	/* Free LF memory */
+	plt_free(lf_base);
+	nix->cpt_lf_base = NULL;
+	nix->nb_cpt_lf = 0;
+
+	/* Free outbound SA base */
+	plt_free(nix->outb_sa_base);
+	nix->outb_sa_base = NULL;
+
+	ret |= rc;
+	return ret;
+}
+
+bool
+roc_nix_inl_dev_is_probed(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev == NULL)
+		return 0;
+
+	return !!idev->nix_inl_dev;
+}
+
+bool
+roc_nix_inl_inb_is_enabled(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inl_inb_ena;
+}
+
+bool
+roc_nix_inl_outb_is_enabled(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inl_outb_ena;
+}
+
+int
+roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+	struct dev *dev;
+	int rc;
+
+	if (idev == NULL)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	/* Nothing to do if no inline device */
+	if (!inl_dev)
+		return 0;
+
+	/* Just take reference if already inited */
+	if (inl_dev->rq_refs) {
+		inl_dev->rq_refs++;
+		rq->inl_dev_ref = true;
+		return 0;
+	}
+
+	dev = &inl_dev->dev;
+	inl_rq = &inl_dev->rq;
+	memset(inl_rq, 0, sizeof(struct roc_nix_rq));
+
+	/* Take RQ pool attributes from the first ethdev RQ */
+	inl_rq->qid = 0;
+	inl_rq->aura_handle = rq->aura_handle;
+	inl_rq->first_skip = rq->first_skip;
+	inl_rq->later_skip = rq->later_skip;
+	inl_rq->lpb_size = rq->lpb_size;
+
+	/* Enable IPSec */
+	inl_rq->ipsech_ena = true;
+
+	inl_rq->flow_tag_width = 20;
+	/* Constant tag in the upper 12 bits; flow tag in the lower 20 bits */
+	inl_rq->tag_mask = 0xFFF00000;
+	inl_rq->tt = SSO_TT_ORDERED;
+	inl_rq->hwgrp = 0;
+	inl_rq->wqe_skip = 1;
+	inl_rq->sso_ena = true;
+
+	/* Prepare and send RQ init mbox */
+	if (roc_model_is_cn9k())
+		rc = nix_rq_cn9k_cfg(dev, inl_rq, inl_dev->qints, false, true);
+	else
+		rc = nix_rq_cfg(dev, inl_rq, inl_dev->qints, false, true);
+	if (rc) {
+		plt_err("Failed to prepare aq_enq msg, rc=%d", rc);
+		return rc;
+	}
+
+	rc = mbox_process(dev->mbox);
+	if (rc) {
+		plt_err("Failed to send aq_enq msg, rc=%d", rc);
+		return rc;
+	}
+
+	inl_dev->rq_refs++;
+	rq->inl_dev_ref = true;
+	return 0;
+}
+
+int
+roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+	struct dev *dev;
+	int rc;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!rq->inl_dev_ref)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	/* Inline device should be there if we have ref */
+	if (!inl_dev) {
+		plt_err("Failed to find inline device with refs");
+		return -EFAULT;
+	}
+
+	rq->inl_dev_ref = false;
+	inl_dev->rq_refs--;
+	if (inl_dev->rq_refs)
+		return 0;
+
+	dev = &inl_dev->dev;
+	inl_rq = &inl_dev->rq;
+	/* There are no more references, disable RQ */
+	rc = nix_rq_ena_dis(dev, inl_rq, false);
+	if (rc)
+		plt_err("Failed to disable inline device rq, rc=%d", rc);
+
+	/* Flush NIX LF for CN10K */
+	if (roc_model_is_cn10k())
+		plt_write64(0, inl_dev->nix_base + NIX_LF_OP_VWQE_FLUSH);
+
+	return rc;
+}
+
+void
+roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* Info used by NPC flow rule add */
+	nix->inb_inl_dev = use_inl_dev;
+}
+
+bool
+roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inb_inl_dev;
+}
+
+struct roc_nix_rq *
+roc_nix_inl_dev_rq(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev != NULL) {
+		inl_dev = idev->nix_inl_dev;
+		if (inl_dev != NULL)
+			return &inl_dev->rq;
+	}
+
+	return NULL;
+}
+
+uint16_t __roc_api
+roc_nix_inl_outb_sso_pffunc_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->outb_err_sso_pffunc;
+}
+
+int
+roc_nix_inl_cb_register(roc_nix_inl_sso_work_cb_t cb, void *args)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return -EIO;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev)
+		return -EIO;
+
+	/* Be silent if registration called with same cb and args */
+	if (inl_dev->work_cb == cb && inl_dev->cb_args == args)
+		return 0;
+
+	/* Don't allow registration again if registered with different cb */
+	if (inl_dev->work_cb)
+		return -EBUSY;
+
+	inl_dev->work_cb = cb;
+	inl_dev->cb_args = args;
+	return 0;
+}
+
+int
+roc_nix_inl_cb_unregister(roc_nix_inl_sso_work_cb_t cb, void *args)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return -ENOENT;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev)
+		return -ENOENT;
+
+	if (inl_dev->work_cb != cb || inl_dev->cb_args != args)
+		return -EINVAL;
+
+	inl_dev->work_cb = NULL;
+	inl_dev->cb_args = NULL;
+	return 0;
+}
+
+int
+roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix, uint32_t tag_const,
+			   uint8_t tt)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_nix_ipsec_cfg cfg;
+
+	/* Be silent if inline inbound not enabled */
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.sa_size = nix->inb_sa_sz;
+	cfg.iova = (uintptr_t)nix->inb_sa_base;
+	cfg.max_sa = roc_nix->ipsec_in_max_spi + 1;
+	cfg.tt = tt;
+	cfg.tag_const = tag_const;
+
+	return roc_nix_lf_inl_ipsec_cfg(roc_nix, &cfg, true);
+}
+
+int
+roc_nix_inl_sa_sync(struct roc_nix *roc_nix, void *sa, bool inb,
+		    enum roc_nix_inl_sa_sync_op op)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_cpt_lf *outb_lf = nix->cpt_lf_base;
+	union cpt_lf_ctx_reload reload;
+	union cpt_lf_ctx_flush flush;
+	uintptr_t rbase;
+
+	/* Nothing much to do on cn9k */
+	if (roc_model_is_cn9k()) {
+		plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+		return 0;
+	}
+
+	if (!inb && !outb_lf)
+		return -EINVAL;
+
+	/* Performing op via outbound lf is enough
+	 * when inline dev is not in use.
+	 */
+	if (outb_lf && !nix->inb_inl_dev) {
+		rbase = outb_lf->rbase;
+
+		flush.u = 0;
+		reload.u = 0;
+		switch (op) {
+		case ROC_NIX_INL_SA_OP_FLUSH_INVAL:
+			flush.s.inval = 1;
+			/* fall through */
+		case ROC_NIX_INL_SA_OP_FLUSH:
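+			/* cptr holds the SA address >> 7 (SAs are 128B aligned) */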
+			flush.s.cptr = ((uintptr_t)sa) >> 7;
+			plt_write64(flush.u, rbase + CPT_LF_CTX_FLUSH);
+			break;
+		case ROC_NIX_INL_SA_OP_RELOAD:
+			reload.s.cptr = ((uintptr_t)sa) >> 7;
+			plt_write64(reload.u, rbase + CPT_LF_CTX_RELOAD);
+			break;
+		default:
+			return -EINVAL;
+		}
+		return 0;
+	}
+
+	return -ENOTSUP;
+}
+
+void
+roc_nix_inl_dev_lock(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev != NULL)
+		plt_spinlock_lock(&idev->nix_inl_dev_lock);
+}
+
+void
+roc_nix_inl_dev_unlock(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev != NULL)
+		plt_spinlock_unlock(&idev->nix_inl_dev_lock);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index f1fe4a2..efc5a19 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -43,6 +43,62 @@
 /* Alignment of SA Base */
 #define ROC_NIX_INL_SA_BASE_ALIGN BIT_ULL(16)
 
+static inline struct roc_onf_ipsec_inb_sa *
+roc_nix_inl_onf_ipsec_inb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline struct roc_onf_ipsec_outb_sa *
+roc_nix_inl_onf_ipsec_outb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline void *
+roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ);
+}
+
+static inline void *
+roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ);
+}
+
+static inline struct roc_ot_ipsec_inb_sa *
+roc_nix_inl_ot_ipsec_inb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline struct roc_ot_ipsec_outb_sa *
+roc_nix_inl_ot_ipsec_outb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline void *
+roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_OT_IPSEC_INB_HW_SZ);
+}
+
+static inline void *
+roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ);
+}
+
 /* Inline device SSO Work callback */
 typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
 
@@ -61,5 +117,49 @@ struct roc_nix_inl_dev {
 int __roc_api roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev);
 int __roc_api roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev);
 void __roc_api roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev);
+bool __roc_api roc_nix_inl_dev_is_probed(void);
+void __roc_api roc_nix_inl_dev_lock(void);
+void __roc_api roc_nix_inl_dev_unlock(void);
+
+/* NIX Inline Inbound API */
+int __roc_api roc_nix_inl_inb_init(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_inb_fini(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_inl_inb_is_enabled(struct roc_nix *roc_nix);
+uintptr_t __roc_api roc_nix_inl_inb_sa_base_get(struct roc_nix *roc_nix,
+						bool inl_dev_sa);
+uint32_t __roc_api roc_nix_inl_inb_sa_max_spi(struct roc_nix *roc_nix,
+					      bool inl_dev_sa);
+uint32_t __roc_api roc_nix_inl_inb_sa_sz(struct roc_nix *roc_nix,
+					 bool inl_dev_sa);
+uintptr_t __roc_api roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix,
+					   bool inl_dev_sa, uint32_t spi);
+void __roc_api roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev);
+int __roc_api roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq);
+int __roc_api roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq);
+bool __roc_api roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix);
+struct roc_nix_rq *__roc_api roc_nix_inl_dev_rq(void);
+int __roc_api roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix,
+					 uint32_t tag_const, uint8_t tt);
+
+/* NIX Inline Outbound API */
+int __roc_api roc_nix_inl_outb_init(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_outb_fini(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_inl_outb_is_enabled(struct roc_nix *roc_nix);
+uintptr_t __roc_api roc_nix_inl_outb_sa_base_get(struct roc_nix *roc_nix);
+struct roc_cpt_lf *__roc_api
+roc_nix_inl_outb_lf_base_get(struct roc_nix *roc_nix);
+uint16_t __roc_api roc_nix_inl_outb_sso_pffunc_get(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_cb_register(roc_nix_inl_sso_work_cb_t cb, void *args);
+int __roc_api roc_nix_inl_cb_unregister(roc_nix_inl_sso_work_cb_t cb,
+					void *args);
+/* NIX Inline/Outbound API */
+enum roc_nix_inl_sa_sync_op {
+	ROC_NIX_INL_SA_OP_FLUSH,
+	ROC_NIX_INL_SA_OP_FLUSH_INVAL,
+	ROC_NIX_INL_SA_OP_RELOAD,
+};
+
+int __roc_api roc_nix_inl_sa_sync(struct roc_nix *roc_nix, void *sa, bool inb,
+				  enum roc_nix_inl_sa_sync_op op);
 
 #endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 79c15ea..2cd5a72 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -162,6 +162,21 @@ struct nix {
 	uint16_t tm_link_cfg_lvl;
 	uint16_t contig_rsvd[NIX_TXSCH_LVL_CNT];
 	uint16_t discontig_rsvd[NIX_TXSCH_LVL_CNT];
+
+	/* Ipsec info */
+	uint16_t cpt_msixoff[MAX_RVU_BLKLF_CNT];
+	bool inl_inb_ena;
+	bool inl_outb_ena;
+	void *inb_sa_base;
+	size_t inb_sa_sz;
+	void *outb_sa_base;
+	size_t outb_sa_sz;
+	uint16_t outb_err_sso_pffunc;
+	struct roc_cpt_lf *cpt_lf_base;
+	uint16_t nb_cpt_lf;
+	/* Mode provided by driver */
+	bool inb_inl_dev;
+
 } __plt_cache_aligned;
 
 enum nix_err_status {
diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c
index aff4eef..f13331f 100644
--- a/drivers/common/cnxk/roc_npc.c
+++ b/drivers/common/cnxk/roc_npc.c
@@ -340,10 +340,11 @@ roc_npc_fini(struct roc_npc *roc_npc)
 }
 
 static int
-npc_parse_actions(struct npc *npc, const struct roc_npc_attr *attr,
+npc_parse_actions(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 		  const struct roc_npc_action actions[],
 		  struct roc_npc_flow *flow)
 {
+	struct npc *npc = roc_npc_to_npc_priv(roc_npc);
 	const struct roc_npc_action_mark *act_mark;
 	const struct roc_npc_action_queue *act_q;
 	const struct roc_npc_action_vf *vf_act;
@@ -425,15 +426,16 @@ npc_parse_actions(struct npc *npc, const struct roc_npc_attr *attr,
 			 *    NPC_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
 			 *  session_protocol ==
 			 *    NPC_SECURITY_PROTOCOL_IPSEC
-			 *
-			 * RSS is not supported with inline ipsec. Get the
-			 * rq from associated conf, or make
-			 * ROC_NPC_ACTION_TYPE_QUEUE compulsory with this
-			 * action.
-			 * Currently, rq = 0 is assumed.
 			 */
 			req_act |= ROC_NPC_ACTION_TYPE_SEC;
 			rq = 0;
+
+			/* Special processing when inline device is in use */
+			if (roc_nix_inb_is_with_inl_dev(roc_npc->roc_nix) &&
+			    roc_nix_inl_dev_is_probed()) {
+				rq = 0;
+				pf_func = nix_inl_dev_pffunc_get();
+			}
 			break;
 		case ROC_NPC_ACTION_TYPE_VLAN_STRIP:
 			req_act |= ROC_NPC_ACTION_TYPE_VLAN_STRIP;
@@ -660,11 +662,12 @@ npc_parse_attr(struct npc *npc, const struct roc_npc_attr *attr,
 }
 
 static int
-npc_parse_rule(struct npc *npc, const struct roc_npc_attr *attr,
+npc_parse_rule(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	       const struct roc_npc_item_info pattern[],
 	       const struct roc_npc_action actions[], struct roc_npc_flow *flow,
 	       struct npc_parse_state *pst)
 {
+	struct npc *npc = roc_npc_to_npc_priv(roc_npc);
 	int err;
 
 	/* Check attr */
@@ -678,7 +681,7 @@ npc_parse_rule(struct npc *npc, const struct roc_npc_attr *attr,
 		return err;
 
 	/* Check action */
-	err = npc_parse_actions(npc, attr, actions, flow);
+	err = npc_parse_actions(roc_npc, attr, actions, flow);
 	if (err)
 		return err;
 	return 0;
@@ -694,7 +697,8 @@ roc_npc_flow_parse(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	struct npc_parse_state parse_state = {0};
 	int rc;
 
-	rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state);
+	rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow,
+			    &parse_state);
 	if (rc)
 		return rc;
 
@@ -1018,7 +1022,8 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	}
 	memset(flow, 0, sizeof(*flow));
 
-	rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state);
+	rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow,
+			    &parse_state);
 	if (rc != 0) {
 		*errcode = rc;
 		goto err_exit;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 3a35233..9fcc677 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -100,9 +100,34 @@ INTERNAL {
 	roc_nix_get_pf_func;
 	roc_nix_get_vf;
 	roc_nix_get_vwqe_interval;
+	roc_nix_inl_cb_register;
+	roc_nix_inl_cb_unregister;
 	roc_nix_inl_dev_dump;
 	roc_nix_inl_dev_fini;
 	roc_nix_inl_dev_init;
+	roc_nix_inl_dev_is_probed;
+	roc_nix_inl_dev_lock;
+	roc_nix_inl_dev_unlock;
+	roc_nix_inl_dev_rq;
+	roc_nix_inl_dev_rq_get;
+	roc_nix_inl_dev_rq_put;
+	roc_nix_inl_inb_is_enabled;
+	roc_nix_inl_inb_init;
+	roc_nix_inl_inb_sa_base_get;
+	roc_nix_inl_inb_sa_get;
+	roc_nix_inl_inb_sa_max_spi;
+	roc_nix_inl_inb_sa_sz;
+	roc_nix_inl_inb_tag_update;
+	roc_nix_inl_inb_fini;
+	roc_nix_inb_is_with_inl_dev;
+	roc_nix_inb_mode_set;
+	roc_nix_inl_outb_fini;
+	roc_nix_inl_outb_init;
+	roc_nix_inl_outb_lf_base_get;
+	roc_nix_inl_outb_sa_base_get;
+	roc_nix_inl_outb_sso_pffunc_get;
+	roc_nix_inl_outb_is_enabled;
+	roc_nix_inl_sa_sync;
 	roc_nix_is_lbk;
 	roc_nix_is_pf;
 	roc_nix_is_sdp;
-- 
2.8.4
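
For reference, a minimal consumer-side sketch of the work-callback and
SA-sync APIs added above. The handler body and helper names here are
illustrative only; the ROC calls are the ones declared in roc_nix_inl.h:

	#include "roc_api.h"

	/* Hypothetical handler for work delivered by the inline device */
	static void
	handle_inl_work(uint64_t *gw, void *args)
	{
		PLT_SET_USED(gw);
		PLT_SET_USED(args);
	}

	static int
	inl_dev_cb_attach(void *ctx)
	{
		int rc;

		/* Serialize against other users of the shared inline device */
		roc_nix_inl_dev_lock();
		rc = roc_nix_inl_cb_register(handle_inl_work, ctx);
		roc_nix_inl_dev_unlock();

		/* rc is -EBUSY if a different callback is already set */
		return rc;
	}

	static int
	outb_sa_update_sync(struct roc_nix *roc_nix, void *sa)
	{
		/* After a CPU-side SA update, flush and invalidate the
		 * cached CPT context so HW re-reads it from memory
		 * (a no-op on cn9k, where only a fence is issued).
		 */
		return roc_nix_inl_sa_sync(roc_nix, sa, false,
					   ROC_NIX_INL_SA_OP_FLUSH_INVAL);
	}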


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 08/27] common/cnxk: dump cpt lf registers on error intr
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (6 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 07/27] common/cnxk: add nix inline inbound and outbound support API Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 09/27] common/cnxk: align cpt lf enable/disable sequence Nithin Dabilpuram
                   ` (21 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Dump CPT LF registers on error interrupt.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_cpt.c       |  5 ++++-
 drivers/common/cnxk/roc_cpt_debug.c | 32 ++++++++++++++++++++++++++++++--
 drivers/common/cnxk/roc_cpt_priv.h  |  1 +
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 3222b3e..f08b5d0 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -51,6 +51,9 @@ cpt_lf_misc_irq(void *param)
 
 	plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
 
+	/* Dump lf registers */
+	cpt_lf_print(lf);
+
 	/* Clear interrupt */
 	plt_write64(intr, lf->rbase + CPT_LF_MISC_INT);
 }
@@ -203,7 +206,7 @@ cpt_lf_dump(struct roc_cpt_lf *lf)
 	plt_cpt_dbg("CPT LF REG:");
 	plt_cpt_dbg("LF_CTL[0x%016llx]: 0x%016" PRIx64, CPT_LF_CTL,
 		    plt_read64(lf->rbase + CPT_LF_CTL));
-	plt_cpt_dbg("Q_SIZE[0x%016llx]: 0x%016" PRIx64, CPT_LF_INPROG,
+	plt_cpt_dbg("LF_INPROG[0x%016llx]: 0x%016" PRIx64, CPT_LF_INPROG,
 		    plt_read64(lf->rbase + CPT_LF_INPROG));
 
 	plt_cpt_dbg("Q_BASE[0x%016llx]: 0x%016" PRIx64, CPT_LF_Q_BASE,
diff --git a/drivers/common/cnxk/roc_cpt_debug.c b/drivers/common/cnxk/roc_cpt_debug.c
index a6c9004..847d969 100644
--- a/drivers/common/cnxk/roc_cpt_debug.c
+++ b/drivers/common/cnxk/roc_cpt_debug.c
@@ -157,11 +157,40 @@ roc_cpt_afs_print(struct roc_cpt *roc_cpt)
 	return 0;
 }
 
-static void
+void
 cpt_lf_print(struct roc_cpt_lf *lf)
 {
 	uint64_t reg_val;
 
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_BASE);
+	plt_print("    CPT_LF_Q_BASE:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_SIZE);
+	plt_print("    CPT_LF_Q_SIZE:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_INST_PTR);
+	plt_print("    CPT_LF_Q_INST_PTR:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_GRP_PTR);
+	plt_print("    CPT_LF_Q_GRP_PTR:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_CTL);
+	plt_print("    CPT_LF_CTL:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_MISC_INT_ENA_W1S);
+	plt_print("    CPT_LF_MISC_INT_ENA_W1S:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_MISC_INT);
+	plt_print("    CPT_LF_MISC_INT:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_INPROG);
+	plt_print("    CPT_LF_INPROG:\t%016lx", reg_val);
+
+	if (roc_model_is_cn9k())
+		return;
+
+	plt_print("Count registers for CPT LF%d:", lf->lf_id);
+
 	reg_val = plt_read64(lf->rbase + CPT_LF_CTX_ENC_BYTE_CNT);
 	plt_print("    Encrypted byte count:\t%" PRIu64, reg_val);
 
@@ -190,7 +219,6 @@ roc_cpt_lfs_print(struct roc_cpt *roc_cpt)
 		if (lf == NULL)
 			continue;
 
-		plt_print("Count registers for CPT LF%d:", lf_id);
 		cpt_lf_print(lf);
 	}
 
diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h
index 21911e5..61dec9a 100644
--- a/drivers/common/cnxk/roc_cpt_priv.h
+++ b/drivers/common/cnxk/roc_cpt_priv.h
@@ -31,5 +31,6 @@ int cpt_lf_outb_cfg(struct dev *dev, uint16_t sso_pf_func, uint16_t nix_pf_func,
 		    uint8_t lf_id, bool ena);
 int cpt_get_msix_offset(struct dev *dev, struct msix_offset_rsp **msix_rsp);
 uint64_t cpt_get_blkaddr(struct dev *dev);
+void cpt_lf_print(struct roc_cpt_lf *lf);
 
 #endif /* _ROC_CPT_PRIV_H_ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 09/27] common/cnxk: align cpt lf enable/disable sequence
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (7 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 08/27] common/cnxk: dump cpt lf registers on error intr Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 10/27] common/cnxk: restore nix sqb pool limit before destroy Nithin Dabilpuram
                   ` (20 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

For CPT LF IQ enable, set CPT_LF_CTL[ENA] before setting
CPT_LF_INPROG[EENA] to true.

For CPT LF IQ disable, align the sequence with the one given in the HRM.

This patch also aligns the instruction queue memory of a CPT LF to
ROC_ALIGN so that the complete memory is cache aligned, and includes
other minor fixes/additions.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/hw/cpt.h  | 11 +++++++++++
 drivers/common/cnxk/roc_cpt.c | 42 ++++++++++++++++++++++++++++++++++--------
 drivers/common/cnxk/roc_cpt.h |  8 ++++++++
 3 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 975139f..4d9df59 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -124,6 +124,17 @@ union cpt_lf_misc_int {
 	} s;
 };
 
+union cpt_lf_q_grp_ptr {
+	uint64_t u;
+	struct {
+		uint64_t dq_ptr : 15;
+		uint64_t reserved_31_15 : 17;
+		uint64_t nq_ptr : 15;
+		uint64_t reserved_47_62 : 16;
+		uint64_t xq_xor : 1;
+	} s;
+};
+
 union cpt_inst_w4 {
 	uint64_t u64;
 	struct {
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index f08b5d0..b30f44e 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -437,8 +437,10 @@ cpt_lf_iq_mem_calc(uint32_t nb_desc)
 	len += CPT_IQ_FC_LEN;
 
 	/* For instruction queues */
-	len += CPT_IQ_NB_DESC_SIZE_DIV40(nb_desc) * CPT_IQ_NB_DESC_MULTIPLIER *
-	       sizeof(struct cpt_inst_s);
+	len += PLT_ALIGN(CPT_IQ_NB_DESC_SIZE_DIV40(nb_desc) *
+				 CPT_IQ_NB_DESC_MULTIPLIER *
+				 sizeof(struct cpt_inst_s),
+			 ROC_ALIGN);
 
 	return len;
 }
@@ -550,6 +552,7 @@ cpt_lf_init(struct roc_cpt_lf *lf)
 	iq_mem = plt_zmalloc(cpt_lf_iq_mem_calc(lf->nb_desc), ROC_ALIGN);
 	if (iq_mem == NULL)
 		return -ENOMEM;
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
 
 	blkaddr = cpt_get_blkaddr(dev);
 	lf->rbase = dev->bar2 + ((blkaddr << 20) | (lf->lf_id << 12));
@@ -634,7 +637,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
 	}
 
 	/* Reserve 1 CPT LF for inline inbound */
-	nb_lf_avail = PLT_MIN(nb_lf_avail, ROC_CPT_MAX_LFS - 1);
+	nb_lf_avail = PLT_MIN(nb_lf_avail, (uint16_t)(ROC_CPT_MAX_LFS - 1));
 
 	roc_cpt->nb_lf_avail = nb_lf_avail;
 
@@ -770,8 +773,10 @@ void
 roc_cpt_iq_disable(struct roc_cpt_lf *lf)
 {
 	union cpt_lf_ctl lf_ctl = {.u = 0x0};
+	union cpt_lf_q_grp_ptr grp_ptr;
 	union cpt_lf_inprog lf_inprog;
 	int timeout = 20;
+	int cnt;
 
 	/* Disable instructions enqueuing */
 	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);
@@ -795,6 +800,27 @@ roc_cpt_iq_disable(struct roc_cpt_lf *lf)
 	 */
 	lf_inprog.s.eena = 0x0;
 	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
+
+	/* Wait for instruction queue to become empty */
+	cnt = 0;
+	do {
+		lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+		if (lf_inprog.s.grb_partial)
+			cnt = 0;
+		else
+			cnt++;
+		grp_ptr.u = plt_read64(lf->rbase + CPT_LF_Q_GRP_PTR);
+	} while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr));
+
+	cnt = 0;
+	do {
+		lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+		if ((lf_inprog.s.inflight == 0) && (lf_inprog.s.gwb_cnt < 40) &&
+		    ((lf_inprog.s.grb_cnt == 0) || (lf_inprog.s.grb_cnt == 40)))
+			cnt++;
+		else
+			cnt = 0;
+	} while (cnt < 10);
 }
 
 void
@@ -806,11 +832,6 @@ roc_cpt_iq_enable(struct roc_cpt_lf *lf)
 	/* Disable command queue */
 	roc_cpt_iq_disable(lf);
 
-	/* Enable command queue execution */
-	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
-	lf_inprog.s.eena = 1;
-	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
-
 	/* Enable instruction queue enqueuing */
 	lf_ctl.u = plt_read64(lf->rbase + CPT_LF_CTL);
 	lf_ctl.s.ena = 1;
@@ -819,6 +840,11 @@ roc_cpt_iq_enable(struct roc_cpt_lf *lf)
 	lf_ctl.s.fc_hyst_bits = lf->fc_hyst_bits;
 	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);
 
+	/* Enable command queue execution */
+	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+	lf_inprog.s.eena = 1;
+	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
+
 	cpt_lf_dump(lf);
 }
 
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 9b55303..2ac3197 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -75,6 +75,14 @@
 #define ROC_CPT_TUNNEL_IPV4_HDR_LEN 20
 #define ROC_CPT_TUNNEL_IPV6_HDR_LEN 40
 
+#define ROC_CPT_CCM_AAD_DATA 1
+#define ROC_CPT_CCM_MSG_LEN  4
+#define ROC_CPT_CCM_ICV_LEN  16
+#define ROC_CPT_CCM_FLAGS                                                      \
+	((ROC_CPT_CCM_AAD_DATA << 6) |                                         \
+	 (((ROC_CPT_CCM_ICV_LEN - 2) / 2) << 3) | (ROC_CPT_CCM_MSG_LEN - 1))
+#define ROC_CPT_CCM_SALT_LEN 3
+
 struct roc_cpt_lmtline {
 	uint64_t io_addr;
 	uint64_t *fc_addr;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 10/27] common/cnxk: restore nix sqb pool limit before destroy
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (8 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 09/27] common/cnxk: align cpt lf enable/disable sequence Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 11/27] common/cnxk: add cq enable support in nix Tx path Nithin Dabilpuram
                   ` (19 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Restore the SQB aura/pool limit before destroying the SQB pool so that
all buffers can be drained from the aura.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_queue.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index de63361..03a3a06 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -921,6 +921,11 @@ roc_nix_sq_fini(struct roc_nix_sq *sq)
 		rc |= NIX_ERR_NDC_SYNC;
 
 	rc |= nix_tm_sq_flush_post(sq);
+
+	/* Restore limit to max SQB count that the pool was created
+	 * for aura drain to succeed.
+	 */
+	roc_npa_aura_limit_modify(sq->aura_handle, NIX_MAX_SQB);
 	rc |= roc_npa_pool_destroy(sq->aura_handle);
 	plt_free(sq->fc);
 	plt_free(sq->sqe_mem);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 11/27] common/cnxk: add cq enable support in nix Tx path
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (9 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 10/27] common/cnxk: restore nix sqb pool limit before destroy Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 12/27] common/cnxk: setup aura bp conf based on nix Nithin Dabilpuram
                   ` (18 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev, Kommula Shiva Shankar

From: Kommula Shiva Shankar <kshankar@marvell.com>

This patch allows applications to enable CQ support in the NIX Tx path.

Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
---
 drivers/common/cnxk/roc_nix.h       | 2 ++
 drivers/common/cnxk/roc_nix_queue.c | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index ed6e721..95ea4e0 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -194,7 +194,9 @@ struct roc_nix_sq {
 	enum roc_nix_sq_max_sqe_sz max_sqe_sz;
 	uint32_t nb_desc;
 	uint16_t qid;
+	uint16_t cqid;
 	bool sso_ena;
+	bool cq_ena;
 	/* End of Input parameters */
 	uint16_t sqes_per_sqb_log2;
 	struct roc_nix *roc_nix;
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 03a3a06..8fbb13e 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -648,6 +648,8 @@ sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
 	aq->sq.sqe_stype = NIX_STYPE_STF;
 	aq->sq.ena = 1;
 	aq->sq.sso_ena = !!sq->sso_ena;
+	aq->sq.cq_ena = !!sq->cq_ena;
+	aq->sq.cq = sq->cqid;
 	if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
 		aq->sq.sqe_stype = NIX_STYPE_STP;
 	aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
@@ -746,6 +748,8 @@ sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
 	aq->sq.sqe_stype = NIX_STYPE_STF;
 	aq->sq.ena = 1;
 	aq->sq.sso_ena = !!sq->sso_ena;
+	aq->sq.cq_ena = !!sq->cq_ena;
+	aq->sq.cq = sq->cqid;
 	if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
 		aq->sq.sqe_stype = NIX_STYPE_STP;
 	aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
-- 
2.8.4
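
For reference, a minimal sketch of how a caller is expected to use the
new fields; the helper name is illustrative, while roc_nix_cq_init() and
roc_nix_sq_init() are the existing ROC queue setup calls:

	static int
	setup_sq_with_completions(struct roc_nix *roc_nix,
				  struct roc_nix_cq *cq,
				  struct roc_nix_sq *sq)
	{
		int rc;

		/* CQ must exist before the SQ can point at it */
		rc = roc_nix_cq_init(roc_nix, cq);
		if (rc)
			return rc;

		/* Request Tx completion entries on the given CQ */
		sq->cq_ena = true;
		sq->cqid = cq->qid;

		return roc_nix_sq_init(roc_nix, sq);
	}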


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 12/27] common/cnxk: setup aura bp conf based on nix
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (10 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 11/27] common/cnxk: add cq enable support in nix Tx path Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 13/27] common/cnxk: add anti-replay check implementation for cn9k Nithin Dabilpuram
                   ` (17 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Currently only the nix0 configuration is set up in the aura for
backpressure. This patch adds support for nix1 as well.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_fc.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index f17eba4..7eac7d0 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -284,8 +284,18 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 	limit = rsp->aura.limit;
 	/* BP is already enabled. */
 	if (rsp->aura.bp_ena) {
+		uint16_t bpid;
+		bool nix1;
+
+		nix1 = !!(rsp->aura.bp_ena & 0x2);
+		if (nix1)
+			bpid = rsp->aura.nix1_bpid;
+		else
+			bpid = rsp->aura.nix0_bpid;
+
 		/* If BP ids don't match disable BP. */
-		if ((rsp->aura.nix0_bpid != nix->bpid[0]) && !force) {
+		if (((nix1 != nix->is_nix1) || (bpid != nix->bpid[0])) &&
+		    !force) {
 			req = mbox_alloc_msg_npa_aq_enq(mbox);
 			if (req == NULL)
 				return;
@@ -315,14 +325,19 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 	req->op = NPA_AQ_INSTOP_WRITE;
 
 	if (ena) {
-		req->aura.nix0_bpid = nix->bpid[0];
-		req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
+		if (nix->is_nix1) {
+			req->aura.nix1_bpid = nix->bpid[0];
+			req->aura_mask.nix1_bpid = ~(req->aura_mask.nix1_bpid);
+		} else {
+			req->aura.nix0_bpid = nix->bpid[0];
+			req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
+		}
 		req->aura.bp = NIX_RQ_AURA_THRESH(
 			limit > 128 ? 256 : limit); /* 95% of size*/
 		req->aura_mask.bp = ~(req->aura_mask.bp);
 	}
 
-	req->aura.bp_ena = !!ena;
+	req->aura.bp_ena = (!!ena << nix->is_nix1);
 	req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
 
 	mbox_process(mbox);
-- 
2.8.4
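
The aura context's bp_ena is effectively a per-NIX enable mask: bit 0
asserts backpressure towards nix0 and bit 1 towards nix1, which is why
the patch shifts the enable by is_nix1. A hypothetical helper, just to
illustrate the encoding:

	static inline uint8_t
	aura_bp_ena_encode(bool ena, bool is_nix1)
	{
		/* bit 0 -> nix0, bit 1 -> nix1 */
		return (uint8_t)(!!ena << is_nix1);
	}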


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 13/27] common/cnxk: add anti-replay check implementation for cn9k
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (11 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 12/27] common/cnxk: setup aura bp conf based on nix Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 14/27] common/cnxk: add inline IPsec support in rte flow Nithin Dabilpuram
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

From: Srujana Challa <schalla@marvell.com>

Add an anti-replay check helper for the cn9k platform.

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/common/cnxk/cnxk_security_ar.h | 184 +++++++++++++++++++++++++++++++++
 1 file changed, 184 insertions(+)
 create mode 100644 drivers/common/cnxk/cnxk_security_ar.h

diff --git a/drivers/common/cnxk/cnxk_security_ar.h b/drivers/common/cnxk/cnxk_security_ar.h
new file mode 100644
index 0000000..6bc517c
--- /dev/null
+++ b/drivers/common/cnxk/cnxk_security_ar.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_SECURITY_AR_H__
+#define __CNXK_SECURITY_AR_H__
+
+#include <rte_mbuf.h>
+
+#include "cnxk_security.h"
+
+#define CNXK_ON_AR_WIN_SIZE_MAX 1024
+
+/* u64 array size to fit anti replay window bits */
+#define AR_WIN_ARR_SZ                                                          \
+	(PLT_ALIGN_CEIL(CNXK_ON_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) /        \
+	 BITS_PER_LONG_LONG)
+
+#define WORD_SHIFT 6
+#define WORD_SIZE  (1 << WORD_SHIFT)
+#define WORD_MASK  (WORD_SIZE - 1)
+
+#define IPSEC_ANTI_REPLAY_FAILED (-1)
+
+struct cnxk_on_ipsec_ar {
+	rte_spinlock_t lock;
+	uint32_t winb;
+	uint32_t wint;
+	uint64_t base;			/**< base of the anti-replay window */
+	uint64_t window[AR_WIN_ARR_SZ]; /**< anti-replay window */
+};
+
+static inline int
+cnxk_on_anti_replay_check(uint64_t seq, struct cnxk_on_ipsec_ar *ar,
+			  uint32_t winsz)
+{
+	uint64_t ex_winsz = winsz + WORD_SIZE;
+	uint64_t *window = &ar->window[0];
+	uint64_t seqword, shiftwords;
+	uint64_t base = ar->base;
+	uint32_t winb = ar->winb;
+	uint32_t wint = ar->wint;
+	uint64_t winwords;
+	uint64_t bit_pos;
+	uint64_t shift;
+	uint64_t *wptr;
+	uint64_t tmp;
+
+	winwords = ex_winsz >> WORD_SHIFT;
+	if (winsz > 64)
+		goto slow_shift;
+	/* Check if the seq is the biggest one yet */
+	if (likely(seq > base)) {
+		shift = seq - base;
+		if (shift < winsz) { /* In window */
+			/*
+			 * If more than 64-bit anti-replay window,
+			 * use slow shift routine
+			 */
+			wptr = window + (shift >> WORD_SHIFT);
+			*wptr <<= shift;
+			*wptr |= 1ull;
+		} else {
+			/* No special handling of window size > 64 */
+			wptr = window + ((winsz - 1) >> WORD_SHIFT);
+			/*
+			 * Zero out the whole window (especially for
+			 * bigger than 64b window) till the last 64b word
+			 * as the incoming sequence number minus
+			 * base sequence is more than the window size.
+			 */
+			while (window != wptr)
+				*window++ = 0ull;
+			/*
+			 * Set the last bit (of the window) to 1
+			 * as that corresponds to the base sequence number.
+			 * Now any incoming sequence number which is
+			 * (base - window size - 1) will pass anti-replay check
+			 */
+			*wptr = 1ull;
+		}
+		/*
+		 * Set the base to incoming sequence number as
+		 * that is the biggest sequence number seen yet
+		 */
+		ar->base = seq;
+		return 0;
+	}
+
+	bit_pos = base - seq;
+
+	/* If seq falls behind the window, return failure */
+	if (bit_pos >= winsz)
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* seq is within anti-replay window */
+	wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT);
+	bit_pos &= WORD_MASK;
+
+	/* Check if this is a replayed packet */
+	if (*wptr & ((1ull) << bit_pos))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* mark as seen */
+	*wptr |= ((1ull) << bit_pos);
+	return 0;
+
+slow_shift:
+	if (likely(seq > base)) {
+		uint32_t i;
+
+		shift = seq - base;
+		if (unlikely(shift >= winsz)) {
+			/*
+			 * shift is bigger than the window,
+			 * so just zero out everything
+			 */
+			for (i = 0; i < winwords; i++)
+				window[i] = 0;
+winupdate:
+			/* Find out the word */
+			seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
+
+			/* Find out the bit in the word */
+			bit_pos = (seq - 1) & WORD_MASK;
+
+			/*
+			 * Set the bit corresponding to sequence number
+			 * in window to mark it as received
+			 */
+			window[seqword] |= (1ull << (63 - bit_pos));
+
+			/* wint and winb range from 1 to ex_winsz */
+			ar->wint = ((wint + shift - 1) % ex_winsz) + 1;
+			ar->winb = ((winb + shift - 1) % ex_winsz) + 1;
+
+			ar->base = seq;
+			return 0;
+		}
+
+		/*
+		 * New sequence number is bigger than the base but
+		 * it's not bigger than base + window size
+		 */
+
+		shiftwords = ((wint + shift - 1) >> WORD_SHIFT) -
+			     ((wint - 1) >> WORD_SHIFT);
+		if (unlikely(shiftwords)) {
+			tmp = (wint + WORD_SIZE - 1) / WORD_SIZE;
+			for (i = 0; i < shiftwords; i++) {
+				tmp %= winwords;
+				window[tmp++] = 0;
+			}
+		}
+
+		goto winupdate;
+	}
+
+	/* Sequence number is before the window */
+	if (unlikely((seq + winsz) <= base))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* Sequence number is within the window */
+
+	/* Find out the word */
+	seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
+
+	/* Find out the bit in the word */
+	bit_pos = (seq - 1) & WORD_MASK;
+
+	/* Check if this is a replayed packet */
+	if (window[seqword] & (1ull << (63 - bit_pos)))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/*
+	 * Set the bit corresponding to sequence number
+	 * in window to mark it as received
+	 */
+	window[seqword] |= (1ull << (63 - bit_pos));
+
+	return 0;
+}
+
+#endif /* __CNXK_SECURITY_AR_H__ */
-- 
2.8.4
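
For reference, a minimal usage sketch for the helper above. The window
size and wrapper name are illustrative; callers are expected to take the
per-SA lock since the window state is updated in place:

	#include <rte_spinlock.h>

	#include "cnxk_security_ar.h"

	static int
	inb_pkt_replay_check(struct cnxk_on_ipsec_ar *ar, uint64_t seq)
	{
		const uint32_t winsz = 64; /* <= CNXK_ON_AR_WIN_SIZE_MAX */
		int rc;

		rte_spinlock_lock(&ar->lock);
		rc = cnxk_on_anti_replay_check(seq, ar, winsz);
		rte_spinlock_unlock(&ar->lock);

		/* 0 on pass, IPSEC_ANTI_REPLAY_FAILED on replay/stale seq */
		return rc;
	}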


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 14/27] common/cnxk: add inline IPsec support in rte flow
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (12 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 13/27] common/cnxk: add anti-replay check implementation for cn9k Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 15/27] net/cnxk: add inline security support for cn9k Nithin Dabilpuram
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev, Satheesh Paul

From: Satheesh Paul <psatheesh@marvell.com>

Add support to configure flow rules with the inline IPsec action.

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
---
 drivers/common/cnxk/roc_nix_inl.h      |  4 ++++
 drivers/common/cnxk/roc_nix_inl_dev.c  |  3 +++
 drivers/common/cnxk/roc_nix_inl_priv.h |  3 +++
 drivers/common/cnxk/roc_npc_mcam.c     | 28 ++++++++++++++++++++++++++--
 4 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index efc5a19..1f7ec4f 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -107,6 +107,10 @@ struct roc_nix_inl_dev {
 	struct plt_pci_device *pci_dev;
 	uint16_t ipsec_in_max_spi;
 	bool selftest;
+	bool is_multi_channel;
+	uint16_t channel;
+	uint16_t chan_mask;
+
 	/* End of input parameters */
 
 #define ROC_NIX_INL_MEM_SZ (1024)
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index 214f183..2dc2188 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -461,6 +461,9 @@ roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
 	inl_dev->pci_dev = pci_dev;
 	inl_dev->ipsec_in_max_spi = roc_inl_dev->ipsec_in_max_spi;
 	inl_dev->selftest = roc_inl_dev->selftest;
+	inl_dev->is_multi_channel = roc_inl_dev->is_multi_channel;
+	inl_dev->channel = roc_inl_dev->channel;
+	inl_dev->chan_mask = roc_inl_dev->chan_mask;
 
 	/* Initialize base device */
 	rc = dev_init(&inl_dev->dev, pci_dev);
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index ab38062..a302118 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -45,6 +45,9 @@ struct nix_inl_dev {
 
 	/* Device arguments */
 	uint8_t selftest;
+	uint16_t channel;
+	uint16_t chan_mask;
+	bool is_multi_channel;
 	uint16_t ipsec_in_max_spi;
 };
 
diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c
index 8ccaaad..4985d22 100644
--- a/drivers/common/cnxk/roc_npc_mcam.c
+++ b/drivers/common/cnxk/roc_npc_mcam.c
@@ -503,8 +503,11 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
 {
 	int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
 	struct npc_mcam_write_entry_req *req;
+	struct nix_inl_dev *inl_dev = NULL;
 	struct mbox *mbox = npc->mbox;
 	struct mbox_msghdr *rsp;
+	struct idev_cfg *idev;
+	uint16_t pf_func = 0;
 	uint16_t ctr = ~(0);
 	int rc, idx;
 	int entry;
@@ -553,9 +556,30 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
 		req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
 	}
 
+	idev = idev_get_cfg();
+	if (idev)
+		inl_dev = idev->nix_inl_dev;
+
 	if (flow->nix_intf == NIX_INTF_RX) {
-		req->entry_data.kw[0] |= (uint64_t)npc->channel;
-		req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+		if (inl_dev && inl_dev->is_multi_channel &&
+		    (flow->npc_action & NIX_RX_ACTIONOP_UCAST_IPSEC)) {
+			req->entry_data.kw[0] |= (uint64_t)inl_dev->channel;
+			req->entry_data.kw_mask[0] |=
+				(uint64_t)inl_dev->chan_mask;
+			pf_func = nix_inl_dev_pffunc_get();
+			req->entry_data.action &= ~(GENMASK(19, 4));
+			req->entry_data.action |= (uint64_t)pf_func << 4;
+
+			flow->npc_action &= ~(GENMASK(19, 4));
+			flow->npc_action |= (uint64_t)pf_func << 4;
+			flow->mcam_data[0] |= (uint64_t)inl_dev->channel;
+			flow->mcam_mask[0] |= (uint64_t)inl_dev->chan_mask;
+		} else {
+			req->entry_data.kw[0] |= (uint64_t)npc->channel;
+			req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+			flow->mcam_data[0] |= (uint64_t)npc->channel;
+			flow->mcam_mask[0] |= (BIT_ULL(12) - 1);
+		}
 	} else {
 		uint16_t pf_func = (flow->npc_action >> 4) & 0xffff;
 
-- 
2.8.4
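
At the application level, the rule shape this patch enables looks
roughly as below. The SPI and pattern are illustrative, and the session
pointer is assumed to come from rte_security_session_create():

	#include <rte_byteorder.h>
	#include <rte_flow.h>

	static struct rte_flow *
	install_inline_ipsec_rule(uint16_t port_id, void *sec_sess,
				  struct rte_flow_error *err)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_item_esp esp = {
			.hdr.spi = rte_cpu_to_be_32(0x100), /* example SPI */
		};
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
			{ .type = RTE_FLOW_ITEM_TYPE_ESP,
			  .spec = &esp, .mask = &esp },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_SECURITY,
			  .conf = sec_sess },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		return rte_flow_create(port_id, &attr, pattern, actions, err);
	}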


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 15/27] net/cnxk: add inline security support for cn9k
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (13 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 14/27] common/cnxk: add inline IPsec support in rte flow Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 16/27] net/cnxk: add inline security support for cn10k Nithin Dabilpuram
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Ray Kinsella, Anatoly Burakov
  Cc: jerinj, schalla, dev

Add support for inline inbound and outbound IPsec: SA create/destroy
and the related NIX / CPT LF configuration.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/cnxk/cn9k_ethdev.c         |  23 +++
 drivers/net/cnxk/cn9k_ethdev.h         |  61 +++++++
 drivers/net/cnxk/cn9k_ethdev_sec.c     | 313 +++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_rx.h             |   1 +
 drivers/net/cnxk/cn9k_tx.h             |   1 +
 drivers/net/cnxk/cnxk_ethdev.c         | 214 +++++++++++++++++++++-
 drivers/net/cnxk/cnxk_ethdev.h         | 121 ++++++++++++-
 drivers/net/cnxk/cnxk_ethdev_devargs.c |  88 ++++++++-
 drivers/net/cnxk/cnxk_ethdev_sec.c     | 278 +++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_lookup.c         |  50 +++++-
 drivers/net/cnxk/meson.build           |   2 +
 drivers/net/cnxk/version.map           |   5 +
 12 files changed, 1146 insertions(+), 11 deletions(-)
 create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c
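
At the application level, sessions for this offload are created through
the ethdev security context. A sketch, assuming the
rte_security_session_create() signature of this release (separate
session and private-data mempools) and illustrative field values:

	#include <rte_ethdev.h>
	#include <rte_security.h>

	static struct rte_security_session *
	create_inb_ipsec_session(uint16_t port_id, uint32_t spi,
				 struct rte_crypto_sym_xform *xform,
				 struct rte_mempool *sess_mp,
				 struct rte_mempool *sess_priv_mp)
	{
		void *sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
		struct rte_security_session_conf conf = {
			.action_type =
				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
			.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
			.ipsec = {
				.spi = spi,
				.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
				.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
				.direction =
					RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
			},
			.crypto_xform = xform,
		};

		if (sec_ctx == NULL)
			return NULL;

		return rte_security_session_create(sec_ctx, &conf, sess_mp,
						   sess_priv_mp);
	}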

diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678..08c86f9 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		flags |= NIX_RX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+		flags |= NIX_TX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -179,8 +185,10 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			const struct rte_eth_txconf *tx_conf)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_cpt_lf *inl_lf;
 	struct cn9k_eth_txq *txq;
 	struct roc_nix_sq *sq;
+	uint16_t crypto_qid;
 	int rc;
 
 	RTE_SET_USED(socket);
@@ -200,6 +208,19 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
 
+	/* Fetch CPT LF info for outbound if present */
+	if (dev->outb.lf_base) {
+		crypto_qid = qid % dev->outb.nb_crypto_qs;
+		inl_lf = dev->outb.lf_base + crypto_qid;
+
+		txq->cpt_io_addr = inl_lf->io_addr;
+		txq->cpt_fc = inl_lf->fc_addr;
+		txq->cpt_desc = inl_lf->nb_desc * 0.7;
+		txq->sa_base = (uint64_t)dev->outb.sa_base;
+		txq->sa_base |= eth_dev->data->port_id;
+		PLT_STATIC_ASSERT(BIT_ULL(16) == ROC_NIX_INL_SA_BASE_ALIGN);
+	}
+
 	nix_form_default_desc(dev, txq, qid);
 	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	return 0;
@@ -508,6 +529,8 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	nix_eth_dev_ops_override();
 	npc_flow_ops_override();
 
+	cn9k_eth_sec_ops_override();
+
 	/* Common probe */
 	rc = cnxk_nix_probe(pci_drv, pci_dev);
 	if (rc)
diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index 3d4a206..f8818b8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -5,6 +5,7 @@
 #define __CN9K_ETHDEV_H__
 
 #include <cnxk_ethdev.h>
+#include <cnxk_security.h>
 
 struct cn9k_eth_txq {
 	uint64_t cmd[8];
@@ -15,6 +16,10 @@ struct cn9k_eth_txq {
 	uint64_t lso_tun_fmt;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	rte_iova_t cpt_io_addr;
+	uint64_t sa_base;
+	uint64_t *cpt_fc;
+	uint16_t cpt_desc;
 } __plt_cache_aligned;
 
 struct cn9k_eth_rxq {
@@ -32,8 +37,64 @@ struct cn9k_eth_rxq {
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
+/* Private data in sw rsvd area of struct roc_onf_ipsec_inb_sa */
+struct cn9k_inb_priv_data {
+	void *userdata;
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+/* Private data in sw rsvd area of struct roc_onf_ipsec_outb_sa */
+struct cn9k_outb_priv_data {
+	union {
+		uint64_t esn;
+		struct {
+			uint32_t seq;
+			uint32_t esn_hi;
+		};
+	};
+
+	/* Rlen computation data */
+	struct cnxk_ipsec_outb_rlens rlens;
+
+	/* IP identifier */
+	uint16_t ip_id;
+
+	/* SA index */
+	uint32_t sa_idx;
+
+	/* Flags */
+	uint16_t copy_salt : 1;
+
+	/* Salt */
+	uint32_t nonce;
+
+	/* User data pointer */
+	void *userdata;
+
+	/* Back pointer to eth sec session */
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+struct cn9k_sec_sess_priv {
+	union {
+		struct {
+			uint32_t sa_idx;
+			uint8_t inb_sa : 1;
+			uint8_t rsvd1 : 2;
+			uint8_t roundup_byte : 5;
+			uint8_t roundup_len;
+			uint16_t partial_len;
+		};
+
+		uint64_t u64;
+	};
+} __rte_packed;
+
 /* Rx and Tx routines */
 void cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
 void cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
+/* Security context setup */
+void cn9k_eth_sec_ops_override(void);
+
 #endif /* __CN9K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
new file mode 100644
index 0000000..3ec7497
--- /dev/null
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -0,0 +1,313 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include <cn9k_ethdev.h>
+#include <cnxk_security.h>
+
+static struct rte_cryptodev_capabilities cn9k_eth_sec_crypto_caps[] = {
+	{	/* AES GCM */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+			{.aead = {
+				.algo = RTE_CRYPTO_AEAD_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability cn9k_eth_sec_capabilities[] = {
+	{	/* IPsec Inline Protocol ESP Tunnel Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn9k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Tunnel Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn9k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static int
+cn9k_eth_sec_session_create(void *device,
+			    struct rte_security_session_conf *conf,
+			    struct rte_security_session *sess,
+			    struct rte_mempool *mempool)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ipsec_xform *ipsec;
+	struct cn9k_sec_sess_priv sess_priv;
+	struct rte_crypto_sym_xform *crypto;
+	struct cnxk_eth_sec_sess *eth_sec;
+	bool inbound;
+	int rc = 0;
+
+	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+		return -ENOTSUP;
+
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
+	if (rte_security_dynfield_register() < 0)
+		return -ENOTSUP;
+
+	ipsec = &conf->ipsec;
+	crypto = conf->crypto_xform;
+	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
+
+	/* Search if a session already exists */
+	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
+		plt_err("%s SA with SPI %u already in use",
+			inbound ? "Inbound" : "Outbound", ipsec->spi);
+		return -EEXIST;
+	}
+
+	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
+	sess_priv.u64 = 0;
+
+	if (inbound) {
+		struct cn9k_inb_priv_data *inb_priv;
+		struct roc_onf_ipsec_inb_sa *inb_sa;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn9k_inb_priv_data) <
+				  ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD);
+
+		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE. Assume no inline
+		 * device always for CN9K.
+		 */
+		inb_sa = (struct roc_onf_ipsec_inb_sa *)
+			roc_nix_inl_inb_sa_get(&dev->nix, false, ipsec->spi);
+		if (!inb_sa) {
+			plt_err("Failed to create ingress sa");
+			rc = -EFAULT;
+			goto mempool_put;
+		}
+
+		/* Check if SA is already in use */
+		if (inb_sa->ctl.valid) {
+			plt_err("Inbound SA with SPI %u already in use",
+				ipsec->spi);
+			rc = -EBUSY;
+			goto mempool_put;
+		}
+
+		memset(inb_sa, 0, sizeof(struct roc_onf_ipsec_inb_sa));
+
+		/* Fill inbound sa params */
+		rc = cnxk_onf_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init inbound sa, rc=%d", rc);
+			goto mempool_put;
+		}
+
+		inb_priv = roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(inb_sa);
+		/* Back pointer to get eth_sec */
+		inb_priv->eth_sec = eth_sec;
+
+		/* Save userdata in inb private area */
+		inb_priv->userdata = conf->userdata;
+
+		sess_priv.inb_sa = 1;
+		sess_priv.sa_idx = ipsec->spi;
+
+		/* Pointer from eth_sec -> inb_sa */
+		eth_sec->sa = inb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = ipsec->spi;
+		eth_sec->spi = ipsec->spi;
+		eth_sec->inb = true;
+
+		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess++;
+	} else {
+		struct cn9k_outb_priv_data *outb_priv;
+		struct roc_onf_ipsec_outb_sa *outb_sa;
+		uintptr_t sa_base = dev->outb.sa_base;
+		struct cnxk_ipsec_outb_rlens *rlens;
+		uint32_t sa_idx;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn9k_outb_priv_data) <
+				  ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD);
+
+		/* Alloc an sa index */
+		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
+		if (rc)
+			goto mempool_put;
+
+		outb_sa = roc_nix_inl_onf_ipsec_outb_sa(sa_base, sa_idx);
+		outb_priv = roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(outb_sa);
+		rlens = &outb_priv->rlens;
+
+		memset(outb_sa, 0, sizeof(struct roc_onf_ipsec_outb_sa));
+
+		/* Fill outbound sa params */
+		rc = cnxk_onf_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init outbound sa, rc=%d", rc);
+			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
+			goto mempool_put;
+		}
+
+		/* Save userdata */
+		outb_priv->userdata = conf->userdata;
+		outb_priv->sa_idx = sa_idx;
+		outb_priv->eth_sec = eth_sec;
+		/* Start sequence number with 1 */
+		outb_priv->seq = 1;
+
+		memcpy(&outb_priv->nonce, outb_sa->nonce, 4);
+		if (outb_sa->ctl.enc_type == ROC_IE_ON_SA_ENC_AES_GCM)
+			outb_priv->copy_salt = 1;
+
+		/* Save rlen info */
+		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+
+		sess_priv.sa_idx = outb_priv->sa_idx;
+		sess_priv.roundup_byte = rlens->roundup_byte;
+		sess_priv.roundup_len = rlens->roundup_len;
+		sess_priv.partial_len = rlens->partial_len;
+
+		/* Pointer from eth_sec -> outb_sa */
+		eth_sec->sa = outb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = sa_idx;
+		eth_sec->spi = ipsec->spi;
+
+		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess++;
+	}
+
+	/* Sync SA content */
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u",
+		    inbound ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx);
+	/*
+	 * Update fast path info in priv area.
+	 */
+	set_sec_session_private_data(sess, (void *)sess_priv.u64);
+
+	return 0;
+mempool_put:
+	rte_mempool_put(mempool, eth_sec);
+	return rc;
+}
+
+static int
+cn9k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_onf_ipsec_outb_sa *outb_sa;
+	struct roc_onf_ipsec_inb_sa *inb_sa;
+	struct cnxk_eth_sec_sess *eth_sec;
+	struct rte_mempool *mp;
+
+	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+	if (!eth_sec)
+		return -ENOENT;
+
+	if (eth_sec->inb) {
+		inb_sa = eth_sec->sa;
+		/* Disable SA */
+		inb_sa->ctl.valid = 0;
+
+		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess--;
+	} else {
+		outb_sa = eth_sec->sa;
+		/* Disable SA */
+		outb_sa->ctl.valid = 0;
+
+		/* Release Outbound SA index */
+		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
+		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess--;
+	}
+
+	/* Sync SA content */
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u",
+		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx);
+
+	/* Put eth_sec object back to pool */
+	mp = rte_mempool_from_obj(eth_sec);
+	set_sec_session_private_data(sess, NULL);
+	rte_mempool_put(mp, eth_sec);
+	return 0;
+}
+
+static const struct rte_security_capability *
+cn9k_eth_sec_capabilities_get(void *device __rte_unused)
+{
+	return cn9k_eth_sec_capabilities;
+}
+
+void
+cn9k_eth_sec_ops_override(void)
+{
+	static int init_once;
+
+	if (init_once)
+		return;
+	init_once = 1;
+
+	/* Update platform specific ops */
+	cnxk_eth_sec_ops.session_create = cn9k_eth_sec_session_create;
+	cnxk_eth_sec_ops.session_destroy = cn9k_eth_sec_session_destroy;
+	cnxk_eth_sec_ops.capabilities_get = cn9k_eth_sec_capabilities_get;
+}
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index a3bf4e0..59545af 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -17,6 +17,7 @@
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
 #define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
 #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index ed65cd3..a27ff76 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -13,6 +13,7 @@
 #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)
 #define NIX_TX_OFFLOAD_TSO_F	      BIT(4)
 #define NIX_TX_OFFLOAD_TSTAMP_F	      BIT(5)
+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)
 
 /* Flags to control xmit_prepare function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 0e3652e..60a4df5 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -38,6 +38,159 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	return speed_capa;
 }
 
+int
+cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
+{
+	struct roc_nix *nix = &dev->nix;
+
+	if (dev->inb.inl_dev == use_inl_dev)
+		return 0;
+
+	plt_nix_dbg("Security sessions(%u) still active, inl=%u!!!",
+		    dev->inb.nb_sess, !!dev->inb.inl_dev);
+
+	/* Change the mode */
+	dev->inb.inl_dev = use_inl_dev;
+
+	/* Update RoC for NPC rule insertion */
+	roc_nix_inb_mode_set(nix, use_inl_dev);
+
+	/* Setup lookup mem */
+	return cnxk_nix_lookup_mem_sa_base_set(dev);
+}
+
+static int
+nix_security_setup(struct cnxk_eth_dev *dev)
+{
+	struct roc_nix *nix = &dev->nix;
+	int i, rc = 0;
+
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Setup Inline Inbound */
+		rc = roc_nix_inl_inb_init(nix);
+		if (rc) {
+			plt_err("Failed to initialize nix inline inb, rc=%d",
+				rc);
+			return rc;
+		}
+
+		/* By default pick using inline device for poll mode.
+		 * Will be overridden when event mode rq's are setup.
+		 */
+		cnxk_nix_inb_mode_set(dev, true);
+	}
+
+	/* Setup Inline outbound */
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+		struct plt_bitmap *bmap;
+		size_t bmap_sz;
+		void *mem;
+
+		/* Cannot ask for Tx Inline without SAs */
+		if (!dev->outb.max_sa)
+			return -EINVAL;
+
+		/* Setup enough descriptors for all tx queues */
+		nix->outb_nb_desc = dev->outb.nb_desc;
+		nix->outb_nb_crypto_qs = dev->outb.nb_crypto_qs;
+
+		/* Setup Inline Outbound */
+		rc = roc_nix_inl_outb_init(nix);
+		if (rc) {
+			plt_err("Failed to initialize nix inline outb, rc=%d",
+				rc);
+			goto cleanup;
+		}
+
+		rc = -ENOMEM;
+		/* Allocate a bitmap to alloc and free sa indexes */
+		bmap_sz = plt_bitmap_get_memory_footprint(dev->outb.max_sa);
+		mem = plt_zmalloc(bmap_sz, PLT_CACHE_LINE_SIZE);
+		if (mem == NULL) {
+			plt_err("Outbound SA bmap alloc failed");
+
+			rc |= roc_nix_inl_outb_fini(nix);
+			goto cleanup;
+		}
+
+		rc = -EIO;
+		bmap = plt_bitmap_init(dev->outb.max_sa, mem, bmap_sz);
+		if (!bmap) {
+			plt_err("Outbound SA bmap init failed");
+
+			rc |= roc_nix_inl_outb_fini(nix);
+			plt_free(mem);
+			goto cleanup;
+		}
+
+		for (i = 0; i < dev->outb.max_sa; i++)
+			plt_bitmap_set(bmap, i);
+
+		dev->outb.sa_base = roc_nix_inl_outb_sa_base_get(nix);
+		dev->outb.sa_bmap_mem = mem;
+		dev->outb.sa_bmap = bmap;
+		dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
+	}
+
+	return 0;
+cleanup:
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		rc |= roc_nix_inl_inb_fini(nix);
+	return rc;
+}
+
+static int
+nix_security_release(struct cnxk_eth_dev *dev)
+{
+	struct rte_eth_dev *eth_dev = dev->eth_dev;
+	struct cnxk_eth_sec_sess *eth_sec, *tvar;
+	struct roc_nix *nix = &dev->nix;
+	int rc, ret = 0;
+
+	/* Cleanup Inline inbound */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Destroy inbound sessions */
+		tvar = NULL;
+		TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
+			cnxk_eth_sec_ops.session_destroy(eth_dev,
+							 eth_sec->sess);
+
+		/* Clear lookup mem */
+		cnxk_nix_lookup_mem_sa_base_clear(dev);
+
+		rc = roc_nix_inl_inb_fini(nix);
+		if (rc)
+			plt_err("Failed to cleanup nix inline inb, rc=%d", rc);
+		ret |= rc;
+	}
+
+	/* Cleanup Inline outbound */
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+		/* Destroy outbound sessions */
+		tvar = NULL;
+		TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
+			cnxk_eth_sec_ops.session_destroy(eth_dev,
+							 eth_sec->sess);
+
+		rc = roc_nix_inl_outb_fini(nix);
+		if (rc)
+			plt_err("Failed to cleanup nix inline outb, rc=%d", rc);
+		ret |= rc;
+
+		plt_bitmap_free(dev->outb.sa_bmap);
+		plt_free(dev->outb.sa_bmap_mem);
+		dev->outb.sa_bmap = NULL;
+		dev->outb.sa_bmap_mem = NULL;
+	}
+
+	dev->inb.inl_dev = false;
+	roc_nix_inb_mode_set(nix, false);
+	dev->nb_rxq_sso = 0;
+	dev->inb.nb_sess = 0;
+	dev->outb.nb_sess = 0;
+	return ret;
+}
+
 static void
 nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 {
@@ -194,6 +347,12 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		eth_dev->data->tx_queues[qid] = NULL;
 	}
 
+	/* When Tx Security offload is enabled, increase tx desc count by
+	 * max possible outbound desc count.
+	 */
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+		nb_desc += dev->outb.nb_desc;
+
 	/* Setup ROC SQ */
 	sq = &dev->sqs[qid];
 	sq->qid = qid;
@@ -266,6 +425,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			struct rte_mempool *mp)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
 	struct cnxk_eth_rxq_sp *rxq_sp;
 	struct rte_mempool_ops *ops;
 	const char *platform_ops;
@@ -328,6 +488,10 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rq->later_skip = sizeof(struct rte_mbuf);
 	rq->lpb_size = mp->elt_size;
 
+	/* Enable Inline IPSec on RQ, will not be used for Poll mode */
+	if (roc_nix_inl_inb_is_enabled(nix))
+		rq->ipsech_ena = true;
+
 	rc = roc_nix_rq_init(&dev->nix, rq, !!eth_dev->data->dev_started);
 	if (rc) {
 		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
@@ -350,6 +514,13 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Setup rq reference for inline dev if present */
+		rc = roc_nix_inl_dev_rq_get(rq);
+		if (rc)
+			goto free_mem;
+	}
+
 	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, mp->name, nb_desc,
 		    cq->nb_desc);
 
@@ -370,6 +541,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	}
 
 	return 0;
+free_mem:
+	plt_free(rxq_sp);
 rq_fini:
 	rc |= roc_nix_rq_fini(rq);
 cq_fini:
@@ -394,11 +567,15 @@ cnxk_nix_rx_queue_release(void *rxq)
 	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
 	dev = rxq_sp->dev;
 	qid = rxq_sp->qid;
+	rq = &dev->rqs[qid];
 
 	plt_nix_dbg("Releasing rxq %u", qid);
 
+	/* Release rq reference for inline dev if present */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		roc_nix_inl_dev_rq_put(rq);
+
 	/* Cleanup ROC RQ */
-	rq = &dev->rqs[qid];
 	rc = roc_nix_rq_fini(rq);
 	if (rc)
 		plt_err("Failed to cleanup rq, rc=%d", rc);
@@ -804,6 +981,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		rc = nix_store_queue_cfg_and_then_release(eth_dev);
 		if (rc)
 			goto fail_configure;
+
+		/* Cleanup security support */
+		rc = nix_security_release(dev);
+		if (rc)
+			goto fail_configure;
+
 		roc_nix_tm_fini(nix);
 		roc_nix_lf_free(nix);
 	}
@@ -958,6 +1141,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		plt_err("Failed to initialize flow control rc=%d", rc);
 		goto cq_fini;
 	}
+
+	/* Setup Inline security support */
+	rc = nix_security_setup(dev);
+	if (rc)
+		goto cq_fini;
+
 	/*
 	 * Restore queue config when reconfigure followed by
 	 * reconfigure and no queue configure invoked from application case.
@@ -965,7 +1154,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	if (dev->configured == 1) {
 		rc = nix_restore_queue_cfg(eth_dev);
 		if (rc)
-			goto cq_fini;
+			goto sec_release;
 	}
 
 	/* Update the mac address */
@@ -987,6 +1176,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	dev->nb_txq = data->nb_tx_queues;
 	return 0;
 
+sec_release:
+	rc |= nix_security_release(dev);
 cq_fini:
 	roc_nix_unregister_cq_irqs(nix);
 q_irq_fini:
@@ -1282,12 +1473,25 @@ static int
 cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ctx *sec_ctx;
 	struct roc_nix *nix = &dev->nix;
 	struct rte_pci_device *pci_dev;
 	int rc, max_entries;
 
 	eth_dev->dev_ops = &cnxk_eth_dev_ops;
 
+	/* Alloc security context */
+	sec_ctx = plt_zmalloc(sizeof(struct rte_security_ctx), 0);
+	if (!sec_ctx)
+		return -ENOMEM;
+	sec_ctx->device = eth_dev;
+	sec_ctx->ops = &cnxk_eth_sec_ops;
+	sec_ctx->flags =
+		(RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);
+	eth_dev->security_ctx = sec_ctx;
+	TAILQ_INIT(&dev->inb.list);
+	TAILQ_INIT(&dev->outb.list);
+
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -1400,6 +1604,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 	struct roc_nix *nix = &dev->nix;
 	int rc, i;
 
+	plt_free(eth_dev->security_ctx);
+	eth_dev->security_ctx = NULL;
+
 	/* Nothing to be done for secondary processes */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -1429,6 +1636,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 	}
 	eth_dev->data->nb_rx_queues = 0;
 
+	/* Free security resources */
+	nix_security_release(dev);
+
 	/* Free tm resources */
 	roc_nix_tm_fini(nix);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 2528b3c..5ae791f 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -13,6 +13,9 @@
 #include <rte_mbuf.h>
 #include <rte_mbuf_pool_ops.h>
 #include <rte_mempool.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+#include <rte_tailq.h>
 #include <rte_time.h>
 
 #include "roc_api.h"
@@ -70,14 +73,14 @@
 	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
 	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
 	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM)
+	 DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
 	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
 	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
 	 DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |         \
 	 DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |                  \
-	 DEV_RX_OFFLOAD_VLAN_STRIP)
+	 DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
 	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
@@ -112,6 +115,11 @@
 #define PTYPE_TUNNEL_ARRAY_SZ	  BIT(PTYPE_TUNNEL_WIDTH)
 #define PTYPE_ARRAY_SZ                                                         \
 	((PTYPE_NON_TUNNEL_ARRAY_SZ + PTYPE_TUNNEL_ARRAY_SZ) * sizeof(uint16_t))
+
+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
+#define ERRCODE_ERRLEN_WIDTH 12
+#define ERR_ARRAY_SZ	     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))
+
 /* Fastpath lookup */
 #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
 
@@ -119,6 +127,9 @@
 	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \
 	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
 
+/* Subtype from inline outbound error event */
+#define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
+
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
 	uint8_t rx_pause;
@@ -144,6 +155,82 @@ struct cnxk_timesync_info {
 	uint64_t *tx_tstamp;
 } __plt_cache_aligned;
 
+/* Security session private data */
+struct cnxk_eth_sec_sess {
+	/* List entry */
+	TAILQ_ENTRY(cnxk_eth_sec_sess) entry;
+
+	/* Inbound SA is from NIX_RX_IPSEC_SA_BASE or
+	 * Outbound SA from roc_nix_inl_outb_sa_base_get()
+	 */
+	void *sa;
+
+	/* SA index */
+	uint32_t sa_idx;
+
+	/* SPI */
+	uint32_t spi;
+
+	/* Back pointer to session */
+	struct rte_security_session *sess;
+
+	/* Inbound */
+	bool inb;
+
+	/* Inbound session on inl dev */
+	bool inl_dev;
+};
+
+TAILQ_HEAD(cnxk_eth_sec_sess_list, cnxk_eth_sec_sess);
+
+/* Inbound security data */
+struct cnxk_eth_dev_sec_inb {
+	/* IPSec inbound max SPI */
+	uint16_t max_spi;
+
+	/* Using inbound with inline device */
+	bool inl_dev;
+
+	/* Device argument to force inline device for inb */
+	bool force_inl_dev;
+
+	/* Active sessions */
+	uint16_t nb_sess;
+
+	/* List of sessions */
+	struct cnxk_eth_sec_sess_list list;
+};
+
+/* Outbound security data */
+struct cnxk_eth_dev_sec_outb {
+	/* IPSec outbound max SA */
+	uint16_t max_sa;
+
+	/* Per CPT LF descriptor count */
+	uint32_t nb_desc;
+
+	/* SA Bitmap */
+	struct plt_bitmap *sa_bmap;
+
+	/* SA bitmap memory */
+	void *sa_bmap_mem;
+
+	/* SA base */
+	uint64_t sa_base;
+
+	/* CPT LF base */
+	struct roc_cpt_lf *lf_base;
+
+	/* Crypto queues => CPT lf count */
+	uint16_t nb_crypto_qs;
+
+	/* Active sessions */
+	uint16_t nb_sess;
+
+	/* List of sessions */
+	struct cnxk_eth_sec_sess_list list;
+};
+
 struct cnxk_eth_dev {
 	/* ROC NIX */
 	struct roc_nix nix;
@@ -159,6 +246,7 @@ struct cnxk_eth_dev {
 	/* Configured queue count */
 	uint16_t nb_rxq;
 	uint16_t nb_txq;
+	uint16_t nb_rxq_sso;
 	uint8_t configured;
 
 	/* Max macfilter entries */
@@ -223,6 +311,10 @@ struct cnxk_eth_dev {
 	/* Per queue statistics counters */
 	uint32_t txq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 	uint32_t rxq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+
+	/* Security data */
+	struct cnxk_eth_dev_sec_inb inb;
+	struct cnxk_eth_dev_sec_outb outb;
 };
 
 struct cnxk_eth_rxq_sp {
@@ -261,6 +353,9 @@ extern struct eth_dev_ops cnxk_eth_dev_ops;
 /* Common flow ops */
 extern struct rte_flow_ops cnxk_flow_ops;
 
+/* Common security ops */
+extern struct rte_security_ops cnxk_eth_sec_ops;
+
 /* Ops */
 int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
 		   struct rte_pci_device *pci_dev);
@@ -383,6 +478,18 @@ int cnxk_ethdev_parse_devargs(struct rte_devargs *devargs,
 /* Debug */
 int cnxk_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
+/* Security */
+int cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p);
+int cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx);
+int cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev);
+int cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev);
+__rte_internal
+int cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev);
+struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,
+						       uint32_t spi, bool inb);
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+			      struct rte_security_session *sess);
 
 /* Other private functions */
 int nix_recalc_mtu(struct rte_eth_dev *eth_dev);
@@ -493,4 +600,14 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 	}
 }
 
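+/* Fetch the per-port inbound SA base (SPI width packed in its low
+ * bits) that cnxk_nix_lookup_mem_sa_base_set() stashed in fastpath
+ * lookup mem.
+ */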
+static __rte_always_inline uintptr_t
+cnxk_nix_sa_base_get(uint16_t port, const void *lookup_mem)
+{
+	uintptr_t sa_base_tbl;
+
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	return *((const uintptr_t *)sa_base_tbl + port);
+}
+
 #endif /* __CNXK_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 37720fb..c0b949e 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -8,6 +8,61 @@
 #include "cnxk_ethdev.h"
 
 static int
+parse_outb_nb_desc(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_outb_nb_crypto_qs(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	if (val < 1 || val > 64)
+		return -EINVAL;
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_ipsec_out_max_sa(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
 parse_flow_max_priority(const char *key, const char *value, void *extra_args)
 {
 	RTE_SET_USED(key);
@@ -117,15 +172,25 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
 #define CNXK_SWITCH_HEADER_TYPE "switch_header"
 #define CNXK_RSS_TAG_AS_XOR	"tag_as_xor"
 #define CNXK_LOCK_RX_CTX	"lock_rx_ctx"
+#define CNXK_IPSEC_IN_MAX_SPI	"ipsec_in_max_spi"
+#define CNXK_IPSEC_OUT_MAX_SA	"ipsec_out_max_sa"
+#define CNXK_OUTB_NB_DESC	"outb_nb_desc"
+#define CNXK_FORCE_INB_INL_DEV	"force_inb_inl_dev"
+#define CNXK_OUTB_NB_CRYPTO_QS	"outb_nb_crypto_qs"
 
 int
 cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 {
 	uint16_t reta_sz = ROC_NIX_RSS_RETA_SZ_64;
 	uint16_t sqb_count = CNXK_NIX_TX_MAX_SQB;
+	uint16_t ipsec_in_max_spi = BIT(8) - 1;
+	uint16_t ipsec_out_max_sa = BIT(12);
 	uint16_t flow_prealloc_size = 1;
 	uint16_t switch_header_type = 0;
 	uint16_t flow_max_priority = 3;
+	uint16_t force_inb_inl_dev = 0;
+	uint16_t outb_nb_crypto_qs = 1;
+	uint16_t outb_nb_desc = 8200;
 	uint16_t rss_tag_as_xor = 0;
 	uint16_t scalar_enable = 0;
 	uint8_t lock_rx_ctx = 0;
@@ -153,10 +218,27 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 	rte_kvargs_process(kvlist, CNXK_RSS_TAG_AS_XOR, &parse_flag,
 			   &rss_tag_as_xor);
 	rte_kvargs_process(kvlist, CNXK_LOCK_RX_CTX, &parse_flag, &lock_rx_ctx);
+	rte_kvargs_process(kvlist, CNXK_IPSEC_IN_MAX_SPI,
+			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_process(kvlist, CNXK_IPSEC_OUT_MAX_SA,
+			   &parse_ipsec_out_max_sa, &ipsec_out_max_sa);
+	rte_kvargs_process(kvlist, CNXK_OUTB_NB_DESC, &parse_outb_nb_desc,
+			   &outb_nb_desc);
+	rte_kvargs_process(kvlist, CNXK_OUTB_NB_CRYPTO_QS,
+			   &parse_outb_nb_crypto_qs, &outb_nb_crypto_qs);
+	rte_kvargs_process(kvlist, CNXK_FORCE_INB_INL_DEV, &parse_flag,
+			   &force_inb_inl_dev);
 	rte_kvargs_free(kvlist);
 
 null_devargs:
 	dev->scalar_ena = !!scalar_enable;
+	dev->inb.force_inl_dev = !!force_inb_inl_dev;
+	dev->inb.max_spi = ipsec_in_max_spi;
+	dev->outb.max_sa = ipsec_out_max_sa;
+	dev->outb.nb_desc = outb_nb_desc;
+	dev->outb.nb_crypto_qs = outb_nb_crypto_qs;
+	dev->nix.ipsec_in_max_spi = ipsec_in_max_spi;
+	dev->nix.ipsec_out_max_sa = ipsec_out_max_sa;
 	dev->nix.rss_tag_as_xor = !!rss_tag_as_xor;
 	dev->nix.max_sqb_count = sqb_count;
 	dev->nix.reta_sz = reta_sz;
@@ -177,4 +259,8 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,
 			      CNXK_FLOW_PREALLOC_SIZE "=<1-32>"
 			      CNXK_FLOW_MAX_PRIORITY "=<1-32>"
 			      CNXK_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b>"
-			      CNXK_RSS_TAG_AS_XOR "=1");
+			      CNXK_RSS_TAG_AS_XOR "=1"
+			      CNXK_IPSEC_IN_MAX_SPI "=<1-65535>"
+			      CNXK_IPSEC_OUT_MAX_SA "=<1-65535>"
+			      CNXK_OUTB_NB_DESC "=<1-65535>"
+			      CNXK_OUTB_NB_CRYPTO_QS "=<1-64>"
+			      CNXK_FORCE_INB_INL_DEV "=1");
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
new file mode 100644
index 0000000..c002c30
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <cnxk_ethdev.h>
+
+#define CNXK_NIX_INL_SELFTEST	      "selftest"
+#define CNXK_NIX_INL_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
+
+#define CNXK_NIX_INL_DEV_NAME RTE_STR(cnxk_nix_inl_dev_)
+#define CNXK_NIX_INL_DEV_NAME_LEN                                              \
+	(sizeof(CNXK_NIX_INL_DEV_NAME) + PCI_PRI_STR_SIZE)
+
+static inline int
+bitmap_ctzll(uint64_t slab)
+{
+	if (slab == 0)
+		return 0;
+
+	return __builtin_ctzll(slab);
+}
+
+int
+cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p)
+{
+	uint32_t pos, idx;
+	uint64_t slab;
+	int rc;
+
+	if (!dev->outb.max_sa)
+		return -ENOTSUP;
+
+	pos = 0;
+	slab = 0;
+	/* Scan from the beginning */
+	plt_bitmap_scan_init(dev->outb.sa_bmap);
+	/* Scan bitmap to get the free sa index */
+	rc = plt_bitmap_scan(dev->outb.sa_bmap, &pos, &slab);
+	/* Empty bitmap */
+	if (rc == 0) {
+		plt_err("Outbound SAs exhausted, use 'ipsec_out_max_sa' "
+			"devargs to increase");
+		return -ERANGE;
+	}
+
+	/* Get free SA index */
+	idx = pos + bitmap_ctzll(slab);
+	plt_bitmap_clear(dev->outb.sa_bmap, idx);
+	*idx_p = idx;
+	return 0;
+}
+
+int
+cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx)
+{
+	if (idx >= dev->outb.max_sa)
+		return -EINVAL;
+
+	/* Check if it is already free */
+	if (plt_bitmap_get(dev->outb.sa_bmap, idx))
+		return -EINVAL;
+
+	/* Mark index as free */
+	plt_bitmap_set(dev->outb.sa_bmap, idx);
+	return 0;
+}
+
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev, uint32_t spi, bool inb)
+{
+	struct cnxk_eth_sec_sess_list *list;
+	struct cnxk_eth_sec_sess *eth_sec;
+
+	list = inb ? &dev->inb.list : &dev->outb.list;
+	TAILQ_FOREACH(eth_sec, list, entry) {
+		if (eth_sec->spi == spi)
+			return eth_sec;
+	}
+
+	return NULL;
+}
+
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+			      struct rte_security_session *sess)
+{
+	struct cnxk_eth_sec_sess *eth_sec = NULL;
+
+	/* Search in inbound list */
+	TAILQ_FOREACH(eth_sec, &dev->inb.list, entry) {
+		if (eth_sec->sess == sess)
+			return eth_sec;
+	}
+
+	/* Search in outbound list */
+	TAILQ_FOREACH(eth_sec, &dev->outb.list, entry) {
+		if (eth_sec->sess == sess)
+			return eth_sec;
+	}
+
+	return NULL;
+}
+
+static unsigned int
+cnxk_eth_sec_session_get_size(void *device __rte_unused)
+{
+	return sizeof(struct cnxk_eth_sec_sess);
+}
+
+struct rte_security_ops cnxk_eth_sec_ops = {
+	.session_get_size = cnxk_eth_sec_session_get_size
+};
+
+static int
+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_selftest(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint8_t *)extra_args = !!(val == 1);
+	return 0;
+}
+
+static int
+nix_inl_parse_devargs(struct rte_devargs *devargs,
+		      struct roc_nix_inl_dev *inl_dev)
+{
+	uint32_t ipsec_in_max_spi = BIT(8) - 1;
+	struct rte_kvargs *kvlist;
+	uint8_t selftest = 0;
+
+	if (devargs == NULL)
+		goto null_devargs;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, CNXK_NIX_INL_SELFTEST, &parse_selftest,
+			   &selftest);
+	rte_kvargs_process(kvlist, CNXK_NIX_INL_IPSEC_IN_MAX_SPI,
+			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_free(kvlist);
+
+null_devargs:
+	inl_dev->ipsec_in_max_spi = ipsec_in_max_spi;
+	inl_dev->selftest = selftest;
+	return 0;
+exit:
+	return -EINVAL;
+}
+
+static inline char *
+nix_inl_dev_to_name(struct rte_pci_device *pci_dev, char *name)
+{
+	snprintf(name, CNXK_NIX_INL_DEV_NAME_LEN,
+		 CNXK_NIX_INL_DEV_NAME PCI_PRI_FMT, pci_dev->addr.domain,
+		 pci_dev->addr.bus, pci_dev->addr.devid,
+		 pci_dev->addr.function);
+
+	return name;
+}
+
+static int
+cnxk_nix_inl_dev_remove(struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NIX_INL_DEV_NAME_LEN];
+	const struct rte_memzone *mz;
+	struct roc_nix_inl_dev *dev;
+	int rc;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	mz = rte_memzone_lookup(nix_inl_dev_to_name(pci_dev, name));
+	if (!mz)
+		return 0;
+
+	dev = mz->addr;
+
+	/* Cleanup inline dev */
+	rc = roc_nix_inl_dev_fini(dev);
+	if (rc) {
+		plt_err("Failed to cleanup inl dev, rc=%d(%s)", rc,
+			roc_error_msg_get(rc));
+		return rc;
+	}
+
+	rte_memzone_free(mz);
+	return 0;
+}
+
+static int
+cnxk_nix_inl_dev_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NIX_INL_DEV_NAME_LEN];
+	struct roc_nix_inl_dev *inl_dev;
+	const struct rte_memzone *mz;
+	int rc = -ENOMEM;
+
+	RTE_SET_USED(pci_drv);
+
+	rc = roc_plt_init();
+	if (rc) {
+		plt_err("Failed to initialize platform model, rc=%d", rc);
+		return rc;
+	}
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	mz = rte_memzone_reserve_aligned(nix_inl_dev_to_name(pci_dev, name),
+					 sizeof(*inl_dev), SOCKET_ID_ANY, 0,
+					 RTE_CACHE_LINE_SIZE);
+	if (mz == NULL)
+		return -ENOMEM;
+
+	inl_dev = mz->addr;
+	inl_dev->pci_dev = pci_dev;
+
+	/* Parse devargs string */
+	rc = nix_inl_parse_devargs(pci_dev->device.devargs, inl_dev);
+	if (rc) {
+		plt_err("Failed to parse devargs rc=%d", rc);
+		goto free_mem;
+	}
+
+	rc = roc_nix_inl_dev_init(inl_dev);
+	if (rc) {
+		plt_err("Failed to init nix inl device, rc=%d(%s)", rc,
+			roc_error_msg_get(rc));
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	rte_memzone_free(mz);
+	return rc;
+}
+
+static const struct rte_pci_id cnxk_nix_inl_pci_map[] = {
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_PF)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_VF)},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_pci_driver cnxk_nix_inl_pci = {
+	.id_table = cnxk_nix_inl_pci_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+	.probe = cnxk_nix_inl_dev_probe,
+	.remove = cnxk_nix_inl_dev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(cnxk_nix_inl, cnxk_nix_inl_pci);
+RTE_PMD_REGISTER_PCI_TABLE(cnxk_nix_inl, cnxk_nix_inl_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(cnxk_nix_inl, "vfio-pci");
+
+RTE_PMD_REGISTER_PARAM_STRING(cnxk_nix_inl,
+			      CNXK_NIX_INL_SELFTEST "=1"
+			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>");
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index 0152ad9..f6ec768 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -7,12 +7,8 @@
 
 #include "cnxk_ethdev.h"
 
-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
-#define ERRCODE_ERRLEN_WIDTH 12
-#define ERR_ARRAY_SZ	     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))
-
-#define SA_TBL_SZ	(RTE_MAX_ETHPORTS * sizeof(uint64_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_TBL_SZ)
+#define SA_BASE_TBL_SZ	(RTE_MAX_ETHPORTS * sizeof(uintptr_t))
+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ)
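+/* Lookup mem layout: [ptype array][errcode array][per-port SA base table] */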
 const uint32_t *
 cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 {
@@ -324,3 +320,45 @@ cnxk_nix_fastpath_lookup_mem_get(void)
 	}
 	return NULL;
 }
+
+int
+cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev)
+{
+	void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+	uint16_t port = dev->eth_dev->data->port_id;
+	uintptr_t sa_base_tbl;
+	uintptr_t sa_base;
+	uint8_t sa_w;
+
+	if (!lookup_mem)
+		return -EIO;
+
+	sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix, dev->inb.inl_dev);
+	if (!sa_base)
+		return -ENOTSUP;
+
+	sa_w = plt_log2_u32(dev->nix.ipsec_in_max_spi + 1);
+
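+	/* sa_base is ROC_NIX_INL_SA_BASE_ALIGN aligned, so its low bits
+	 * are free to carry the SPI width (sa_w); the fast path recovers
+	 * both from a single lookup mem load.
+	 */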
+	/* Set SA Base in lookup mem */
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	*((uintptr_t *)sa_base_tbl + port) = sa_base | sa_w;
+	return 0;
+}
+
+int
+cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev)
+{
+	void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+	uint16_t port = dev->eth_dev->data->port_id;
+	uintptr_t sa_base_tbl;
+
+	if (!lookup_mem)
+		return -EIO;
+
+	/* Set SA Base in lookup mem */
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	*((uintptr_t *)sa_base_tbl + port) = 0;
+	return 0;
+}
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index d4cdd17..6cc30c3 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files(
         'cnxk_ethdev.c',
         'cnxk_ethdev_devargs.c',
         'cnxk_ethdev_ops.c',
+        'cnxk_ethdev_sec.c',
         'cnxk_link.c',
         'cnxk_lookup.c',
         'cnxk_ptp.c',
@@ -22,6 +23,7 @@ sources = files(
 # CN9K
 sources += files(
         'cn9k_ethdev.c',
+        'cn9k_ethdev_sec.c',
         'cn9k_rte_flow.c',
         'cn9k_rx.c',
         'cn9k_rx_mseg.c',
diff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map
index c2e0723..b9da6b1 100644
--- a/drivers/net/cnxk/version.map
+++ b/drivers/net/cnxk/version.map
@@ -1,3 +1,8 @@
 DPDK_22 {
 	local: *;
 };
+
+INTERNAL {
+	global:
+	cnxk_nix_inb_mode_set;
+};
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 16/27] net/cnxk: add inline security support for cn10k
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (14 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 15/27] net/cnxk: add inline security support for cn9k Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 17/27] net/cnxk: add cn9k Rx support for security offload Nithin Dabilpuram
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov
  Cc: jerinj, schalla, dev

Add support for inline inbound and outbound IPSec for SA create,
destroy and other NIX / CPT LF configurations.

This patch also changes dpdk-devbind.py to list new inline
device as misc device.
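
For reference, a minimal application-side sketch of what this enables.
The port id, SPI and session mempool below are illustrative, and the
crypto transform setup is elided; a real caller must point
conf.crypto_xform at a transform the PMD advertises (e.g. AES-GCM) and
must have enabled DEV_RX_OFFLOAD_SECURITY on the port:

    #include <rte_ethdev.h>
    #include <rte_security.h>

    static struct rte_security_session *
    create_inb_sess(uint16_t port_id, struct rte_mempool *sess_mp)
    {
            /* Per-port security context registered by the PMD */
            struct rte_security_ctx *ctx =
                    rte_eth_dev_get_sec_ctx(port_id);
            struct rte_security_session_conf conf = {
                    .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
                    .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
                    .ipsec = {
                            .spi = 100, /* within ipsec_in_max_spi range */
                            .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
                            .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
                            .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
                    },
                    /* .crypto_xform = ... (elided) */
            };

            if (ctx == NULL)
                    return NULL;
            /* Session memory is taken from the caller's mempool */
            return rte_security_session_create(ctx, &conf, sess_mp, NULL);
    }

On success the PMD programs the inbound SA at the SPI-indexed slot, and
decryption is then handled inline by the NIX/CPT hardware.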

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 doc/guides/nics/cnxk.rst                 | 102 ++++++++
 drivers/event/cnxk/cnxk_eventdev_adptr.c |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.c          |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.h          |  43 ++++
 drivers/net/cnxk/cn10k_ethdev_sec.c      | 426 +++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn10k_rx.h              |   1 +
 drivers/net/cnxk/cn10k_tx.h              |   1 +
 drivers/net/cnxk/meson.build             |   1 +
 usertools/dpdk-devbind.py                |   8 +-
 9 files changed, 649 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 90d27db..b542437 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -34,6 +34,7 @@ Features of the CNXK Ethdev PMD are:
 - Vector Poll mode driver
 - Debug utilities - Context dump and error interrupt support
 - Support Rx interrupt
+- Inline IPsec processing support
 
 Prerequisites
 -------------
@@ -185,6 +186,74 @@ Runtime Config Options
 
       -a 0002:02:00.0,tag_as_xor=1
 
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+   Max SPI supported for inbound inline IPsec processing can be specified by
+   ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 inbound SAs (SPI 0-127).
+
+- ``Max SAs for outbound inline IPsec`` (default ``4096``)
+
+   Max number of SAs supported for outbound inline IPsec processing can be
+   specified by ``ipsec_out_max_sa`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_out_max_sa=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 outbound SAs.
+
+- ``Outbound CPT LF queue size`` (default ``8200``)
+
+   Size of Outbound CPT LF queue in number of descriptors can be specified by
+   ``outb_nb_desc`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_desc=16384
+
+   With the above configuration, the outbound CPT LF is created to accommodate
+   at most 16384 descriptors at any given time.
+
+- ``Outbound CPT LF count`` (default ``1``)
+
+   Number of CPT LFs to attach for outbound processing can be specified by
+   ``outb_nb_crypto_qs`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_crypto_qs=2
+
+   With the above configuration, two CPT LFs are set up and distributed among
+   all the Tx queues for outbound processing.
+
+- ``Force using inline ipsec device for inbound`` (default ``0``)
+
+   In CN10K event mode, the driver can work in two modes:
+
+   1. Inbound encrypted traffic is received by the probed ipsec inline device,
+      while plain traffic post decryption is received by the ethdev.
+
+   2. Both inbound encrypted traffic and plain traffic post decryption are
+      received by the ethdev.
+
+   By default, event mode works without using the inline device, i.e. mode ``2``.
+   This behaviour can be changed to pick mode ``1`` using the
+   ``force_inb_inl_dev`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,force_inb_inl_dev=1 -a 0002:03:00.0,force_inb_inl_dev=1
+
+   With the above configuration, inbound encrypted traffic from both the ports
+   is received by the ipsec inline device.
 
 .. note::
 
@@ -250,6 +319,39 @@ Example usage in testpmd::
    testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
           spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
 
+Inline device support for CN10K
+-------------------------------
+
+CN10K HW provides a misc device, the Inline device, that supports ethernet
+devices by providing the following features.
+
+  - Aggregate all the inline IPsec inbound traffic from all the CN10K ethernet
+    devices to be processed by a single inline IPsec device. This allows a
+    single rte security session to accept traffic from multiple ports.
+
+  - Support for event generation on outbound inline IPsec processing errors.
+
+  - Support CN106xx poll mode of operation for inline IPsec inbound processing.
+
+The inline IPsec device is identified by PCI PF vendor:device id ``177D:A0F0``
+or VF ``177D:A0F1``.
+
+Runtime Config Options for inline device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+	/* Enable inline IPsec on RQ; not used for poll mode */
+   ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 inbound SAs (SPI 0-127) for traffic aggregated on inline device.
+
+
 Debugging Options
 -----------------
 
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index baf2f2a..a34efbb 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -123,7 +123,9 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		    uint16_t port_id, const struct rte_event *ev,
 		    uint8_t custom_flowid)
 {
+	struct roc_nix *nix = &cnxk_eth_dev->nix;
 	struct roc_nix_rq *rq;
+	int rc;
 
 	rq = &cnxk_eth_dev->rqs[rq_id];
 	rq->sso_ena = 1;
@@ -140,7 +142,24 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		rq->tag_mask |= ev->flow_id;
 	}
 
-	return roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	rc = roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	if (rc)
+		return rc;
+
+	if (rq_id == 0 && roc_nix_inl_inb_is_enabled(nix)) {
+		uint32_t sec_tag_const;
+
+		/* The IPsec tag const applies to bits 32:8 of the tag
+		 * only, so program it as tag_mask right-shifted by 8.
+		 */
+		sec_tag_const = rq->tag_mask >> 8;
+		rc = roc_nix_inl_inb_tag_update(nix, sec_tag_const,
+						ev->sched_type);
+		if (rc)
+			plt_err("Failed to set tag conf for ipsec, rc=%d", rc);
+	}
+
+	return rc;
 }
 
 static int
@@ -186,6 +205,7 @@ cnxk_sso_rx_adapter_queue_add(
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, true,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso++;
 	}
 
 	if (rc < 0) {
@@ -196,6 +216,14 @@ cnxk_sso_rx_adapter_queue_add(
 
 	dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;
 
+	/* Switch to use PF/VF's NIX LF instead of inline device for inbound
+	 * when all the RQs are switched to event dev mode. We do this only
+	 * when use of the inline device is not forced via devargs.
+	 */
+	if (!cnxk_eth_dev->inb.force_inl_dev &&
+	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
+		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);
+
 	return 0;
 }
 
@@ -220,12 +248,18 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, false,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso--;
 	}
 
 	if (rc < 0)
 		plt_err("Failed to clear Rx adapter config port=%d, q=%d",
 			eth_dev->data->port_id, rx_queue_id);
 
+	/* Removing an RQ from the Rx adapter implies the inline
+	 * device must be used for CQ/poll mode.
+	 */
+	cnxk_nix_inb_mode_set(cnxk_eth_dev, true);
+
 	return rc;
 }
 
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6c..fa2343c 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		flags |= NIX_RX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
+	if (conf & DEV_TX_OFFLOAD_SECURITY)
+		flags |= NIX_TX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -181,8 +187,11 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 const struct rte_eth_txconf *tx_conf)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
+	struct roc_cpt_lf *inl_lf;
 	struct cn10k_eth_txq *txq;
 	struct roc_nix_sq *sq;
+	uint16_t crypto_qid;
 	int rc;
 
 	RTE_SET_USED(socket);
@@ -198,11 +207,24 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq = eth_dev->data->tx_queues[qid];
 	txq->fc_mem = sq->fc;
 	/* Store lmt base in tx queue for easy access */
-	txq->lmt_base = dev->nix.lmt_base;
+	txq->lmt_base = nix->lmt_base;
 	txq->io_addr = sq->io_addr;
 	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
 
+	/* Fetch CPT LF info for outbound if present */
+	if (dev->outb.lf_base) {
+		crypto_qid = qid % dev->outb.nb_crypto_qs;
+		inl_lf = dev->outb.lf_base + crypto_qid;
+
+		txq->cpt_io_addr = inl_lf->io_addr;
+		txq->cpt_fc = inl_lf->fc_addr;
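+	/* Multiple SQs can share one CPT LF (qid % nb_crypto_qs); the
+	 * 0.7 factor below exposes only part of the LF queue depth to
+	 * each SQ, presumably as headroom for such sharing.
+	 */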
+		txq->cpt_desc = inl_lf->nb_desc * 0.7;
+		txq->sa_base = (uint64_t)dev->outb.sa_base;
+		txq->sa_base |= eth_dev->data->port_id;
+		PLT_STATIC_ASSERT(ROC_NIX_INL_SA_BASE_ALIGN == BIT_ULL(16));
+	}
+
 	nix_form_default_desc(dev, txq, qid);
 	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	return 0;
@@ -215,6 +237,7 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 struct rte_mempool *mp)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct cnxk_eth_rxq_sp *rxq_sp;
 	struct cn10k_eth_rxq *rxq;
 	struct roc_nix_rq *rq;
 	struct roc_nix_cq *cq;
@@ -250,6 +273,15 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq->data_off = rq->first_skip;
 	rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
 
+	/* Setup security related info */
+	if (dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		rxq->lmt_base = dev->nix.lmt_base;
+		rxq->sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix,
+							   dev->inb.inl_dev);
+	}
+	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
+	rxq->aura_handle = rxq_sp->qconf.mp->pool_id;
+
 	/* Lookup mem */
 	rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
 	return 0;
@@ -500,6 +532,8 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	nix_eth_dev_ops_override();
 	npc_flow_ops_override();
 
+	cn10k_eth_sec_ops_override();
+
 	/* Common probe */
 	rc = cnxk_nix_probe(pci_drv, pci_dev);
 	if (rc)
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 8b6e0f2..a888364 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -5,6 +5,7 @@
 #define __CN10K_ETHDEV_H__
 
 #include <cnxk_ethdev.h>
+#include <cnxk_security.h>
 
 struct cn10k_eth_txq {
 	uint64_t send_hdr_w0;
@@ -15,6 +16,10 @@ struct cn10k_eth_txq {
 	rte_iova_t io_addr;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	rte_iova_t cpt_io_addr;
+	uint64_t sa_base;
+	uint64_t *cpt_fc;
+	uint16_t cpt_desc;
 	uint64_t cmd[4];
 	uint64_t lso_tun_fmt;
 } __plt_cache_aligned;
@@ -30,12 +35,50 @@ struct cn10k_eth_rxq {
 	uint32_t qmask;
 	uint32_t available;
 	uint16_t data_off;
+	uint64_t sa_base;
+	uint64_t lmt_base;
+	uint64_t aura_handle;
 	uint16_t rq;
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
+/* Private data in sw rsvd area of struct roc_ot_ipsec_inb_sa */
+struct cn10k_inb_priv_data {
+	void *userdata;
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+/* Private data in sw rsvd area of struct roc_ot_ipsec_outb_sa */
+struct cn10k_outb_priv_data {
+	void *userdata;
+	/* Rlen computation data */
+	struct cnxk_ipsec_outb_rlens rlens;
+	/* Back pointer to eth sec session */
+	struct cnxk_eth_sec_sess *eth_sec;
+	/* SA index */
+	uint32_t sa_idx;
+};
+
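+/* Fast path session private data packed into one 64-bit word; it is
+ * stored by value via set_sec_session_private_data() so the hot path
+ * can fetch it with a single load.
+ */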
+struct cn10k_sec_sess_priv {
+	union {
+		struct {
+			uint32_t sa_idx;
+			uint8_t inb_sa : 1;
+			uint8_t rsvd1 : 2;
+			uint8_t roundup_byte : 5;
+			uint8_t roundup_len;
+			uint16_t partial_len;
+		};
+
+		uint64_t u64;
+	};
+} __rte_packed;
+
 /* Rx and Tx routines */
 void cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
 void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
+/* Security context setup */
+void cn10k_eth_sec_ops_override(void);
+
 #endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
new file mode 100644
index 0000000..3ffd824
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_eventdev.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include <cn10k_ethdev.h>
+#include <cnxk_security.h>
+
+static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
+	{	/* AES GCM */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+			{.aead = {
+				.algo = RTE_CRYPTO_AEAD_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
+	{	/* IPsec Inline Protocol ESP Tunnel Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Tunnel Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static void
+cn10k_eth_sec_sso_work_cb(uint64_t *gw, void *args)
+{
+	struct rte_eth_event_ipsec_desc desc;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct cn10k_outb_priv_data *priv;
+	struct roc_ot_ipsec_outb_sa *sa;
+	struct cpt_cn10k_res_s *res;
+	struct rte_eth_dev *eth_dev;
+	struct cnxk_eth_dev *dev;
+	uint16_t dlen_adj, rlen;
+	struct rte_mbuf *mbuf;
+	uintptr_t sa_base;
+	uintptr_t nixtx;
+	uint8_t port;
+
+	RTE_SET_USED(args);
+
+	switch ((gw[0] >> 28) & 0xF) {
+	case RTE_EVENT_TYPE_ETHDEV:
+		/* Event from inbound inline dev due to IPsec packet with bad L4 */
+		mbuf = (struct rte_mbuf *)(gw[1] - sizeof(struct rte_mbuf));
+		plt_nix_dbg("Received mbuf %p from inline dev inbound", mbuf);
+		rte_pktmbuf_free(mbuf);
+		return;
+	case RTE_EVENT_TYPE_CPU:
+		/* Check for subtype */
+		if (((gw[0] >> 20) & 0xFF) == CNXK_ETHDEV_SEC_OUTB_EV_SUB) {
+			/* Event from outbound inline error */
+			mbuf = (struct rte_mbuf *)gw[1];
+			break;
+		}
+		/* Fall through */
+	default:
+		plt_err("Unknown event gw[0] = 0x%016lx, gw[1] = 0x%016lx",
+			gw[0], gw[1]);
+		return;
+	}
+
+	/* Get ethdev port from tag */
+	port = gw[0] & 0xFF;
+	eth_dev = &rte_eth_devices[port];
+	dev = cnxk_eth_pmd_priv(eth_dev);
+
+	sess_priv.u64 = *rte_security_dynfield(mbuf);
+	/* Calculate dlen adj */
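+	/* rlen is the expected post-IPsec length: the L3 payload plus
+	 * roundup_len, rounded up to the cipher block size (roundup_byte,
+	 * a power of two), plus the fixed per-SA overhead (partial_len).
+	 * dlen_adj then gives the growth over the plain packet, used
+	 * below to locate the CPT result area past the packet data.
+	 */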
+	dlen_adj = mbuf->pkt_len - mbuf->l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Find the res area residing on next cacheline after end of data */
+	nixtx = rte_pktmbuf_mtod(mbuf, uintptr_t) + mbuf->pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+	res = (struct cpt_cn10k_res_s *)nixtx;
+
+	plt_nix_dbg("Outbound error, mbuf %p, sa_index %u, compcode %x uc %x",
+		    mbuf, sess_priv.sa_idx, res->compcode, res->uc_compcode);
+
+	sa_base = dev->outb.sa_base;
+	sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(sa);
+
+	memset(&desc, 0, sizeof(desc));
+
+	switch (res->uc_compcode) {
+	case ROC_IE_OT_UCC_ERR_SA_OVERFLOW:
+		desc.subtype = RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW;
+		break;
+	default:
+		plt_warn("Outbound error, mbuf %p, sa_index %u, "
+			 "compcode %x uc %x", mbuf, sess_priv.sa_idx,
+			 res->compcode, res->uc_compcode);
+		desc.subtype = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+		break;
+	}
+
+	desc.metadata = (uint64_t)priv->userdata;
+	rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_IPSEC, &desc);
+	rte_pktmbuf_free(mbuf);
+}
+
+static int
+cn10k_eth_sec_session_create(void *device,
+			     struct rte_security_session_conf *conf,
+			     struct rte_security_session *sess,
+			     struct rte_mempool *mempool)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ipsec_xform *ipsec;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct rte_crypto_sym_xform *crypto;
+	struct cnxk_eth_sec_sess *eth_sec;
+	bool inbound, inl_dev;
+	int rc = 0;
+
+	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+		return -ENOTSUP;
+
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
+	if (rte_security_dynfield_register() < 0)
+		return -ENOTSUP;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		roc_nix_inl_cb_register(cn10k_eth_sec_sso_work_cb, NULL);
+
+	ipsec = &conf->ipsec;
+	crypto = conf->crypto_xform;
+	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
+	inl_dev = !!dev->inb.inl_dev;
+
+	/* Search if a session already exists */
+	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
+		plt_err("%s SA with SPI %u already in use",
+			inbound ? "Inbound" : "Outbound", ipsec->spi);
+		return -EEXIST;
+	}
+
+	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
+	sess_priv.u64 = 0;
+
+	/* Acquire lock on inline dev for inbound */
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (inbound) {
+		struct cn10k_inb_priv_data *inb_priv;
+		struct roc_ot_ipsec_inb_sa *inb_sa;
+		uintptr_t sa;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_inb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD);
+
+		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE */
+		sa = roc_nix_inl_inb_sa_get(&dev->nix, inl_dev, ipsec->spi);
+		if (!sa && dev->inb.inl_dev) {
+			plt_err("Failed to create ingress sa, inline dev "
+				"not found or spi not in range");
+			rc = -ENOTSUP;
+			goto mempool_put;
+		} else if (!sa) {
+			plt_err("Failed to create ingress sa");
+			rc = -EFAULT;
+			goto mempool_put;
+		}
+
+		inb_sa = (struct roc_ot_ipsec_inb_sa *)sa;
+
+		/* Check if SA is already in use */
+		if (inb_sa->w2.s.valid) {
+			plt_err("Inbound SA with SPI %u already in use",
+				ipsec->spi);
+			rc = -EBUSY;
+			goto mempool_put;
+		}
+
+		memset(inb_sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
+
+		/* Fill inbound sa params */
+		rc = cnxk_ot_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init inbound sa, rc=%d", rc);
+			goto mempool_put;
+		}
+
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+		/* Back pointer to get eth_sec */
+		inb_priv->eth_sec = eth_sec;
+		/* Save userdata in inb private area */
+		inb_priv->userdata = conf->userdata;
+
+		/* Save SA index/SPI in cookie for now */
+		inb_sa->w1.s.cookie = rte_cpu_to_be_32(ipsec->spi);
+
+		/* Prepare session priv */
+		sess_priv.inb_sa = 1;
+		sess_priv.sa_idx = ipsec->spi;
+
+		/* Pointer from eth_sec -> inb_sa */
+		eth_sec->sa = inb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = ipsec->spi;
+		eth_sec->spi = ipsec->spi;
+		eth_sec->inl_dev = !!dev->inb.inl_dev;
+		eth_sec->inb = true;
+
+		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess++;
+	} else {
+		struct cn10k_outb_priv_data *outb_priv;
+		struct roc_ot_ipsec_outb_sa *outb_sa;
+		struct cnxk_ipsec_outb_rlens *rlens;
+		uint64_t sa_base = dev->outb.sa_base;
+		uint32_t sa_idx;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_outb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD);
+
+		/* Alloc an sa index */
+		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
+		if (rc)
+			goto mempool_put;
+
+		outb_sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sa_idx);
+		outb_priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(outb_sa);
+		rlens = &outb_priv->rlens;
+
+		memset(outb_sa, 0, sizeof(struct roc_ot_ipsec_outb_sa));
+
+		/* Fill outbound sa params */
+		rc = cnxk_ot_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init outbound sa, rc=%d", rc);
+			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
+			goto mempool_put;
+		}
+
+		/* Save userdata */
+		outb_priv->userdata = conf->userdata;
+		outb_priv->sa_idx = sa_idx;
+		outb_priv->eth_sec = eth_sec;
+
+		/* Save rlen info */
+		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+
+		/* Prepare session priv */
+		sess_priv.sa_idx = outb_priv->sa_idx;
+		sess_priv.roundup_byte = rlens->roundup_byte;
+		sess_priv.roundup_len = rlens->roundup_len;
+		sess_priv.partial_len = rlens->partial_len;
+
+		/* Pointer from eth_sec -> outb_sa */
+		eth_sec->sa = outb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = sa_idx;
+		eth_sec->spi = ipsec->spi;
+
+		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess++;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u inl_dev=%u",
+		    inbound ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+	/*
+	 * Update fast path info in priv area.
+	 */
+	set_sec_session_private_data(sess, (void *)sess_priv.u64);
+
+	return 0;
+mempool_put:
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+	rte_mempool_put(mempool, eth_sec);
+	return rc;
+}
+
+static int
+cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_ot_ipsec_inb_sa *inb_sa;
+	struct roc_ot_ipsec_outb_sa *outb_sa;
+	struct cnxk_eth_sec_sess *eth_sec;
+	struct rte_mempool *mp;
+
+	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+	if (!eth_sec)
+		return -ENOENT;
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (eth_sec->inb) {
+		inb_sa = eth_sec->sa;
+		/* Disable SA */
+		inb_sa->w2.s.valid = 0;
+
+		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess--;
+	} else {
+		outb_sa = eth_sec->sa;
+		/* Disable SA */
+		outb_sa->w2.s.valid = 0;
+
+		/* Release Outbound SA index */
+		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
+		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess--;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u, inl_dev=%u",
+		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+
+	/* Put eth_sec object back to pool */
+	mp = rte_mempool_from_obj(eth_sec);
+	set_sec_session_private_data(sess, NULL);
+	rte_mempool_put(mp, eth_sec);
+	return 0;
+}
+
+static const struct rte_security_capability *
+cn10k_eth_sec_capabilities_get(void *device __rte_unused)
+{
+	return cn10k_eth_sec_capabilities;
+}
+
+void
+cn10k_eth_sec_ops_override(void)
+{
+	static int init_once;
+
+	if (init_once)
+		return;
+	init_once = 1;
+
+	/* Update platform specific ops */
+	cnxk_eth_sec_ops.session_create = cn10k_eth_sec_session_create;
+	cnxk_eth_sec_ops.session_destroy = cn10k_eth_sec_session_destroy;
+	cnxk_eth_sec_ops.capabilities_get = cn10k_eth_sec_capabilities_get;
+}
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 68219b8..d27a231 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -16,6 +16,7 @@
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
 #define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
 #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index f75cae0..8577a7b 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -13,6 +13,7 @@
 #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)
 #define NIX_TX_OFFLOAD_TSO_F	      BIT(4)
 #define NIX_TX_OFFLOAD_TSTAMP_F	      BIT(5)
+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)
 
 /* Flags to control xmit_prepare function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index 6cc30c3..d1d4b4e 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -37,6 +37,7 @@ sources += files(
 # CN10K
 sources += files(
         'cn10k_ethdev.c',
+        'cn10k_ethdev_sec.c',
         'cn10k_rte_flow.c',
         'cn10k_rx.c',
         'cn10k_rx_mseg.c',
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 74d16e4..5f0e817 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -49,6 +49,8 @@
              'SVendor': None, 'SDevice': None}
 cnxk_bphy_cgx = {'Class': '08', 'Vendor': '177d', 'Device': 'a059,a060',
                  'SVendor': None, 'SDevice': None}
+cnxk_inl_dev = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f0,a0f1',
+                'SVendor': None, 'SDevice': None}
 
 intel_dlb = {'Class': '0b', 'Vendor': '8086', 'Device': '270b,2710,2714',
              'SVendor': None, 'SDevice': None}
@@ -73,9 +75,9 @@
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
 regex_devices = [octeontx2_ree]
-misc_devices = [cnxk_bphy, cnxk_bphy_cgx, intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr,
-                intel_ntb_skx, intel_ntb_icx,
-                octeontx2_dma]
+misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev, intel_ioat_bdw,
+                intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx,
+                intel_ntb_icx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 17/27] net/cnxk: add cn9k Rx support for security offload
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (15 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 16/27] net/cnxk: add inline security support for cn10k Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 18/27] net/cnxk: add cn9k Tx " Nithin Dabilpuram
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Add support to receive CPT processed packets on Rx, adding a new
NIX_RX_OFFLOAD_SECURITY_F fast path offload flag dimension to the
cn9k Rx and event dequeue function tables.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn9k_eventdev.c              | 153 ++++----
 drivers/event/cnxk/cn9k_worker.h                |   7 +-
 drivers/event/cnxk/cn9k_worker_deq.c            |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_burst.c      |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_ca.c         |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_tmo.c        |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq.c       |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_burst.c |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c    |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c   |   2 +-
 drivers/net/cnxk/cn9k_rx.c                      |  31 +-
 drivers/net/cnxk/cn9k_rx.h                      | 440 +++++++++++++++++++-----
 drivers/net/cnxk/cn9k_rx_mseg.c                 |   2 +-
 drivers/net/cnxk/cn9k_rx_vec.c                  |   2 +-
 drivers/net/cnxk/cn9k_rx_vec_mseg.c             |   2 +-
 drivers/net/cnxk/cnxk_ethdev.h                  |   3 +
 16 files changed, 461 insertions(+), 195 deletions(-)

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6601c44..e91234e 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -10,7 +10,8 @@
 #define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
 
 #define CN9K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops)                            \
-	deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]   \
+	deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]     \
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]   \
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]  \
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]     \
@@ -329,178 +330,184 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
 	/* Single WS modes */
-	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_##name,
+	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name,
+	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_##name,
+	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
+	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
+	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
+		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
+		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	/* Dual WS modes */
-	const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
+	const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_dual_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_dual_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_##name,
+	const event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
+		sso_hws_dual_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
+	const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
+		sso_hws_dual_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name,
+		sso_hws_dual_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_burst_##name,
+		sso_hws_dual_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
+		sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 3e8f214..f1d2e47 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -5,6 +5,9 @@
 #ifndef __CN9K_WORKER_H__
 #define __CN9K_WORKER_H__
 
+#include <rte_eventdev.h>
+#include <rte_vect.h>
+
 #include "cnxk_ethdev.h"
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
@@ -380,7 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
 uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
 					    uint16_t nb_events);
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_##name(                            \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name(                      \
@@ -415,7 +418,7 @@ uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
 NIX_RX_FASTPATH_MODES
 #undef R
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name(                 \
diff --git a/drivers/event/cnxk/cn9k_worker_deq.c b/drivers/event/cnxk/cn9k_worker_deq.c
index 51ccaf4..d65c72a 100644
--- a/drivers/event/cnxk/cn9k_worker_deq.c
+++ b/drivers/event/cnxk/cn9k_worker_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_##name(                            \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_burst.c b/drivers/event/cnxk/cn9k_worker_deq_burst.c
index 4e28014..42dc59b 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_burst.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name(                      \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_ca.c b/drivers/event/cnxk/cn9k_worker_deq_ca.c
index dde8288..6c5325f 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_ca.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name(                         \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_deq_tmo.c
index 9713d1e..b41a590 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_tmo.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_tmo_##name(                        \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq.c b/drivers/event/cnxk/cn9k_worker_dual_deq.c
index 709fa2d..440b66e 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
index d50e1cf..4d913f9 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name(                 \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
index 26cc60f..74116a9 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name(                    \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
index a0508fd..78a4b3d 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_##name(                   \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 7d9f1bd..5c4387e 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_##name(	       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -17,12 +17,13 @@ NIX_RX_FASTPATH_MODES
 
 static inline void
 pick_rx_func(struct rte_eth_dev *eth_dev,
-	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TSP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+	/* [R_SEC_F] [RX_VLAN_F] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
 	eth_dev->rx_pkt_burst = rx_burst
+		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
@@ -38,33 +39,33 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_##name,
+	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
@@ -73,7 +74,7 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 	/* Copy multi seg version with no offload for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
+			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 59545af..bdedeab 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -166,24 +166,104 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	mbuf->next = NULL;
 }
 
+static __rte_always_inline uint64_t
+nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
+		       uintptr_t sa_base, uint64_t *rearm_val, uint16_t *len)
+{
+	uintptr_t res_sg0 = ((uintptr_t)cq + ROC_ONF_IPSEC_INB_RES_OFF - 8);
+	const union nix_rx_parse_u *rx =
+		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
+	struct cn9k_inb_priv_data *sa_priv;
+	struct roc_onf_ipsec_inb_sa *sa;
+	uint8_t lcptr = rx->lcptr;
+	struct rte_ipv4_hdr *ipv4;
+	uint16_t data_off, res;
+	uint32_t spi_mask;
+	uint32_t spi;
+	uintptr_t data;
+	__uint128_t dw;
+	uint8_t sa_w;
+
+	res = *(uint64_t *)(res_sg0 + 8);
+	data_off = *rearm_val & (BIT_ULL(16) - 1);
+	data = (uintptr_t)m->buf_addr;
+	data += data_off;
+
+	rte_prefetch0((void *)data);
+
+	if (unlikely(res != (CPT_COMP_GOOD | ROC_IE_ONF_UCC_SUCCESS << 8)))
+		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+
+	data += lcptr;
+	/* Lower 20 bits of the tag carry the SPI */
+	spi = cq->tag & CNXK_ETHDEV_SPI_TAG_MASK;
+
+	/* Get SA */
+	sa_w = sa_base & (ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	spi_mask = (1ULL << sa_w) - 1;
+	sa = roc_nix_inl_onf_ipsec_inb_sa(sa_base, spi & spi_mask);
+
+	/* Update dynamic field with userdata */
+	sa_priv = roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(sa);
+	dw = *(__uint128_t *)sa_priv;
+	*rte_security_dynfield(m) = (uint64_t)dw;
+
+	/* Get total length from the IPv4 header; only IPv4 is assumed */
+	ipv4 = (struct rte_ipv4_hdr *)(data + ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
+				       ROC_ONF_IPSEC_INB_MAX_L2_SZ);
+
+	/* Update data offset */
+	data_off += (ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
+		     ROC_ONF_IPSEC_INB_MAX_L2_SZ);
+	*rearm_val = *rearm_val & ~(BIT_ULL(16) - 1);
+	*rearm_val |= data_off;
+
+	*len = rte_be_to_cpu_16(ipv4->total_length) + lcptr;
+	return PKT_RX_SEC_OFFLOAD;
+}
+
 static __rte_always_inline void
 cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		     struct rte_mbuf *mbuf, const void *lookup_mem,
-		     const uint64_t val, const uint16_t flag)
+		     uint64_t val, const uint16_t flag)
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
-	const uint16_t len = rx->cn9k.pkt_lenm1 + 1;
+	uint16_t len = rx->cn9k.pkt_lenm1 + 1;
 	const uint64_t w1 = *(const uint64_t *)rx;
+	uint32_t packet_type;
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
 	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
-		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
+		packet_type = nix_ptype_get(lookup_mem, w1);
 	else
-		mbuf->packet_type = 0;
+		packet_type = 0;
+
+	if ((flag & NIX_RX_OFFLOAD_SECURITY_F) &&
+	    cq->cqe_type == NIX_XQE_TYPE_RX_IPSECH) {
+		uint16_t port = val >> 48;
+		uintptr_t sa_base;
+
+		/* Get SA Base from lookup mem */
+		sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+
+		ol_flags |= nix_rx_sec_mbuf_update(cq, mbuf, sa_base, &val,
+						   &len);
+
+		/* Only Tunnel inner IPv4 is supported */
+		packet_type = (packet_type &
+			       ~(RTE_PTYPE_L3_MASK | RTE_PTYPE_TUNNEL_MASK));
+		packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+		mbuf->packet_type = packet_type;
+		goto skip_parse;
+	}
+
+	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
+		mbuf->packet_type = packet_type;
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
@@ -193,6 +273,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
 		ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
 
+skip_parse:
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->cn9k.vtag0_gone) {
 			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
@@ -208,11 +289,12 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		ol_flags =
 			nix_update_match_id(rx->cn9k.match_id, ol_flags, mbuf);
 
-	mbuf->ol_flags = ol_flags;
 	mbuf->pkt_len = len;
 	mbuf->data_len = len;
 	*(uint64_t *)(&mbuf->rearm_data) = val;
 
+	mbuf->ol_flags = ol_flags;
+
 	if (flag & NIX_RX_MULTI_SEG_F)
 		nix_cqe_xtract_mseg(rx, mbuf, val, flag);
 	else
@@ -670,98 +752,268 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 #define MARK_F	  NIX_RX_OFFLOAD_MARK_UPDATE_F
 #define TS_F	  NIX_RX_OFFLOAD_TSTAMP_F
 #define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define R_SEC_F   NIX_RX_OFFLOAD_SECURITY_F
 
-/* [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
+/* [R_SEC_F] [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
 #define NIX_RX_FASTPATH_MODES						       \
-R(no_offload,			0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE)	       \
-R(rss,				0, 0, 0, 0, 0, 1, RSS_F)		       \
-R(ptype,			0, 0, 0, 0, 1, 0, PTYPE_F)		       \
-R(ptype_rss,			0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F)	       \
-R(cksum,			0, 0, 0, 1, 0, 0, CKSUM_F)		       \
-R(cksum_rss,			0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F)	       \
-R(cksum_ptype,			0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F)	       \
-R(cksum_ptype_rss,		0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)   \
-R(mark,				0, 0, 1, 0, 0, 0, MARK_F)		       \
-R(mark_rss,			0, 0, 1, 0, 0, 1, MARK_F | RSS_F)	       \
-R(mark_ptype,			0, 0, 1, 0, 1, 0, MARK_F | PTYPE_F)	       \
-R(mark_ptype_rss,		0, 0, 1, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)    \
-R(mark_cksum,			0, 0, 1, 1, 0, 0, MARK_F | CKSUM_F)	       \
-R(mark_cksum_rss,		0, 0, 1, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)    \
-R(mark_cksum_ptype,		0, 0, 1, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)  \
-R(mark_cksum_ptype_rss,		0, 0, 1, 1, 1, 1,			       \
-			MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts,				0, 1, 0, 0, 0, 0, TS_F)			       \
-R(ts_rss,			0, 1, 0, 0, 0, 1, TS_F | RSS_F)		       \
-R(ts_ptype,			0, 1, 0, 0, 1, 0, TS_F | PTYPE_F)	       \
-R(ts_ptype_rss,			0, 1, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)      \
-R(ts_cksum,			0, 1, 0, 1, 0, 0, TS_F | CKSUM_F)	       \
-R(ts_cksum_rss,			0, 1, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)      \
-R(ts_cksum_ptype,		0, 1, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)    \
-R(ts_cksum_ptype_rss,		0, 1, 0, 1, 1, 1,			       \
-			TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts_mark,			0, 1, 1, 0, 0, 0, TS_F | MARK_F)	       \
-R(ts_mark_rss,			0, 1, 1, 0, 0, 1, TS_F | MARK_F | RSS_F)       \
-R(ts_mark_ptype,		0, 1, 1, 0, 1, 0, TS_F | MARK_F | PTYPE_F)     \
-R(ts_mark_ptype_rss,		0, 1, 1, 0, 1, 1,			       \
-			TS_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(ts_mark_cksum,		0, 1, 1, 1, 0, 0, TS_F | MARK_F | CKSUM_F)     \
-R(ts_mark_cksum_rss,		0, 1, 1, 1, 0, 1,			       \
-			TS_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(ts_mark_cksum_ptype,		0, 1, 1, 1, 1, 0,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan,				1, 0, 0, 0, 0, 0, RX_VLAN_F)		       \
-R(vlan_rss,			1, 0, 0, 0, 0, 1, RX_VLAN_F | RSS_F)	       \
-R(vlan_ptype,			1, 0, 0, 0, 1, 0, RX_VLAN_F | PTYPE_F)	       \
-R(vlan_ptype_rss,		1, 0, 0, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum,			1, 0, 0, 1, 0, 0, RX_VLAN_F | CKSUM_F)	       \
-R(vlan_cksum_rss,		1, 0, 0, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype,		1, 0, 0, 1, 1, 0,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F)			       \
-R(vlan_cksum_ptype_rss,		1, 0, 0, 1, 1, 1,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark,			1, 0, 1, 0, 0, 0, RX_VLAN_F | MARK_F)	       \
-R(vlan_mark_rss,		1, 0, 1, 0, 0, 1, RX_VLAN_F | MARK_F | RSS_F)  \
-R(vlan_mark_ptype,		1, 0, 1, 0, 1, 0, RX_VLAN_F | MARK_F | PTYPE_F)\
-R(vlan_mark_ptype_rss,		1, 0, 1, 0, 1, 1,			       \
-			RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark_cksum,		1, 0, 1, 1, 0, 0, RX_VLAN_F | MARK_F | CKSUM_F)\
-R(vlan_mark_cksum_rss,		1, 0, 1, 1, 0, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(vlan_mark_cksum_ptype,	1, 0, 1, 1, 1, 0,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts,			1, 1, 0, 0, 0, 0, RX_VLAN_F | TS_F)	       \
-R(vlan_ts_rss,			1, 1, 0, 0, 0, 1, RX_VLAN_F | TS_F | RSS_F)    \
-R(vlan_ts_ptype,		1, 1, 0, 0, 1, 0, RX_VLAN_F | TS_F | PTYPE_F)  \
-R(vlan_ts_ptype_rss,		1, 1, 0, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
-R(vlan_ts_cksum,		1, 1, 0, 1, 0, 0, RX_VLAN_F | TS_F | CKSUM_F)  \
-R(vlan_ts_cksum_rss,		1, 1, 0, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
-R(vlan_ts_cksum_ptype,		1, 1, 0, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_ts_cksum_ptype_rss,	1, 1, 0, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark,			1, 1, 1, 0, 0, 0, RX_VLAN_F | TS_F | MARK_F)   \
-R(vlan_ts_mark_rss,		1, 1, 1, 0, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
-R(vlan_ts_mark_ptype,		1, 1, 1, 0, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
-R(vlan_ts_mark_ptype_rss,	1, 1, 1, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark_cksum,		1, 1, 1, 1, 0, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
-R(vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
-R(vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)	       \
-R(vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
+R(no_offload,			0, 0, 0, 0, 0, 0, 0,			       \
+		NIX_RX_OFFLOAD_NONE)					       \
+R(rss,				0, 0, 0, 0, 0, 0, 1,			       \
+		RSS_F)							       \
+R(ptype,			0, 0, 0, 0, 0, 1, 0,			       \
+		PTYPE_F)						       \
+R(ptype_rss,			0, 0, 0, 0, 0, 1, 1,			       \
+		PTYPE_F | RSS_F)					       \
+R(cksum,			0, 0, 0, 0, 1, 0, 0,			       \
+		CKSUM_F)						       \
+R(cksum_rss,			0, 0, 0, 0, 1, 0, 1,			       \
+		CKSUM_F | RSS_F)					       \
+R(cksum_ptype,			0, 0, 0, 0, 1, 1, 0,			       \
+		CKSUM_F | PTYPE_F)					       \
+R(cksum_ptype_rss,		0, 0, 0, 0, 1, 1, 1,			       \
+		CKSUM_F | PTYPE_F | RSS_F)				       \
+R(mark,				0, 0, 0, 1, 0, 0, 0,			       \
+		MARK_F)							       \
+R(mark_rss,			0, 0, 0, 1, 0, 0, 1,			       \
+		MARK_F | RSS_F)						       \
+R(mark_ptype,			0, 0, 0, 1, 0, 1, 0,			       \
+		MARK_F | PTYPE_F)					       \
+R(mark_ptype_rss,		0, 0, 0, 1, 0, 1, 1,			       \
+		MARK_F | PTYPE_F | RSS_F)				       \
+R(mark_cksum,			0, 0, 0, 1, 1, 0, 0,			       \
+		MARK_F | CKSUM_F)					       \
+R(mark_cksum_rss,		0, 0, 0, 1, 1, 0, 1,			       \
+		MARK_F | CKSUM_F | RSS_F)				       \
+R(mark_cksum_ptype,		0, 0, 0, 1, 1, 1, 0,			       \
+		MARK_F | CKSUM_F | PTYPE_F)				       \
+R(mark_cksum_ptype_rss,		0, 0, 0, 1, 1, 1, 1,			       \
+		MARK_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts,				0, 0, 1, 0, 0, 0, 0,			       \
+		TS_F)							       \
+R(ts_rss,			0, 0, 1, 0, 0, 0, 1,			       \
+		TS_F | RSS_F)						       \
+R(ts_ptype,			0, 0, 1, 0, 0, 1, 0,			       \
+		TS_F | PTYPE_F)						       \
+R(ts_ptype_rss,			0, 0, 1, 0, 0, 1, 1,			       \
+		TS_F | PTYPE_F | RSS_F)					       \
+R(ts_cksum,			0, 0, 1, 0, 1, 0, 0,			       \
+		TS_F | CKSUM_F)						       \
+R(ts_cksum_rss,			0, 0, 1, 0, 1, 0, 1,			       \
+		TS_F | CKSUM_F | RSS_F)					       \
+R(ts_cksum_ptype,		0, 0, 1, 0, 1, 1, 0,			       \
+		TS_F | CKSUM_F | PTYPE_F)				       \
+R(ts_cksum_ptype_rss,		0, 0, 1, 0, 1, 1, 1,			       \
+		TS_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts_mark,			0, 0, 1, 1, 0, 0, 0,			       \
+		TS_F | MARK_F)						       \
+R(ts_mark_rss,			0, 0, 1, 1, 0, 0, 1,			       \
+		TS_F | MARK_F | RSS_F)					       \
+R(ts_mark_ptype,		0, 0, 1, 1, 0, 1, 0,			       \
+		TS_F | MARK_F | PTYPE_F)				       \
+R(ts_mark_ptype_rss,		0, 0, 1, 1, 0, 1, 1,			       \
+		TS_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(ts_mark_cksum,		0, 0, 1, 1, 1, 0, 0,			       \
+		TS_F | MARK_F | CKSUM_F)				       \
+R(ts_mark_cksum_rss,		0, 0, 1, 1, 1, 0, 1,			       \
+		TS_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(ts_mark_cksum_ptype,		0, 0, 1, 1, 1, 1, 0,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(ts_mark_cksum_ptype_rss,	0, 0, 1, 1, 1, 1, 1,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan,				0, 1, 0, 0, 0, 0, 0,			       \
+		RX_VLAN_F)						       \
+R(vlan_rss,			0, 1, 0, 0, 0, 0, 1,			       \
+		RX_VLAN_F | RSS_F)					       \
+R(vlan_ptype,			0, 1, 0, 0, 0, 1, 0,			       \
+		RX_VLAN_F | PTYPE_F)					       \
+R(vlan_ptype_rss,		0, 1, 0, 0, 0, 1, 1,			       \
+		RX_VLAN_F | PTYPE_F | RSS_F)				       \
+R(vlan_cksum,			0, 1, 0, 0, 1, 0, 0,			       \
+		RX_VLAN_F | CKSUM_F)					       \
+R(vlan_cksum_rss,		0, 1, 0, 0, 1, 0, 1,			       \
+		RX_VLAN_F | CKSUM_F | RSS_F)				       \
+R(vlan_cksum_ptype,		0, 1, 0, 0, 1, 1, 0,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F)				       \
+R(vlan_cksum_ptype_rss,		0, 1, 0, 0, 1, 1, 1,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark,			0, 1, 0, 1, 0, 0, 0,			       \
+		RX_VLAN_F | MARK_F)					       \
+R(vlan_mark_rss,		0, 1, 0, 1, 0, 0, 1,			       \
+		RX_VLAN_F | MARK_F | RSS_F)				       \
+R(vlan_mark_ptype,		0, 1, 0, 1, 0, 1, 0,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F)				       \
+R(vlan_mark_ptype_rss,		0, 1, 0, 1, 0, 1, 1,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark_cksum,		0, 1, 0, 1, 1, 0, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F)				       \
+R(vlan_mark_cksum_rss,		0, 1, 0, 1, 1, 0, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(vlan_mark_cksum_ptype,	0, 1, 0, 1, 1, 1, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_mark_cksum_ptype_rss,	0, 1, 0, 1, 1, 1, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts,			0, 1, 1, 0, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F)					       \
+R(vlan_ts_rss,			0, 1, 1, 0, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | RSS_F)				       \
+R(vlan_ts_ptype,		0, 1, 1, 0, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | PTYPE_F)				       \
+R(vlan_ts_ptype_rss,		0, 1, 1, 0, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | PTYPE_F | RSS_F)			       \
+R(vlan_ts_cksum,		0, 1, 1, 0, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F)				       \
+R(vlan_ts_cksum_rss,		0, 1, 1, 0, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | RSS_F)			       \
+R(vlan_ts_cksum_ptype,		0, 1, 1, 0, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_ts_cksum_ptype_rss,	0, 1, 1, 0, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark,			0, 1, 1, 1, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F)				       \
+R(vlan_ts_mark_rss,		0, 1, 1, 1, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | RSS_F)			       \
+R(vlan_ts_mark_ptype,		0, 1, 1, 1, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F)			       \
+R(vlan_ts_mark_ptype_rss,	0, 1, 1, 1, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark_cksum,		0, 1, 1, 1, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F)			       \
+R(vlan_ts_mark_cksum_rss,	0, 1, 1, 1, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(vlan_ts_mark_cksum_ptype,	0, 1, 1, 1, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(vlan_ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec,				1, 0, 0, 0, 0, 0, 0,			       \
+		R_SEC_F)						       \
+R(sec_rss,			1, 0, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RSS_F)					       \
+R(sec_ptype,			1, 0, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | PTYPE_F)					       \
+R(sec_ptype_rss,		1, 0, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | PTYPE_F | RSS_F)				       \
+R(sec_cksum,			1, 0, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | CKSUM_F)					       \
+R(sec_cksum_rss,		1, 0, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | CKSUM_F | RSS_F)				       \
+R(sec_cksum_ptype,		1, 0, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F)				       \
+R(sec_cksum_ptype_rss,		1, 0, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(sec_mark,			1, 0, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | MARK_F)					       \
+R(sec_mark_rss,			1, 0, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | MARK_F | RSS_F)				       \
+R(sec_mark_ptype,		1, 0, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | MARK_F | PTYPE_F)				       \
+R(sec_mark_ptype_rss,		1, 0, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(sec_mark_cksum,		1, 0, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F)				       \
+R(sec_mark_cksum_rss,		1, 0, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(sec_mark_cksum_ptype,		1, 0, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(sec_mark_cksum_ptype_rss,	1, 0, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts,			1, 0, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | TS_F)						       \
+R(sec_ts_rss,			1, 0, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | TS_F | RSS_F)					       \
+R(sec_ts_ptype,			1, 0, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | TS_F | PTYPE_F)				       \
+R(sec_ts_ptype_rss,		1, 0, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | TS_F | PTYPE_F | RSS_F)			       \
+R(sec_ts_cksum,			1, 0, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F)				       \
+R(sec_ts_cksum_rss,		1, 0, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | RSS_F)			       \
+R(sec_ts_cksum_ptype,		1, 0, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(sec_ts_cksum_ptype_rss,	1, 0, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark,			1, 0, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F)				       \
+R(sec_ts_mark_rss,		1, 0, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | RSS_F)			       \
+R(sec_ts_mark_ptype,		1, 0, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F)			       \
+R(sec_ts_mark_ptype_rss,	1, 0, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark_cksum,		1, 0, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F)			       \
+R(sec_ts_mark_cksum_rss,	1, 0, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_ts_mark_cksum_ptype,	1, 0, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(sec_ts_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan,			1, 1, 0, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F)					       \
+R(sec_vlan_rss,			1, 1, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | RSS_F)				       \
+R(sec_vlan_ptype,		1, 1, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F)				       \
+R(sec_vlan_ptype_rss,		1, 1, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F)			       \
+R(sec_vlan_cksum,		1, 1, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F)				       \
+R(sec_vlan_cksum_rss,		1, 1, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F)			       \
+R(sec_vlan_cksum_ptype,		1, 1, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_cksum_ptype_rss,	1, 1, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_mark,		1, 1, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F)				       \
+R(sec_vlan_mark_rss,		1, 1, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | RSS_F)			       \
+R(sec_vlan_mark_ptype,		1, 1, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F)			       \
+R(sec_vlan_mark_ptype_rss,	1, 1, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_mark_cksum,		1, 1, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F)			       \
+R(sec_vlan_mark_cksum_rss,	1, 1, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_mark_cksum_ptype,	1, 1, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)	       \
+R(sec_vlan_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)      \
+R(sec_vlan_ts,			1, 1, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F)				       \
+R(sec_vlan_ts_rss,		1, 1, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | RSS_F)			       \
+R(sec_vlan_ts_ptype,		1, 1, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F)			       \
+R(sec_vlan_ts_ptype_rss,	1, 1, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_ts_cksum,		1, 1, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F)			       \
+R(sec_vlan_ts_cksum_rss,	1, 1, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_ts_cksum_ptype,	1, 1, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_ts_cksum_ptype_rss,	1, 1, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark,		1, 1, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F)			       \
+R(sec_vlan_ts_mark_rss,		1, 1, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
+R(sec_vlan_ts_mark_ptype,	1, 1, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
+R(sec_vlan_ts_mark_ptype_rss,	1, 1, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum,	1, 1, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
+R(sec_vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)       \
+R(sec_vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1, 1,		       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_##name(           \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn9k_rx_mseg.c b/drivers/net/cnxk/cn9k_rx_mseg.c
index d7e19b1..06509e8 100644
--- a/drivers/net/cnxk/cn9k_rx_mseg.c
+++ b/drivers/net/cnxk/cn9k_rx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_mseg_##name(      \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx_vec.c b/drivers/net/cnxk/cn9k_rx_vec.c
index ef5f771..c96f61c 100644
--- a/drivers/net/cnxk/cn9k_rx_vec.c
+++ b/drivers/net/cnxk/cn9k_rx_vec.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_vec_##name(       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx_vec_mseg.c b/drivers/net/cnxk/cn9k_rx_vec_mseg.c
index e46d8a4..938b1c0 100644
--- a/drivers/net/cnxk/cn9k_rx_vec_mseg.c
+++ b/drivers/net/cnxk/cn9k_rx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_vec_mseg_##name(  \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 5ae791f..cfdc493 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -130,6 +130,9 @@
 /* Subtype from inline outbound error event */
 #define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
 
+/* SPI is carried in the lower 20 bits of the tag */
+#define CNXK_ETHDEV_SPI_TAG_MASK 0xFFFFFUL
+
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
 	uint8_t rx_pause;
-- 
2.8.4


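As a minimal sketch of the inbound SA lookup that nix_rx_sec_mbuf_update()
performs above (the wrapper name inb_sa_from_tag() is hypothetical; the
macros and the roc_nix_inl_onf_ipsec_inb_sa() helper are the ones this
series introduces, and sa_base packs the SPI table width into its low
alignment bits):

	static inline struct roc_onf_ipsec_inb_sa *
	inb_sa_from_tag(uintptr_t sa_base, uint32_t cqe_tag)
	{
		/* Lower 20 bits of the CQE tag carry the SPI */
		uint32_t spi = cqe_tag & CNXK_ETHDEV_SPI_TAG_MASK;
		/* Low alignment bits of sa_base encode the SPI width */
		uint8_t sa_w = sa_base & (ROC_NIX_INL_SA_BASE_ALIGN - 1);
		uint32_t spi_mask = (1ULL << sa_w) - 1;

		sa_base &= ~(uintptr_t)(ROC_NIX_INL_SA_BASE_ALIGN - 1);
		/* SA table is indexed by the masked SPI */
		return roc_nix_inl_onf_ipsec_inb_sa(sa_base, spi & spi_mask);
	}
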
^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 18/27] net/cnxk: add cn9k Tx support for security offload
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (16 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 17/27] net/cnxk: add cn9k Rx support for security offload Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 19/27] net/cnxk: add cn10k Rx " Nithin Dabilpuram
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Add support to create and submit CPT instructions on the Tx path
when security offload is enabled.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
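A note on the fastpath template pattern touched throughout this patch:
each Tx offload flag contributes one dimension to a table of specialized
burst functions, and NIX_TX_OFFLOAD_SECURITY_F adds a seventh dimension.
A minimal, self-contained sketch of the same X-macro pattern, reduced to
two flags with purely illustrative names (not driver code):

	#include <stdint.h>
	#include <stdio.h>

	#define FLAG_CSUM (1 << 0)
	#define FLAG_SEC  (1 << 1)

	typedef uint16_t (*burst_fn)(void);

	/* One entry per flag combination, kept in a single list */
	#define FASTPATH_MODES                                         \
		M(no_offload, 0, 0)                                    \
		M(csum,       0, 1)                                    \
		M(sec,        1, 0)                                    \
		M(sec_csum,   1, 1)

	/* Instantiate one specialized function per mode */
	#define M(name, f1, f0)                                        \
		static uint16_t xmit_##name(void) { return (f1 << 1) | (f0); }
	FASTPATH_MODES
	#undef M

	/* File each function under its flag-bit coordinates */
	static const burst_fn tbl[2][2] = {
	#define M(name, f1, f0) [f1][f0] = xmit_##name,
		FASTPATH_MODES
	#undef M
	};

	int main(void)
	{
		unsigned int off = FLAG_SEC | FLAG_CSUM;
		/* Dispatch is pure array indexing on the offload bits */
		burst_fn fn = tbl[!!(off & FLAG_SEC)][!!(off & FLAG_CSUM)];

		printf("picked mode %u\n", fn()); /* prints "picked mode 3" */
		return 0;
	}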
 drivers/event/cnxk/cn9k_eventdev.c               |  27 +-
 drivers/event/cnxk/cn9k_worker.h                 | 164 +++++++++-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |   2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq.c          |   2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |   2 +-
 drivers/net/cnxk/cn9k_tx.c                       |  29 +-
 drivers/net/cnxk/cn9k_tx.h                       | 392 +++++++++++++++--------
 drivers/net/cnxk/cn9k_tx_mseg.c                  |   2 +-
 drivers/net/cnxk/cn9k_tx_vec.c                   |   2 +-
 drivers/net/cnxk/cn9k_tx_vec_mseg.c              |   2 +-
 11 files changed, 459 insertions(+), 167 deletions(-)

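As a worked example of the length math in cn9k_sso_hws_xmit_sec_one()
below (the SA parameters here are hypothetical): with roundup_len = 8,
roundup_byte = 16 (e.g. the AES-CBC block size) and partial_len = 30, a
124-byte packet carrying a 14-byte L2 header works out as

	rlen     = pkt_len - l2_len              = 124 - 14 = 110
	rlen     = (rlen + 8 + 15) & ~15UL       = 133 & ~15 = 128
	rlen    += partial_len                   = 128 + 30 = 158
	dlen_adj = rlen - pkt_len + l2_len       = 158 - 124 + 14 = 48

i.e. the send descriptor and SG size grow by 48 bytes to make room for
the ESP encapsulation that the CPT engine writes in place.
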
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index e91234e..0c7206c 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -19,7 +19,8 @@
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]
 
 #define CN9K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops)                            \
-	enq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
+	enq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]     \
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]          \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]    \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]    \
@@ -514,33 +515,33 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	/* Tx modes */
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
+		sso_hws_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
+		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_dual_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
+		sso_hws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
+		sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index f1d2e47..6b4837e 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -478,6 +478,146 @@ cn9k_sso_hws_prepare_pkt(const struct cn9k_eth_txq *txq, struct rte_mbuf *m,
 	cn9k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline void
+cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
+			  struct rte_mbuf *m, uint64_t *cmd,
+			  uint32_t flags)
+{
+	struct cn9k_outb_priv_data *outb_priv;
+	rte_iova_t io_addr = txq->cpt_io_addr;
+	uint64_t *lmt_addr = txq->lmt_addr;
+	struct cn9k_sec_sess_priv mdata;
+	struct nix_send_hdr_s *send_hdr;
+	uint64_t sa_base = txq->sa_base;
+	uint32_t pkt_len, dlen_adj, rlen;
+	uint64x2_t cmd01, cmd23;
+	uint64_t lmt_status, sa;
+	union nix_send_sg_s *sg;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t esn, *iv;
+	uint8_t l2_len;
+
+	mdata.u64 = *rte_security_dynfield(m);
+	send_hdr = (struct nix_send_hdr_s *)cmd;
+	if (flags & NIX_TX_NEED_EXT_HDR) {
+		sg = (union nix_send_sg_s *)&cmd[4];
+	} else {
+		sg = (union nix_send_sg_s *)&cmd[2];
+	}
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = cmd[1] & 0xFF;
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = *(uint64_t *)(sg + 1);
+	pkt_len = send_hdr->w0.total;
+
+	/* Calculate rlen */
+	rlen = pkt_len - l2_len;
+	rlen = (rlen + mdata.roundup_len) + (mdata.roundup_byte - 1);
+	rlen &= ~(uint64_t)(mdata.roundup_byte - 1);
+	rlen += mdata.partial_len;
+	dlen_adj = rlen - pkt_len + l2_len;
+
+	/* Update send descriptors. Security is single segment only */
+	send_hdr->w0.total = pkt_len + dlen_adj;
+	sg->seg1_size = pkt_len + dlen_adj;
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	roc_lmt_mov((void *)(nixtx + 16), cmd, cn9k_nix_tx_ext_subs(flags));
+
+	/* Load opcode and cptr that were prepared when pkt metadata was set */
+	pkt_len -= l2_len;
+	pkt_len += sizeof(struct roc_onf_ipsec_outb_hdr) +
+		    ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ;
+	sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+
+	sa = (uintptr_t)roc_nix_inl_onf_ipsec_outb_sa(sa_base, mdata.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | sa);
+	ucode_cmd[0] = (ROC_IE_ONF_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 |
+			0x40UL << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1 */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn9k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len - ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ -
+		sizeof(struct roc_onf_ipsec_outb_hdr);
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Update IV to zero and l2 sz */
+	*(uint16_t *)(dptr + sizeof(struct roc_onf_ipsec_outb_hdr)) =
+		rte_cpu_to_be_16(ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ);
+	iv = (uint64_t *)(dptr + 8);
+	iv[0] = 0;
+	iv[1] = 0;
+
+	/* Head wait if needed */
+	if (base)
+		roc_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);
+
+	/* ESN */
+	outb_priv = roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd((void *)sa);
+	esn = outb_priv->esn;
+	outb_priv->esn = esn + 1;
+
+	ucode_cmd[0] |= (esn >> 32) << 16;
+	esn = rte_cpu_to_be_32(esn & (BIT_ULL(32) - 1));
+
+	/* Update ESN, IPID and IV */
+	*(uint64_t *)dptr = esn << 32 | esn;
+
+	rte_io_wmb();
+	cn9k_sso_txq_fc_wait(txq);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(lmt_addr, cmd01);
+	vst1q_u64(lmt_addr + 2, cmd23);
+
+	roc_lmt_mov_seg(lmt_addr + 4, ucode_cmd, 2);
+
+	if (roc_lmt_submit_ldeor(io_addr) == 0) {
+		do {
+			vst1q_u64(lmt_addr, cmd01);
+			vst1q_u64(lmt_addr + 2, cmd23);
+			roc_lmt_mov_seg(lmt_addr + 4, ucode_cmd, 2);
+
+			lmt_status = roc_lmt_submit_ldeor(io_addr);
+		} while (lmt_status == 0);
+	}
+}
+#else
+
+static inline void
+cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
+			  struct rte_mbuf *m, uint64_t *cmd,
+			  uint32_t flags)
+{
+	RTE_SET_USED(txq);
+	RTE_SET_USED(base);
+	RTE_SET_USED(m);
+	RTE_SET_USED(cmd);
+	RTE_SET_USED(flags);
+}
+#endif
+
 static __rte_always_inline uint16_t
 cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 		      const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
@@ -494,11 +634,30 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
-	 * In case of fast free is not set, both cn9k_nix_prepare_mseg()
-	 * and cn9k_nix_xmit_prepare() has a barrier after refcnt update.
+	 * In case fast free is not set, both cn9k_nix_prepare_mseg()
+	 * and cn9k_nix_xmit_prepare() have a barrier after refcnt update.
 	 */
-	if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+	if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
+	    !(flags & NIX_TX_OFFLOAD_SECURITY_F))
 		rte_io_wmb();
 	txq = cn9k_sso_hws_xtract_meta(m, txq_data);
 	cn9k_sso_hws_prepare_pkt(txq, m, cmd, flags);
 
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		uint64_t ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+			uintptr_t ssow_base = base;
+
+			if (ev->sched_type)
+				ssow_base = 0;
+
+			cn9k_sso_hws_xmit_sec_one(txq, ssow_base, m, cmd,
+						  flags);
+			goto done;
+		}
+
+		if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+			rte_io_wmb();
+	}
+
 	if (flags & NIX_TX_MULTI_SEG_F) {
 		const uint16_t segdw = cn9k_nix_prepare_mseg(m, cmd, flags);
 		if (!CNXK_TT_FROM_EVENT(ev->event)) {
@@ -526,6 +685,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 		}
 	}
 
+done:
 	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
 		if (ref_cnt > 1)
 			return 1;
@@ -537,7 +697,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	return 1;
 }
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
 		void *port, struct rte_event ev[], uint16_t nb_events);        \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
index 92e2981..db045d0 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name(              \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
index dfb574c..95d711f 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name(          \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq.c b/drivers/event/cnxk/cn9k_worker_tx_enq.c
index 3df649c..026cef8 100644
--- a/drivers/event/cnxk/cn9k_worker_tx_enq.c
+++ b/drivers/event/cnxk/cn9k_worker_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
index 0efe291..97cd7c7 100644
--- a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 763f9a1..e5691a2 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(	       \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -23,12 +23,13 @@ NIX_TX_FASTPATH_MODES
 
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
-	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2])
+	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TS] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+	/* [SEC] [TS] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
 	eth_dev->tx_pkt_burst = tx_burst
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
@@ -42,33 +43,33 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_##name,
+	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index a27ff76..44273ec 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -1819,139 +1819,269 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 #define NOFF_F	     NIX_TX_OFFLOAD_MBUF_NOFF_F
 #define TSO_F	     NIX_TX_OFFLOAD_TSO_F
 #define TSP_F	     NIX_TX_OFFLOAD_TSTAMP_F
+#define T_SEC_F      NIX_TX_OFFLOAD_SECURITY_F
 
-/* [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
-#define NIX_TX_FASTPATH_MODES						       \
-T(no_offload,				0, 0, 0, 0, 0, 0,	4,	       \
-		NIX_TX_OFFLOAD_NONE)					       \
-T(l3l4csum,				0, 0, 0, 0, 0, 1,	4,	       \
-		L3L4CSUM_F)						       \
-T(ol3ol4csum,				0, 0, 0, 0, 1, 0,	4,	       \
-		OL3OL4CSUM_F)						       \
-T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 1, 1,	4,	       \
-		OL3OL4CSUM_F | L3L4CSUM_F)				       \
-T(vlan,					0, 0, 0, 1, 0, 0,	6,	       \
-		VLAN_F)							       \
-T(vlan_l3l4csum,			0, 0, 0, 1, 0, 1,	6,	       \
-		VLAN_F | L3L4CSUM_F)					       \
-T(vlan_ol3ol4csum,			0, 0, 0, 1, 1, 0,	6,	       \
-		VLAN_F | OL3OL4CSUM_F)					       \
-T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 1, 1,	6,	       \
-		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			       \
-T(noff,					0, 0, 1, 0, 0, 0,	4,	       \
-		NOFF_F)							       \
-T(noff_l3l4csum,			0, 0, 1, 0, 0, 1,	4,	       \
-		NOFF_F | L3L4CSUM_F)					       \
-T(noff_ol3ol4csum,			0, 0, 1, 0, 1, 0,	4,	       \
-		NOFF_F | OL3OL4CSUM_F)					       \
-T(noff_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1,	4,	       \
-		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			       \
-T(noff_vlan,				0, 0, 1, 1, 0, 0,	6,	       \
-		NOFF_F | VLAN_F)					       \
-T(noff_vlan_l3l4csum,			0, 0, 1, 1, 0, 1,	6,	       \
-		NOFF_F | VLAN_F | L3L4CSUM_F)				       \
-T(noff_vlan_ol3ol4csum,			0, 0, 1, 1, 1, 0,	6,	       \
-		NOFF_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1,	6,	       \
-		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		       \
-T(tso,					0, 1, 0, 0, 0, 0,	6,	       \
-		TSO_F)							       \
-T(tso_l3l4csum,				0, 1, 0, 0, 0, 1,	6,	       \
-		TSO_F | L3L4CSUM_F)					       \
-T(tso_ol3ol4csum,			0, 1, 0, 0, 1, 0,	6,	       \
-		TSO_F | OL3OL4CSUM_F)					       \
-T(tso_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1,	6,	       \
-		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			       \
-T(tso_vlan,				0, 1, 0, 1, 0, 0,	6,	       \
-		TSO_F | VLAN_F)						       \
-T(tso_vlan_l3l4csum,			0, 1, 0, 1, 0, 1,	6,	       \
-		TSO_F | VLAN_F | L3L4CSUM_F)				       \
-T(tso_vlan_ol3ol4csum,			0, 1, 0, 1, 1, 0,	6,	       \
-		TSO_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(tso_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 1, 1,	6,	       \
-		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(tso_noff,				0, 1, 1, 0, 0, 0,	6,	       \
-		TSO_F | NOFF_F)						       \
-T(tso_noff_l3l4csum,			0, 1, 1, 0, 0, 1,	6,	       \
-		TSO_F | NOFF_F | L3L4CSUM_F)				       \
-T(tso_noff_ol3ol4csum,			0, 1, 1, 0, 1, 0,	6,	       \
-		TSO_F | NOFF_F | OL3OL4CSUM_F)				       \
-T(tso_noff_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 1, 1,	6,	       \
-		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(tso_noff_vlan,			0, 1, 1, 1, 0, 0,	6,	       \
-		TSO_F | NOFF_F | VLAN_F)				       \
-T(tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 0, 1,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			       \
-T(tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 0,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	       \
-T(ts,					1, 0, 0, 0, 0, 0,	8,	       \
-		TSP_F)							       \
-T(ts_l3l4csum,				1, 0, 0, 0, 0, 1,	8,	       \
-		TSP_F | L3L4CSUM_F)					       \
-T(ts_ol3ol4csum,			1, 0, 0, 0, 1, 0,	8,	       \
-		TSP_F | OL3OL4CSUM_F)					       \
-T(ts_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1,	8,	       \
-		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			       \
-T(ts_vlan,				1, 0, 0, 1, 0, 0,	8,	       \
-		TSP_F | VLAN_F)						       \
-T(ts_vlan_l3l4csum,			1, 0, 0, 1, 0, 1,	8,	       \
-		TSP_F | VLAN_F | L3L4CSUM_F)				       \
-T(ts_vlan_ol3ol4csum,			1, 0, 0, 1, 1, 0,	8,	       \
-		TSP_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(ts_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 1, 1,	8,	       \
-		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(ts_noff,				1, 0, 1, 0, 0, 0,	8,	       \
-		TSP_F | NOFF_F)						       \
-T(ts_noff_l3l4csum,			1, 0, 1, 0, 0, 1,	8,	       \
-		TSP_F | NOFF_F | L3L4CSUM_F)				       \
-T(ts_noff_ol3ol4csum,			1, 0, 1, 0, 1, 0,	8,	       \
-		TSP_F | NOFF_F | OL3OL4CSUM_F)				       \
-T(ts_noff_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 1, 1,	8,	       \
-		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(ts_noff_vlan,				1, 0, 1, 1, 0, 0,	8,	       \
-		TSP_F | NOFF_F | VLAN_F)				       \
-T(ts_noff_vlan_l3l4csum,		1, 0, 1, 1, 0, 1,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			       \
-T(ts_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 0,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 1, 1,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	       \
-T(ts_tso,				1, 1, 0, 0, 0, 0,	8,	       \
-		TSP_F | TSO_F)						       \
-T(ts_tso_l3l4csum,			1, 1, 0, 0, 0, 1,	8,	       \
-		TSP_F | TSO_F | L3L4CSUM_F)				       \
-T(ts_tso_ol3ol4csum,			1, 1, 0, 0, 1, 0,	8,	       \
-		TSP_F | TSO_F | OL3OL4CSUM_F)				       \
-T(ts_tso_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 1, 1,	8,	       \
-		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		       \
-T(ts_tso_vlan,				1, 1, 0, 1, 0, 0,	8,	       \
-		TSP_F | TSO_F | VLAN_F)					       \
-T(ts_tso_vlan_l3l4csum,			1, 1, 0, 1, 0, 1,	8,	       \
-		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			       \
-T(ts_tso_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 0,	8,	       \
-		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1,	8,	       \
-		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	       \
-T(ts_tso_noff,				1, 1, 1, 0, 0, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F)					       \
-T(ts_tso_noff_l3l4csum,			1, 1, 1, 0, 0, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			       \
-T(ts_tso_noff_ol3ol4csum,		1, 1, 1, 0, 1, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			       \
-T(ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	       \
-T(ts_tso_noff_vlan,			1, 1, 1, 1, 0, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F)			       \
-T(ts_tso_noff_vlan_l3l4csum,		1, 1, 1, 1, 0, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		       \
-T(ts_tso_noff_vlan_ol3ol4csum,		1, 1, 1, 1, 1, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		       \
-T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 1, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+/* [T_SEC_F] [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+#define NIX_TX_FASTPATH_MODES						\
+T(no_offload,				0, 0, 0, 0, 0, 0, 0,	4,	\
+		NIX_TX_OFFLOAD_NONE)					\
+T(l3l4csum,				0, 0, 0, 0, 0, 0, 1,	4,	\
+		L3L4CSUM_F)						\
+T(ol3ol4csum,				0, 0, 0, 0, 0, 1, 0,	4,	\
+		OL3OL4CSUM_F)						\
+T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 0, 1, 1,	4,	\
+		OL3OL4CSUM_F | L3L4CSUM_F)				\
+T(vlan,					0, 0, 0, 0, 1, 0, 0,	6,	\
+		VLAN_F)							\
+T(vlan_l3l4csum,			0, 0, 0, 0, 1, 0, 1,	6,	\
+		VLAN_F | L3L4CSUM_F)					\
+T(vlan_ol3ol4csum,			0, 0, 0, 0, 1, 1, 0,	6,	\
+		VLAN_F | OL3OL4CSUM_F)					\
+T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 0, 1, 1, 1,	6,	\
+		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
+T(noff,					0, 0, 0, 1, 0, 0, 0,	4,	\
+		NOFF_F)							\
+T(noff_l3l4csum,			0, 0, 0, 1, 0, 0, 1,	4,	\
+		NOFF_F | L3L4CSUM_F)					\
+T(noff_ol3ol4csum,			0, 0, 0, 1, 0, 1, 0,	4,	\
+		NOFF_F | OL3OL4CSUM_F)					\
+T(noff_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 0, 1, 1,	4,	\
+		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
+T(noff_vlan,				0, 0, 0, 1, 1, 0, 0,	6,	\
+		NOFF_F | VLAN_F)					\
+T(noff_vlan_l3l4csum,			0, 0, 0, 1, 1, 0, 1,	6,	\
+		NOFF_F | VLAN_F | L3L4CSUM_F)				\
+T(noff_vlan_ol3ol4csum,			0, 0, 0, 1, 1, 1, 0,	6,	\
+		NOFF_F | VLAN_F | OL3OL4CSUM_F)				\
+T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 0, 1, 1, 1, 1,	6,	\
+		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(tso,					0, 0, 1, 0, 0, 0, 0,	6,	\
+		TSO_F)							\
+T(tso_l3l4csum,				0, 0, 1, 0, 0, 0, 1,	6,	\
+		TSO_F | L3L4CSUM_F)					\
+T(tso_ol3ol4csum,			0, 0, 1, 0, 0, 1, 0,	6,	\
+		TSO_F | OL3OL4CSUM_F)					\
+T(tso_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 0, 1, 1,	6,	\
+		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(tso_vlan,				0, 0, 1, 0, 1, 0, 0,	6,	\
+		TSO_F | VLAN_F)						\
+T(tso_vlan_l3l4csum,			0, 0, 1, 0, 1, 0, 1,	6,	\
+		TSO_F | VLAN_F | L3L4CSUM_F)				\
+T(tso_vlan_ol3ol4csum,			0, 0, 1, 0, 1, 1, 0,	6,	\
+		TSO_F | VLAN_F | OL3OL4CSUM_F)				\
+T(tso_vlan_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1, 1,	6,	\
+		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(tso_noff,				0, 0, 1, 1, 0, 0, 0,	6,	\
+		TSO_F | NOFF_F)						\
+T(tso_noff_l3l4csum,			0, 0, 1, 1, 0, 0, 1,	6,	\
+		TSO_F | NOFF_F | L3L4CSUM_F)				\
+T(tso_noff_ol3ol4csum,			0, 0, 1, 1, 0, 1, 0,	6,	\
+		TSO_F | NOFF_F | OL3OL4CSUM_F)				\
+T(tso_noff_ol3ol4csum_l3l4csum,		0, 0, 1, 1, 0, 1, 1,	6,	\
+		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(tso_noff_vlan,			0, 0, 1, 1, 1, 0, 0,	6,	\
+		TSO_F | NOFF_F | VLAN_F)				\
+T(tso_noff_vlan_l3l4csum,		0, 0, 1, 1, 1, 0, 1,	6,	\
+		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(tso_noff_vlan_ol3ol4csum,		0, 0, 1, 1, 1, 1, 0,	6,	\
+		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
+T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1, 1,	6,	\
+		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(ts,					0, 1, 0, 0, 0, 0, 0,	8,	\
+		TSP_F)							\
+T(ts_l3l4csum,				0, 1, 0, 0, 0, 0, 1,	8,	\
+		TSP_F | L3L4CSUM_F)					\
+T(ts_ol3ol4csum,			0, 1, 0, 0, 0, 1, 0,	8,	\
+		TSP_F | OL3OL4CSUM_F)					\
+T(ts_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 0, 1, 1,	8,	\
+		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(ts_vlan,				0, 1, 0, 0, 1, 0, 0,	8,	\
+		TSP_F | VLAN_F)						\
+T(ts_vlan_l3l4csum,			0, 1, 0, 0, 1, 0, 1,	8,	\
+		TSP_F | VLAN_F | L3L4CSUM_F)				\
+T(ts_vlan_ol3ol4csum,			0, 1, 0, 0, 1, 1, 0,	8,	\
+		TSP_F | VLAN_F | OL3OL4CSUM_F)				\
+T(ts_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1, 1,	8,	\
+		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(ts_noff,				0, 1, 0, 1, 0, 0, 0,	8,	\
+		TSP_F | NOFF_F)						\
+T(ts_noff_l3l4csum,			0, 1, 0, 1, 0, 0, 1,	8,	\
+		TSP_F | NOFF_F | L3L4CSUM_F)				\
+T(ts_noff_ol3ol4csum,			0, 1, 0, 1, 0, 1, 0,	8,	\
+		TSP_F | NOFF_F | OL3OL4CSUM_F)				\
+T(ts_noff_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 0, 1, 1,	8,	\
+		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(ts_noff_vlan,				0, 1, 0, 1, 1, 0, 0,	8,	\
+		TSP_F | NOFF_F | VLAN_F)				\
+T(ts_noff_vlan_l3l4csum,		0, 1, 0, 1, 1, 0, 1,	8,	\
+		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(ts_noff_vlan_ol3ol4csum,		0, 1, 0, 1, 1, 1, 0,	8,	\
+		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
+T(ts_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 0, 1, 1, 1, 1,	8,	\
+		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(ts_tso,				0, 1, 1, 0, 0, 0, 0,	8,	\
+		TSP_F | TSO_F)						\
+T(ts_tso_l3l4csum,			0, 1, 1, 0, 0, 0, 1,	8,	\
+		TSP_F | TSO_F | L3L4CSUM_F)				\
+T(ts_tso_ol3ol4csum,			0, 1, 1, 0, 0, 1, 0,	8,	\
+		TSP_F | TSO_F | OL3OL4CSUM_F)				\
+T(ts_tso_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 0, 1, 1,	8,	\
+		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(ts_tso_vlan,				0, 1, 1, 0, 1, 0, 0,	8,	\
+		TSP_F | TSO_F | VLAN_F)					\
+T(ts_tso_vlan_l3l4csum,			0, 1, 1, 0, 1, 0, 1,	8,	\
+		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(ts_tso_vlan_ol3ol4csum,		0, 1, 1, 0, 1, 1, 0,	8,	\
+		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			\
+T(ts_tso_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 0, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(ts_tso_noff,				0, 1, 1, 1, 0, 0, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F)					\
+T(ts_tso_noff_l3l4csum,			0, 1, 1, 1, 0, 0, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(ts_tso_noff_ol3ol4csum,		0, 1, 1, 1, 0, 1, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			\
+T(ts_tso_noff_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 0, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(ts_tso_noff_vlan,			0, 1, 1, 1, 1, 0, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F)			\
+T(ts_tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 1, 0, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(ts_tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 1, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec,					1, 0, 0, 0, 0, 0, 0,	4,	\
+		T_SEC_F)						\
+T(sec_l3l4csum,				1, 0, 0, 0, 0, 0, 1,	4,	\
+		T_SEC_F | L3L4CSUM_F)					\
+T(sec_ol3ol4csum,			1, 0, 0, 0, 0, 1, 0,	4,	\
+		T_SEC_F | OL3OL4CSUM_F)					\
+T(sec_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 0, 1, 1,	4,	\
+		T_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(sec_vlan,				1, 0, 0, 0, 1, 0, 0,	6,	\
+		T_SEC_F | VLAN_F)					\
+T(sec_vlan_l3l4csum,			1, 0, 0, 0, 1, 0, 1,	6,	\
+		T_SEC_F | VLAN_F | L3L4CSUM_F)				\
+T(sec_vlan_ol3ol4csum,			1, 0, 0, 0, 1, 1, 0,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F)			\
+T(sec_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1, 1,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff,				1, 0, 0, 1, 0, 0, 0,	4,	\
+		T_SEC_F | NOFF_F)					\
+T(sec_noff_l3l4csum,			1, 0, 0, 1, 0, 0, 1,	4,	\
+		T_SEC_F | NOFF_F | L3L4CSUM_F)				\
+T(sec_noff_ol3ol4csum,			1, 0, 0, 1, 0, 1, 0,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F)			\
+T(sec_noff_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 0, 1, 1,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff_vlan,			1, 0, 0, 1, 1, 0, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F)				\
+T(sec_noff_vlan_l3l4csum,		1, 0, 0, 1, 1, 0, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_noff_vlan_ol3ol4csum,		1, 0, 0, 1, 1, 1, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 0, 1, 1, 1, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso,				1, 0, 1, 0, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F)					\
+T(sec_tso_l3l4csum,			1, 0, 1, 0, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | L3L4CSUM_F)				\
+T(sec_tso_ol3ol4csum,			1, 0, 1, 0, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F)				\
+T(sec_tso_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_tso_vlan,				1, 0, 1, 0, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F)				\
+T(sec_tso_vlan_l3l4csum,		1, 0, 1, 0, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_tso_vlan_ol3ol4csum,		1, 0, 1, 0, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_tso_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 0, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff,				1, 0, 1, 1, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F)				\
+T(sec_tso_noff_l3l4csum,		1, 0, 1, 1, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_tso_noff_ol3ol4csum,		1, 0, 1, 1, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_tso_noff_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff_vlan,			1, 0, 1, 1, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F)			\
+T(sec_tso_noff_vlan_l3l4csum,		1, 0, 1, 1, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_tso_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts,				1, 1, 0, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F)					\
+T(sec_ts_l3l4csum,			1, 1, 0, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | L3L4CSUM_F)				\
+T(sec_ts_ol3ol4csum,			1, 1, 0, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F)				\
+T(sec_ts_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_ts_vlan,				1, 1, 0, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F)				\
+T(sec_ts_vlan_l3l4csum,			1, 1, 0, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_ts_vlan_ol3ol4csum,		1, 1, 0, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_ts_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff,				1, 1, 0, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F)				\
+T(sec_ts_noff_l3l4csum,			1, 1, 0, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_ts_noff_ol3ol4csum,		1, 1, 0, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_ts_noff_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff_vlan,			1, 1, 0, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F)			\
+T(sec_ts_noff_vlan_l3l4csum,		1, 1, 0, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_noff_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso,				1, 1, 1, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F)				\
+T(sec_ts_tso_l3l4csum,			1, 1, 1, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | L3L4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum,		1, 1, 1, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_tso_vlan,			1, 1, 1, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F)			\
+T(sec_ts_tso_vlan_l3l4csum,		1, 1, 1, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_tso_vlan_ol3ol4csum,		1, 1, 1, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff,			1, 1, 1, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F)			\
+T(sec_ts_tso_noff_l3l4csum,		1, 1, 1, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)		\
+T(sec_ts_tso_noff_ol3ol4csum,		1, 1, 1, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff_vlan,			1, 1, 1, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F)		\
+T(sec_ts_tso_noff_vlan_l3l4csum,	1, 1, 1, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)	\
+T(sec_ts_tso_noff_vlan_ol3ol4csum,	1, 1, 1, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(sec_ts_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 1, 1, 8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
+		L3L4CSUM_F)
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(           \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn9k_tx_mseg.c b/drivers/net/cnxk/cn9k_tx_mseg.c
index f3c427c..37cba78 100644
--- a/drivers/net/cnxk/cn9k_tx_mseg.c
+++ b/drivers/net/cnxk/cn9k_tx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn9k_nix_xmit_pkts_mseg_##name(void *tx_queue,                 \
 					       struct rte_mbuf **tx_pkts,      \
diff --git a/drivers/net/cnxk/cn9k_tx_vec.c b/drivers/net/cnxk/cn9k_tx_vec.c
index 56a3e25..b424f95 100644
--- a/drivers/net/cnxk/cn9k_tx_vec.c
+++ b/drivers/net/cnxk/cn9k_tx_vec.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn9k_nix_xmit_pkts_vec_##name(void *tx_queue,                  \
 					      struct rte_mbuf **tx_pkts,       \
diff --git a/drivers/net/cnxk/cn9k_tx_vec_mseg.c b/drivers/net/cnxk/cn9k_tx_vec_mseg.c
index 0256efd..5fdf0a9 100644
--- a/drivers/net/cnxk/cn9k_tx_vec_mseg.c
+++ b/drivers/net/cnxk/cn9k_tx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_vec_mseg_##name(  \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 19/27] net/cnxk: add cn10k Rx support for security offload
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (17 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 18/27] net/cnxk: add cn9k Tx " Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 20/27] net/cnxk: add cn10k Tx " Nithin Dabilpuram
                   ` (10 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Add support to receive CPT-processed packets on Rx via the second
pass; a condensed sketch of the idea follows the diffstat below.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn10k_eventdev.c         |  80 ++--
 drivers/event/cnxk/cn10k_worker.h           |  73 +++-
 drivers/event/cnxk/cn10k_worker_deq.c       |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_burst.c |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_ca.c    |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_tmo.c   |   2 +-
 drivers/net/cnxk/cn10k_ethdev.h             |   4 +
 drivers/net/cnxk/cn10k_rx.c                 |  31 +-
 drivers/net/cnxk/cn10k_rx.h                 | 648 +++++++++++++++++++++++-----
 drivers/net/cnxk/cn10k_rx_mseg.c            |   2 +-
 drivers/net/cnxk/cn10k_rx_vec.c             |   4 +-
 drivers/net/cnxk/cn10k_rx_vec_mseg.c        |   4 +-
 drivers/net/cnxk/cn10k_tx.h                 |   3 -
 13 files changed, 688 insertions(+), 169 deletions(-)
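
The core idea of this patch: inbound packets that hit an inline IPsec rule
take a second pass through the CPT, and what lands on the Rx CQ is a "meta"
buffer whose payload begins with a CPT parse header referencing the inner
(decrypted) mbuf. The fast path detects this via bit 11 of CQ word 1, hands
the application the inner mbuf, and queues the meta buffer for a batched
NPA free. A condensed, self-contained sketch of that control flow (the
struct layouts here are toy stand-ins, not the real CQE or parse-header
formats):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy stand-ins for the CQE, the meta buffer and the inner mbuf
     * recovered from the CPT parse header (hypothetical layout).
     */
    struct mbuf { const char *name; };
    struct cqe  { uint64_t w1; struct mbuf *meta; struct mbuf *inner; };

    #define CQ_W1_SEC_BIT (1ULL << 11) /* pkt went through CPT second pass */

    /* Same idea as nix_sec_meta_to_mbuf_sc(): substitute the inner mbuf
     * for security traffic and queue the meta buffer for a batched free.
     */
    static struct mbuf *
    sec_meta_to_mbuf(const struct cqe *cq, struct mbuf **free_list,
                     uint8_t *loff)
    {
        if (cq->w1 & CQ_W1_SEC_BIT) {
            free_list[(*loff)++] = cq->meta; /* flushed in one LMT op later */
            return cq->inner;
        }
        return cq->meta; /* plain packet: the CQE's buffer is the packet */
    }

    int main(void)
    {
        struct mbuf meta = { "meta" }, inner = { "inner" };
        struct mbuf *free_list[15];
        uint8_t loff = 0;
        struct cqe cq = { CQ_W1_SEC_BIT, &meta, &inner };

        printf("rx -> %s, %u meta queued for free\n",
               sec_meta_to_mbuf(&cq, free_list, &loff)->name,
               (unsigned)loff);
        return 0;
    }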

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index bfb6f1a..2f2e7f8 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -7,7 +7,8 @@
 #include "cnxk_worker.h"
 
 #define CN10K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops)                           \
-	deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]   \
+	deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]     \
+			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]   \
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]  \
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]     \
@@ -287,88 +288,91 @@ static void
 cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
+	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                            \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
+	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name,
+	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_##name,
+	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
+	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name,
+		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_burst_##name,
+		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
@@ -384,7 +388,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	const event_tx_adapter_enqueue
 		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                            \
 	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
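
Whether the new security dimension in these tables is ever selected is
decided by the application's ethdev configuration: requesting the security
Rx offload leads the PMD to set NIX_RX_OFFLOAD_SECURITY_F in its offload
flags, which then becomes the leading index above. The full inline IPsec
setup additionally needs an rte_security session, omitted here. A minimal
sketch of just the ethdev side, with placeholder port and queue counts:

    #include <rte_ethdev.h>

    /* Request inline IPsec processing on Rx; error handling elided and
     * the single rx/tx queue count is a placeholder.
     */
    static int
    setup_inline_ipsec_rx(uint16_t port_id)
    {
        struct rte_eth_conf conf = {0};

        /* This bit is what ultimately flips NIX_RX_OFFLOAD_SECURITY_F
         * and selects the security-enabled Rx/dequeue variants.
         */
        conf.rxmode.offloads |= DEV_RX_OFFLOAD_SECURITY;

        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }
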
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index e5ed043..b79bd90 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -106,12 +106,17 @@ cn10k_wqe_to_mbuf(uint64_t wqe, const uint64_t mbuf, uint8_t port_id,
 
 static __rte_always_inline void
 cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
-		   void *lookup_mem, void *tstamp)
+		   void *lookup_mem, void *tstamp, uintptr_t lbase)
 {
 	uint64_t mbuf_init = 0x100010000ULL | RTE_PKTMBUF_HEADROOM |
 			     (flags & NIX_RX_OFFLOAD_TSTAMP_F ? 8 : 0);
 	struct rte_event_vector *vec;
+	uint64_t aura_handle, laddr;
 	uint16_t nb_mbufs, non_vec;
+	uint16_t lmt_id, d_off;
+	struct rte_mbuf *mbuf;
+	uint8_t loff = 0;
+	uint64_t sa_base;
 	uint64_t **wqe;
 
 	mbuf_init |= ((uint64_t)port_id) << 48;
@@ -121,17 +126,41 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
 	nb_mbufs = RTE_ALIGN_FLOOR(vec->nb_elem, NIX_DESCS_PER_LOOP);
 	nb_mbufs = cn10k_nix_recv_pkts_vector(&mbuf_init, vec->mbufs, nb_mbufs,
 					      flags | NIX_RX_VWQE_F, lookup_mem,
-					      tstamp);
+					      tstamp, lbase);
 	wqe += nb_mbufs;
 	non_vec = vec->nb_elem - nb_mbufs;
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && non_vec) {
+		mbuf = (struct rte_mbuf *)((uintptr_t)wqe[0] -
+					   sizeof(struct rte_mbuf));
+		/* Pick the first mbuf's aura handle, assuming all
+		 * mbufs in the vector come from the same RQ.
+		 */
+		aura_handle = mbuf->pool->pool_id;
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+		d_off = ((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+		d_off += (mbuf_init & 0xFFFF);
+		sa_base = cnxk_nix_sa_base_get(mbuf_init >> 48, lookup_mem);
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	}
+
 	while (non_vec) {
 		struct nix_cqe_hdr_s *cqe = (struct nix_cqe_hdr_s *)wqe[0];
-		struct rte_mbuf *mbuf;
 		uint64_t tstamp_ptr;
 
 		mbuf = (struct rte_mbuf *)((char *)cqe -
 					   sizeof(struct rte_mbuf));
+
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cqe + 1);
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, sa_base, laddr,
+						       &loff, mbuf, d_off);
+		}
+
 		cn10k_nix_cqe_to_mbuf(cqe, cqe->tag, mbuf, lookup_mem,
 				      mbuf_init, flags);
 		/* Extracting tstamp, if PTP enabled*/
@@ -145,6 +174,12 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
 		non_vec--;
 		wqe++;
 	}
+
+	/* Free remaining meta buffers if any */
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id, loff, aura_handle);
+		plt_io_wmb();
+	}
 }
 
 static __rte_always_inline uint16_t
@@ -188,6 +223,34 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev,
 			   RTE_EVENT_TYPE_ETHDEV) {
 			uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
 
+			if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+				struct rte_mbuf *m;
+				uintptr_t sa_base;
+				uint64_t iova = 0;
+				uint8_t loff = 0;
+				uint16_t d_off;
+				uint64_t cq_w1;
+
+				m = (struct rte_mbuf *)mbuf;
+				d_off = (uintptr_t)(m->buf_addr) - (uintptr_t)m;
+				d_off += RTE_PKTMBUF_HEADROOM;
+
+				cq_w1 = *(uint64_t *)(gw.u64[1] + 8);
+
+				sa_base = cnxk_nix_sa_base_get(port,
+							       lookup_mem);
+				sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+
+				mbuf = (uint64_t)nix_sec_meta_to_mbuf_sc(cq_w1,
+						sa_base, (uintptr_t)&iova,
+						&loff, (struct rte_mbuf *)mbuf,
+						d_off);
+				if (loff)
+					roc_npa_aura_op_free(m->pool->pool_id,
+							     0, iova);
+
+			}
+
 			gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
 			cn10k_wqe_to_mbuf(gw.u64[1], mbuf, port,
 					  gw.u64[0] & 0xFFFFF, flags,
@@ -212,7 +275,7 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev,
 				   ((uint64_t)port << 32);
 			*(uint64_t *)gw.u64[1] = (uint64_t)vwqe_hdr;
 			cn10k_process_vwqe(gw.u64[1], port, flags, lookup_mem,
-					   ws->tstamp);
+					   ws->tstamp, ws->lmt_base);
 		}
 	}
 
@@ -290,7 +353,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
 uint16_t __rte_hot cn10k_sso_hws_ca_enq(void *port, struct rte_event ev[],
 					uint16_t nb_events);
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_##name(                           \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name(                     \
diff --git a/drivers/event/cnxk/cn10k_worker_deq.c b/drivers/event/cnxk/cn10k_worker_deq.c
index 36ec454..6083f69 100644
--- a/drivers/event/cnxk/cn10k_worker_deq.c
+++ b/drivers/event/cnxk/cn10k_worker_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_##name(                           \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_burst.c b/drivers/event/cnxk/cn10k_worker_deq_burst.c
index 29ecc55..8539d5d 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_burst.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name(                     \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_ca.c b/drivers/event/cnxk/cn10k_worker_deq_ca.c
index 508d30f..0d10fc8 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_ca.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_ca_##name(                        \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_tmo.c b/drivers/event/cnxk/cn10k_worker_deq_tmo.c
index c8524a2..537ae37 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_tmo.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_hot cn10k_sso_hws_deq_tmo_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index a888364..200cd93 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -81,4 +81,8 @@ void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 /* Security context setup */
 void cn10k_eth_sec_ops_override(void);
 
+#define LMT_OFF(lmt_addr, lmt_num, offset)                                     \
+	(void *)((uintptr_t)(lmt_addr) +                                       \
+		 ((uint64_t)(lmt_num) << ROC_LMT_LINE_SIZE_LOG2) + (offset))
+
 #endif /* __CN10K_ETHDEV_H__ */
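
LMT_OFF() resolves the address of byte `offset' within LMT line `lmt_num'
of the per-core LMT region; the Rx security path uses it to restart the
pointer area at byte 8 of the next line once the current one fills (the
free path stores the aura handle word at byte 0). A small self-checking
sketch of the arithmetic, assuming the usual cn10k 128-byte LMT line,
i.e. ROC_LMT_LINE_SIZE_LOG2 == 7:

    #include <assert.h>
    #include <stdint.h>

    /* Same shape as LMT_OFF() above; the 128-byte line size is an
     * assumption taken from the cn10k LMT scheme.
     */
    #define LINE_SIZE_LOG2 7
    #define LMT_OFF(base, line, off) \
        ((uintptr_t)(base) + ((uint64_t)(line) << LINE_SIZE_LOG2) + (off))

    int main(void)
    {
        uintptr_t lbase = 0x10000;

        /* The first free pointer lives 8B into a line, which is how
         * the Rx loop reseeds laddr after a flush.
         */
        assert(LMT_OFF(lbase, 0, 8) == 0x10008);
        assert(LMT_OFF(lbase, 1, 8) == 0x10088);
        return 0;
    }
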
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 69e767a..d6af54b 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_##name(	       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -17,12 +17,13 @@ NIX_RX_FASTPATH_MODES
 
 static inline void
 pick_rx_func(struct rte_eth_dev *eth_dev,
-	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
+	/* [SEC] [VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
 	eth_dev->rx_pkt_burst = rx_burst
+		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
@@ -38,33 +39,33 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_##name,
+	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                            \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
@@ -73,7 +74,7 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 	/* Copy multi seg version with no offload for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
+			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index d27a231..fcc451a 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -65,6 +65,130 @@ nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
 	return (struct rte_mbuf *)(buff - data_off);
 }
 
+static __rte_always_inline void
+nix_sec_flush_meta(uintptr_t laddr, uint16_t lmt_id, uint8_t loff,
+		   uintptr_t aura_handle)
+{
+	uint64_t pa;
+
+	/* laddr points to the first pointer; step back to the aura handle word */
+	laddr -= 8;
+
+	/* Trigger free either on lmtline full or different aura handle */
+	pa = roc_npa_aura_handle_to_base(aura_handle) + NPA_LF_AURA_BATCH_FREE0;
+
+	/* Update aura handle */
+	*(uint64_t *)laddr = (((uint64_t)(loff & 0x1) << 32) |
+			      roc_npa_aura_handle_to_aura(aura_handle));
+
+	pa |= ((loff >> 1) << 4);
+	roc_lmt_submit_steorl(lmt_id, pa);
+}
+
+static __rte_always_inline struct rte_mbuf *
+nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, const uint64_t sa_base, uintptr_t laddr,
+			uint8_t *loff, struct rte_mbuf *mbuf, uint16_t data_off)
+{
+	const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off);
+	const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
+	struct cn10k_inb_priv_data *inb_priv;
+	struct rte_mbuf *inner;
+	uint32_t sa_idx;
+	void *inb_sa;
+	uint64_t w0;
+
+	if (cq_w1 & BIT(11)) {
+		inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
+					    sizeof(struct rte_mbuf));
+
+		/* Get SPI from CPT_PARSE_S's cookie (already swapped) */
+		w0 = hdr->w0.u64;
+		sa_idx = w0 >> 32;
+
+		inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+
+		/* Update dynamic field with userdata */
+		*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
+
+		/* Stash the inner L2 header length in pkt_len for now; the
+		 * full length is computed later by adding the CPT result's rlen.
+		 */
+		inner->pkt_len = (hdr->w2.il3_off -
+				  sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7));
+
+		/* Store meta in lmtline to free later.
+		 * Assume all metas come from the same aura.
+		 */
+		*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
+		*loff = *loff + 1;
+
+		return inner;
+	}
+	return mbuf;
+}
+
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline struct rte_mbuf *
+nix_sec_meta_to_mbuf(uint64_t cq_w1, uintptr_t sa_base, uintptr_t laddr,
+		     uint8_t *loff, struct rte_mbuf *mbuf, uint16_t data_off,
+		     uint8x16_t *rx_desc_field1, uint64_t *ol_flags)
+{
+	const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off);
+	const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
+	struct cn10k_inb_priv_data *inb_priv;
+	struct rte_mbuf *inner;
+	uint64_t *sg, res_w1;
+	uint32_t sa_idx;
+	void *inb_sa;
+	uint16_t len;
+	uint64_t w0;
+
+	if (cq_w1 & BIT(11)) {
+		inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
+					    sizeof(struct rte_mbuf));
+		/* Get SPI from CPT_PARSE_S's cookie (already swapped) */
+		w0 = hdr->w0.u64;
+		sa_idx = w0 >> 32;
+
+		inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+
+		/* Update dynamic field with userdata */
+		*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
+
+		/* CPT result (struct cpt_cn10k_res_s) is located
+		 * after the first IOVA in meta
+		 */
+		sg = (uint64_t *)(inner + 1);
+		res_w1 = sg[10];
+
+		/* Clear checksum flags and update security flag */
+		*ol_flags &= ~(PKT_RX_L4_CKSUM_MASK | PKT_RX_IP_CKSUM_MASK);
+		*ol_flags |= (((res_w1 & 0xFF) == CPT_COMP_WARN) ?
+			      PKT_RX_SEC_OFFLOAD :
+			      (PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED));
+		/* Calculate inner packet length */
+		len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
+			sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
+		/* Update pkt_len and data_len */
+		*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 2);
+		*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 4);
+
+		/* Store meta in lmtline to free later.
+		 * Assume all metas come from the same aura.
+		 */
+		*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
+		*loff = *loff + 1;
+
+		/* Return inner mbuf */
+		return inner;
+	}
+
+	/* Return same mbuf as it is not a decrypted pkt */
+	return mbuf;
+}
+#endif
+
 static __rte_always_inline uint32_t
 nix_ptype_get(const void *const lookup_mem, const uint64_t in)
 {
@@ -177,8 +301,8 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
-	const uint16_t len = rx->pkt_lenm1 + 1;
 	const uint64_t w1 = *(const uint64_t *)rx;
+	uint16_t len = rx->pkt_lenm1 + 1;
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
@@ -194,8 +318,30 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		ol_flags |= PKT_RX_RSS_HASH;
 	}
 
-	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
-		ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+	/* Process Security packets */
+	if (flag & NIX_RX_OFFLOAD_SECURITY_F) {
+		if (w1 & BIT(11)) {
+			/* CPT result (struct cpt_cn10k_res_s) is located
+			 * after the first IOVA in meta
+			 */
+			const uint64_t *sg = (const uint64_t *)(mbuf + 1);
+			const uint64_t res_w1 = sg[10];
+			const uint16_t uc_cc = res_w1 & 0xFF;
+
+			/* Rlen */
+			len = ((res_w1 >> 16) & 0xFFFF) + mbuf->pkt_len;
+			ol_flags |= ((uc_cc == CPT_COMP_WARN) ?
+						   PKT_RX_SEC_OFFLOAD :
+						   (PKT_RX_SEC_OFFLOAD |
+					      PKT_RX_SEC_OFFLOAD_FAILED));
+		} else {
+			if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+				ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+		}
+	} else {
+		if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+			ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+	}
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
@@ -263,13 +409,28 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	const uintptr_t desc = rxq->desc;
 	const uint64_t wdata = rxq->wdata;
 	const uint32_t qmask = rxq->qmask;
+	uint64_t lbase = rxq->lmt_base;
 	uint16_t packets = 0, nb_pkts;
+	uint8_t loff = 0, lnum = 0;
 	uint32_t head = rxq->head;
 	struct nix_cqe_hdr_s *cq;
 	struct rte_mbuf *mbuf;
+	uint64_t aura_handle;
+	uint64_t sa_base;
+	uint16_t lmt_id;
+	uint64_t laddr;
 
 	nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		aura_handle = rxq->aura_handle;
+		sa_base = rxq->sa_base;
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+	}
+
 	while (packets < nb_pkts) {
 		/* Prefetch N desc ahead */
 		rte_prefetch_non_temporal(
@@ -278,6 +439,14 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 
 		mbuf = nix_get_mbuf_from_cqe(cq, data_off);
 
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cq + 1);
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, sa_base, laddr,
+						       &loff, mbuf, data_off);
+		}
+
 		cn10k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
 				      flags);
 		cnxk_nix_mbuf_to_tstamp(mbuf, rxq->tstamp,
@@ -289,6 +458,20 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 		roc_prefetch_store_keep(mbuf);
 		head++;
 		head &= qmask;
+
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Flush when we don't have space for one more meta */
+			if ((15 - loff) < 1) {
+				nix_sec_flush_meta(laddr, lmt_id + lnum, loff,
+						   aura_handle);
+				lnum++;
+				lnum &= BIT_ULL(ROC_LMT_LINES_PER_CORE_LOG2) -
+					1;
+				/* First pointer starts at 8B offset */
+				laddr = (uintptr_t)LMT_OFF(lbase, lnum, 8);
+				loff = 0;
+			}
+		}
 	}
 
 	rxq->head = head;
@@ -297,6 +480,12 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	/* Free all the CQs that we've processed */
 	plt_write64((wdata | nb_pkts), rxq->cq_door);
 
+	/* Free remaining meta buffers if any */
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id + lnum, loff, aura_handle);
+		plt_io_wmb();
+	}
+
 	return nb_pkts;
 }
 
@@ -327,7 +516,8 @@ nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 static __rte_always_inline uint16_t
 cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			   const uint16_t flags, void *lookup_mem,
-			   struct cnxk_timesync_info *tstamp)
+			   struct cnxk_timesync_info *tstamp,
+			   uintptr_t lmt_base)
 {
 	struct cn10k_eth_rxq *rxq = args;
 	const uint64_t mbuf_initializer = (flags & NIX_RX_VWQE_F) ?
@@ -346,9 +536,13 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 	uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
 	uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
 	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint64_t aura_handle, lbase, laddr;
+	uint8_t loff = 0, lnum = 0;
 	uint8x16_t f0, f1, f2, f3;
+	uint16_t lmt_id, d_off;
 	uint16_t packets = 0;
 	uint16_t pkts_left;
+	uintptr_t sa_base;
 	uint32_t head;
 	uintptr_t cq0;
 
@@ -366,6 +560,38 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		RTE_SET_USED(head);
 	}
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		if (flags & NIX_RX_VWQE_F) {
+			uint16_t port;
+
+			mbuf0 = (struct rte_mbuf *)((uintptr_t)mbufs[0] -
+						    sizeof(struct rte_mbuf));
+			/* Pick the first mbuf's aura handle, assuming all
+			 * mbufs in the vector come from the same RQ.
+			 */
+			aura_handle = mbuf0->pool->pool_id;
+			/* Calculate offset from mbuf to actual data area */
+			d_off = ((uintptr_t)mbuf0->buf_addr - (uintptr_t)mbuf0);
+			d_off += (mbuf_initializer & 0xFFFF);
+
+			/* Get SA Base from lookup tbl using port_id */
+			port = mbuf_initializer >> 48;
+			sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+
+			lbase = lmt_base;
+		} else {
+			aura_handle = rxq->aura_handle;
+			d_off = rxq->data_off;
+			sa_base = rxq->sa_base;
+			lbase = rxq->lmt_base;
+		}
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		lnum = 0;
+		laddr = lbase;
+		laddr += 8;
+	}
+
 	while (packets < pkts) {
 		if (!(flags & NIX_RX_VWQE_F)) {
 			/* Exit loop if head is about to wrap and become
@@ -428,6 +654,14 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
 		f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
 
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Prefetch probable CPT parse header area */
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf0, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf1, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf2, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf3, d_off));
+		}
+
 		/* Load CQE word0 and word 1 */
 		const uint64_t cq0_w0 = *CQE_PTR_OFF(cq0, 0, 0, flags);
 		const uint64_t cq0_w1 = *CQE_PTR_OFF(cq0, 0, 8, flags);
@@ -474,6 +708,30 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
 		}
 
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Checksum ol_flags will be cleared if mbuf is meta */
+			mbuf0 = nix_sec_meta_to_mbuf(cq0_w1, sa_base, laddr,
+						     &loff, mbuf0, d_off, &f0,
+						     &ol_flags0);
+			mbuf01 = vsetq_lane_u64((uint64_t)mbuf0, mbuf01, 0);
+
+			mbuf1 = nix_sec_meta_to_mbuf(cq1_w1, sa_base, laddr,
+						     &loff, mbuf1, d_off, &f1,
+						     &ol_flags1);
+			mbuf01 = vsetq_lane_u64((uint64_t)mbuf1, mbuf01, 1);
+
+			mbuf2 = nix_sec_meta_to_mbuf(cq2_w1, sa_base, laddr,
+						     &loff, mbuf2, d_off, &f2,
+						     &ol_flags2);
+			mbuf23 = vsetq_lane_u64((uint64_t)mbuf2, mbuf23, 0);
+
+			mbuf3 = nix_sec_meta_to_mbuf(cq3_w1, sa_base, laddr,
+						     &loff, mbuf3, d_off, &f3,
+						     &ol_flags3);
+			mbuf23 = vsetq_lane_u64((uint64_t)mbuf3, mbuf23, 1);
+		}
+
 		if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 			uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
 			uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
@@ -659,6 +917,26 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			head += NIX_DESCS_PER_LOOP;
 			head &= qmask;
 		}
+
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Flush when we don't have space for 4 meta */
+			if ((15 - loff) < 4) {
+				nix_sec_flush_meta(laddr, lmt_id + lnum, loff,
+						   aura_handle);
+				lnum++;
+				lnum &= BIT_ULL(ROC_LMT_LINES_PER_CORE_LOG2) -
+					1;
+				/* First pointer starts at 8B offset */
+				laddr = (uintptr_t)LMT_OFF(lbase, lnum, 8);
+				loff = 0;
+			}
+		}
+	}
+
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id + lnum, loff, aura_handle);
+		if (flags & NIX_RX_VWQE_F)
+			plt_io_wmb();
 	}
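	/* Editor's note, not part of the patch: each LMT line used to
	 * free meta buffers back to the aura holds up to 15 pointers
	 * after the 8B header slot, hence the in-loop flush earlier when
	 * fewer than four slots remain (15 - loff < 4) and this final
	 * flush for any leftover pointers.
	 */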
 
 	if (flags & NIX_RX_VWQE_F)
@@ -681,16 +959,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 #else
 
 static inline uint16_t
-cn10k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
-			   uint16_t pkts, const uint16_t flags,
-			   void *lookup_mem, void *tstamp)
+cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
+			   const uint16_t flags, void *lookup_mem,
+			   struct cnxk_timesync_info *tstamp,
+			   uintptr_t lmt_base)
 {
-	RTE_SET_USED(lookup_mem);
-	RTE_SET_USED(rx_queue);
-	RTE_SET_USED(rx_pkts);
+	RTE_SET_USED(args);
+	RTE_SET_USED(mbufs);
 	RTE_SET_USED(pkts);
 	RTE_SET_USED(flags);
+	RTE_SET_USED(lookup_mem);
 	RTE_SET_USED(tstamp);
+	RTE_SET_USED(lmt_base);
 
 	return 0;
 }
@@ -704,98 +984,268 @@ cn10k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 #define MARK_F	  NIX_RX_OFFLOAD_MARK_UPDATE_F
 #define TS_F      NIX_RX_OFFLOAD_TSTAMP_F
 #define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define R_SEC_F   NIX_RX_OFFLOAD_SECURITY_F
 
-/* [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
+/* [R_SEC_F] [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
 #define NIX_RX_FASTPATH_MODES						       \
-R(no_offload,			0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE)	       \
-R(rss,				0, 0, 0, 0, 0, 1, RSS_F)		       \
-R(ptype,			0, 0, 0, 0, 1, 0, PTYPE_F)		       \
-R(ptype_rss,			0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F)	       \
-R(cksum,			0, 0, 0, 1, 0, 0, CKSUM_F)		       \
-R(cksum_rss,			0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F)	       \
-R(cksum_ptype,			0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F)	       \
-R(cksum_ptype_rss,		0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)   \
-R(mark,				0, 0, 1, 0, 0, 0, MARK_F)		       \
-R(mark_rss,			0, 0, 1, 0, 0, 1, MARK_F | RSS_F)	       \
-R(mark_ptype,			0, 0, 1, 0, 1, 0, MARK_F | PTYPE_F)	       \
-R(mark_ptype_rss,		0, 0, 1, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)    \
-R(mark_cksum,			0, 0, 1, 1, 0, 0, MARK_F | CKSUM_F)	       \
-R(mark_cksum_rss,		0, 0, 1, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)    \
-R(mark_cksum_ptype,		0, 0, 1, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)  \
-R(mark_cksum_ptype_rss,		0, 0, 1, 1, 1, 1,			       \
-			MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts,				0, 1, 0, 0, 0, 0, TS_F)			       \
-R(ts_rss,			0, 1, 0, 0, 0, 1, TS_F | RSS_F)		       \
-R(ts_ptype,			0, 1, 0, 0, 1, 0, TS_F | PTYPE_F)	       \
-R(ts_ptype_rss,			0, 1, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)      \
-R(ts_cksum,			0, 1, 0, 1, 0, 0, TS_F | CKSUM_F)	       \
-R(ts_cksum_rss,			0, 1, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)      \
-R(ts_cksum_ptype,		0, 1, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)    \
-R(ts_cksum_ptype_rss,		0, 1, 0, 1, 1, 1,			       \
-			TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts_mark,			0, 1, 1, 0, 0, 0, TS_F | MARK_F)	       \
-R(ts_mark_rss,			0, 1, 1, 0, 0, 1, TS_F | MARK_F | RSS_F)       \
-R(ts_mark_ptype,		0, 1, 1, 0, 1, 0, TS_F | MARK_F | PTYPE_F)     \
-R(ts_mark_ptype_rss,		0, 1, 1, 0, 1, 1,			       \
-			TS_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(ts_mark_cksum,		0, 1, 1, 1, 0, 0, TS_F | MARK_F | CKSUM_F)     \
-R(ts_mark_cksum_rss,		0, 1, 1, 1, 0, 1,			       \
-			TS_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(ts_mark_cksum_ptype,		0, 1, 1, 1, 1, 0,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan,				1, 0, 0, 0, 0, 0, RX_VLAN_F)		       \
-R(vlan_rss,			1, 0, 0, 0, 0, 1, RX_VLAN_F | RSS_F)	       \
-R(vlan_ptype,			1, 0, 0, 0, 1, 0, RX_VLAN_F | PTYPE_F)	       \
-R(vlan_ptype_rss,		1, 0, 0, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum,			1, 0, 0, 1, 0, 0, RX_VLAN_F | CKSUM_F)	       \
-R(vlan_cksum_rss,		1, 0, 0, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype,		1, 0, 0, 1, 1, 0,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F)			       \
-R(vlan_cksum_ptype_rss,		1, 0, 0, 1, 1, 1,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark,			1, 0, 1, 0, 0, 0, RX_VLAN_F | MARK_F)	       \
-R(vlan_mark_rss,		1, 0, 1, 0, 0, 1, RX_VLAN_F | MARK_F | RSS_F)  \
-R(vlan_mark_ptype,		1, 0, 1, 0, 1, 0, RX_VLAN_F | MARK_F | PTYPE_F)\
-R(vlan_mark_ptype_rss,		1, 0, 1, 0, 1, 1,			       \
-			RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark_cksum,		1, 0, 1, 1, 0, 0, RX_VLAN_F | MARK_F | CKSUM_F)\
-R(vlan_mark_cksum_rss,		1, 0, 1, 1, 0, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(vlan_mark_cksum_ptype,	1, 0, 1, 1, 1, 0,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts,			1, 1, 0, 0, 0, 0, RX_VLAN_F | TS_F)	       \
-R(vlan_ts_rss,			1, 1, 0, 0, 0, 1, RX_VLAN_F | TS_F | RSS_F)    \
-R(vlan_ts_ptype,		1, 1, 0, 0, 1, 0, RX_VLAN_F | TS_F | PTYPE_F)  \
-R(vlan_ts_ptype_rss,		1, 1, 0, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
-R(vlan_ts_cksum,		1, 1, 0, 1, 0, 0, RX_VLAN_F | TS_F | CKSUM_F)  \
-R(vlan_ts_cksum_rss,		1, 1, 0, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
-R(vlan_ts_cksum_ptype,		1, 1, 0, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_ts_cksum_ptype_rss,	1, 1, 0, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark,			1, 1, 1, 0, 0, 0, RX_VLAN_F | TS_F | MARK_F)   \
-R(vlan_ts_mark_rss,		1, 1, 1, 0, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
-R(vlan_ts_mark_ptype,		1, 1, 1, 0, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
-R(vlan_ts_mark_ptype_rss,	1, 1, 1, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark_cksum,		1, 1, 1, 1, 0, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
-R(vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
-R(vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)	       \
-R(vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
+R(no_offload,			0, 0, 0, 0, 0, 0, 0,			       \
+		NIX_RX_OFFLOAD_NONE)					       \
+R(rss,				0, 0, 0, 0, 0, 0, 1,			       \
+		RSS_F)							       \
+R(ptype,			0, 0, 0, 0, 0, 1, 0,			       \
+		PTYPE_F)						       \
+R(ptype_rss,			0, 0, 0, 0, 0, 1, 1,			       \
+		PTYPE_F | RSS_F)					       \
+R(cksum,			0, 0, 0, 0, 1, 0, 0,			       \
+		CKSUM_F)						       \
+R(cksum_rss,			0, 0, 0, 0, 1, 0, 1,			       \
+		CKSUM_F | RSS_F)					       \
+R(cksum_ptype,			0, 0, 0, 0, 1, 1, 0,			       \
+		CKSUM_F | PTYPE_F)					       \
+R(cksum_ptype_rss,		0, 0, 0, 0, 1, 1, 1,			       \
+		CKSUM_F | PTYPE_F | RSS_F)				       \
+R(mark,				0, 0, 0, 1, 0, 0, 0,			       \
+		MARK_F)							       \
+R(mark_rss,			0, 0, 0, 1, 0, 0, 1,			       \
+		MARK_F | RSS_F)						       \
+R(mark_ptype,			0, 0, 0, 1, 0, 1, 0,			       \
+		MARK_F | PTYPE_F)					       \
+R(mark_ptype_rss,		0, 0, 0, 1, 0, 1, 1,			       \
+		MARK_F | PTYPE_F | RSS_F)				       \
+R(mark_cksum,			0, 0, 0, 1, 1, 0, 0,			       \
+		MARK_F | CKSUM_F)					       \
+R(mark_cksum_rss,		0, 0, 0, 1, 1, 0, 1,			       \
+		MARK_F | CKSUM_F | RSS_F)				       \
+R(mark_cksum_ptype,		0, 0, 0, 1, 1, 1, 0,			       \
+		MARK_F | CKSUM_F | PTYPE_F)				       \
+R(mark_cksum_ptype_rss,		0, 0, 0, 1, 1, 1, 1,			       \
+		MARK_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts,				0, 0, 1, 0, 0, 0, 0,			       \
+		TS_F)							       \
+R(ts_rss,			0, 0, 1, 0, 0, 0, 1,			       \
+		TS_F | RSS_F)						       \
+R(ts_ptype,			0, 0, 1, 0, 0, 1, 0,			       \
+		TS_F | PTYPE_F)						       \
+R(ts_ptype_rss,			0, 0, 1, 0, 0, 1, 1,			       \
+		TS_F | PTYPE_F | RSS_F)					       \
+R(ts_cksum,			0, 0, 1, 0, 1, 0, 0,			       \
+		TS_F | CKSUM_F)						       \
+R(ts_cksum_rss,			0, 0, 1, 0, 1, 0, 1,			       \
+		TS_F | CKSUM_F | RSS_F)					       \
+R(ts_cksum_ptype,		0, 0, 1, 0, 1, 1, 0,			       \
+		TS_F | CKSUM_F | PTYPE_F)				       \
+R(ts_cksum_ptype_rss,		0, 0, 1, 0, 1, 1, 1,			       \
+		TS_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts_mark,			0, 0, 1, 1, 0, 0, 0,			       \
+		TS_F | MARK_F)						       \
+R(ts_mark_rss,			0, 0, 1, 1, 0, 0, 1,			       \
+		TS_F | MARK_F | RSS_F)					       \
+R(ts_mark_ptype,		0, 0, 1, 1, 0, 1, 0,			       \
+		TS_F | MARK_F | PTYPE_F)				       \
+R(ts_mark_ptype_rss,		0, 0, 1, 1, 0, 1, 1,			       \
+		TS_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(ts_mark_cksum,		0, 0, 1, 1, 1, 0, 0,			       \
+		TS_F | MARK_F | CKSUM_F)				       \
+R(ts_mark_cksum_rss,		0, 0, 1, 1, 1, 0, 1,			       \
+		TS_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(ts_mark_cksum_ptype,		0, 0, 1, 1, 1, 1, 0,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(ts_mark_cksum_ptype_rss,	0, 0, 1, 1, 1, 1, 1,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan,				0, 1, 0, 0, 0, 0, 0,			       \
+		RX_VLAN_F)						       \
+R(vlan_rss,			0, 1, 0, 0, 0, 0, 1,			       \
+		RX_VLAN_F | RSS_F)					       \
+R(vlan_ptype,			0, 1, 0, 0, 0, 1, 0,			       \
+		RX_VLAN_F | PTYPE_F)					       \
+R(vlan_ptype_rss,		0, 1, 0, 0, 0, 1, 1,			       \
+		RX_VLAN_F | PTYPE_F | RSS_F)				       \
+R(vlan_cksum,			0, 1, 0, 0, 1, 0, 0,			       \
+		RX_VLAN_F | CKSUM_F)					       \
+R(vlan_cksum_rss,		0, 1, 0, 0, 1, 0, 1,			       \
+		RX_VLAN_F | CKSUM_F | RSS_F)				       \
+R(vlan_cksum_ptype,		0, 1, 0, 0, 1, 1, 0,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F)				       \
+R(vlan_cksum_ptype_rss,		0, 1, 0, 0, 1, 1, 1,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark,			0, 1, 0, 1, 0, 0, 0,			       \
+		RX_VLAN_F | MARK_F)					       \
+R(vlan_mark_rss,		0, 1, 0, 1, 0, 0, 1,			       \
+		RX_VLAN_F | MARK_F | RSS_F)				       \
+R(vlan_mark_ptype,		0, 1, 0, 1, 0, 1, 0,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F)				       \
+R(vlan_mark_ptype_rss,		0, 1, 0, 1, 0, 1, 1,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark_cksum,		0, 1, 0, 1, 1, 0, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F)				       \
+R(vlan_mark_cksum_rss,		0, 1, 0, 1, 1, 0, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(vlan_mark_cksum_ptype,	0, 1, 0, 1, 1, 1, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_mark_cksum_ptype_rss,	0, 1, 0, 1, 1, 1, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts,			0, 1, 1, 0, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F)					       \
+R(vlan_ts_rss,			0, 1, 1, 0, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | RSS_F)				       \
+R(vlan_ts_ptype,		0, 1, 1, 0, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | PTYPE_F)				       \
+R(vlan_ts_ptype_rss,		0, 1, 1, 0, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | PTYPE_F | RSS_F)			       \
+R(vlan_ts_cksum,		0, 1, 1, 0, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F)				       \
+R(vlan_ts_cksum_rss,		0, 1, 1, 0, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | RSS_F)			       \
+R(vlan_ts_cksum_ptype,		0, 1, 1, 0, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_ts_cksum_ptype_rss,	0, 1, 1, 0, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark,			0, 1, 1, 1, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F)				       \
+R(vlan_ts_mark_rss,		0, 1, 1, 1, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | RSS_F)			       \
+R(vlan_ts_mark_ptype,		0, 1, 1, 1, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F)			       \
+R(vlan_ts_mark_ptype_rss,	0, 1, 1, 1, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark_cksum,		0, 1, 1, 1, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F)			       \
+R(vlan_ts_mark_cksum_rss,	0, 1, 1, 1, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(vlan_ts_mark_cksum_ptype,	0, 1, 1, 1, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(vlan_ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec,				1, 0, 0, 0, 0, 0, 0,			       \
+		R_SEC_F)						       \
+R(sec_rss,			1, 0, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RSS_F)					       \
+R(sec_ptype,			1, 0, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | PTYPE_F)					       \
+R(sec_ptype_rss,		1, 0, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | PTYPE_F | RSS_F)				       \
+R(sec_cksum,			1, 0, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | CKSUM_F)					       \
+R(sec_cksum_rss,		1, 0, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | CKSUM_F | RSS_F)				       \
+R(sec_cksum_ptype,		1, 0, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F)				       \
+R(sec_cksum_ptype_rss,		1, 0, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(sec_mark,			1, 0, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | MARK_F)					       \
+R(sec_mark_rss,			1, 0, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | MARK_F | RSS_F)				       \
+R(sec_mark_ptype,		1, 0, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | MARK_F | PTYPE_F)				       \
+R(sec_mark_ptype_rss,		1, 0, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(sec_mark_cksum,		1, 0, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F)				       \
+R(sec_mark_cksum_rss,		1, 0, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(sec_mark_cksum_ptype,		1, 0, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(sec_mark_cksum_ptype_rss,	1, 0, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts,			1, 0, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | TS_F)						       \
+R(sec_ts_rss,			1, 0, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | TS_F | RSS_F)					       \
+R(sec_ts_ptype,			1, 0, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | TS_F | PTYPE_F)				       \
+R(sec_ts_ptype_rss,		1, 0, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | TS_F | PTYPE_F | RSS_F)			       \
+R(sec_ts_cksum,			1, 0, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F)				       \
+R(sec_ts_cksum_rss,		1, 0, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | RSS_F)			       \
+R(sec_ts_cksum_ptype,		1, 0, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(sec_ts_cksum_ptype_rss,	1, 0, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark,			1, 0, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F)				       \
+R(sec_ts_mark_rss,		1, 0, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | RSS_F)			       \
+R(sec_ts_mark_ptype,		1, 0, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F)			       \
+R(sec_ts_mark_ptype_rss,	1, 0, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark_cksum,		1, 0, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F)			       \
+R(sec_ts_mark_cksum_rss,	1, 0, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_ts_mark_cksum_ptype,	1, 0, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(sec_ts_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan,			1, 1, 0, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F)					       \
+R(sec_vlan_rss,			1, 1, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | RSS_F)				       \
+R(sec_vlan_ptype,		1, 1, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F)				       \
+R(sec_vlan_ptype_rss,		1, 1, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F)			       \
+R(sec_vlan_cksum,		1, 1, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F)				       \
+R(sec_vlan_cksum_rss,		1, 1, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F)			       \
+R(sec_vlan_cksum_ptype,		1, 1, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_cksum_ptype_rss,	1, 1, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_mark,		1, 1, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F)				       \
+R(sec_vlan_mark_rss,		1, 1, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | RSS_F)			       \
+R(sec_vlan_mark_ptype,		1, 1, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F)			       \
+R(sec_vlan_mark_ptype_rss,	1, 1, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_mark_cksum,		1, 1, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F)			       \
+R(sec_vlan_mark_cksum_rss,	1, 1, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_mark_cksum_ptype,	1, 1, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)	       \
+R(sec_vlan_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)      \
+R(sec_vlan_ts,			1, 1, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F)				       \
+R(sec_vlan_ts_rss,		1, 1, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | RSS_F)			       \
+R(sec_vlan_ts_ptype,		1, 1, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F)			       \
+R(sec_vlan_ts_ptype_rss,	1, 1, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_ts_cksum,		1, 1, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F)			       \
+R(sec_vlan_ts_cksum_rss,	1, 1, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_ts_cksum_ptype,	1, 1, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_ts_cksum_ptype_rss,	1, 1, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark,		1, 1, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F)			       \
+R(sec_vlan_ts_mark_rss,		1, 1, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
+R(sec_vlan_ts_mark_ptype,	1, 1, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
+R(sec_vlan_ts_mark_ptype_rss,	1, 1, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum,	1, 1, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
+R(sec_vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)       \
+R(sec_vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1, 1,		       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_##name(          \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn10k_rx_mseg.c b/drivers/net/cnxk/cn10k_rx_mseg.c
index 3340771..e7c2321 100644
--- a/drivers/net/cnxk/cn10k_rx_mseg.c
+++ b/drivers/net/cnxk/cn10k_rx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_mseg_##name(     \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_rx_vec.c b/drivers/net/cnxk/cn10k_rx_vec.c
index 166735a..0ccc4df 100644
--- a/drivers/net/cnxk/cn10k_rx_vec.c
+++ b/drivers/net/cnxk/cn10k_rx_vec.c
@@ -5,14 +5,14 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_recv_pkts_vec_##name(void *rx_queue,                 \
 					       struct rte_mbuf **rx_pkts,      \
 					       uint16_t pkts)                  \
 	{                                                                      \
 		return cn10k_nix_recv_pkts_vector(rx_queue, rx_pkts, pkts,     \
-						  (flags), NULL, NULL);        \
+						  (flags), NULL, NULL, 0);     \
 	}
 
 NIX_RX_FASTPATH_MODES
diff --git a/drivers/net/cnxk/cn10k_rx_vec_mseg.c b/drivers/net/cnxk/cn10k_rx_vec_mseg.c
index 1f44ddd..38e0ec3 100644
--- a/drivers/net/cnxk/cn10k_rx_vec_mseg.c
+++ b/drivers/net/cnxk/cn10k_rx_vec_mseg.c
@@ -5,13 +5,13 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_vec_mseg_##name( \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
 		return cn10k_nix_recv_pkts_vector(                             \
 			rx_queue, rx_pkts, pkts, (flags) | NIX_RX_MULTI_SEG_F, \
-			NULL, NULL);                                           \
+			NULL, NULL, 0);                                        \
 	}
 
 NIX_RX_FASTPATH_MODES
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 8577a7b..c81a612 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -51,9 +51,6 @@
 
 #define NIX_NB_SEGS_TO_SEGDW(x) ((NIX_SEGDW_MAGIC >> ((x) << 2)) & 0xF)
 
-#define LMT_OFF(lmt_addr, lmt_num, offset)                                     \
-	(void *)((lmt_addr) + ((lmt_num) << ROC_LMT_LINE_SIZE_LOG2) + (offset))
-
 /* Function to determine no of tx subdesc required in case ext
  * sub desc is enabled.
  */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 20/27] net/cnxk: add cn10k Tx support for security offload
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (18 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 19/27] net/cnxk: add cn10k Rx " Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 21/27] net/cnxk: add cn9k anti replay " Nithin Dabilpuram
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Add support to create and submit CPT instructions on Tx, covering both
the poll mode and event mode transmit paths.
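
For archive readers, an editorial sketch of the mechanism this patch
relies on: up to two 64B CPT instructions are packed per 128B LMT line
(at offsets 0 and 64) and submitted with a single STEORL to the CPT
doorbell. The submission word built by cn10k_cpt_tx_steor_data() in the
diff below can equivalently be written as a loop; a minimal
illustration, with dw_m1 standing in for ROC_CN10K_TWO_CPT_INST_DW_M1:

	#include <stdint.h>

	/* Sketch only: pack the per-LMT-line "16B dwords minus one"
	 * size into sixteen 3-bit fields starting at bit 16, matching
	 * the unrolled version in cn10k_tx.h.
	 */
	static inline uint64_t
	cpt_tx_steor_data_sketch(uint64_t dw_m1)
	{
		uint64_t data = 0;
		unsigned int line;

		for (line = 0; line < 16; line++)
			data |= dw_m1 << (16 + line * 3);

		return data;
	}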

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 doc/guides/rel_notes/release_21_11.rst       |   5 +
 drivers/event/cnxk/cn10k_eventdev.c          |  15 +-
 drivers/event/cnxk/cn10k_worker.h            |  74 +-
 drivers/event/cnxk/cn10k_worker_tx_enq.c     |   2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq_seg.c |   2 +-
 drivers/net/cnxk/cn10k_tx.c                  |  31 +-
 drivers/net/cnxk/cn10k_tx.h                  | 981 +++++++++++++++++++++++----
 drivers/net/cnxk/cn10k_tx_mseg.c             |   2 +-
 drivers/net/cnxk/cn10k_tx_vec.c              |   2 +-
 drivers/net/cnxk/cn10k_tx_vec_mseg.c         |   2 +-
 10 files changed, 934 insertions(+), 182 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index fb599e5..a87f6cb 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -65,6 +65,11 @@ New Features
 
   * Added event crypto adapter OP_FORWARD mode support.
 
+* **Added support for Inline IPsec on Marvell CN10K and CN9K.**
+
+  * Added support for Inline IPsec in net/cnxk PMD for CN9K event mode,
+    and for CN10K poll mode and event mode.
+
 Removed Items
 -------------
 
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 2f2e7f8..bd1cf55 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -16,7 +16,8 @@
 			[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]
 
 #define CN10K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops)                           \
-	enq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
+	enq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]     \
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]          \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]    \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]    \
@@ -379,17 +380,17 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	/* Tx modes */
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
+		sso_hws_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                            \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
+		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
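
(Editorial aside, not part of the patch: the tables above grow one more
[2] dimension for the security flag, and the lookup indexes each
dimension with !!(offloads & FLAG). A hypothetical two-flag reduction
of the same pattern, with made-up flag values:)

	#include <stdint.h>

	#define TX_SEC_F    (1ULL << 0) /* hypothetical flag bits */
	#define TX_TSTAMP_F (1ULL << 1)

	typedef uint16_t (*enq_fn)(void *port, void *ev, uint16_t nb);

	/* Pick the specialized enqueue op from a [2][2] table, one
	 * boolean index per offload flag, as the macro above does for
	 * the full set of flags.
	 */
	static inline enq_fn
	pick_enq_op(const enq_fn tbl[2][2], uint64_t tx_offloads)
	{
		return tbl[!!(tx_offloads & TX_SEC_F)]
			  [!!(tx_offloads & TX_TSTAMP_F)];
	}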
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b79bd90..1255662 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -423,7 +423,11 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,
 		    ((queue[0] ^ queue[1]) & (queue[2] ^ queue[3]))) {
 
 			for (j = 0; j < 4; j++) {
+				uint8_t lnum = 0, loff = 0, shft = 0;
 				struct rte_mbuf *m = mbufs[i + j];
+				uintptr_t laddr;
+				uint16_t segdw;
+				bool sec;
 
 				txq = (struct cn10k_eth_txq *)
 					txq_data[port[j]][queue[j]];
@@ -434,19 +438,35 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,
 				if (flags & NIX_TX_OFFLOAD_TSO_F)
 					cn10k_nix_xmit_prepare_tso(m, flags);
 
-				cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags,
-						       txq->lso_tun_fmt);
+				cn10k_nix_xmit_prepare(m, cmd, flags,
+						       txq->lso_tun_fmt, &sec);
+
+				laddr = lmt_addr;
+				/* Prepare CPT instruction and get nixtx addr if
+				 * it is for CPT on same lmtline.
+				 */
+				if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+					cn10k_nix_prep_sec(m, cmd, &laddr,
+							   lmt_addr, &lnum,
+							   &loff, &shft,
+							   txq->sa_base, flags);
+
+				/* Move NIX desc to LMT/NIXTX area */
+				cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+
 				if (flags & NIX_TX_MULTI_SEG_F) {
-					const uint16_t segdw =
-						cn10k_nix_prepare_mseg(
-							m, (uint64_t *)lmt_addr,
-							flags);
-					pa = txq->io_addr | ((segdw - 1) << 4);
+					segdw = cn10k_nix_prepare_mseg(m,
+						(uint64_t *)laddr, flags);
 				} else {
-					pa = txq->io_addr |
-					     (cn10k_nix_tx_ext_subs(flags) + 1)
-						     << 4;
+					segdw = cn10k_nix_tx_ext_subs(flags) +
+						2;
 				}
+
+				if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+					pa = txq->cpt_io_addr | 3 << 4;
+				else
+					pa = txq->io_addr | ((segdw - 1) << 4);
+
 				if (!sched_type)
 					roc_sso_hws_head_wait(base +
 							      SSOW_LF_GWS_TAG);
@@ -469,15 +489,19 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 		       const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
 		       const uint32_t flags)
 {
+	uint8_t lnum = 0, loff = 0, shft = 0;
 	struct cn10k_eth_txq *txq;
+	uint16_t ref_cnt, segdw;
 	struct rte_mbuf *m;
 	uintptr_t lmt_addr;
-	uint16_t ref_cnt;
+	uintptr_t c_laddr;
 	uint16_t lmt_id;
 	uintptr_t pa;
+	bool sec;
 
 	lmt_addr = ws->lmt_base;
 	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	c_laddr = lmt_addr;
 
 	if (ev->event_type & RTE_EVENT_TYPE_VECTOR) {
 		struct rte_mbuf **mbufs = ev->vec->mbufs;
@@ -508,14 +532,28 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 	if (flags & NIX_TX_OFFLOAD_TSO_F)
 		cn10k_nix_xmit_prepare_tso(m, flags);
 
-	cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags, txq->lso_tun_fmt);
+	cn10k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt, &sec);
+
+	/* Prepare CPT instruction and get nixtx addr if
+	 * it is for CPT on same lmtline.
+	 */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+		cn10k_nix_prep_sec(m, cmd, &lmt_addr, c_laddr, &lnum, &loff,
+				   &shft, txq->sa_base, flags);
+
+	/* Move NIX desc to LMT/NIXTX area */
+	cn10k_nix_xmit_mv_lmt_base(lmt_addr, cmd, flags);
 	if (flags & NIX_TX_MULTI_SEG_F) {
-		const uint16_t segdw =
-			cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+		segdw = cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+	} else {
+		segdw = cn10k_nix_tx_ext_subs(flags) + 2;
+	}
+
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+		pa = txq->cpt_io_addr | 3 << 4;
+	else
 		pa = txq->io_addr | ((segdw - 1) << 4);
-	} else {
-		pa = txq->io_addr | (cn10k_nix_tx_ext_subs(flags) + 1) << 4;
-	}
+
 	if (!ev->sched_type)
 		roc_sso_hws_head_wait(ws->tx_base + SSOW_LF_GWS_TAG);
 
@@ -531,7 +569,7 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 	return 1;
 }
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
 		void *port, struct rte_event ev[], uint16_t nb_events);        \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
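
(Editorial aside on the doorbell selection in the hunks above, an
inferred reading: bits [6:4] of the LMTST address carry the line size
in 16B dwords minus one, so a 64B CPT instruction is submitted with the
constant 3 while a plain NIX descriptor uses segdw - 1. A sketch under
that assumption:)

	#include <stdint.h>
	#include <stdbool.h>

	/* Sketch only: pick the CPT or NIX doorbell address and encode
	 * the LMT line size (16B dwords, minus one) into bits [6:4].
	 */
	static inline uint64_t
	pick_doorbell(uint64_t nix_io_addr, uint64_t cpt_io_addr,
		      unsigned int segdw, bool sec)
	{
		if (sec)
			return cpt_io_addr | 3 << 4; /* 64B CPT inst */

		return nix_io_addr | ((segdw - 1) << 4);
	}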
diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq.c b/drivers/event/cnxk/cn10k_worker_tx_enq.c
index f9968ac..f14c7fc 100644
--- a/drivers/event/cnxk/cn10k_worker_tx_enq.c
+++ b/drivers/event/cnxk/cn10k_worker_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn10k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
index a24fc42..2ea61e5 100644
--- a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn10k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0e1276c..eb962ef 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(	       \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -24,12 +24,13 @@ NIX_TX_FASTPATH_MODES
 
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
-	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2])
+	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+	/* [SEC] [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
 	eth_dev->tx_pkt_burst = tx_burst
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
@@ -43,33 +44,33 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,
+	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c81a612..70ba929 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -6,6 +6,8 @@
 
 #include <rte_vect.h>
 
+#include <rte_eventdev.h>
+
 #define NIX_TX_OFFLOAD_NONE	      (0)
 #define NIX_TX_OFFLOAD_L3_L4_CSUM_F   BIT(0)
 #define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
@@ -57,12 +59,22 @@
 static __rte_always_inline int
 cn10k_nix_tx_ext_subs(const uint16_t flags)
 {
-	return (flags & NIX_TX_OFFLOAD_TSTAMP_F)
-		       ? 2
-		       : ((flags &
-			   (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F))
-				  ? 1
-				  : 0);
+	return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ?
+			     2 :
+			     ((flags &
+			 (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?
+				      1 :
+				      0);
+}
+
+static __rte_always_inline uint8_t
+cn10k_nix_tx_dwords(const uint16_t flags, const uint8_t segdw)
+{
+	if (!(flags & NIX_TX_MULTI_SEG_F))
+		return cn10k_nix_tx_ext_subs(flags) + 2;
+
+	/* Everything is already accounted for in segdw */
+	return segdw;
 }
 
 static __rte_always_inline uint8_t
@@ -144,6 +156,34 @@ cn10k_nix_tx_steor_vec_data(const uint16_t flags)
 	return data;
 }
 
+static __rte_always_inline uint64_t
+cn10k_cpt_tx_steor_data(void)
+{
+	/* We have two CPT instructions per LMTLine */
+	const uint64_t dw_m1 = ROC_CN10K_TWO_CPT_INST_DW_M1;
+	uint64_t data;
+
+	/* This will be moved to addr area */
+	data = dw_m1 << 16;
+	data |= dw_m1 << 19;
+	data |= dw_m1 << 22;
+	data |= dw_m1 << 25;
+	data |= dw_m1 << 28;
+	data |= dw_m1 << 31;
+	data |= dw_m1 << 34;
+	data |= dw_m1 << 37;
+	data |= dw_m1 << 40;
+	data |= dw_m1 << 43;
+	data |= dw_m1 << 46;
+	data |= dw_m1 << 49;
+	data |= dw_m1 << 52;
+	data |= dw_m1 << 55;
+	data |= dw_m1 << 58;
+	data |= dw_m1 << 61;
+
+	return data;
+}
+
 static __rte_always_inline void
 cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,
 		      const uint16_t flags)
@@ -165,6 +205,236 @@ cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,
 }
 
 static __rte_always_inline void
+cn10k_nix_sec_steorl(uintptr_t io_addr, uint32_t lmt_id, uint8_t lnum,
+		     uint8_t loff, uint8_t shft)
+{
+	uint64_t data;
+	uintptr_t pa;
+
+	/* Check if there is any CPT instruction to submit */
+	if (!lnum && !loff)
+		return;
+
+	data = cn10k_cpt_tx_steor_data();
+	/* Adjust the LMT line usage when the final line is partial */
+	if (loff) {
+		data &= ~(0x7ULL << shft);
+		/* Update it to half full, i.e. 64B */
+		data |= (0x3UL << shft);
+	}
+
+	pa = io_addr | ((data >> 16) & 0x7) << 4;
+	data &= ~(0x7ULL << 16);
+	/* Set the count of lines holding valid data, minus one */
+	data |= ((uint64_t)(lnum + loff - 1)) << 12;
+	data |= lmt_id;
+
+	/* STEOR */
+	roc_lmt_submit_steorl(data, pa);
+}
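/* Editor's note, illustrative reading: each 3-bit per-line field in the
 * steor data word holds (16B dwords - 1), i.e. 0x7 for a full line of
 * two 64B CPT instructions; the partial final line of a single
 * instruction is patched to 0x3 above, and the first line's size also
 * lands in doorbell address bits [6:4] via ((data >> 16) & 0x7) << 4.
 */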
+
+#if defined(RTE_ARCH_ARM64)
+static __rte_always_inline void
+cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
+		       uintptr_t *nixtx_addr, uintptr_t lbase, uint8_t *lnum,
+		       uint8_t *loff, uint8_t *shft, uint64_t sa_base,
+		       const uint16_t flags)
+{
+	struct cn10k_sec_sess_priv sess_priv;
+	uint32_t pkt_len, dlen_adj, rlen;
+	uint64x2_t cmd01, cmd23;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t *laddr;
+	uint8_t l2_len;
+	uint16_t tag;
+	uint64_t sa;
+
+	sess_priv.u64 = *rte_security_dynfield(m);
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = vgetq_lane_u8(*cmd0, 8);
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = vgetq_lane_u64(*cmd1, 1);
+	pkt_len = vgetq_lane_u16(*cmd0, 0);
+
+	/* Calculate dlen adj */
+	dlen_adj = pkt_len - l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Update send descriptors. Security is single segment only */
+	*cmd0 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd0, 0);
+	*cmd1 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd1, 0);
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	/* Return nixtx addr */
+	*nixtx_addr = (nixtx + 16);
+
+	/* DLEN passed excludes the L2 header */
+	pkt_len -= l2_len;
+	tag = sa_base & 0xFFFFUL;
+	sa_base &= ~0xFFFFUL;
+	sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
+	ucode_cmd[0] =
+		(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1 */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len;
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Move to our line */
+	laddr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(laddr, cmd01);
+	vst1q_u64((laddr + 2), cmd23);
+
+	*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;
+	*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);
+
+	/* Move to next line for every other CPT inst */
+	*loff = !(*loff);
+	*lnum = *lnum + (*loff ? 0 : 1);
+	*shft = *shft + (*loff ? 0 : 3);
+}
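/* Editor's note, worked example with hypothetical values: the rounding
 * above is rlen = roundup(dlen_adj + roundup_len, roundup_byte) +
 * partial_len. With dlen_adj = 100, roundup_len = 2 and roundup_byte =
 * 16: (100 + 2 + 15) & ~15 = 112, so the packet grows by
 * dlen_adj = 112 + partial_len - 100 bytes to make room for the IPsec
 * encapsulation.
 */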
+
+static __rte_always_inline void
+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+		   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,
+		   uint64_t sa_base, const uint16_t flags)
+{
+	struct cn10k_sec_sess_priv sess_priv;
+	uint32_t pkt_len, dlen_adj, rlen;
+	struct nix_send_hdr_s *send_hdr;
+	uint64x2_t cmd01, cmd23;
+	union nix_send_sg_s *sg;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t *laddr;
+	uint8_t l2_len;
+	uint16_t tag;
+	uint64_t sa;
+
+	/* Retrieve security session priv from the mbuf dynamic field */
+	sess_priv.u64 = *rte_security_dynfield(m);
+	send_hdr = (struct nix_send_hdr_s *)cmd;
+	if (flags & NIX_TX_NEED_EXT_HDR)
+		sg = (union nix_send_sg_s *)&cmd[4];
+	else
+		sg = (union nix_send_sg_s *)&cmd[2];
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = cmd[1] & 0xFF;
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = *(uint64_t *)(sg + 1);
+	pkt_len = send_hdr->w0.total;
+
+	/* Calculate dlen adj */
+	dlen_adj = pkt_len - l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Update send descriptors. Security is single segment only */
+	send_hdr->w0.total = pkt_len + dlen_adj;
+	sg->seg1_size = pkt_len + dlen_adj;
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	/* Return nixtx addr */
+	*nixtx_addr = (nixtx + 16);
+
+	/* DLEN passed excludes the L2 header */
+	pkt_len -= l2_len;
+	tag = sa_base & 0xFFFFUL;
+	sa_base &= ~0xFFFFUL;
+	sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
+	ucode_cmd[0] =
+		(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1. Assume no multi-seg support */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len;
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Move to our line */
+	laddr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(laddr, cmd01);
+	vst1q_u64((laddr + 2), cmd23);
+
+	*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;
+	*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);
+
+	/* Move to next line for every other CPT inst */
+	*loff = !(*loff);
+	*lnum = *lnum + (*loff ? 0 : 1);
+	*shft = *shft + (*loff ? 0 : 3);
+}
+
+#else
+
+static __rte_always_inline void
+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+		   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,
+		   uint64_t sa_base, const uint16_t flags)
+{
+	RTE_SET_USED(m);
+	RTE_SET_USED(cmd);
+	RTE_SET_USED(nixtx_addr);
+	RTE_SET_USED(lbase);
+	RTE_SET_USED(lnum);
+	RTE_SET_USED(loff);
+	RTE_SET_USED(shft);
+	RTE_SET_USED(sa_base);
+	RTE_SET_USED(flags);
+}
+#endif
+
+static __rte_always_inline void
 cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
@@ -217,8 +487,8 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 }
 
 static __rte_always_inline void
-cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
-		       const uint16_t flags, const uint64_t lso_tun_fmt)
+cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
+		       const uint64_t lso_tun_fmt, bool *sec)
 {
 	struct nix_send_ext_s *send_hdr_ext;
 	struct nix_send_hdr_s *send_hdr;
@@ -237,16 +507,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		sg = (union nix_send_sg_s *)(cmd + 2);
 	}
 
-	if (flags & NIX_TX_NEED_SEND_HDR_W1) {
+	if (flags & (NIX_TX_NEED_SEND_HDR_W1 | NIX_TX_OFFLOAD_SECURITY_F)) {
 		ol_flags = m->ol_flags;
 		w1.u = 0;
 	}
 
-	if (!(flags & NIX_TX_MULTI_SEG_F)) {
+	if (!(flags & NIX_TX_MULTI_SEG_F))
 		send_hdr->w0.total = m->data_len;
-		send_hdr->w0.aura =
-			roc_npa_aura_handle_to_aura(m->pool->pool_id);
-	}
+	else
+		send_hdr->w0.total = m->pkt_len;
+	send_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
 
 	/*
 	 * L3type:  2 => IPV4
@@ -376,7 +646,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		send_hdr->w1.u = w1.u;
 
 	if (!(flags & NIX_TX_MULTI_SEG_F)) {
-		sg->seg1_size = m->data_len;
+		sg->seg1_size = send_hdr->w0.total;
 		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
 
 		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
@@ -389,17 +659,38 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
 			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	} else {
+		sg->seg1_size = m->data_len;
+		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
+
+		/* NOFF is handled later for multi-seg */
 	}
 
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
+		*sec = !!(ol_flags & PKT_TX_SEC_OFFLOAD);
+}
+
+static __rte_always_inline void
+cn10k_nix_xmit_mv_lmt_base(uintptr_t lmt_addr, uint64_t *cmd,
+			   const uint16_t flags)
+{
+	struct nix_send_ext_s *send_hdr_ext;
+	union nix_send_sg_s *sg;
+
 	/* With minimal offloads, 'cmd' being local could be optimized out to
 	 * registers. In other cases, 'cmd' will be in stack. Intent is
 	 * 'cmd' stores content from txq->cmd which is copied only once.
 	 */
-	*((struct nix_send_hdr_s *)lmt_addr) = *send_hdr;
+	*((struct nix_send_hdr_s *)lmt_addr) = *(struct nix_send_hdr_s *)cmd;
 	lmt_addr += 16;
 	if (flags & NIX_TX_NEED_EXT_HDR) {
+		send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
 		*((struct nix_send_ext_s *)lmt_addr) = *send_hdr_ext;
 		lmt_addr += 16;
+
+		sg = (union nix_send_sg_s *)(cmd + 4);
+	} else {
+		sg = (union nix_send_sg_s *)(cmd + 2);
 	}
 	/* In case of multi-seg, sg template is stored here */
 	*((union nix_send_sg_s *)lmt_addr) = *sg;
@@ -414,7 +705,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
 		struct nix_send_ext_s *send_hdr_ext =
-					(struct nix_send_ext_s *)lmt_addr + 16;
+			(struct nix_send_ext_s *)lmt_addr + 16;
 		uint64_t *lmt = (uint64_t *)lmt_addr;
 		uint16_t off = (no_segdw - 1) << 1;
 		struct nix_send_mem_s *send_mem;
@@ -457,8 +748,6 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 	uint8_t off, i;
 
 	send_hdr = (struct nix_send_hdr_s *)cmd;
-	send_hdr->w0.total = m->pkt_len;
-	send_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
 
 	if (flags & NIX_TX_NEED_EXT_HDR)
 		off = 2;
@@ -466,13 +755,27 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		off = 0;
 
 	sg = (union nix_send_sg_s *)&cmd[2 + off];
-	/* Clear sg->u header before use */
-	sg->u &= 0xFC00000000000000;
+
+	/* Start from second segment, first segment is already there */
+	i = 1;
 	sg_u = sg->u;
-	slist = &cmd[3 + off];
+	nb_segs = m->nb_segs - 1;
+	m_next = m->next;
+	slist = &cmd[3 + off + 1];
 
-	i = 0;
-	nb_segs = m->nb_segs;
+	/* Set invert df if buffer is not to be freed by H/W */
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+		sg_u |= (cnxk_nix_prefree_seg(m) << 55);
+
+		/* Mark mempool object as "put" since it is freed by NIX */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (!(sg_u & (1ULL << 55)))
+		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	rte_io_wmb();
+#endif
+	m = m_next;
+	if (!m)
+		goto done;
 
 	/* Fill mbuf segments */
 	do {
@@ -504,6 +807,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		m = m_next;
 	} while (nb_segs);
 
+done:
 	sg->u = sg_u;
 	sg->segs = i;
 	segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
@@ -522,10 +826,17 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 {
 	struct cn10k_eth_txq *txq = tx_queue;
 	const rte_iova_t io_addr = txq->io_addr;
-	uintptr_t pa, lmt_addr = txq->lmt_base;
+	uint8_t lnum, c_lnum, c_shft, c_loff;
+	uintptr_t pa, lbase = txq->lmt_base;
 	uint16_t lmt_id, burst, left, i;
+	uintptr_t c_lbase = lbase;
+	rte_iova_t c_io_addr;
 	uint64_t lso_tun_fmt;
+	uint16_t c_lmt_id;
+	uint64_t sa_base;
+	uintptr_t laddr;
 	uint64_t data;
+	bool sec;
 
 	if (!(flags & NIX_TX_VWQE_F)) {
 		NIX_XMIT_FC_OR_RETURN(txq, pkts);
@@ -540,10 +851,24 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		lso_tun_fmt = txq->lso_tun_fmt;
 
 	/* Get LMT base address and LMT ID as lcore id */
-	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	burst = left > 32 ? 32 : left;
+
+	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		c_lnum = 0;
+		c_loff = 0;
+		c_shft = 16;
+	}
+
 	for (i = 0; i < burst; i++) {
 		/* Perform header writes for TSO, barrier at
 		 * lmt steorl will suffice.
@@ -551,16 +876,39 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		if (flags & NIX_TX_OFFLOAD_TSO_F)
 			cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
 
-		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,
-				       lso_tun_fmt);
-		cn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],
+		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,
+				       &sec);
+
+		laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
+
+		/* Prepare CPT instruction and get nixtx addr */
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+			cn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,
+					   &c_lnum, &c_loff, &c_shft, sa_base,
+					   flags);
+
+		/* Move NIX desc to LMT/NIXTX area */
+		cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+		cn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],
 					      tx_pkts[i]->ol_flags, 4, flags);
-		lmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);
+		if (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec)
+			lnum++;
 	}
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+	tx_pkts += burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		/* Exclude packets routed to CPT from the NIX burst */
+		burst -= ((c_lnum << 1) + c_loff);
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+	}
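	/* Editor's note: with two CPT instructions per LMT line,
	 * (c_lnum << 1) + c_loff counts the packets diverted to CPT in
	 * this burst, so only the remainder is doorbelled to NIX below.
	 */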
+
 	/* Trigger LMTST */
 	if (burst > 16) {
 		data = cn10k_nix_tx_steor_data(flags);
@@ -591,16 +939,9 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		roc_lmt_submit_steorl(data, pa);
 	}
 
-	left -= burst;
 	rte_io_wmb();
-	if (left) {
-		/* Start processing another burst */
-		tx_pkts += burst;
-		/* Reset lmt base addr */
-		lmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		lmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));
+	if (left)
 		goto again;
-	}
 
 	return pkts;
 }
@@ -611,13 +952,20 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 const uint16_t flags)
 {
 	struct cn10k_eth_txq *txq = tx_queue;
-	uintptr_t pa0, pa1, lmt_addr = txq->lmt_base;
+	uintptr_t pa0, pa1, lbase = txq->lmt_base;
 	const rte_iova_t io_addr = txq->io_addr;
 	uint16_t segdw, lmt_id, burst, left, i;
+	uint8_t lnum, c_lnum, c_loff;
+	uintptr_t c_lbase = lbase;
 	uint64_t data0, data1;
+	rte_iova_t c_io_addr;
 	uint64_t lso_tun_fmt;
+	uint8_t shft, c_shft;
 	__uint128_t data128;
-	uint16_t shft;
+	uint16_t c_lmt_id;
+	uint64_t sa_base;
+	uintptr_t laddr;
+	bool sec;
 
 	NIX_XMIT_FC_OR_RETURN(txq, pkts);
 
@@ -630,12 +978,26 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		lso_tun_fmt = txq->lso_tun_fmt;
 
 	/* Get LMT base address and LMT ID as lcore id */
-	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	burst = left > 32 ? 32 : left;
 	shft = 16;
 	data128 = 0;
+
+	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		c_lnum = 0;
+		c_loff = 0;
+		c_shft = 16;
+	}
+
 	for (i = 0; i < burst; i++) {
 		/* Perform header writes for TSO, barrier at
 		 * lmt steorl will suffice.
@@ -643,22 +1005,47 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (flags & NIX_TX_OFFLOAD_TSO_F)
 			cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
 
-		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,
-				       lso_tun_fmt);
+		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,
+				       &sec);
+
+		laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
+
+		/* Prepare CPT instruction and get nixtx addr */
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+			cn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,
+					   &c_lnum, &c_loff, &c_shft, sa_base,
+					   flags);
+
+		/* Move NIX desc to LMT/NIXTX area */
+		cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+
 		/* Store sg list directly on lmt line */
-		segdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)lmt_addr,
+		segdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)laddr,
 					       flags);
-		cn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],
+		cn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],
 					      tx_pkts[i]->ol_flags, segdw,
 					      flags);
-		lmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		data128 |= (((__uint128_t)(segdw - 1)) << shft);
-		shft += 3;
+		if (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec) {
+			lnum++;
+			data128 |= (((__uint128_t)(segdw - 1)) << shft);
+			shft += 3;
+		}
 	}
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+	tx_pkts += burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		/* Exclude packets routed to CPT from the NIX burst */
+		burst -= ((c_lnum << 1) + c_loff);
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+	}
+
 	data0 = (uint64_t)data128;
 	data1 = (uint64_t)(data128 >> 64);
 	/* Make data0 similar to data1 */
@@ -695,16 +1082,9 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		roc_lmt_submit_steorl(data0, pa0);
 	}
 
-	left -= burst;
 	rte_io_wmb();
-	if (left) {
-		/* Start processing another burst */
-		tx_pkts += burst;
-		/* Reset lmt base addr */
-		lmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		lmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));
+	if (left)
 		goto again;
-	}
 
 	return pkts;
 }
@@ -989,6 +1369,90 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,
 	return lmt_used;
 }
 
+static __rte_always_inline void
+cn10k_nix_lmt_next(uint8_t dw, uintptr_t laddr, uint8_t *lnum, uint8_t *loff,
+		   uint8_t *shift, __uint128_t *data128, uintptr_t *next)
+{
+	/* Go to next line if we are out of space */
+	if ((*loff + (dw << 4)) > 128) {
+		*data128 = *data128 |
+			   (((__uint128_t)((*loff >> 4) - 1)) << *shift);
+		*shift = *shift + 3;
+		*loff = 0;
+		*lnum = *lnum + 1;
+	}
+
+	*next = (uintptr_t)LMT_OFF(laddr, *lnum, *loff);
+	*loff = *loff + (dw << 4);
+}
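/* Editor's note: the helper above spills to the next LMT line when the
 * pending descriptor of dw 16B-dwords would overflow the 128B line,
 * recording the finished line's (dwords - 1) size into data128 at the
 * running 3-bit shift before resetting the line offset.
 */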
+
+static __rte_always_inline void
+cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
+		     uint64x2_t cmd0, uint64x2_t cmd1, uint64x2_t cmd2,
+		     uint64x2_t cmd3, const uint16_t flags)
+{
+	uint8_t off;
+
+	/* Handle no fast free when security is enabled without mseg */
+	if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
+	    (flags & NIX_TX_OFFLOAD_SECURITY_F) &&
+	    !(flags & NIX_TX_MULTI_SEG_F)) {
+		union nix_send_sg_s sg;
+
+		sg.u = vgetq_lane_u64(cmd1, 0);
+		sg.u |= (cnxk_nix_prefree_seg(mbuf) << 55);
+		cmd1 = vsetq_lane_u64(sg.u, cmd1, 0);
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+		sg.u = vgetq_lane_u64(cmd1, 0);
+		if (!(sg.u & (1ULL << 55)))
+			__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1,
+						0);
+		rte_io_wmb();
+#endif
+	}
+	if (flags & NIX_TX_MULTI_SEG_F) {
+		if ((flags & NIX_TX_NEED_EXT_HDR) &&
+		    (flags & NIX_TX_OFFLOAD_TSTAMP_F)) {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+			off = segdw - 4;
+			off <<= 4;
+			vst1q_u64(LMT_OFF(laddr, 0, 48 + off), cmd3);
+		} else if (flags & NIX_TX_NEED_EXT_HDR) {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+		} else {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 32),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);
+		}
+	} else if (flags & NIX_TX_NEED_EXT_HDR) {
+		/* Store the prepared send desc to LMT lines */
+		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+			vst1q_u64(LMT_OFF(laddr, 0, 48), cmd3);
+		} else {
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+		}
+	} else {
+		/* Store the prepared send desc to LMT lines */
+		vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+		vst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);
+	}
+}
+
 static __rte_always_inline uint16_t
 cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			   uint16_t pkts, uint64_t *cmd, uintptr_t base,
@@ -998,10 +1462,10 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
 	uint64x2_t cmd0[NIX_DESCS_PER_LOOP], cmd1[NIX_DESCS_PER_LOOP],
 		cmd2[NIX_DESCS_PER_LOOP], cmd3[NIX_DESCS_PER_LOOP];
+	uint16_t left, scalar, burst, i, lmt_id, c_lmt_id;
 	uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3, pa;
 	uint64x2_t senddesc01_w0, senddesc23_w0;
 	uint64x2_t senddesc01_w1, senddesc23_w1;
-	uint16_t left, scalar, burst, i, lmt_id;
 	uint64x2_t sendext01_w0, sendext23_w0;
 	uint64x2_t sendext01_w1, sendext23_w1;
 	uint64x2_t sendmem01_w0, sendmem23_w0;
@@ -1010,12 +1474,16 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64x2_t sgdesc01_w0, sgdesc23_w0;
 	uint64x2_t sgdesc01_w1, sgdesc23_w1;
 	struct cn10k_eth_txq *txq = tx_queue;
-	uintptr_t laddr = txq->lmt_base;
 	rte_iova_t io_addr = txq->io_addr;
+	uintptr_t laddr = txq->lmt_base;
+	uint8_t c_lnum, c_shft, c_loff;
 	uint64x2_t ltypes01, ltypes23;
 	uint64x2_t xtmp128, ytmp128;
 	uint64x2_t xmask01, xmask23;
-	uint8_t lnum, shift;
+	uintptr_t c_laddr = laddr;
+	uint8_t lnum, shift, loff;
+	rte_iova_t c_io_addr;
+	uint64_t sa_base;
 	union wdata {
 		__uint128_t data128;
 		uint64_t data[2];
@@ -1061,19 +1529,36 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	/* Get LMT base address and LMT ID as lcore id */
 	ROC_LMT_BASE_ID_GET(laddr, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_laddr, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	/* Number of packets to prepare depends on offloads enabled. */
 	burst = left > cn10k_nix_pkts_per_vec_brst(flags) ?
 			      cn10k_nix_pkts_per_vec_brst(flags) :
 			      left;
-	if (flags & NIX_TX_MULTI_SEG_F) {
+	if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)) {
 		wd.data128 = 0;
 		shift = 16;
 	}
 	lnum = 0;
+	if (NIX_TX_OFFLOAD_SECURITY_F) {
+		loff = 0;
+		c_loff = 0;
+		c_lnum = 0;
+		c_shft = 16;
+	}
 
 	for (i = 0; i < burst; i += NIX_DESCS_PER_LOOP) {
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && c_lnum + 2 > 16) {
+			burst = i;
+			break;
+		}
+
 		if (flags & NIX_TX_MULTI_SEG_F) {
 			uint8_t j;
 
@@ -1833,7 +2318,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
-		    !(flags & NIX_TX_MULTI_SEG_F)) {
+		    !(flags & NIX_TX_MULTI_SEG_F) &&
+		    !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
 			/* Set don't free bit if reference count > 1 */
 			xmask01 = vdupq_n_u64(0);
 			xmask23 = xmask01;
@@ -1873,7 +2359,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
 			senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-		} else if (!(flags & NIX_TX_MULTI_SEG_F)) {
+		} else if (!(flags & NIX_TX_MULTI_SEG_F) &&
+			   !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
 			/* Move mbufs to iova */
 			mbuf0 = (uint64_t *)tx_pkts[0];
 			mbuf1 = (uint64_t *)tx_pkts[1];
@@ -1918,7 +2405,84 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			cmd2[3] = vzip2q_u64(sendext23_w0, sendext23_w1);
 		}
 
-		if (flags & NIX_TX_MULTI_SEG_F) {
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+			const uint64x2_t olf = {PKT_TX_SEC_OFFLOAD,
+						PKT_TX_SEC_OFFLOAD};
+			uintptr_t next;
+			uint8_t dw;
+
+			/* Extract ol_flags. */
+			xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+			ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+			xtmp128 = vtstq_u64(olf, xtmp128);
+			ytmp128 = vtstq_u64(olf, ytmp128);
+
+			/* Process mbuf0 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[0]);
+			if (vgetq_lane_u64(xtmp128, 0))
+				cn10k_nix_prep_sec_vec(tx_pkts[0], &cmd0[0],
+						       &cmd1[0], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf0 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[0], segdw[0], next,
+					     cmd0[0], cmd1[0], cmd2[0], cmd3[0],
+					     flags);
+
+			/* Process mbuf1 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[1]);
+			if (vgetq_lane_u64(xtmp128, 1))
+				cn10k_nix_prep_sec_vec(tx_pkts[1], &cmd0[1],
+						       &cmd1[1], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf1 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[1], segdw[1], next,
+					     cmd0[1], cmd1[1], cmd2[1], cmd3[1],
+					     flags);
+
+			/* Process mbuf2 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[2]);
+			if (vgetq_lane_u64(ytmp128, 0))
+				cn10k_nix_prep_sec_vec(tx_pkts[2], &cmd0[2],
+						       &cmd1[2], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf2 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[2], segdw[2], next,
+					     cmd0[2], cmd1[2], cmd2[2], cmd3[2],
+					     flags);
+
+			/* Process mbuf3 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[3]);
+			if (vgetq_lane_u64(ytmp128, 1))
+				cn10k_nix_prep_sec_vec(tx_pkts[3], &cmd0[3],
+						       &cmd1[3], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf3 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[3], segdw[3], next,
+					     cmd0[3], cmd1[3], cmd2[3], cmd3[3],
+					     flags);
+
+		} else if (flags & NIX_TX_MULTI_SEG_F) {
 			uint8_t j;
 
 			segdw[4] = 8;
@@ -1982,21 +2546,35 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
 	}
 
-	if (flags & NIX_TX_MULTI_SEG_F)
+	/* Roundup lnum to last line if it is partial */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		lnum = lnum + !!loff;
+		wd.data128 = wd.data128 |
+			(((__uint128_t)(((loff >> 4) - 1) & 0x7) << shift));
+	}
+
+	if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 		wd.data[0] >>= 16;
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+
 	/* Trigger LMTST */
 	if (lnum > 16) {
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[0] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[0] & 0x7) << 4;
 		wd.data[0] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[0] <<= 16;
 
 		wd.data[0] |= (15ULL << 12);
@@ -2005,13 +2583,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* STEOR0 */
 		roc_lmt_submit_steorl(wd.data[0], pa);
 
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[1] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[1] & 0x7) << 4;
 		wd.data[1] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[1] <<= 16;
 
 		wd.data[1] |= ((uint64_t)(lnum - 17)) << 12;
@@ -2020,13 +2598,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* STEOR1 */
 		roc_lmt_submit_steorl(wd.data[1], pa);
 	} else if (lnum) {
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[0] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[0] & 0x7) << 4;
 		wd.data[0] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[0] <<= 16;
 
 		wd.data[0] |= ((uint64_t)(lnum - 1)) << 12;
@@ -2036,7 +2614,6 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		roc_lmt_submit_steorl(wd.data[0], pa);
 	}
 
-	left -= burst;
 	rte_io_wmb();
 	if (left)
 		goto again;
@@ -2076,139 +2653,269 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 #define NOFF_F	     NIX_TX_OFFLOAD_MBUF_NOFF_F
 #define TSO_F	     NIX_TX_OFFLOAD_TSO_F
 #define TSP_F	     NIX_TX_OFFLOAD_TSTAMP_F
+#define T_SEC_F      NIX_TX_OFFLOAD_SECURITY_F
 
-/* [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+/* [T_SEC_F] [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
 #define NIX_TX_FASTPATH_MODES						\
-T(no_offload,				0, 0, 0, 0, 0, 0,	4,	\
+T(no_offload,				0, 0, 0, 0, 0, 0, 0,	4,	\
 		NIX_TX_OFFLOAD_NONE)					\
-T(l3l4csum,				0, 0, 0, 0, 0, 1,	4,	\
+T(l3l4csum,				0, 0, 0, 0, 0, 0, 1,	4,	\
 		L3L4CSUM_F)						\
-T(ol3ol4csum,				0, 0, 0, 0, 1, 0,	4,	\
+T(ol3ol4csum,				0, 0, 0, 0, 0, 1, 0,	4,	\
 		OL3OL4CSUM_F)						\
-T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 1, 1,	4,	\
+T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 0, 1, 1,	4,	\
 		OL3OL4CSUM_F | L3L4CSUM_F)				\
-T(vlan,					0, 0, 0, 1, 0, 0,	6,	\
+T(vlan,					0, 0, 0, 0, 1, 0, 0,	6,	\
 		VLAN_F)							\
-T(vlan_l3l4csum,			0, 0, 0, 1, 0, 1,	6,	\
+T(vlan_l3l4csum,			0, 0, 0, 0, 1, 0, 1,	6,	\
 		VLAN_F | L3L4CSUM_F)					\
-T(vlan_ol3ol4csum,			0, 0, 0, 1, 1, 0,	6,	\
+T(vlan_ol3ol4csum,			0, 0, 0, 0, 1, 1, 0,	6,	\
 		VLAN_F | OL3OL4CSUM_F)					\
-T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 1, 1,	6,	\
+T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 0, 1, 1, 1,	6,	\
 		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
-T(noff,					0, 0, 1, 0, 0, 0,	4,	\
+T(noff,					0, 0, 0, 1, 0, 0, 0,	4,	\
 		NOFF_F)							\
-T(noff_l3l4csum,			0, 0, 1, 0, 0, 1,	4,	\
+T(noff_l3l4csum,			0, 0, 0, 1, 0, 0, 1,	4,	\
 		NOFF_F | L3L4CSUM_F)					\
-T(noff_ol3ol4csum,			0, 0, 1, 0, 1, 0,	4,	\
+T(noff_ol3ol4csum,			0, 0, 0, 1, 0, 1, 0,	4,	\
 		NOFF_F | OL3OL4CSUM_F)					\
-T(noff_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1,	4,	\
+T(noff_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 0, 1, 1,	4,	\
 		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
-T(noff_vlan,				0, 0, 1, 1, 0, 0,	6,	\
+T(noff_vlan,				0, 0, 0, 1, 1, 0, 0,	6,	\
 		NOFF_F | VLAN_F)					\
-T(noff_vlan_l3l4csum,			0, 0, 1, 1, 0, 1,	6,	\
+T(noff_vlan_l3l4csum,			0, 0, 0, 1, 1, 0, 1,	6,	\
 		NOFF_F | VLAN_F | L3L4CSUM_F)				\
-T(noff_vlan_ol3ol4csum,			0, 0, 1, 1, 1, 0,	6,	\
+T(noff_vlan_ol3ol4csum,			0, 0, 0, 1, 1, 1, 0,	6,	\
 		NOFF_F | VLAN_F | OL3OL4CSUM_F)				\
-T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1,	6,	\
+T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 0, 1, 1, 1, 1,	6,	\
 		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
-T(tso,					0, 1, 0, 0, 0, 0,	6,	\
+T(tso,					0, 0, 1, 0, 0, 0, 0,	6,	\
 		TSO_F)							\
-T(tso_l3l4csum,				0, 1, 0, 0, 0, 1,	6,	\
+T(tso_l3l4csum,				0, 0, 1, 0, 0, 0, 1,	6,	\
 		TSO_F | L3L4CSUM_F)					\
-T(tso_ol3ol4csum,			0, 1, 0, 0, 1, 0,	6,	\
+T(tso_ol3ol4csum,			0, 0, 1, 0, 0, 1, 0,	6,	\
 		TSO_F | OL3OL4CSUM_F)					\
-T(tso_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1,	6,	\
+T(tso_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 0, 1, 1,	6,	\
 		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
-T(tso_vlan,				0, 1, 0, 1, 0, 0,	6,	\
+T(tso_vlan,				0, 0, 1, 0, 1, 0, 0,	6,	\
 		TSO_F | VLAN_F)						\
-T(tso_vlan_l3l4csum,			0, 1, 0, 1, 0, 1,	6,	\
+T(tso_vlan_l3l4csum,			0, 0, 1, 0, 1, 0, 1,	6,	\
 		TSO_F | VLAN_F | L3L4CSUM_F)				\
-T(tso_vlan_ol3ol4csum,			0, 1, 0, 1, 1, 0,	6,	\
+T(tso_vlan_ol3ol4csum,			0, 0, 1, 0, 1, 1, 0,	6,	\
 		TSO_F | VLAN_F | OL3OL4CSUM_F)				\
-T(tso_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 1, 1,	6,	\
+T(tso_vlan_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1, 1,	6,	\
 		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(tso_noff,				0, 1, 1, 0, 0, 0,	6,	\
+T(tso_noff,				0, 0, 1, 1, 0, 0, 0,	6,	\
 		TSO_F | NOFF_F)						\
-T(tso_noff_l3l4csum,			0, 1, 1, 0, 0, 1,	6,	\
+T(tso_noff_l3l4csum,			0, 0, 1, 1, 0, 0, 1,	6,	\
 		TSO_F | NOFF_F | L3L4CSUM_F)				\
-T(tso_noff_ol3ol4csum,			0, 1, 1, 0, 1, 0,	6,	\
+T(tso_noff_ol3ol4csum,			0, 0, 1, 1, 0, 1, 0,	6,	\
 		TSO_F | NOFF_F | OL3OL4CSUM_F)				\
-T(tso_noff_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 1, 1,	6,	\
+T(tso_noff_ol3ol4csum_l3l4csum,		0, 0, 1, 1, 0, 1, 1,	6,	\
 		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(tso_noff_vlan,			0, 1, 1, 1, 0, 0,	6,	\
+T(tso_noff_vlan,			0, 0, 1, 1, 1, 0, 0,	6,	\
 		TSO_F | NOFF_F | VLAN_F)				\
-T(tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 0, 1,	6,	\
+T(tso_noff_vlan_l3l4csum,		0, 0, 1, 1, 1, 0, 1,	6,	\
 		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
-T(tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 0,	6,	\
+T(tso_noff_vlan_ol3ol4csum,		0, 0, 1, 1, 1, 1, 0,	6,	\
 		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
-T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1,	6,	\
+T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1, 1,	6,	\
 		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
-T(ts,					1, 0, 0, 0, 0, 0,	8,	\
+T(ts,					0, 1, 0, 0, 0, 0, 0,	8,	\
 		TSP_F)							\
-T(ts_l3l4csum,				1, 0, 0, 0, 0, 1,	8,	\
+T(ts_l3l4csum,				0, 1, 0, 0, 0, 0, 1,	8,	\
 		TSP_F | L3L4CSUM_F)					\
-T(ts_ol3ol4csum,			1, 0, 0, 0, 1, 0,	8,	\
+T(ts_ol3ol4csum,			0, 1, 0, 0, 0, 1, 0,	8,	\
 		TSP_F | OL3OL4CSUM_F)					\
-T(ts_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1,	8,	\
+T(ts_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 0, 1, 1,	8,	\
 		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
-T(ts_vlan,				1, 0, 0, 1, 0, 0,	8,	\
+T(ts_vlan,				0, 1, 0, 0, 1, 0, 0,	8,	\
 		TSP_F | VLAN_F)						\
-T(ts_vlan_l3l4csum,			1, 0, 0, 1, 0, 1,	8,	\
+T(ts_vlan_l3l4csum,			0, 1, 0, 0, 1, 0, 1,	8,	\
 		TSP_F | VLAN_F | L3L4CSUM_F)				\
-T(ts_vlan_ol3ol4csum,			1, 0, 0, 1, 1, 0,	8,	\
+T(ts_vlan_ol3ol4csum,			0, 1, 0, 0, 1, 1, 0,	8,	\
 		TSP_F | VLAN_F | OL3OL4CSUM_F)				\
-T(ts_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 1, 1,	8,	\
+T(ts_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1, 1,	8,	\
 		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(ts_noff,				1, 0, 1, 0, 0, 0,	8,	\
+T(ts_noff,				0, 1, 0, 1, 0, 0, 0,	8,	\
 		TSP_F | NOFF_F)						\
-T(ts_noff_l3l4csum,			1, 0, 1, 0, 0, 1,	8,	\
+T(ts_noff_l3l4csum,			0, 1, 0, 1, 0, 0, 1,	8,	\
 		TSP_F | NOFF_F | L3L4CSUM_F)				\
-T(ts_noff_ol3ol4csum,			1, 0, 1, 0, 1, 0,	8,	\
+T(ts_noff_ol3ol4csum,			0, 1, 0, 1, 0, 1, 0,	8,	\
 		TSP_F | NOFF_F | OL3OL4CSUM_F)				\
-T(ts_noff_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 1, 1,	8,	\
+T(ts_noff_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 0, 1, 1,	8,	\
 		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(ts_noff_vlan,				1, 0, 1, 1, 0, 0,	8,	\
+T(ts_noff_vlan,				0, 1, 0, 1, 1, 0, 0,	8,	\
 		TSP_F | NOFF_F | VLAN_F)				\
-T(ts_noff_vlan_l3l4csum,		1, 0, 1, 1, 0, 1,	8,	\
+T(ts_noff_vlan_l3l4csum,		0, 1, 0, 1, 1, 0, 1,	8,	\
 		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
-T(ts_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 0,	8,	\
+T(ts_noff_vlan_ol3ol4csum,		0, 1, 0, 1, 1, 1, 0,	8,	\
 		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
-T(ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 1, 1,	8,	\
+T(ts_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 0, 1, 1, 1, 1,	8,	\
 		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
-T(ts_tso,				1, 1, 0, 0, 0, 0,	8,	\
+T(ts_tso,				0, 1, 1, 0, 0, 0, 0,	8,	\
 		TSP_F | TSO_F)						\
-T(ts_tso_l3l4csum,			1, 1, 0, 0, 0, 1,	8,	\
+T(ts_tso_l3l4csum,			0, 1, 1, 0, 0, 0, 1,	8,	\
 		TSP_F | TSO_F | L3L4CSUM_F)				\
-T(ts_tso_ol3ol4csum,			1, 1, 0, 0, 1, 0,	8,	\
+T(ts_tso_ol3ol4csum,			0, 1, 1, 0, 0, 1, 0,	8,	\
 		TSP_F | TSO_F | OL3OL4CSUM_F)				\
-T(ts_tso_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 1, 1,	8,	\
+T(ts_tso_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 0, 1, 1,	8,	\
 		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
-T(ts_tso_vlan,				1, 1, 0, 1, 0, 0,	8,	\
+T(ts_tso_vlan,				0, 1, 1, 0, 1, 0, 0,	8,	\
 		TSP_F | TSO_F | VLAN_F)					\
-T(ts_tso_vlan_l3l4csum,			1, 1, 0, 1, 0, 1,	8,	\
+T(ts_tso_vlan_l3l4csum,			0, 1, 1, 0, 1, 0, 1,	8,	\
 		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
-T(ts_tso_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 0,	8,	\
+T(ts_tso_vlan_ol3ol4csum,		0, 1, 1, 0, 1, 1, 0,	8,	\
 		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			\
-T(ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1,	8,	\
+T(ts_tso_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 0, 1, 1, 1,	8,	\
 		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
-T(ts_tso_noff,				1, 1, 1, 0, 0, 0,	8,	\
+T(ts_tso_noff,				0, 1, 1, 1, 0, 0, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F)					\
-T(ts_tso_noff_l3l4csum,			1, 1, 1, 0, 0, 1,	8,	\
+T(ts_tso_noff_l3l4csum,			0, 1, 1, 1, 0, 0, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
-T(ts_tso_noff_ol3ol4csum,		1, 1, 1, 0, 1, 0,	8,	\
+T(ts_tso_noff_ol3ol4csum,		0, 1, 1, 1, 0, 1, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			\
-T(ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1,	8,	\
+T(ts_tso_noff_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 0, 1, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
-T(ts_tso_noff_vlan,			1, 1, 1, 1, 0, 0,	8,	\
+T(ts_tso_noff_vlan,			0, 1, 1, 1, 1, 0, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F)			\
-T(ts_tso_noff_vlan_l3l4csum,		1, 1, 1, 1, 0, 1,	8,	\
+T(ts_tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 1, 0, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
-T(ts_tso_noff_vlan_ol3ol4csum,		1, 1, 1, 1, 1, 0,	8,	\
+T(ts_tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 1, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
-T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 1, 1,	8,	\
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec,					1, 0, 0, 0, 0, 0, 0,	4,	\
+		T_SEC_F)						\
+T(sec_l3l4csum,				1, 0, 0, 0, 0, 0, 1,	4,	\
+		T_SEC_F | L3L4CSUM_F)					\
+T(sec_ol3ol4csum,			1, 0, 0, 0, 0, 1, 0,	4,	\
+		T_SEC_F | OL3OL4CSUM_F)					\
+T(sec_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 0, 1, 1,	4,	\
+		T_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(sec_vlan,				1, 0, 0, 0, 1, 0, 0,	6,	\
+		T_SEC_F | VLAN_F)					\
+T(sec_vlan_l3l4csum,			1, 0, 0, 0, 1, 0, 1,	6,	\
+		T_SEC_F | VLAN_F | L3L4CSUM_F)				\
+T(sec_vlan_ol3ol4csum,			1, 0, 0, 0, 1, 1, 0,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F)			\
+T(sec_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1, 1,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff,				1, 0, 0, 1, 0, 0, 0,	4,	\
+		T_SEC_F | NOFF_F)					\
+T(sec_noff_l3l4csum,			1, 0, 0, 1, 0, 0, 1,	4,	\
+		T_SEC_F | NOFF_F | L3L4CSUM_F)				\
+T(sec_noff_ol3ol4csum,			1, 0, 0, 1, 0, 1, 0,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F)			\
+T(sec_noff_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 0, 1, 1,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff_vlan,			1, 0, 0, 1, 1, 0, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F)				\
+T(sec_noff_vlan_l3l4csum,		1, 0, 0, 1, 1, 0, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_noff_vlan_ol3ol4csum,		1, 0, 0, 1, 1, 1, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 0, 1, 1, 1, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso,				1, 0, 1, 0, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F)					\
+T(sec_tso_l3l4csum,			1, 0, 1, 0, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | L3L4CSUM_F)				\
+T(sec_tso_ol3ol4csum,			1, 0, 1, 0, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F)				\
+T(sec_tso_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_tso_vlan,				1, 0, 1, 0, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F)				\
+T(sec_tso_vlan_l3l4csum,		1, 0, 1, 0, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_tso_vlan_ol3ol4csum,		1, 0, 1, 0, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_tso_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 0, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff,				1, 0, 1, 1, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F)				\
+T(sec_tso_noff_l3l4csum,		1, 0, 1, 1, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_tso_noff_ol3ol4csum,		1, 0, 1, 1, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_tso_noff_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff_vlan,			1, 0, 1, 1, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F)			\
+T(sec_tso_noff_vlan_l3l4csum,		1, 0, 1, 1, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_tso_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts,				1, 1, 0, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F)					\
+T(sec_ts_l3l4csum,			1, 1, 0, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | L3L4CSUM_F)				\
+T(sec_ts_ol3ol4csum,			1, 1, 0, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F)				\
+T(sec_ts_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_ts_vlan,				1, 1, 0, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F)				\
+T(sec_ts_vlan_l3l4csum,			1, 1, 0, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_ts_vlan_ol3ol4csum,		1, 1, 0, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_ts_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff,				1, 1, 0, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F)				\
+T(sec_ts_noff_l3l4csum,			1, 1, 0, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_ts_noff_ol3ol4csum,		1, 1, 0, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_ts_noff_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff_vlan,			1, 1, 0, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F)			\
+T(sec_ts_noff_vlan_l3l4csum,		1, 1, 0, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_noff_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso,				1, 1, 1, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F)				\
+T(sec_ts_tso_l3l4csum,			1, 1, 1, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | L3L4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum,		1, 1, 1, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_tso_vlan,			1, 1, 1, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F)			\
+T(sec_ts_tso_vlan_l3l4csum,		1, 1, 1, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_tso_vlan_ol3ol4csum,		1, 1, 1, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff,			1, 1, 1, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F)			\
+T(sec_ts_tso_noff_l3l4csum,		1, 1, 1, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)		\
+T(sec_ts_tso_noff_ol3ol4csum,		1, 1, 1, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso_noff_vlan,			1, 1, 1, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F)		\
+T(sec_ts_tso_noff_vlan_l3l4csum,	1, 1, 1, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)	\
+T(sec_ts_tso_noff_vlan_ol3ol4csum,	1, 1, 1, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(sec_ts_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 1, 1, 8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
+		L3L4CSUM_F)
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(          \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn10k_tx_mseg.c b/drivers/net/cnxk/cn10k_tx_mseg.c
index 4ea4c8a..2b83409 100644
--- a/drivers/net/cnxk/cn10k_tx_mseg.c
+++ b/drivers/net/cnxk/cn10k_tx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_xmit_pkts_mseg_##name(void *tx_queue,                \
 						struct rte_mbuf **tx_pkts,     \
diff --git a/drivers/net/cnxk/cn10k_tx_vec.c b/drivers/net/cnxk/cn10k_tx_vec.c
index a035049..2789b13 100644
--- a/drivers/net/cnxk/cn10k_tx_vec.c
+++ b/drivers/net/cnxk/cn10k_tx_vec.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_xmit_pkts_vec_##name(void *tx_queue,                 \
 					       struct rte_mbuf **tx_pkts,      \
diff --git a/drivers/net/cnxk/cn10k_tx_vec_mseg.c b/drivers/net/cnxk/cn10k_tx_vec_mseg.c
index 7f98f79..98000df 100644
--- a/drivers/net/cnxk/cn10k_tx_vec_mseg.c
+++ b/drivers/net/cnxk/cn10k_tx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_vec_mseg_##name( \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread
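
The NIX_TX_FASTPATH_MODES table extended above is an X-macro: each T() row
encodes one combination of Tx offload flags plus a descriptor size, and the
same list is re-expanded with different T() definitions (here and in
cn10k_tx_mseg.c, cn10k_tx_vec.c and cn10k_tx_vec_mseg.c) to emit one
specialized function per combination. A minimal stand-alone sketch of the
pattern, using illustrative names rather than the driver's symbols:

  #include <stdint.h>

  /* Two example rows; the real table has one row per flag combination. */
  #define SKETCH_FASTPATH_MODES						\
  T(no_offload,	0x0)							\
  T(l3l4csum,	0x1)

  static inline uint16_t
  xmit_body(uint16_t pkts, const uint16_t flags)
  {
  	if (flags & 0x1) {
  		/* Offload-specific preparation would go here; since
  		 * 'flags' is a compile-time constant in each expansion,
  		 * untaken branches are eliminated entirely.
  		 */
  	}
  	return pkts;
  }

  #define T(name, flags)						\
  static uint16_t xmit_##name(uint16_t pkts)				\
  {									\
  	return xmit_body(pkts, (flags));				\
  }
  SKETCH_FASTPATH_MODES
  #undef T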

* [dpdk-dev] [PATCH 21/27] net/cnxk: add cn9k anti replay support for security offload
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (19 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 20/27] net/cnxk: add cn10k Tx " Nithin Dabilpuram
@ 2021-09-02  2:14 ` Nithin Dabilpuram
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 22/27] net/cnxk: add cn10k IPsec transport mode support Nithin Dabilpuram
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:14 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

From: Srujana Challa <schalla@marvell.com>

Adds anti-replay support for the cn9k platform.
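
The window logic added below (ar_window_init() plus ipsec_antireplay_check())
is the usual RFC 4303 sliding-window scheme. A minimal stand-alone sketch of
the idea, assuming a fixed 64-packet window and no ESN (the driver supports
configurable window sizes and ESN via cnxk_security_ar.h):

  #include <stdbool.h>
  #include <stdint.h>

  struct ar_win {
  	uint64_t top;    /* highest sequence number accepted so far */
  	uint64_t bitmap; /* bit i set => sequence (top - i) was seen */
  };

  static bool
  ar_check_and_update(struct ar_win *w, uint64_t seq)
  {
  	if (seq == 0)
  		return false;		/* ESP sequence 0 is invalid */
  	if (seq > w->top) {		/* advances the right edge */
  		uint64_t shift = seq - w->top;

  		w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
  		w->bitmap |= 1;
  		w->top = seq;
  		return true;
  	}
  	if (w->top - seq >= 64)
  		return false;		/* older than the window */
  	if (w->bitmap & (1ULL << (w->top - seq)))
  		return false;		/* replay */
  	w->bitmap |= 1ULL << (w->top - seq);
  	return true;
  }

As in the patch, the caller must serialize updates; the driver takes a
per-SA spinlock around cnxk_on_anti_replay_check().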

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn9k_ethdev.h     |  3 +++
 drivers/net/cnxk/cn9k_ethdev_sec.c | 29 ++++++++++++++++++++
 drivers/net/cnxk/cn9k_rx.h         | 54 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index f8818b8..2b452fe 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -6,6 +6,7 @@
 
 #include <cnxk_ethdev.h>
 #include <cnxk_security.h>
+#include <cnxk_security_ar.h>
 
 struct cn9k_eth_txq {
 	uint64_t cmd[8];
@@ -40,6 +41,8 @@ struct cn9k_eth_rxq {
 /* Private data in sw rsvd area of struct roc_onf_ipsec_inb_sa */
 struct cn9k_inb_priv_data {
 	void *userdata;
+	uint32_t replay_win_sz;
+	struct cnxk_on_ipsec_ar ar;
 	struct cnxk_eth_sec_sess *eth_sec;
 };
 
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
index 3ec7497..deb1daf 100644
--- a/drivers/net/cnxk/cn9k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -73,6 +73,27 @@ static const struct rte_security_capability cn9k_eth_sec_capabilities[] = {
 	}
 };
 
+static inline int
+ar_window_init(struct cn9k_inb_priv_data *inb_priv)
+{
+	if (inb_priv->replay_win_sz > CNXK_ON_AR_WIN_SIZE_MAX) {
+		plt_err("Replay window size:%u is not supported",
+			inb_priv->replay_win_sz);
+		return -ENOTSUP;
+	}
+
+	rte_spinlock_init(&inb_priv->ar.lock);
+	/*
+	 * Set window bottom to 1, base and top to size of
+	 * window
+	 */
+	inb_priv->ar.winb = 1;
+	inb_priv->ar.wint = inb_priv->replay_win_sz;
+	inb_priv->ar.base = inb_priv->replay_win_sz;
+
+	return 0;
+}
+
 static int
 cn9k_eth_sec_session_create(void *device,
 			    struct rte_security_session_conf *conf,
@@ -158,6 +179,14 @@ cn9k_eth_sec_session_create(void *device,
 		/* Save userdata in inb private area */
 		inb_priv->userdata = conf->userdata;
 
+		inb_priv->replay_win_sz = ipsec->replay_win_sz;
+		if (inb_priv->replay_win_sz) {
+			rc = ar_window_init(inb_priv);
+			if (rc)
+				goto mempool_put;
+		}
+
+		/* Prepare session priv */
 		sess_priv.inb_sa = 1;
 		sess_priv.sa_idx = ipsec->spi;
 
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index bdedeab..7ab415a 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -31,6 +31,9 @@
 #define CQE_CAST(x)	     ((struct nix_cqe_hdr_s *)(x))
 #define CQE_SZ(x)	     ((x) * CNXK_NIX_CQ_ENTRY_SZ)
 
+#define IPSEC_SQ_LO_IDX 4
+#define IPSEC_SQ_HI_IDX 8
+
 union mbuf_initializer {
 	struct {
 		uint16_t data_off;
@@ -166,6 +169,48 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	mbuf->next = NULL;
 }
 
+static inline int
+ipsec_antireplay_check(struct roc_onf_ipsec_inb_sa *sa,
+		       struct cn9k_inb_priv_data *priv, uintptr_t data,
+		       uint32_t win_sz)
+{
+	struct cnxk_on_ipsec_ar *ar = &priv->ar;
+	uint64_t seq_in_sa;
+	uint32_t seqh = 0;
+	uint32_t seql;
+	uint64_t seq;
+	uint8_t esn;
+	int rc;
+
+	esn = sa->ctl.esn_en;
+	seql = rte_be_to_cpu_32(*((uint32_t *)(data + IPSEC_SQ_LO_IDX)));
+
+	if (!esn) {
+		seq = (uint64_t)seql;
+	} else {
+		seqh = rte_be_to_cpu_32(*((uint32_t *)(data +
+					IPSEC_SQ_HI_IDX)));
+		seq = ((uint64_t)seqh << 32) | seql;
+	}
+
+	if (unlikely(seq == 0))
+		return -1;
+
+	rte_spinlock_lock(&ar->lock);
+	rc = cnxk_on_anti_replay_check(seq, ar, win_sz);
+	if (esn && !rc) {
+		seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) |
+			    rte_be_to_cpu_32(sa->esn_low);
+		if (seq > seq_in_sa) {
+			sa->esn_low = rte_cpu_to_be_32(seql);
+			sa->esn_hi = rte_cpu_to_be_32(seqh);
+		}
+	}
+	rte_spinlock_unlock(&ar->lock);
+
+	return rc;
+}
+
 static __rte_always_inline uint64_t
 nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 		       uintptr_t sa_base, uint64_t *rearm_val, uint16_t *len)
@@ -178,8 +223,8 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	uint8_t lcptr = rx->lcptr;
 	struct rte_ipv4_hdr *ipv4;
 	uint16_t data_off, res;
+	uint32_t spi, win_sz;
 	uint32_t spi_mask;
-	uint32_t spi;
 	uintptr_t data;
 	__uint128_t dw;
 	uint8_t sa_w;
@@ -209,6 +254,13 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	dw = *(__uint128_t *)sa_priv;
 	*rte_security_dynfield(m) = (uint64_t)dw;
 
+	/* Check if anti-replay is enabled */
+	win_sz = (uint32_t)(dw >> 64);
+	if (win_sz) {
+		if (ipsec_antireplay_check(sa, sa_priv, data, win_sz) < 0)
+			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+	}
+
 	/* Get total length from IPv4 header. We can assume only IPv4 */
 	ipv4 = (struct rte_ipv4_hdr *)(data + ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
 				       ROC_ONF_IPSEC_INB_MAX_L2_SZ);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 22/27] net/cnxk: add cn10k IPsec transport mode support
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (20 preceding siblings ...)
  2021-09-02  2:14 ` [dpdk-dev] [PATCH 21/27] net/cnxk: add cn9k anti replay " Nithin Dabilpuram
@ 2021-09-02  2:15 ` Nithin Dabilpuram
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 23/27] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:15 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

From: Srujana Challa <schalla@marvell.com>

Adds IPsec transport mode support to the rte_security
capabilities.
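
Applications can probe for these new entries at runtime. A minimal sketch,
assuming the context came from rte_eth_dev_get_sec_ctx() (error handling
trimmed):

  #include <stdbool.h>
  #include <rte_security.h>

  static bool
  port_supports_esp_transport(struct rte_security_ctx *ctx,
  			    enum rte_security_ipsec_sa_direction dir)
  {
  	const struct rte_security_capability *cap =
  		rte_security_capabilities_get(ctx);

  	/* The capability array ends with an ACTION_TYPE_NONE entry. */
  	for (; cap && cap->action != RTE_SECURITY_ACTION_TYPE_NONE; cap++) {
  		if (cap->action == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
  		    cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC &&
  		    cap->ipsec.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP &&
  		    cap->ipsec.mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT &&
  		    cap->ipsec.direction == dir)
  			return true;
  	}
  	return false;
  }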

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev_sec.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 3ffd824..dae5ea7 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -69,6 +69,30 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
 		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
 		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
 	},
+	{	/* IPsec Inline Protocol ESP Transport Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Transport Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
 	{
 		.action = RTE_SECURITY_ACTION_TYPE_NONE
 	}
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 23/27] net/cnxk: update ethertype for mixed IPsec tunnel versions
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (21 preceding siblings ...)
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 22/27] net/cnxk: add cn10k IPsec transport mode support Nithin Dabilpuram
@ 2021-09-02  2:15 ` Nithin Dabilpuram
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 24/27] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
                   ` (6 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:15 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

From: Srujana Challa <schalla@marvell.com>

Adds support to update the ethertype for mixed IPsec tunnel
versions and also sets et_ovrwr for inbound IPsec.
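
A stand-alone illustration of the outbound fixup done in cn10k_nix_prep_sec()
and cn10k_nix_prep_sec_vec() below: in tunnel mode the outer IP version
chosen by the SA may differ from the inner packet's, so the two ethertype
bytes immediately preceding the outer IP header are rewritten (hypothetical
helper, mirroring the dptr - 2 write in the patch):

  #include <stdbool.h>
  #include <stdint.h>
  #include <rte_byteorder.h>
  #include <rte_ether.h>

  static void
  fixup_outer_ethertype(uint8_t *l3_hdr, bool outer_is_v4)
  {
  	/* l3_hdr points at the (future) outer IP header; the ethertype
  	 * sits in the two bytes just before it.
  	 */
  	uint16_t *etype = (uint16_t *)(l3_hdr - 2);

  	*etype = rte_cpu_to_be_16(outer_is_v4 ? RTE_ETHER_TYPE_IPV4 :
  						RTE_ETHER_TYPE_IPV6);
  }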

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/common/cnxk/cnxk_security.c |  1 +
 drivers/net/cnxk/cn10k_ethdev.h     |  3 ++-
 drivers/net/cnxk/cn10k_ethdev_sec.c |  2 ++
 drivers/net/cnxk/cn10k_tx.h         | 19 +++++++++++++++++++
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index c25b3fd..90b5205 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -239,6 +239,7 @@ cnxk_ot_ipsec_inb_sa_fill(struct roc_ot_ipsec_inb_sa *sa,
 	/* There are two words of CPT_CTX_HW_S for ucode to skip */
 	sa->w0.s.ctx_hdr_size = 1;
 	sa->w0.s.aop_valid = 1;
+	sa->w0.s.et_ovrwr = 1;
 
 	rte_wmb();
 
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 200cd93..c2a46ad 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -64,7 +64,8 @@ struct cn10k_sec_sess_priv {
 		struct {
 			uint32_t sa_idx;
 			uint8_t inb_sa : 1;
-			uint8_t rsvd1 : 2;
+			uint8_t outer_ip_ver : 1;
+			uint8_t mode : 1;
 			uint8_t roundup_byte : 5;
 			uint8_t roundup_len;
 			uint16_t partial_len;
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index dae5ea7..c66730a 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -341,6 +341,8 @@ cn10k_eth_sec_session_create(void *device,
 		sess_priv.roundup_byte = rlens->roundup_byte;
 		sess_priv.roundup_len = rlens->roundup_len;
 		sess_priv.partial_len = rlens->partial_len;
+		sess_priv.mode = outb_sa->w2.s.ipsec_mode;
+		sess_priv.outer_ip_ver = outb_sa->w2.s.outer_ip_ver;
 
 		/* Pointer from eth_sec -> outb_sa */
 		eth_sec->sa = outb_sa;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 70ba929..f56aa8e 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -302,6 +302,16 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
 	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
 
 	dptr += l2_len;
+
+	if (sess_priv.mode == ROC_IE_SA_MODE_TUNNEL) {
+		if (sess_priv.outer_ip_ver == ROC_IE_SA_IP_VERSION_4)
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		else
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	}
+
 	ucode_cmd[1] = dptr;
 	ucode_cmd[2] = dptr;
 
@@ -396,6 +406,15 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
 	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
 
 	dptr += l2_len;
+
+	if (sess_priv.mode == ROC_IE_SA_MODE_TUNNEL) {
+		if (sess_priv.outer_ip_ver == ROC_IE_SA_IP_VERSION_4)
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		else
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	}
 	ucode_cmd[1] = dptr;
 	ucode_cmd[2] = dptr;
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 24/27] net/cnxk: allow zero udp6 checksum for non inline device
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (22 preceding siblings ...)
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 23/27] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
@ 2021-09-02  2:15 ` Nithin Dabilpuram
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 25/27] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:15 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

From: Srujana Challa <schalla@marvell.com>

Sets IP6_UDP_OPT in the NIX RX config to allow an optional
(zero) UDP checksum for IPv6 in case of security offload.
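
RFC 6935/6936 permit a zero UDP checksum for tunnel protocols over IPv6
(for example UDP-encapsulated ESP); without this flag the NIX Rx checksum
checker would mark such packets as errored before they reach the inline
IPsec path. A minimal sketch of the configure-time composition, using the
ROC flag names from the patch and eliding the other rx_cfg bits:

  static uint64_t
  build_rx_cfg(uint64_t rx_offloads)
  {
  	uint64_t rx_cfg = ROC_NIX_LF_RX_CFG_LEN_OL3 |
  			  ROC_NIX_LF_RX_CFG_LEN_OL4;

  	/* Treat UDP checksum 0 on IPv6 as valid (RFC 6936) so that
  	 * UDP-encapsulated ESP is not dropped before decryption.
  	 */
  	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY)
  		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
  	return rx_cfg;
  }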

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cnxk_ethdev.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 60a4df5..8a102aa 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1005,6 +1005,9 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		   ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
 		   ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
+
 	nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
 	nb_txq = RTE_MAX(data->nb_tx_queues, 1);
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 25/27] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (23 preceding siblings ...)
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 24/27] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
@ 2021-09-02  2:15 ` Nithin Dabilpuram
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 26/27] net/cnxk: add devargs for configuring channel mask Nithin Dabilpuram
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:15 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

From: Srujana Challa <schalla@marvell.com>

Adds capabilities for AES_CBC and HMAC_SHA1 for cn9k and cn10k
security offload.
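
The increment fields define which sizes between min and max are legal; the
AES-CBC entry below (min 16, max 32, increment 8) accepts exactly 16-, 24-
and 32-byte keys. A minimal sketch of how such a range is validated:

  #include <stdbool.h>
  #include <rte_cryptodev.h>

  static bool
  param_len_ok(const struct rte_crypto_param_range *r, uint16_t len)
  {
  	if (len < r->min || len > r->max)
  		return false;
  	if (r->increment == 0)
  		return len == r->min;	/* single legal value */
  	return ((len - r->min) % r->increment) == 0;
  }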

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev_sec.c | 40 +++++++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_ethdev_sec.c  | 40 +++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index c66730a..82dc636 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -41,6 +41,46 @@ static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
 			}, }
 		}, }
 	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 20,
+					.max = 64,
+					.increment = 1
+				},
+				.digest_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
index deb1daf..b070ad5 100644
--- a/drivers/net/cnxk/cn9k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -40,6 +40,46 @@ static struct rte_cryptodev_capabilities cn9k_eth_sec_crypto_caps[] = {
 			}, }
 		}, }
 	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 20,
+					.max = 64,
+					.increment = 1
+				},
+				.digest_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 26/27] net/cnxk: add devargs for configuring channel mask
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (24 preceding siblings ...)
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 25/27] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
@ 2021-09-02  2:15 ` Nithin Dabilpuram
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 27/27] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:15 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev, Satheesh Paul

From: Satheesh Paul <psatheesh@marvell.com>

This patch adds support to configure the channel and channel mask
which will be used by rte_flow when adding flow rules with the
inline IPsec action.
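
The parse_inl_cpt_channel() handler below assumes well-formed
"channel/mask" input. A more defensive stand-alone variant (hypothetical,
not the driver's code) that validates the separator and ranges explicitly:

  #include <errno.h>
  #include <stdint.h>
  #include <stdlib.h>

  static int
  parse_chan_mask(const char *value, uint16_t *chan, uint16_t *mask)
  {
  	char *end = NULL;
  	long c, m;

  	errno = 0;
  	c = strtol(value, &end, 16);
  	if (errno || end == value || *end != '/')
  		return -EINVAL;
  	m = strtol(end + 1, &end, 16);
  	if (errno || *end != '\0')
  		return -EINVAL;
  	/* Channel and mask are 13-bit fields (GENMASK(12, 0)). */
  	if (c < 0 || c > 0x1FFF || m < 0 || m > 0x1FFF)
  		return -EINVAL;
  	*chan = (uint16_t)c;
  	*mask = (uint16_t)m;
  	return 0;
  }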

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
---
 doc/guides/nics/cnxk.rst           | 20 +++++++++++++++++++
 drivers/net/cnxk/cnxk_ethdev_sec.c | 39 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index b542437..dd955d3 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -255,6 +255,26 @@ Runtime Config Options
    With the above configuration, inbound encrypted traffic from both the ports
    is received by ipsec inline device.
 
+- ``Inline IPsec device channel and mask`` (default ``none``)
+
+   Set channel and channel mask configuration for the inline IPsec device. This
+   will be used when creating flow rules with RTE_FLOW_ACTION_TYPE_SECURITY
+   action.
+
+   By default, RTE Flow API sets the channel number of the port on which the
+   rule is created in the MCAM entry and matches it exactly. This behaviour can
+   be modified using the ``inl_cpt_channel`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,inl_cpt_channel=0x100/0xf00
+
+   With the above configuration, RTE Flow rules API will set the channel
+   and channel mask as 0x100 and 0xF00 in the MCAM entries of the flow rules
+   created with RTE_FLOW_ACTION_TYPE_SECURITY action. Since channel number is
+   set with this custom mask, inbound encrypted traffic from all ports with
+   matching channel number pattern will be directed to the inline IPsec device.
+
 .. note::
 
    Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index c002c30..523837a 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -6,6 +6,13 @@
 
 #define CNXK_NIX_INL_SELFTEST	      "selftest"
 #define CNXK_NIX_INL_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
+#define CNXK_INL_CPT_CHANNEL	      "inl_cpt_channel"
+
+struct inl_cpt_channel {
+	bool is_multi_channel;
+	uint16_t channel;
+	uint16_t mask;
+};
 
 #define CNXK_NIX_INL_DEV_NAME RTE_STR(cnxk_nix_inl_dev_)
 #define CNXK_NIX_INL_DEV_NAME_LEN                                              \
@@ -137,13 +144,37 @@ parse_selftest(const char *key, const char *value, void *extra_args)
 }
 
 static int
+parse_inl_cpt_channel(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint16_t chan = 0, mask = 0;
+	char *next = 0;
+
+	/* next will point to the separator '/' */
+	chan = strtol(value, &next, 16);
+	mask = strtol(++next, 0, 16);
+
+	if (chan > GENMASK(12, 0) || mask > GENMASK(12, 0))
+		return -EINVAL;
+
+	((struct inl_cpt_channel *)extra_args)->channel = chan;
+	((struct inl_cpt_channel *)extra_args)->mask = mask;
+	((struct inl_cpt_channel *)extra_args)->is_multi_channel = true;
+
+	return 0;
+}
+
+static int
 nix_inl_parse_devargs(struct rte_devargs *devargs,
 		      struct roc_nix_inl_dev *inl_dev)
 {
 	uint32_t ipsec_in_max_spi = BIT(8) - 1;
+	struct inl_cpt_channel cpt_channel;
 	struct rte_kvargs *kvlist;
 	uint8_t selftest = 0;
 
+	memset(&cpt_channel, 0, sizeof(cpt_channel));
+
 	if (devargs == NULL)
 		goto null_devargs;
 
@@ -155,11 +186,16 @@ nix_inl_parse_devargs(struct rte_devargs *devargs,
 			   &selftest);
 	rte_kvargs_process(kvlist, CNXK_NIX_INL_IPSEC_IN_MAX_SPI,
 			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_process(kvlist, CNXK_INL_CPT_CHANNEL, &parse_inl_cpt_channel,
+			   &cpt_channel);
 	rte_kvargs_free(kvlist);
 
 null_devargs:
 	inl_dev->ipsec_in_max_spi = ipsec_in_max_spi;
 	inl_dev->selftest = selftest;
+	inl_dev->channel = cpt_channel.channel;
+	inl_dev->chan_mask = cpt_channel.mask;
+	inl_dev->is_multi_channel = cpt_channel.is_multi_channel;
 	return 0;
 exit:
 	return -EINVAL;
@@ -275,4 +311,5 @@ RTE_PMD_REGISTER_KMOD_DEP(cnxk_nix_inl, "vfio-pci");
 
 RTE_PMD_REGISTER_PARAM_STRING(cnxk_nix_inl,
 			      CNXK_NIX_INL_SELFTEST "=1"
-			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>");
+			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>"
+			      CNXK_INL_CPT_CHANNEL "=<1-4095>/<1-4095>");
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH 27/27] net/cnxk: reflect globally enabled offloads in queue conf
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (25 preceding siblings ...)
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 26/27] net/cnxk: add devargs for configuring channel mask Nithin Dabilpuram
@ 2021-09-02  2:15 ` Nithin Dabilpuram
  2021-09-29 12:44 ` [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Jerin Jacob
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-02  2:15 UTC (permalink / raw)
  To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: jerinj, schalla, dev

Reflect globally enabled Rx and Tx offloads in the queue conf.
Also fix an issue with LMT data preparation for multi-seg Tx.
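
This matters to applications that read back per-queue configuration; a
minimal sketch of how the fix surfaces through the ethdev API
(illustrative, port 0 / queue 0):

  #include <rte_ethdev.h>

  static int
  rxq_has_security_offload(uint16_t port)
  {
  	struct rte_eth_rxq_info qinfo;
  	int rc;

  	rc = rte_eth_rx_queue_info_get(port, 0, &qinfo);
  	if (rc)
  		return rc;
  	/* qinfo.conf.offloads now also carries offloads enabled
  	 * port-wide in rte_eth_dev_configure(), not only the
  	 * per-queue ones passed to rte_eth_rx_queue_setup().
  	 */
  	return !!(qinfo.conf.offloads & DEV_RX_OFFLOAD_SECURITY);
  }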

Fixes: a24af6361e37 ("net/cnxk: add Tx queue setup and release")
Fixes: a86144cd9ded ("net/cnxk: add Rx queue setup and release")
Fixes: 305ca2c4c382 ("net/cnxk: support multi-segment vector Tx")

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/cnxk/cn10k_tx.h    | 2 +-
 drivers/net/cnxk/cnxk_ethdev.c | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index f56aa8e..2821f71 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -1280,7 +1280,7 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,
 			vst1q_u64(lmt_addr + 14, cmd1[3]);
 
 			*data128 |= ((__uint128_t)7) << *shift;
-			shift += 3;
+			*shift += 3;
 
 			return 1;
 		}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8a102aa..978ee5b 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -377,6 +377,8 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq_sp->dev = dev;
 	txq_sp->qid = qid;
 	txq_sp->qconf.conf.tx = *tx_conf;
+	/* Queue config should reflect global offloads */
+	txq_sp->qconf.conf.tx.offloads = dev->tx_offloads;
 	txq_sp->qconf.nb_desc = nb_desc;
 
 	plt_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " lmt_addr=%p"
@@ -511,6 +513,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->dev = dev;
 	rxq_sp->qid = qid;
 	rxq_sp->qconf.conf.rx = *rx_conf;
+	/* Queue config should reflect global offloads */
+	rxq_sp->qconf.conf.rx.offloads = dev->rx_offloads;
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (26 preceding siblings ...)
  2021-09-02  2:15 ` [dpdk-dev] [PATCH 27/27] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
@ 2021-09-29 12:44 ` Jerin Jacob
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
  29 siblings, 0 replies; 91+ messages in thread
From: Jerin Jacob @ 2021-09-29 12:44 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: Jerin Jacob, Srujana Challa, dpdk-dev

On Thu, Sep 2, 2021 at 7:46 AM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> Support for inline ipsec in CN9K event mode and in Cn10K event mode and
> poll mode.
>
> Depends-on: series-18524 ("Crypto adapter support for Marvell CNXK driver)
> Depends-on: series-18262 ("security: Improve inline fast path routines")
> Depends-on: series-18562 ("add lookaside IPsec additional features)

Now that these patches are merged to main, please rebase on main.
Also, update the release notes for cnxk ethdev for this feature.

^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 00/28] net/cnxk: support for inline ipsec
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (27 preceding siblings ...)
  2021-09-29 12:44 ` [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Jerin Jacob
@ 2021-09-30 17:00 ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
                     ` (28 more replies)
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
  29 siblings, 29 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj; +Cc: dev, Nithin Dabilpuram

Support for inline IPsec in CN9K event mode and in CN10K event mode and
poll mode.

Kommula Shiva Shankar (1):
  common/cnxk: add CQ enable support in NIX Tx path

Nithin Dabilpuram (18):
  common/cnxk: support CPT parse header dump
  common/cnxk: allow reuse of SSO API for inline dev
  common/cnxk: change NIX debug API and queue API interface
  common/cnxk: support NIX inline device IRQ
  common/cnxk: support NIX inline device init and fini
  common/cnxk: support NIX inline inbound and outbound setup
  common/cnxk: disable CQ drop when inline inbound is enabled
  common/cnxk: dump CPT LF registers on error intr
  common/cnxk: align CPT LF enable/disable sequence
  common/cnxk: restore NIX sqb pool limit before destroy
  common/cnxk: setup aura BP conf based on nix
  net/cnxk: support inline security setup for cn9k
  net/cnxk: support inline security setup for cn10k
  net/cnxk: support Rx security offload on cn9k
  net/cnxk: support Tx security offload on cn9k
  net/cnxk: support Rx security offload on cn10k
  net/cnxk: support Tx security offload on cn10k
  net/cnxk: reflect globally enabled offloads in queue conf

Satheesh Paul (2):
  common/cnxk: support inline IPsec rte flow action
  net/cnxk: support configuring channel mask via devargs

Srujana Challa (7):
  common/cnxk: support cn9k fast path security session
  common/cnxk: support anti-replay check in SW for cn9k
  net/cnxk: support IPsec anti replay in cn9k
  net/cnxk: support IPsec transport mode in cn10k
  net/cnxk: update ethertype for mixed IPsec tunnel versions
  net/cnxk: allow zero udp6 checksum for non inline device
  net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1

v2:
- Included bug fixes for second pass packets
- Updated .ini files.
- Reworded commit messages with additional description
  and abbreviation fixes

 doc/guides/nics/cnxk.rst                         |  122 +++
 doc/guides/nics/features/cnxk.ini                |    1 +
 doc/guides/nics/features/cnxk_vec.ini            |    1 +
 doc/guides/nics/features/cnxk_vf.ini             |    1 +
 doc/guides/rel_notes/release_21_11.rst           |    2 +
 drivers/common/cnxk/cnxk_security.c              |  212 +++++
 drivers/common/cnxk/cnxk_security.h              |   12 +
 drivers/common/cnxk/cnxk_security_ar.h           |  184 ++++
 drivers/common/cnxk/hw/cpt.h                     |   19 +
 drivers/common/cnxk/meson.build                  |    3 +
 drivers/common/cnxk/roc_api.h                    |   49 +-
 drivers/common/cnxk/roc_constants.h              |   58 ++
 drivers/common/cnxk/roc_cpt.c                    |   54 +-
 drivers/common/cnxk/roc_cpt.h                    |   10 +
 drivers/common/cnxk/roc_cpt_debug.c              |   63 +-
 drivers/common/cnxk/roc_cpt_priv.h               |    1 +
 drivers/common/cnxk/roc_idev.c                   |    2 +
 drivers/common/cnxk/roc_idev_priv.h              |    3 +
 drivers/common/cnxk/roc_io.h                     |    9 +
 drivers/common/cnxk/roc_io_generic.h             |    3 +-
 drivers/common/cnxk/roc_irq.c                    |    7 +-
 drivers/common/cnxk/roc_nix.c                    |    2 +-
 drivers/common/cnxk/roc_nix.h                    |    7 +
 drivers/common/cnxk/roc_nix_debug.c              |  168 +++-
 drivers/common/cnxk/roc_nix_fc.c                 |   23 +-
 drivers/common/cnxk/roc_nix_inl.c                |  778 +++++++++++++++++
 drivers/common/cnxk/roc_nix_inl.h                |  170 ++++
 drivers/common/cnxk/roc_nix_inl_dev.c            |  639 ++++++++++++++
 drivers/common/cnxk/roc_nix_inl_dev_irq.c        |  359 ++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h           |   68 ++
 drivers/common/cnxk/roc_nix_priv.h               |   31 +
 drivers/common/cnxk/roc_nix_queue.c              |  137 +--
 drivers/common/cnxk/roc_npc.c                    |   27 +-
 drivers/common/cnxk/roc_npc_mcam.c               |   28 +-
 drivers/common/cnxk/roc_platform.h               |   11 +-
 drivers/common/cnxk/roc_priv.h                   |    3 +
 drivers/common/cnxk/roc_sso.c                    |   52 +-
 drivers/common/cnxk/roc_sso_priv.h               |    9 +
 drivers/common/cnxk/version.map                  |   34 +
 drivers/event/cnxk/cn10k_eventdev.c              |   93 +-
 drivers/event/cnxk/cn10k_worker.h                |  147 +++-
 drivers/event/cnxk/cn10k_worker_deq.c            |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_burst.c      |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_ca.c         |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_tmo.c        |    2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq.c         |    2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq_seg.c     |    2 +-
 drivers/event/cnxk/cn9k_eventdev.c               |  182 ++--
 drivers/event/cnxk/cn9k_worker.h                 |  170 +++-
 drivers/event/cnxk/cn9k_worker_deq.c             |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_burst.c       |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_ca.c          |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_tmo.c         |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq.c        |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_burst.c  |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c     |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c    |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |    2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq.c          |    2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |    2 +-
 drivers/event/cnxk/cnxk_eventdev_adptr.c         |   36 +-
 drivers/net/cnxk/cn10k_ethdev.c                  |   41 +-
 drivers/net/cnxk/cn10k_ethdev.h                  |   48 ++
 drivers/net/cnxk/cn10k_ethdev_sec.c              |  492 +++++++++++
 drivers/net/cnxk/cn10k_rx.c                      |   31 +-
 drivers/net/cnxk/cn10k_rx.h                      |  649 +++++++++++---
 drivers/net/cnxk/cn10k_rx_mseg.c                 |    2 +-
 drivers/net/cnxk/cn10k_rx_vec.c                  |    4 +-
 drivers/net/cnxk/cn10k_rx_vec_mseg.c             |    4 +-
 drivers/net/cnxk/cn10k_tx.c                      |   31 +-
 drivers/net/cnxk/cn10k_tx.h                      | 1006 +++++++++++++++++++---
 drivers/net/cnxk/cn10k_tx_mseg.c                 |    2 +-
 drivers/net/cnxk/cn10k_tx_vec.c                  |    2 +-
 drivers/net/cnxk/cn10k_tx_vec_mseg.c             |    2 +-
 drivers/net/cnxk/cn9k_ethdev.c                   |   23 +
 drivers/net/cnxk/cn9k_ethdev.h                   |   64 ++
 drivers/net/cnxk/cn9k_ethdev_sec.c               |  382 ++++++++
 drivers/net/cnxk/cn9k_rx.c                       |   31 +-
 drivers/net/cnxk/cn9k_rx.h                       |  493 +++++++++--
 drivers/net/cnxk/cn9k_rx_mseg.c                  |    2 +-
 drivers/net/cnxk/cn9k_rx_vec.c                   |    2 +-
 drivers/net/cnxk/cn9k_rx_vec_mseg.c              |    2 +-
 drivers/net/cnxk/cn9k_tx.c                       |   29 +-
 drivers/net/cnxk/cn9k_tx.h                       |  393 ++++++---
 drivers/net/cnxk/cn9k_tx_mseg.c                  |    2 +-
 drivers/net/cnxk/cn9k_tx_vec.c                   |    2 +-
 drivers/net/cnxk/cn9k_tx_vec_mseg.c              |    2 +-
 drivers/net/cnxk/cnxk_ethdev.c                   |  243 +++++-
 drivers/net/cnxk/cnxk_ethdev.h                   |  125 ++-
 drivers/net/cnxk/cnxk_ethdev_devargs.c           |   88 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c               |  315 +++++++
 drivers/net/cnxk/cnxk_lookup.c                   |   50 +-
 drivers/net/cnxk/meson.build                     |    3 +
 drivers/net/cnxk/version.map                     |    5 +
 usertools/dpdk-devbind.py                        |    8 +-
 96 files changed, 7686 insertions(+), 918 deletions(-)
 create mode 100644 drivers/common/cnxk/cnxk_security_ar.h
 create mode 100644 drivers/common/cnxk/roc_constants.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h
 create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c

-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 01/28] common/cnxk: support cn9k fast path security session
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 02/28] common/cnxk: support CPT parse header dump Nithin Dabilpuram
                     ` (27 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Add security support to initialize cn9k fast path SA data
for AES-GCM and AES-CBC + HMAC-SHA1.

Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
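A minimal usage sketch for reviewers (illustrative only, not part of
the patch; the wrapper name and error handling here are assumptions):

#include "cnxk_security.h"

static int
example_onf_inb_sa_setup(struct roc_onf_ipsec_inb_sa *sa,
			 struct rte_security_ipsec_xform *ipsec_xfrm,
			 struct rte_crypto_sym_xform *crypto_xfrm)
{
	int rc;

	/* Fills ctl word, keys and salt; sets ctl->valid after a wmb() */
	rc = cnxk_onf_ipsec_inb_sa_fill(sa, ipsec_xfrm, crypto_xfrm);
	if (rc)
		return rc;

	/* Fast path may use the SA only once the valid bit is set */
	return cnxk_onf_ipsec_inb_sa_valid(sa) ? 0 : -EIO;
}
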
 drivers/common/cnxk/cnxk_security.c | 211 ++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/cnxk_security.h |  12 ++
 drivers/common/cnxk/version.map     |   4 +
 3 files changed, 227 insertions(+)

diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index cc5daf3..c117fa7 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -513,6 +513,217 @@ cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
 	return !!sa->w2.s.valid;
 }
 
+static inline int
+ipsec_xfrm_verify(struct rte_security_ipsec_xform *ipsec_xfrm,
+		  struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	if (crypto_xfrm->next == NULL)
+		return -EINVAL;
+
+	if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+		if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
+		    crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return -EINVAL;
+	} else {
+		if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+		    crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
+			       uint8_t *cipher_key, uint8_t *hmac_opad_ipad,
+			       struct rte_security_ipsec_xform *ipsec_xfrm,
+			       struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+	int rc, length, auth_key_len;
+	const uint8_t *key = NULL;
+
+	/* Set direction */
+	switch (ipsec_xfrm->direction) {
+	case RTE_SECURITY_IPSEC_SA_DIR_INGRESS:
+		ctl->direction = ROC_IE_SA_DIR_INBOUND;
+		auth_xfrm = crypto_xfrm;
+		cipher_xfrm = crypto_xfrm->next;
+		break;
+	case RTE_SECURITY_IPSEC_SA_DIR_EGRESS:
+		ctl->direction = ROC_IE_SA_DIR_OUTBOUND;
+		cipher_xfrm = crypto_xfrm;
+		auth_xfrm = crypto_xfrm->next;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set protocol - ESP vs AH */
+	switch (ipsec_xfrm->proto) {
+	case RTE_SECURITY_IPSEC_SA_PROTO_ESP:
+		ctl->ipsec_proto = ROC_IE_SA_PROTOCOL_ESP;
+		break;
+	case RTE_SECURITY_IPSEC_SA_PROTO_AH:
+		return -ENOTSUP;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set mode - transport vs tunnel */
+	switch (ipsec_xfrm->mode) {
+	case RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT:
+		ctl->ipsec_mode = ROC_IE_SA_MODE_TRANSPORT;
+		break;
+	case RTE_SECURITY_IPSEC_SA_MODE_TUNNEL:
+		ctl->ipsec_mode = ROC_IE_SA_MODE_TUNNEL;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set encryption algorithm */
+	if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		length = crypto_xfrm->aead.key.length;
+
+		switch (crypto_xfrm->aead.algo) {
+		case RTE_CRYPTO_AEAD_AES_GCM:
+			ctl->enc_type = ROC_IE_ON_SA_ENC_AES_GCM;
+			ctl->auth_type = ROC_IE_ON_SA_AUTH_NULL;
+			memcpy(salt, &ipsec_xfrm->salt, 4);
+			key = crypto_xfrm->aead.key.data;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+
+	} else {
+		rc = ipsec_xfrm_verify(ipsec_xfrm, crypto_xfrm);
+		if (rc)
+			return rc;
+
+		switch (cipher_xfrm->cipher.algo) {
+		case RTE_CRYPTO_CIPHER_AES_CBC:
+			ctl->enc_type = ROC_IE_ON_SA_ENC_AES_CBC;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+
+		switch (auth_xfrm->auth.algo) {
+		case RTE_CRYPTO_AUTH_SHA1_HMAC:
+			ctl->auth_type = ROC_IE_ON_SA_AUTH_SHA1;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+		auth_key_len = auth_xfrm->auth.key.length;
+		if (auth_key_len < 20 || auth_key_len > 64)
+			return -ENOTSUP;
+
+		key = cipher_xfrm->cipher.key.data;
+		length = cipher_xfrm->cipher.key.length;
+
+		ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+	}
+
+	switch (length) {
+	case ROC_CPT_AES128_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_128;
+		break;
+	case ROC_CPT_AES192_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_192;
+		break;
+	case ROC_CPT_AES256_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_256;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	memcpy(cipher_key, key, length);
+
+	if (ipsec_xfrm->options.esn)
+		ctl->esn_en = 1;
+
+	ctl->spi = rte_cpu_to_be_32(ipsec_xfrm->spi);
+	return 0;
+}
+
+int
+cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
+			   struct rte_security_ipsec_xform *ipsec_xfrm,
+			   struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
+	int rc;
+
+	rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
+					    sa->hmac_key, ipsec_xfrm,
+					    crypto_xfrm);
+	if (rc)
+		return rc;
+
+	rte_wmb();
+
+	/* Enable SA */
+	ctl->valid = 1;
+	return 0;
+}
+
+int
+cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
+			    struct rte_security_ipsec_xform *ipsec_xfrm,
+			    struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct rte_security_ipsec_tunnel_param *tunnel = &ipsec_xfrm->tunnel;
+	struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
+	int rc;
+
+	/* Fill common params */
+	rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
+					    sa->hmac_key, ipsec_xfrm,
+					    crypto_xfrm);
+	if (rc)
+		return rc;
+
+	if (ipsec_xfrm->mode != RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
+		goto skip_tunnel_info;
+
+	/* Tunnel header info */
+	switch (tunnel->type) {
+	case RTE_SECURITY_IPSEC_TUNNEL_IPV4:
+		memcpy(&sa->ip_src, &tunnel->ipv4.src_ip,
+		       sizeof(struct in_addr));
+		memcpy(&sa->ip_dst, &tunnel->ipv4.dst_ip,
+		       sizeof(struct in_addr));
+		break;
+	case RTE_SECURITY_IPSEC_TUNNEL_IPV6:
+		return -ENOTSUP;
+	default:
+		return -EINVAL;
+	}
+
+skip_tunnel_info:
+	rte_wmb();
+
+	/* Enable SA */
+	ctl->valid = 1;
+	return 0;
+}
+
+bool
+cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa)
+{
+	return !!sa->ctl.valid;
+}
+
+bool
+cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa)
+{
+	return !!sa->ctl.valid;
+}
+
 uint8_t
 cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
 		     enum rte_crypto_auth_algorithm a_algo,
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 602f583..db97887 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -46,4 +46,16 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
 bool __roc_api cnxk_ot_ipsec_inb_sa_valid(struct roc_ot_ipsec_inb_sa *sa);
 bool __roc_api cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa);
 
+/* [CN9K, CN10K) */
+int __roc_api
+cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
+			   struct rte_security_ipsec_xform *ipsec_xfrm,
+			   struct rte_crypto_sym_xform *crypto_xfrm);
+int __roc_api
+cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
+			    struct rte_security_ipsec_xform *ipsec_xfrm,
+			    struct rte_crypto_sym_xform *crypto_xfrm);
+bool __roc_api cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa);
+bool __roc_api cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa);
+
 #endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 5df2e56..c132871 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -14,6 +14,10 @@ INTERNAL {
 	cnxk_logtype_sso;
 	cnxk_logtype_tim;
 	cnxk_logtype_tm;
+	cnxk_onf_ipsec_inb_sa_fill;
+	cnxk_onf_ipsec_outb_sa_fill;
+	cnxk_onf_ipsec_inb_sa_valid;
+	cnxk_onf_ipsec_outb_sa_valid;
 	cnxk_ot_ipsec_inb_sa_fill;
 	cnxk_ot_ipsec_outb_sa_fill;
 	cnxk_ot_ipsec_inb_sa_valid;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 02/28] common/cnxk: support CPT parse header dump
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 03/28] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
                     ` (26 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev

Add helper API to dump CPT parse header.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
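A minimal debug sketch (illustrative, not part of the patch; treating
any nonzero completion code as a dump trigger is an assumption):

static void
example_debug_sec_pkt(const struct cpt_parse_hdr_s *cpth)
{
	/* Dump the parse header on HW or microcode completion error */
	if (cpth->w3.hw_ccode || cpth->w3.uc_ccode)
		roc_cpt_parse_hdr_dump(cpth);
}
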
 drivers/common/cnxk/roc_cpt.h       |  2 ++
 drivers/common/cnxk/roc_cpt_debug.c | 31 +++++++++++++++++++++++++++++++
 drivers/common/cnxk/version.map     |  1 +
 3 files changed, 34 insertions(+)

diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 9e63073..c80a8e0 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -155,4 +155,6 @@ void __roc_api roc_cpt_iq_enable(struct roc_cpt_lf *lf);
 int __roc_api roc_cpt_lmtline_init(struct roc_cpt *roc_cpt,
 				   struct roc_cpt_lmtline *lmtline, int lf_id);
 
+void __roc_api roc_cpt_parse_hdr_dump(const struct cpt_parse_hdr_s *cpth);
+
 #endif /* _ROC_CPT_H_ */
diff --git a/drivers/common/cnxk/roc_cpt_debug.c b/drivers/common/cnxk/roc_cpt_debug.c
index 9a9dcba..a6c9004 100644
--- a/drivers/common/cnxk/roc_cpt_debug.c
+++ b/drivers/common/cnxk/roc_cpt_debug.c
@@ -5,6 +5,37 @@
 #include "roc_api.h"
 #include "roc_priv.h"
 
+void
+roc_cpt_parse_hdr_dump(const struct cpt_parse_hdr_s *cpth)
+{
+	plt_print("CPT_PARSE \t0x%p:", cpth);
+
+	/* W0 */
+	plt_print("W0: cookie \t0x%x\t\tmatch_id \t0x%04x\t\terr_sum \t%u \t",
+		  cpth->w0.cookie, cpth->w0.match_id, cpth->w0.err_sum);
+	plt_print("W0: reas_sts \t0x%x\t\tet_owr \t%u\t\tpkt_fmt \t%u \t",
+		  cpth->w0.reas_sts, cpth->w0.et_owr, cpth->w0.pkt_fmt);
+	plt_print("W0: pad_len \t%u\t\tnum_frags \t%u\t\tpkt_out \t%u \t",
+		  cpth->w0.pad_len, cpth->w0.num_frags, cpth->w0.pkt_out);
+
+	/* W1 */
+	plt_print("W1: wqe_ptr \t0x%016lx\t", cpth->wqe_ptr);
+
+	/* W2 */
+	plt_print("W2: frag_age \t0x%x\t\torig_pf_func \t0x%04x",
+		  cpth->w2.frag_age, cpth->w2.orig_pf_func);
+	plt_print("W2: il3_off \t0x%x\t\tfi_pad \t0x%x\t\tfi_offset \t0x%x \t",
+		  cpth->w2.il3_off, cpth->w2.fi_pad, cpth->w2.fi_offset);
+
+	/* W3 */
+	plt_print("W3: hw_ccode \t0x%x\t\tuc_ccode \t0x%x\t\tspi \t0x%08x",
+		  cpth->w3.hw_ccode, cpth->w3.uc_ccode, cpth->w3.spi);
+
+	/* W4 */
+	plt_print("W4: esn \t%" PRIx64 " \t OR frag1_wqe_ptr \t0x%" PRIx64,
+		  cpth->esn, cpth->frag1_wqe_ptr);
+}
+
 static int
 cpt_af_reg_read(struct roc_cpt *roc_cpt, uint64_t reg, uint64_t *val)
 {
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index c132871..1f9fe36 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -66,6 +66,7 @@ INTERNAL {
 	roc_cpt_lf_fini;
 	roc_cpt_lfs_print;
 	roc_cpt_lmtline_init;
+	roc_cpt_parse_hdr_dump;
 	roc_cpt_rxc_time_cfg;
 	roc_error_msg_get;
 	roc_hash_sha1_gen;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 03/28] common/cnxk: allow reuse of SSO API for inline dev
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 02/28] common/cnxk: support CPT parse header dump Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 04/28] common/cnxk: change NIX debug API and queue API interface Nithin Dabilpuram
                     ` (25 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Rework the interface of internal SSO functions so that they can be
reused for the NIX inline device's SSO LFs.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
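A minimal sketch of the reworked interface from an internal consumer's
point of view (illustrative; the LF counts and error handling are
assumptions):

static int
example_inl_sso_setup(struct dev *dev)
{
	void *rsp = NULL;
	int rc;

	/* Internal callers now pass struct dev directly */
	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWS, 1, NULL);
	if (rc)
		return rc;

	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWGRP, 1, &rsp);
	if (rc)
		sso_lf_free(dev, SSO_LF_TYPE_HWS, 1);
	return rc;
}
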
 drivers/common/cnxk/roc_sso.c      | 52 ++++++++++++++++++++++++--------------
 drivers/common/cnxk/roc_sso_priv.h |  9 +++++++
 2 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 1ccf262..bdf973f 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -6,11 +6,10 @@
 #include "roc_priv.h"
 
 /* Private functions. */
-static int
-sso_lf_alloc(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf,
+int
+sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	     void **rsp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	int rc = -ENOSPC;
 
 	switch (lf_type) {
@@ -41,10 +40,9 @@ sso_lf_alloc(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf,
 	return 0;
 }
 
-static int
-sso_lf_free(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf)
+int
+sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	int rc = -ENOSPC;
 
 	switch (lf_type) {
@@ -152,7 +150,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 	return 0;
 }
 
-static void
+void
 sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
 		    uint16_t hwgrp[], uint16_t n, uint16_t enable)
 {
@@ -172,8 +170,10 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
 		k = k ? k : 4;
 		for (j = 0; j < k; j++) {
 			mask[j] = hwgrp[i + j] | enable << 14;
-			enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
-				 plt_bitmap_clear(bmp, hwgrp[i + j]);
+			if (bmp) {
+				enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
+					 plt_bitmap_clear(bmp, hwgrp[i + j]);
+			}
 			plt_sso_dbg("HWS %d Linked to HWGRP %d", hws,
 				    hwgrp[i + j]);
 		}
@@ -388,10 +388,8 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 }
 
 int
-roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
-			uint16_t hwgrps)
+sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	struct sso_hw_setconfig *req;
 	int rc = -ENOSPC;
 
@@ -406,9 +404,17 @@ roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 }
 
 int
-roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
+roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
+			uint16_t hwgrps)
 {
 	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+
+	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+}
+
+int
+sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
+{
 	struct sso_hw_xaq_release *req;
 
 	req = mbox_alloc_msg_sso_hw_release_xaq_aura(dev->mbox);
@@ -420,6 +426,14 @@ roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 }
 
 int
+roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
+{
+	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+
+	return sso_hwgrp_release_xaq(dev, hwgrps);
+}
+
+int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
@@ -468,13 +482,13 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto hwgrp_atch_fail;
 	}
 
-	rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWS, nb_hws, NULL);
+	rc = sso_lf_alloc(&sso->dev, SSO_LF_TYPE_HWS, nb_hws, NULL);
 	if (rc < 0) {
 		plt_err("Unable to alloc SSO HWS LFs");
 		goto hws_alloc_fail;
 	}
 
-	rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp,
+	rc = sso_lf_alloc(&sso->dev, SSO_LF_TYPE_HWGRP, nb_hwgrp,
 			  (void **)&rsp_hwgrp);
 	if (rc < 0) {
 		plt_err("Unable to alloc SSO HWGRP Lfs");
@@ -503,9 +517,9 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 
 	return 0;
 sso_msix_fail:
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, nb_hwgrp);
 hwgrp_alloc_fail:
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, nb_hws);
 hws_alloc_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
@@ -523,8 +537,8 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
 				 roc_sso->nb_hws, roc_sso->nb_hwgrp);
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
 
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 5361d4f..8dffa3f 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -39,6 +39,15 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
 	return (struct sso *)&roc_sso->reserved[0];
 }
 
+/* SSO LF ops */
+int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
+		 void **rsp);
+int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
+			 uint16_t hwgrp[], uint16_t n, uint16_t enable);
+int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
+int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
+
 /* SSO IRQ */
 int sso_register_irqs_priv(struct roc_sso *roc_sso,
 			   struct plt_intr_handle *handle, uint16_t nb_hws,
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 04/28] common/cnxk: change NIX debug API and queue API interface
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (2 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 03/28] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 05/28] common/cnxk: support NIX inline device IRQ Nithin Dabilpuram
                     ` (24 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Change the NIX debug API and queue API interfaces so that they can be
used by internal NIX inline device initialization.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
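A minimal sketch of how the split debug helpers compose for a raw LF
base address (illustrative; the stat/qint/cint counts are assumptions):

static void
example_lf_reg_dump(uintptr_t lf_base)
{
	/* data == NULL dumps to stdout; return value is the reg count */
	nix_lf_gen_reg_dump(lf_base, NULL);
	nix_lf_stat_reg_dump(lf_base, NULL, 1 /* tx stats */,
			     1 /* rx stats */);
	nix_lf_int_reg_dump(lf_base, NULL, 1 /* qints */, 1 /* cints */);
}
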
 drivers/common/cnxk/roc_nix.c       |   2 +-
 drivers/common/cnxk/roc_nix_debug.c | 118 +++++++++++++++++++++++++++---------
 drivers/common/cnxk/roc_nix_priv.h  |  16 +++++
 drivers/common/cnxk/roc_nix_queue.c |  89 +++++++++++++++------------
 4 files changed, 159 insertions(+), 66 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index 23d508b..3ab954e 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -300,7 +300,7 @@ sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix)
 	}
 }
 
-static inline uint64_t
+uint64_t
 nix_get_blkaddr(struct dev *dev)
 {
 	uint64_t reg;
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 6e56513..9539bb9 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -110,17 +110,12 @@ roc_nix_lf_get_reg_count(struct roc_nix *roc_nix)
 }
 
 int
-roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
+nix_lf_gen_reg_dump(uintptr_t nix_lf_base, uint64_t *data)
 {
-	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
-	uintptr_t nix_lf_base = nix->base;
 	bool dump_stdout;
 	uint64_t reg;
 	uint32_t i;
 
-	if (roc_nix == NULL)
-		return NIX_ERR_PARAM;
-
 	dump_stdout = data ? 0 : 1;
 
 	for (i = 0; i < PLT_DIM(nix_lf_reg); i++) {
@@ -131,8 +126,21 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 			*data++ = reg;
 	}
 
+	return i;
+}
+
+int
+nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint8_t lf_tx_stats,
+		     uint8_t lf_rx_stats)
+{
+	uint32_t i, count = 0;
+	bool dump_stdout;
+	uint64_t reg;
+
+	dump_stdout = data ? 0 : 1;
+
 	/* NIX_LF_TX_STATX */
-	for (i = 0; i < nix->lf_tx_stats; i++) {
+	for (i = 0; i < lf_tx_stats; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_TX_STATX(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_TX_STATX", i,
@@ -140,9 +148,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_RX_STATX */
-	for (i = 0; i < nix->lf_rx_stats; i++) {
+	for (i = 0; i < lf_rx_stats; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_RX_STATX(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_RX_STATX", i,
@@ -151,8 +160,21 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 			*data++ = reg;
 	}
 
+	return count + i;
+}
+
+int
+nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints,
+		    uint16_t cints)
+{
+	uint32_t i, count = 0;
+	bool dump_stdout;
+	uint64_t reg;
+
+	dump_stdout = data ? 0 : 1;
+
 	/* NIX_LF_QINTX_CNT*/
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_CNT", i,
@@ -160,9 +182,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_INT */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_INT", i,
@@ -170,9 +193,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_ENA_W1S */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1S",
@@ -180,9 +204,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_ENA_W1C */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1C",
@@ -190,9 +215,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_CNT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_CNT", i,
@@ -200,9 +226,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_WAIT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_WAIT", i,
@@ -210,9 +237,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_INT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT", i,
@@ -220,9 +248,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_INT_W1S */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT_W1S",
@@ -230,9 +259,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_ENA_W1S */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1S",
@@ -240,9 +270,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_ENA_W1C */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1C",
@@ -250,12 +281,40 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+
+	return count + i;
+}
+
+int
+roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	bool dump_stdout = data ? 0 : 1;
+	uintptr_t nix_base;
+	uint32_t i;
+
+	if (roc_nix == NULL)
+		return NIX_ERR_PARAM;
+
+	nix_base = nix->base;
+	/* General registers */
+	i = nix_lf_gen_reg_dump(nix_base, data);
+
+	/* Rx, Tx stat registers */
+	i += nix_lf_stat_reg_dump(nix_base, dump_stdout ? NULL : &data[i],
+				  nix->lf_tx_stats, nix->lf_rx_stats);
+
+	/* Intr registers */
+	i += nix_lf_int_reg_dump(nix_base, dump_stdout ? NULL : &data[i],
+				 nix->qints, nix->cints);
+
 	return 0;
 }
 
-static int
-nix_q_ctx_get(struct mbox *mbox, uint8_t ctype, uint16_t qid, __io void **ctx_p)
+int
+nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid, __io void **ctx_p)
 {
+	struct mbox *mbox = dev->mbox;
 	int rc;
 
 	if (roc_model_is_cn9k()) {
@@ -485,7 +544,7 @@ nix_cn9k_lf_rq_dump(__io struct nix_rq_ctx_s *ctx)
 	nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
 }
 
-static inline void
+void
 nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx)
 {
 	nix_dump("W0: wqe_aura \t\t\t%d\nW0: len_ol3_dis \t\t\t%d",
@@ -595,12 +654,12 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	int rc = -1, q, rq = nix->nb_rx_queues;
-	struct mbox *mbox = (&nix->dev)->mbox;
 	struct npa_aq_enq_rsp *npa_rsp;
 	struct npa_aq_enq_req *npa_aq;
-	volatile void *ctx;
+	struct dev *dev = &nix->dev;
 	int sq = nix->nb_tx_queues;
 	struct npa_lf *npa_lf;
+	volatile void *ctx;
 	uint32_t sqb_aura;
 
 	npa_lf = idev_npa_obj_get();
@@ -608,7 +667,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 		return NPA_ERR_DEVICE_NOT_BOUNDED;
 
 	for (q = 0; q < rq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_CQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_CQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get cq context");
 			goto fail;
@@ -619,7 +678,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 	}
 
 	for (q = 0; q < rq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_RQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get rq context");
 			goto fail;
@@ -633,7 +692,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 	}
 
 	for (q = 0; q < sq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_SQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_SQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get sq context");
 			goto fail;
@@ -686,11 +745,13 @@ roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
+	const uint64_t *sgs = (const uint64_t *)(rx + 1);
+	int i;
 
 	nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
 		 cq->tag, cq->q, cq->node, cq->cqe_type);
 
-	nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d", rx->chan,
+	nix_dump("W0: chan \t0x%x\t\tdesc_sizem1 \t%d", rx->chan,
 		 rx->desc_sizem1);
 	nix_dump("W0: imm_copy \t%d\t\texpress \t%d", rx->imm_copy,
 		 rx->express);
@@ -731,6 +792,9 @@ roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 
 	nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
 		 rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
+
+	for (i = 0; i < (rx->desc_sizem1 + 1) << 1; i++)
+		nix_dump("sg[%u] = %p", i, (void *)sgs[i]);
 }
 
 void
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 1bd1b6a..0cabcd2 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -349,6 +349,12 @@ int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
 			 bool rr_quantum_only);
 int nix_tm_prepare_rate_limited_tree(struct roc_nix *roc_nix);
 
+int nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
+		    bool cfg, bool ena);
+int nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
+	       bool ena);
+int nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable);
+
 /*
  * TM priv utils.
  */
@@ -394,4 +400,14 @@ void nix_tm_node_free(struct nix_tm_node *node);
 struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void);
 void nix_tm_shaper_profile_free(struct nix_tm_shaper_profile *profile);
 
+uint64_t nix_get_blkaddr(struct dev *dev);
+void nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx);
+int nix_lf_gen_reg_dump(uintptr_t nix_lf_base, uint64_t *data);
+int nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data,
+			 uint8_t lf_tx_stats, uint8_t lf_rx_stats);
+int nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints,
+			uint16_t cints);
+int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid,
+		  __io void **ctx_p);
+
 #endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index d7c4844..cff0ec3 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -29,46 +29,54 @@ nix_qsize_clampup(uint32_t val)
 }
 
 int
+nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable)
+{
+	struct mbox *mbox = dev->mbox;
+
+	/* Pkts will be dropped silently if RQ is disabled */
+	if (roc_model_is_cn9k()) {
+		struct nix_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_aq_enq(mbox);
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		aq->rq.ena = enable;
+		aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	} else {
+		struct nix_cn10k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		aq->rq.ena = enable;
+		aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	}
+
+	return mbox_process(mbox);
+}
+
+int
 roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable)
 {
 	struct nix *nix = roc_nix_to_nix_priv(rq->roc_nix);
-	struct mbox *mbox = (&nix->dev)->mbox;
 	int rc;
 
-	/* Pkts will be dropped silently if RQ is disabled */
-	if (roc_model_is_cn9k()) {
-		struct nix_aq_enq_req *aq;
-
-		aq = mbox_alloc_msg_nix_aq_enq(mbox);
-		aq->qidx = rq->qid;
-		aq->ctype = NIX_AQ_CTYPE_RQ;
-		aq->op = NIX_AQ_INSTOP_WRITE;
-
-		aq->rq.ena = enable;
-		aq->rq_mask.ena = ~(aq->rq_mask.ena);
-	} else {
-		struct nix_cn10k_aq_enq_req *aq;
-
-		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
-		aq->qidx = rq->qid;
-		aq->ctype = NIX_AQ_CTYPE_RQ;
-		aq->op = NIX_AQ_INSTOP_WRITE;
-
-		aq->rq.ena = enable;
-		aq->rq_mask.ena = ~(aq->rq_mask.ena);
-	}
-
-	rc = mbox_process(mbox);
+	rc = nix_rq_ena_dis(&nix->dev, rq, enable);
 
 	if (roc_model_is_cn10k())
 		plt_write64(rq->qid, nix->base + NIX_LF_OP_VWQE_FLUSH);
 	return rc;
 }
 
-static int
-rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+int
+nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
+		bool cfg, bool ena)
 {
-	struct mbox *mbox = (&nix->dev)->mbox;
+	struct mbox *mbox = dev->mbox;
 	struct nix_aq_enq_req *aq;
 
 	aq = mbox_alloc_msg_nix_aq_enq(mbox);
@@ -118,7 +126,7 @@ rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
 	aq->rq.rq_int_ena = 0;
 	/* Many to one reduction */
-	aq->rq.qint_idx = rq->qid % nix->qints;
+	aq->rq.qint_idx = rq->qid % qints;
 	aq->rq.xqe_drop_ena = 1;
 
 	/* If RED enabled, then fill enable for all cases */
@@ -179,11 +187,12 @@ rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	return 0;
 }
 
-static int
-rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+int
+nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
+	   bool ena)
 {
-	struct mbox *mbox = (&nix->dev)->mbox;
 	struct nix_cn10k_aq_enq_req *aq;
+	struct mbox *mbox = dev->mbox;
 
 	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
 	aq->qidx = rq->qid;
@@ -220,8 +229,10 @@ rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 		aq->rq.cq = rq->qid;
 	}
 
-	if (rq->ipsech_ena)
+	if (rq->ipsech_ena) {
 		aq->rq.ipsech_ena = 1;
+		aq->rq.ipsecd_drop_en = 1;
+	}
 
 	aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
 
@@ -260,7 +271,7 @@ rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
 	aq->rq.rq_int_ena = 0;
 	/* Many to one reduction */
-	aq->rq.qint_idx = rq->qid % nix->qints;
+	aq->rq.qint_idx = rq->qid % qints;
 	aq->rq.xqe_drop_ena = 1;
 
 	/* If RED enabled, then fill enable for all cases */
@@ -359,6 +370,7 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
 	bool is_cn9k = roc_model_is_cn9k();
+	struct dev *dev = &nix->dev;
 	int rc;
 
 	if (roc_nix == NULL || rq == NULL)
@@ -370,9 +382,9 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	rq->roc_nix = roc_nix;
 
 	if (is_cn9k)
-		rc = rq_cn9k_cfg(nix, rq, false, ena);
+		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, false, ena);
 	else
-		rc = rq_cfg(nix, rq, false, ena);
+		rc = nix_rq_cfg(dev, rq, nix->qints, false, ena);
 
 	if (rc)
 		return rc;
@@ -386,6 +398,7 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
 	bool is_cn9k = roc_model_is_cn9k();
+	struct dev *dev = &nix->dev;
 	int rc;
 
 	if (roc_nix == NULL || rq == NULL)
@@ -397,9 +410,9 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	rq->roc_nix = roc_nix;
 
 	if (is_cn9k)
-		rc = rq_cn9k_cfg(nix, rq, true, ena);
+		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, true, ena);
 	else
-		rc = rq_cfg(nix, rq, true, ena);
+		rc = nix_rq_cfg(dev, rq, nix->qints, true, ena);
 
 	if (rc)
 		return rc;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 05/28] common/cnxk: support NIX inline device IRQ
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (3 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 04/28] common/cnxk: change NIX debug API and queue API interface Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 06/28] common/cnxk: support NIX inline device init and fini Nithin Dabilpuram
                     ` (23 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Add API to set up NIX inline device IRQs. This registers IRQs for
errors in case of NIX, CPT LF and SSOW, and the get-work interrupt
in case of SSO.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
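A minimal sketch of the work callback shape the inline device invokes
from the SSO HWGRP IRQ (illustrative; registering by direct field
assignment is an assumption):

static void
example_work_cb(uint64_t *gw, void *args)
{
	/* gw[0]/gw[1] are the tag and WQE words from SSOW_LF_GWS_WQE0 */
	plt_print("inl dev work: tag=0x%" PRIx64 " wqe=%p", gw[0],
		  (void *)gw[1]);
	(void)args;
}

/* With an initialized struct nix_inl_dev *inl_dev:
 *	inl_dev->work_cb = example_work_cb;
 *	inl_dev->cb_args = NULL;
 */
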
 drivers/common/cnxk/meson.build           |   1 +
 drivers/common/cnxk/roc_api.h             |   3 +
 drivers/common/cnxk/roc_irq.c             |   7 +-
 drivers/common/cnxk/roc_nix_inl.h         |  10 +
 drivers/common/cnxk/roc_nix_inl_dev_irq.c | 359 ++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h    |  57 +++++
 drivers/common/cnxk/roc_platform.h        |   9 +-
 drivers/common/cnxk/roc_priv.h            |   3 +
 8 files changed, 442 insertions(+), 7 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_nix_inl.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 8a551d1..207ca00 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
         'roc_nix_mcast.c',
         'roc_nix_npc.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 7dec845..c1af95e 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -129,4 +129,7 @@
 /* HASH computation */
 #include "roc_hash.h"
 
+/* NIX Inline dev */
+#include "roc_nix_inl.h"
+
 #endif /* _ROC_API_H_ */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 4c2b4c3..28fe691 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -138,9 +138,10 @@ dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
 		irq_init(intr_handle);
 	}
 
-	if (vec > intr_handle->max_intr) {
-		plt_err("Vector=%d greater than max_intr=%d", vec,
-			intr_handle->max_intr);
+	if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
+		plt_err("Vector=%d greater than max_intr=%d or "
+			"max_efd=%" PRIu64,
+			vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
 		return -EINVAL;
 	}
 
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
new file mode 100644
index 0000000..1ec3dda
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_NIX_INL_H_
+#define _ROC_NIX_INL_H_
+
+/* Inline device SSO Work callback */
+typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
+
+#endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
new file mode 100644
index 0000000..25ed42f
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -0,0 +1,359 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+nix_inl_sso_work_cb(struct nix_inl_dev *inl_dev)
+{
+	uintptr_t getwrk_op = inl_dev->ssow_base + SSOW_LF_GWS_OP_GET_WORK0;
+	uintptr_t tag_wqe_op = inl_dev->ssow_base + SSOW_LF_GWS_WQE0;
+	uint32_t wdata = BIT(16) | 1;
+	union {
+		__uint128_t get_work;
+		uint64_t u64[2];
+	} gw;
+	uint64_t work;
+
+again:
+	/* Try to do get work */
+	gw.get_work = wdata;
+	plt_write64(gw.u64[0], getwrk_op);
+	do {
+		roc_load_pair(gw.u64[0], gw.u64[1], tag_wqe_op);
+	} while (gw.u64[0] & BIT_ULL(63));
+
+	work = gw.u64[1];
+	/* Do we have any work? */
+	if (work) {
+		if (inl_dev->work_cb)
+			inl_dev->work_cb(gw.u64, inl_dev->cb_args);
+		else
+			plt_warn("Undelivered inl dev work gw0: %p gw1: %p",
+				 (void *)gw.u64[0], (void *)gw.u64[1]);
+		goto again;
+	}
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+}
+
+static int
+nix_inl_nix_reg_dump(struct nix_inl_dev *inl_dev)
+{
+	uintptr_t nix_base = inl_dev->nix_base;
+
+	/* General registers */
+	nix_lf_gen_reg_dump(nix_base, NULL);
+
+	/* Rx, Tx stat registers */
+	nix_lf_stat_reg_dump(nix_base, NULL, inl_dev->lf_tx_stats,
+			     inl_dev->lf_rx_stats);
+
+	/* Intr registers */
+	nix_lf_int_reg_dump(nix_base, NULL, inl_dev->qints, inl_dev->cints);
+
+	return 0;
+}
+
+static void
+nix_inl_sso_hwgrp_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint64_t intr;
+
+	intr = plt_read64(sso_base + SSO_LF_GGRP_INT);
+	if (intr == 0)
+		return;
+
+	/* Check for work executable interrupt */
+	if (intr & BIT(1))
+		nix_inl_sso_work_cb(inl_dev);
+
+	if (!(intr & BIT(1)))
+		plt_err("GGRP 0 GGRP_INT=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, sso_base + SSO_LF_GGRP_INT);
+}
+
+static void
+nix_inl_sso_hws_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uint64_t intr;
+
+	intr = plt_read64(ssow_base + SSOW_LF_GWS_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("GWS 0 GWS_INT=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, ssow_base + SSOW_LF_GWS_INT);
+}
+
+int
+nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint16_t sso_msixoff, ssow_msixoff;
+	int rc;
+
+	ssow_msixoff = inl_dev->ssow_msixoff;
+	sso_msixoff = inl_dev->sso_msixoff;
+	if (sso_msixoff == MSIX_VECTOR_INVALID ||
+	    ssow_msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid SSO/SSOW MSIX offsets (0x%x, 0x%x)",
+			sso_msixoff, ssow_msixoff);
+		return -EINVAL;
+	}
+
+	/*
+	 * Setup SSOW interrupt
+	 */
+
+	/* Clear SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1C);
+	/* Register interrupt with vfio */
+	rc = dev_irq_register(handle, nix_inl_sso_hws_irq, inl_dev,
+			      ssow_msixoff + SSOW_LF_INT_VEC_IOP);
+	/* Set SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1S);
+
+	/*
+	 * Setup SSO/HWGRP interrupt
+	 */
+
+	/* Clear SSO interrupt enable */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1C);
+	/* Register IRQ */
+	rc |= dev_irq_register(handle, nix_inl_sso_hwgrp_irq, (void *)inl_dev,
+			       sso_msixoff + SSO_LF_INT_VEC_GRP);
+	/* Enable hw interrupt */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1S);
+
+	/* Setup threshold for work exec interrupt to 1 wqe in IAQ */
+	plt_write64(0x1ull, sso_base + SSO_LF_GGRP_INT_THR);
+
+	return rc;
+}
+
+void
+nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint16_t sso_msixoff, ssow_msixoff;
+
+	ssow_msixoff = inl_dev->ssow_msixoff;
+	sso_msixoff = inl_dev->sso_msixoff;
+
+	/* Clear SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1C);
+	/* Clear SSO/HWGRP interrupt enable */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1C);
+	/* Clear SSO threshold */
+	plt_write64(0, sso_base + SSO_LF_GGRP_INT_THR);
+
+	/* Unregister IRQ */
+	dev_irq_unregister(handle, nix_inl_sso_hws_irq, (void *)inl_dev,
+			   ssow_msixoff + SSOW_LF_INT_VEC_IOP);
+	dev_irq_unregister(handle, nix_inl_sso_hwgrp_irq, (void *)inl_dev,
+			   sso_msixoff + SSO_LF_INT_VEC_GRP);
+}
+
+static void
+nix_inl_nix_q_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t reg, intr;
+	uint8_t irq;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_QINTX_INT(0));
+	if (intr == 0)
+		return;
+
+	plt_err("Queue_intr=0x%" PRIx64 " qintx 0 pf=%d, vf=%d", intr, dev->pf,
+		dev->vf);
+
+	/* Get and clear RQ0 interrupt */
+	reg = roc_atomic64_add_nosync(0,
+				      (int64_t *)(nix_base + NIX_LF_RQ_OP_INT));
+	if (reg & BIT_ULL(42) /* OP_ERR */) {
+		plt_err("Failed to get rq_int");
+		return;
+	}
+	irq = reg & 0xff;
+	plt_write64(0 | irq, nix_base + NIX_LF_RQ_OP_INT);
+
+	if (irq & BIT_ULL(NIX_RQINT_DROP))
+		plt_err("RQ=0 NIX_RQINT_DROP");
+
+	if (irq & BIT_ULL(NIX_RQINT_RED))
+		plt_err("RQ=0 NIX_RQINT_RED");
+
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_QINTX_INT(0));
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+static void
+nix_inl_nix_ras_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t intr;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_RAS);
+	if (intr == 0)
+		return;
+
+	plt_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_RAS);
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+static void
+nix_inl_nix_err_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t intr;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_ERR_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_ERR_INT);
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+int
+nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t nix_base = inl_dev->nix_base;
+	uint16_t msixoff;
+	int rc;
+
+	msixoff = inl_dev->nix_msixoff;
+	if (msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid NIXLF MSIX vector offset: 0x%x", msixoff);
+		return -EINVAL;
+	}
+
+	/* Disable err interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_ERR_INT_ENA_W1C);
+	/* Disable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1C);
+
+	/* Register err irq */
+	rc = dev_irq_register(handle, nix_inl_nix_err_irq, inl_dev,
+			      msixoff + NIX_LF_INT_VEC_ERR_INT);
+	rc |= dev_irq_register(handle, nix_inl_nix_ras_irq, inl_dev,
+			       msixoff + NIX_LF_INT_VEC_POISON);
+
+	/* Enable all nix lf error irqs except RQ_DISABLED and CQ_DISABLED */
+	plt_write64(~(BIT_ULL(11) | BIT_ULL(24)),
+		    nix_base + NIX_LF_ERR_INT_ENA_W1S);
+	/* Enable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1S);
+
+	/* Setup queue irq for RQ 0 */
+
+	/* Clear QINT CNT, interrupt */
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+
+	/* Register queue irq vector */
+	rc |= dev_irq_register(handle, nix_inl_nix_q_irq, inl_dev,
+			       msixoff + NIX_LF_INT_VEC_QINT_START);
+
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
+	/* Enable QINT interrupt */
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1S(0));
+
+	return rc;
+}
+
+void
+nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t nix_base = inl_dev->nix_base;
+	uint16_t msixoff;
+
+	msixoff = inl_dev->nix_msixoff;
+	/* Disable err interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_ERR_INT_ENA_W1C);
+	/* Disable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1C);
+
+	dev_irq_unregister(handle, nix_inl_nix_err_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_ERR_INT);
+	dev_irq_unregister(handle, nix_inl_nix_ras_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_POISON);
+
+	/* Clear QINT CNT */
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
+
+	/* Disable QINT interrupt */
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+
+	/* Unregister queue irq vector */
+	dev_irq_unregister(handle, nix_inl_nix_q_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_QINT_START);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
new file mode 100644
index 0000000..f424009
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_NIX_INL_PRIV_H_
+#define _ROC_NIX_INL_PRIV_H_
+
+struct nix_inl_dev {
+	/* Base device object */
+	struct dev dev;
+
+	/* PCI device */
+	struct plt_pci_device *pci_dev;
+
+	/* LF specific BAR2 regions */
+	uintptr_t nix_base;
+	uintptr_t ssow_base;
+	uintptr_t sso_base;
+
+	/* MSIX vector offsets */
+	uint16_t nix_msixoff;
+	uint16_t ssow_msixoff;
+	uint16_t sso_msixoff;
+
+	/* SSO data */
+	uint32_t xaq_buf_size;
+	uint32_t xae_waes;
+	uint32_t iue;
+	uint64_t xaq_aura;
+	void *xaq_mem;
+	roc_nix_inl_sso_work_cb_t work_cb;
+	void *cb_args;
+
+	/* NIX data */
+	uint8_t lf_tx_stats;
+	uint8_t lf_rx_stats;
+	uint16_t cints;
+	uint16_t qints;
+	struct roc_nix_rq rq;
+	uint16_t rq_refs;
+	bool is_nix1;
+
+	/* NIX/CPT data */
+	void *inb_sa_base;
+	uint16_t inb_sa_sz;
+
+	/* Device arguments */
+	uint8_t selftest;
+	uint16_t ipsec_in_max_spi;
+};
+
+int nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev);
+void nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev);
+
+int nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev);
+void nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev);
+
+#endif /* _ROC_NIX_INL_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b..177db3d 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -113,10 +113,11 @@
 #define plt_write64(val, addr)                                                 \
 	rte_write64_relaxed((val), (volatile void *)(addr))
 
-#define plt_wmb() rte_wmb()
-#define plt_rmb() rte_rmb()
-#define plt_io_wmb() rte_io_wmb()
-#define plt_io_rmb() rte_io_rmb()
+#define plt_wmb()		rte_wmb()
+#define plt_rmb()		rte_rmb()
+#define plt_io_wmb()		rte_io_wmb()
+#define plt_io_rmb()		rte_io_rmb()
+#define plt_atomic_thread_fence rte_atomic_thread_fence
 
 #define plt_mmap       mmap
 #define PLT_PROT_READ  PROT_READ
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index 7494b8d..f72bbd5 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -38,4 +38,7 @@
 /* CPT */
 #include "roc_cpt_priv.h"
 
+/* NIX Inline dev */
+#include "roc_nix_inl_priv.h"
+
 #endif /* _ROC_PRIV_H_ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 06/28] common/cnxk: support NIX inline device init and fini
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (4 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 05/28] common/cnxk: support NIX inline device IRQ Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 07/28] common/cnxk: support NIX inline inbound and outbound setup Nithin Dabilpuram
                     ` (22 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev

Add support to initialize and finalize the inline device with NIX LF,
SSO LF and SSOW LF, for inline inbound IPsec in CN10K.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
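A minimal usage sketch of the new API as seen from a PMD probe path.
The probe/remove wrappers and the devargs-derived values below are
illustrative only; struct roc_nix_inl_dev and the roc_nix_inl_dev_*
calls are the ones added by this patch:

/* Hypothetical probe path; the values shown are placeholders that a
 * PMD would normally take from devargs.
 */
static struct roc_nix_inl_dev inl_dev;

static int
inl_dev_probe(struct plt_pci_device *pci_dev)
{
	int rc;

	memset(&inl_dev, 0, sizeof(inl_dev));
	inl_dev.pci_dev = pci_dev;
	inl_dev.ipsec_in_max_spi = 4096;
	inl_dev.selftest = false;
	inl_dev.attach_cptlf = false;

	rc = roc_nix_inl_dev_init(&inl_dev);
	if (rc)
		return rc;

	/* Optional debug dump of the initialized device */
	roc_nix_inl_dev_dump(&inl_dev);
	return 0;
}

static int
inl_dev_remove(void)
{
	return roc_nix_inl_dev_fini(&inl_dev);
}

Note that roc_nix_inl_dev_init() refuses a second device with -EEXIST,
since only one inline device is tracked in the idev config.
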
 drivers/common/cnxk/meson.build        |   1 +
 drivers/common/cnxk/roc_api.h          |   2 +
 drivers/common/cnxk/roc_cpt.c          |   7 +-
 drivers/common/cnxk/roc_idev.c         |   2 +
 drivers/common/cnxk/roc_idev_priv.h    |   3 +
 drivers/common/cnxk/roc_nix_debug.c    |  35 ++
 drivers/common/cnxk/roc_nix_inl.h      |  56 +++
 drivers/common/cnxk/roc_nix_inl_dev.c  | 636 +++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h |   8 +
 drivers/common/cnxk/roc_platform.h     |   2 +
 drivers/common/cnxk/version.map        |   3 +
 11 files changed, 752 insertions(+), 3 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 207ca00..e8940d7 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl_dev.c',
         'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
         'roc_nix_mcast.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index c1af95e..53f4e4b 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -53,6 +53,8 @@
 #define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
 #define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
 #define PCI_DEVID_CNXK_BPHY	      0xA089
+#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
+#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
 
 #define PCI_DEVID_CN9K_CGX  0xA059
 #define PCI_DEVID_CN10K_RPM 0xA060
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 33524ef..48a378b 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -381,11 +381,12 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr,
 	if (blkaddr != RVU_BLOCK_ADDR_CPT0 && blkaddr != RVU_BLOCK_ADDR_CPT1)
 		return -EINVAL;
 
-	PLT_SET_USED(inl_dev_sso);
-
 	req = mbox_alloc_msg_cpt_lf_alloc(mbox);
 	req->nix_pf_func = 0;
-	req->sso_pf_func = idev_sso_pffunc_get();
+	if (inl_dev_sso && nix_inl_dev_pffunc_get())
+		req->sso_pf_func = nix_inl_dev_pffunc_get();
+	else
+		req->sso_pf_func = idev_sso_pffunc_get();
 	req->eng_grpmsk = eng_grpmsk;
 	req->blkaddr = blkaddr;
 
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index 1494187..648f37b 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -38,6 +38,8 @@ idev_set_defaults(struct idev_cfg *idev)
 	idev->num_lmtlines = 0;
 	idev->bphy = NULL;
 	idev->cpt = NULL;
+	idev->nix_inl_dev = NULL;
+	plt_spinlock_init(&idev->nix_inl_dev_lock);
 	__atomic_store_n(&idev->npa_refcnt, 0, __ATOMIC_RELEASE);
 }
 
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
index 84e6f1e..2c8309b 100644
--- a/drivers/common/cnxk/roc_idev_priv.h
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -9,6 +9,7 @@
 struct npa_lf;
 struct roc_bphy;
 struct roc_cpt;
+struct nix_inl_dev;
 struct idev_cfg {
 	uint16_t sso_pf_func;
 	uint16_t npa_pf_func;
@@ -20,6 +21,8 @@ struct idev_cfg {
 	uint64_t lmt_base_addr;
 	struct roc_bphy *bphy;
 	struct roc_cpt *cpt;
+	struct nix_inl_dev *nix_inl_dev;
+	plt_spinlock_t nix_inl_dev_lock;
 };
 
 /* Generic */
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 9539bb9..582f5a3 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -1213,3 +1213,38 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  \trss_alg_idx = %d", nix->rss_alg_idx);
 	nix_dump("  \ttx_pause = %d", nix->tx_pause);
 }
+
+void
+roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct nix_inl_dev *inl_dev =
+		(struct nix_inl_dev *)&roc_inl_dev->reserved;
+	struct dev *dev = &inl_dev->dev;
+
+	nix_dump("nix_inl_dev@%p", inl_dev);
+	nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
+	nix_dump("  vf = %d", dev_get_vf(dev->pf_func));
+	nix_dump("  bar2 = 0x%" PRIx64, dev->bar2);
+	nix_dump("  bar4 = 0x%" PRIx64, dev->bar4);
+
+	nix_dump("  \tpci_dev = %p", inl_dev->pci_dev);
+	nix_dump("  \tnix_base = 0x%" PRIxPTR "", inl_dev->nix_base);
+	nix_dump("  \tsso_base = 0x%" PRIxPTR "", inl_dev->sso_base);
+	nix_dump("  \tssow_base = 0x%" PRIxPTR "", inl_dev->ssow_base);
+	nix_dump("  \tnix_msixoff = %d", inl_dev->nix_msixoff);
+	nix_dump("  \tsso_msixoff = %d", inl_dev->sso_msixoff);
+	nix_dump("  \tssow_msixoff = %d", inl_dev->ssow_msixoff);
+	nix_dump("  \tnix_cints = %d", inl_dev->cints);
+	nix_dump("  \tnix_qints = %d", inl_dev->qints);
+	nix_dump("  \trq_refs = %d", inl_dev->rq_refs);
+	nix_dump("  \tinb_sa_base = 0x%p", inl_dev->inb_sa_base);
+	nix_dump("  \tinb_sa_sz = %d", inl_dev->inb_sa_sz);
+	nix_dump("  \txaq_buf_size = %u", inl_dev->xaq_buf_size);
+	nix_dump("  \txae_waes = %u", inl_dev->xae_waes);
+	nix_dump("  \tiue = %u", inl_dev->iue);
+	nix_dump("  \txaq_aura = 0x%" PRIx64, inl_dev->xaq_aura);
+	nix_dump("  \txaq_mem = 0x%p", inl_dev->xaq_mem);
+
+	nix_dump("  \tinl_dev_rq:");
+	roc_nix_rq_dump(&inl_dev->rq);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 1ec3dda..1b3aab0 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -4,7 +4,63 @@
 #ifndef _ROC_NIX_INL_H_
 #define _ROC_NIX_INL_H_
 
+/* ONF INB HW area */
+#define ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ                                        \
+	PLT_ALIGN(sizeof(struct roc_onf_ipsec_inb_sa), ROC_ALIGN)
+/* ONF INB SW reserved area */
+#define ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD 384
+#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ                                        \
+	(ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD)
+#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2 9
+
+/* ONF OUTB HW area */
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ                                       \
+	PLT_ALIGN(sizeof(struct roc_onf_ipsec_outb_sa), ROC_ALIGN)
+/* ONF OUTB SW reserved area */
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD 128
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ                                       \
+	(ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD)
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2 8
+
+/* OT INB HW area */
+#define ROC_NIX_INL_OT_IPSEC_INB_HW_SZ                                         \
+	PLT_ALIGN(sizeof(struct roc_ot_ipsec_inb_sa), ROC_ALIGN)
+/* OT INB SW reserved area */
+#define ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD 128
+#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ                                         \
+	(ROC_NIX_INL_OT_IPSEC_INB_HW_SZ + ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD)
+#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2 10
+
+/* OT OUTB HW area */
+#define ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ                                        \
+	PLT_ALIGN(sizeof(struct roc_ot_ipsec_outb_sa), ROC_ALIGN)
+/* OT OUTB SW reserved area */
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD 128
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ                                        \
+	(ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD)
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2 9
+
+/* Alignment of SA Base */
+#define ROC_NIX_INL_SA_BASE_ALIGN BIT_ULL(16)
+
 /* Inline device SSO Work callback */
 typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
 
+struct roc_nix_inl_dev {
+	/* Input parameters */
+	struct plt_pci_device *pci_dev;
+	uint16_t ipsec_in_max_spi;
+	bool selftest;
+	bool attach_cptlf;
+	/* End of input parameters */
+
+#define ROC_NIX_INL_MEM_SZ (1280)
+	uint8_t reserved[ROC_NIX_INL_MEM_SZ] __plt_cache_aligned;
+} __plt_cache_aligned;
+
+/* NIX Inline Device API */
+int __roc_api roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev);
+int __roc_api roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev);
+void __roc_api roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev);
+
 #endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
new file mode 100644
index 0000000..0789f99
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -0,0 +1,636 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define XAQ_CACHE_CNT 0x7
+
+/* Default Rx Config for Inline NIX LF */
+#define NIX_INL_LF_RX_CFG                                                      \
+	(ROC_NIX_LF_RX_CFG_DROP_RE | ROC_NIX_LF_RX_CFG_L2_LEN_ERR |            \
+	 ROC_NIX_LF_RX_CFG_IP6_UDP_OPT | ROC_NIX_LF_RX_CFG_DIS_APAD |          \
+	 ROC_NIX_LF_RX_CFG_CSUM_IL4 | ROC_NIX_LF_RX_CFG_CSUM_OL4 |             \
+	 ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |               \
+	 ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3)
+
+uint16_t
+nix_inl_dev_pffunc_get(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev != NULL) {
+		inl_dev = idev->nix_inl_dev;
+		if (inl_dev)
+			return inl_dev->dev.pf_func;
+	}
+	return 0;
+}
+
+static void
+nix_inl_selftest_work_cb(uint64_t *gw, void *args)
+{
+	uintptr_t work = gw[1];
+
+	*((uintptr_t *)args + (gw[0] & 0x1)) = work;
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+}
+
+static int
+nix_inl_selftest(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	roc_nix_inl_sso_work_cb_t save_cb;
+	static uintptr_t work_arr[2];
+	struct nix_inl_dev *inl_dev;
+	void *save_cb_args;
+	uint64_t add_work0;
+	int rc = 0;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inl_dev == NULL)
+		return -ENOTSUP;
+
+	plt_info("Performing nix inl self test");
+
+	/* Save and update cb to test cb */
+	save_cb = inl_dev->work_cb;
+	save_cb_args = inl_dev->cb_args;
+	inl_dev->work_cb = nix_inl_selftest_work_cb;
+	inl_dev->cb_args = work_arr;
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+#define WORK_MAGIC1 0x335577ff0
+#define WORK_MAGIC2 0xdeadbeef0
+
+	/* Add work */
+	add_work0 = ((uint64_t)(SSO_TT_ORDERED) << 32) | 0x0;
+	roc_store_pair(add_work0, WORK_MAGIC1, inl_dev->sso_base);
+	add_work0 = ((uint64_t)(SSO_TT_ORDERED) << 32) | 0x1;
+	roc_store_pair(add_work0, WORK_MAGIC2, inl_dev->sso_base);
+
+	plt_delay_ms(10000);
+
+	/* Check if we got expected work */
+	if (work_arr[0] != WORK_MAGIC1 || work_arr[1] != WORK_MAGIC2) {
+		plt_err("Failed to get expected work, [0]=%p [1]=%p",
+			(void *)work_arr[0], (void *)work_arr[1]);
+		rc = -EFAULT;
+		goto exit;
+	}
+
+	plt_info("Work, [0]=%p [1]=%p", (void *)work_arr[0],
+		 (void *)work_arr[1]);
+
+exit:
+	/* Restore state */
+	inl_dev->work_cb = save_cb;
+	inl_dev->cb_args = save_cb_args;
+	return rc;
+}
+
+static int
+nix_inl_nix_ipsec_cfg(struct nix_inl_dev *inl_dev, bool ena)
+{
+	struct nix_inline_ipsec_lf_cfg *lf_cfg;
+	struct mbox *mbox = (&inl_dev->dev)->mbox;
+	uint32_t sa_w;
+
+	lf_cfg = mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
+	if (lf_cfg == NULL)
+		return -ENOSPC;
+
+	if (ena) {
+		sa_w = plt_align32pow2(inl_dev->ipsec_in_max_spi + 1);
+		sa_w = plt_log2_u32(sa_w);
+
+		lf_cfg->enable = 1;
+		lf_cfg->sa_base_addr = (uintptr_t)inl_dev->inb_sa_base;
+		lf_cfg->ipsec_cfg1.sa_idx_w = sa_w;
+		/* CN9K and CN10K have different max HW frame sizes */
+		if (roc_model_is_cn9k())
+			lf_cfg->ipsec_cfg0.lenm1_max = NIX_CN9K_MAX_HW_FRS - 1;
+		else
+			lf_cfg->ipsec_cfg0.lenm1_max = NIX_RPM_MAX_HW_FRS - 1;
+		lf_cfg->ipsec_cfg1.sa_idx_max = inl_dev->ipsec_in_max_spi;
+		lf_cfg->ipsec_cfg0.sa_pow2_size =
+			plt_log2_u32(inl_dev->inb_sa_sz);
+
+		lf_cfg->ipsec_cfg0.tag_const = 0;
+		lf_cfg->ipsec_cfg0.tt = SSO_TT_ORDERED;
+	} else {
+		lf_cfg->enable = 0;
+	}
+
+	return mbox_process(mbox);
+}
+
+static int
+nix_inl_cpt_setup(struct nix_inl_dev *inl_dev)
+{
+	struct roc_cpt_lf *lf = &inl_dev->cpt_lf;
+	struct dev *dev = &inl_dev->dev;
+	uint8_t eng_grpmask;
+	int rc;
+
+	if (!inl_dev->attach_cptlf)
+		return 0;
+
+	/* Alloc CPT LF */
+	eng_grpmask = (1ULL << ROC_CPT_DFLT_ENG_GRP_SE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
+	rc = cpt_lfs_alloc(dev, eng_grpmask, RVU_BLOCK_ADDR_CPT0, false);
+	if (rc) {
+		plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
+		return rc;
+	}
+
+	/* Setup CPT LF for submitting control opcode */
+	lf = &inl_dev->cpt_lf;
+	lf->lf_id = 0;
+	lf->nb_desc = 0; /* Set to default */
+	lf->dev = &inl_dev->dev;
+	lf->msixoff = inl_dev->cpt_msixoff;
+	lf->pci_dev = inl_dev->pci_dev;
+
+	rc = cpt_lf_init(lf);
+	if (rc) {
+		plt_err("Failed to initialize CPT LF, rc=%d", rc);
+		goto lf_free;
+	}
+
+	roc_cpt_iq_enable(lf);
+	return 0;
+lf_free:
+	rc |= cpt_lfs_free(dev);
+	return rc;
+}
+
+static int
+nix_inl_cpt_release(struct nix_inl_dev *inl_dev)
+{
+	struct roc_cpt_lf *lf = &inl_dev->cpt_lf;
+	struct dev *dev = &inl_dev->dev;
+	int rc, ret = 0;
+
+	if (!inl_dev->attach_cptlf)
+		return 0;
+
+	/* Cleanup CPT LF queue */
+	cpt_lf_fini(lf);
+
+	/* Free LF resources */
+	rc = cpt_lfs_free(dev);
+	if (rc)
+		plt_err("Failed to free CPT LF resources, rc=%d", rc);
+	ret |= rc;
+
+	/* Detach LF */
+	rc = cpt_lfs_detach(dev);
+	if (rc)
+		plt_err("Failed to detach CPT LF, rc=%d", rc);
+	ret |= rc;
+
+	return ret;
+}
+
+static int
+nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
+{
+	struct sso_lf_alloc_rsp *sso_rsp;
+	struct dev *dev = &inl_dev->dev;
+	uint32_t xaq_cnt, count, aura;
+	uint16_t hwgrp[1] = {0};
+	struct npa_pool_s pool;
+	uintptr_t iova;
+	int rc;
+
+	/* Alloc SSOW LF */
+	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWS, 1, NULL);
+	if (rc) {
+		plt_err("Failed to alloc SSO HWS, rc=%d", rc);
+		return rc;
+	}
+
+	/* Alloc HWGRP LF */
+	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWGRP, 1, (void **)&sso_rsp);
+	if (rc) {
+		plt_err("Failed to alloc SSO HWGRP, rc=%d", rc);
+		goto free_ssow;
+	}
+
+	inl_dev->xaq_buf_size = sso_rsp->xaq_buf_size;
+	inl_dev->xae_waes = sso_rsp->xaq_wq_entries;
+	inl_dev->iue = sso_rsp->in_unit_entries;
+
+	/* Create XAQ pool */
+	xaq_cnt = XAQ_CACHE_CNT;
+	xaq_cnt += inl_dev->iue / inl_dev->xae_waes;
+	plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+
+	inl_dev->xaq_mem = plt_zmalloc(inl_dev->xaq_buf_size * xaq_cnt,
+				       inl_dev->xaq_buf_size);
+	if (!inl_dev->xaq_mem) {
+		rc = NIX_ERR_NO_MEM;
+		plt_err("Failed to alloc xaq buf mem");
+		goto free_sso;
+	}
+
+	memset(&pool, 0, sizeof(struct npa_pool_s));
+	pool.nat_align = 1;
+	rc = roc_npa_pool_create(&inl_dev->xaq_aura, inl_dev->xaq_buf_size,
+				 xaq_cnt, NULL, &pool);
+	if (rc) {
+		plt_err("Failed to alloc aura for XAQ, rc=%d", rc);
+		goto free_mem;
+	}
+
+	/* Fill the XAQ buffers */
+	iova = (uint64_t)inl_dev->xaq_mem;
+	for (count = 0; count < xaq_cnt; count++) {
+		roc_npa_aura_op_free(inl_dev->xaq_aura, 0, iova);
+		iova += inl_dev->xaq_buf_size;
+	}
+	roc_npa_aura_op_range_set(inl_dev->xaq_aura, (uint64_t)inl_dev->xaq_mem,
+				  iova);
+
+	aura = roc_npa_aura_handle_to_aura(inl_dev->xaq_aura);
+
+	/* Setup xaq for hwgrps */
+	rc = sso_hwgrp_alloc_xaq(dev, aura, 1);
+	if (rc) {
+		plt_err("Failed to setup hwgrp xaq aura, rc=%d", rc);
+		goto destroy_pool;
+	}
+
+	/* Register SSO, SSOW error and work IRQs */
+	rc = nix_inl_sso_register_irqs(inl_dev);
+	if (rc) {
+		plt_err("Failed to register sso irq's, rc=%d", rc);
+		goto release_xaq;
+	}
+
+	/* Setup hwgrp->hws link */
+	sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+
+	/* Enable HWGRP */
+	plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
+
+	return 0;
+
+release_xaq:
+	sso_hwgrp_release_xaq(&inl_dev->dev, 1);
+destroy_pool:
+	roc_npa_pool_destroy(inl_dev->xaq_aura);
+	inl_dev->xaq_aura = 0;
+free_mem:
+	plt_free(inl_dev->xaq_mem);
+	inl_dev->xaq_mem = NULL;
+free_sso:
+	sso_lf_free(dev, SSO_LF_TYPE_HWGRP, 1);
+free_ssow:
+	sso_lf_free(dev, SSO_LF_TYPE_HWS, 1);
+	return rc;
+}
+
+static int
+nix_inl_sso_release(struct nix_inl_dev *inl_dev)
+{
+	uint16_t hwgrp[1] = {0};
+
+	/* Disable HWGRP */
+	plt_write64(0, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
+
+	/* Unregister SSO/SSOW IRQs */
+	nix_inl_sso_unregister_irqs(inl_dev);
+
+	/* Unlink hws */
+	sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+
+	/* Release XAQ aura */
+	sso_hwgrp_release_xaq(&inl_dev->dev, 1);
+
+	/* Free SSO, SSOW LFs */
+	sso_lf_free(&inl_dev->dev, SSO_LF_TYPE_HWS, 1);
+	sso_lf_free(&inl_dev->dev, SSO_LF_TYPE_HWGRP, 1);
+
+	return 0;
+}
+
+static int
+nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
+{
+	uint16_t ipsec_in_max_spi = inl_dev->ipsec_in_max_spi;
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct nix_lf_alloc_rsp *rsp;
+	struct nix_lf_alloc_req *req;
+	size_t inb_sa_sz;
+	int rc = -ENOSPC;
+
+	/* Alloc NIX LF needed for single RQ */
+	req = mbox_alloc_msg_nix_lf_alloc(mbox);
+	if (req == NULL)
+		return rc;
+	req->rq_cnt = 1;
+	req->sq_cnt = 1;
+	req->cq_cnt = 1;
+	/* XQESZ is W16 */
+	req->xqe_sz = NIX_XQESZ_W16;
+	/* RSS size does not matter as this RQ is only for UCAST_IPSEC action */
+	req->rss_sz = ROC_NIX_RSS_RETA_SZ_64;
+	req->rss_grps = ROC_NIX_RSS_GRPS;
+	req->npa_func = idev_npa_pffunc_get();
+	req->sso_func = dev->pf_func;
+	req->rx_cfg = NIX_INL_LF_RX_CFG;
+	req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
+
+	if (roc_model_is_cn10ka_a0() || roc_model_is_cnf10ka_a0() ||
+	    roc_model_is_cnf10kb_a0())
+		req->rx_cfg &= ~ROC_NIX_LF_RX_CFG_DROP_RE;
+
+	rc = mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		plt_err("Failed to alloc lf, rc=%d", rc);
+		return rc;
+	}
+
+	inl_dev->lf_tx_stats = rsp->lf_tx_stats;
+	inl_dev->lf_rx_stats = rsp->lf_rx_stats;
+	inl_dev->qints = rsp->qints;
+	inl_dev->cints = rsp->cints;
+
+	/* Register nix interrupts */
+	rc = nix_inl_nix_register_irqs(inl_dev);
+	if (rc) {
+		plt_err("Failed to register nix irq's, rc=%d", rc);
+		goto lf_free;
+	}
+
+	/* CN9K SA is different */
+	if (roc_model_is_cn9k())
+		inb_sa_sz = ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ;
+	else
+		inb_sa_sz = ROC_NIX_INL_OT_IPSEC_INB_SA_SZ;
+
+	/* Alloc contiguous memory for Inbound SAs */
+	inl_dev->inb_sa_sz = inb_sa_sz;
+	inl_dev->inb_sa_base = plt_zmalloc(inb_sa_sz * ipsec_in_max_spi,
+					   ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!inl_dev->inb_sa_base) {
+		plt_err("Failed to allocate memory for Inbound SA");
+		rc = -ENOMEM;
+		goto unregister_irqs;
+	}
+
+	/* Setup device specific inb SA table */
+	rc = nix_inl_nix_ipsec_cfg(inl_dev, true);
+	if (rc) {
+		plt_err("Failed to setup NIX Inbound SA conf, rc=%d", rc);
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	plt_free(inl_dev->inb_sa_base);
+	inl_dev->inb_sa_base = NULL;
+unregister_irqs:
+	nix_inl_nix_unregister_irqs(inl_dev);
+lf_free:
+	mbox_alloc_msg_nix_lf_free(mbox);
+	rc |= mbox_process(mbox);
+	return rc;
+}
+
+static int
+nix_inl_nix_release(struct nix_inl_dev *inl_dev)
+{
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct nix_lf_free_req *req;
+	struct ndc_sync_op *ndc_req;
+	int rc = -ENOSPC;
+
+	/* Disable Inbound processing */
+	rc = nix_inl_nix_ipsec_cfg(inl_dev, false);
+	if (rc)
+		plt_err("Failed to disable Inbound IPSec, rc=%d", rc);
+
+	/* Sync NDC-NIX for LF */
+	ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+	if (ndc_req == NULL)
+		return rc;
+	ndc_req->nix_lf_rx_sync = 1;
+	rc = mbox_process(mbox);
+	if (rc)
+		plt_err("Error on NDC-NIX-RX LF sync, rc %d", rc);
+
+	/* Unregister IRQs */
+	nix_inl_nix_unregister_irqs(inl_dev);
+
+	/* By default all associated mcam rules are deleted */
+	req = mbox_alloc_msg_nix_lf_free(mbox);
+	if (req == NULL)
+		return -ENOSPC;
+
+	return mbox_process(mbox);
+}
+
+static int
+nix_inl_lf_attach(struct nix_inl_dev *inl_dev)
+{
+	struct msix_offset_rsp *msix_rsp;
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct rsrc_attach_req *req;
+	uint64_t nix_blkaddr;
+	int rc = -ENOSPC;
+
+	req = mbox_alloc_msg_attach_resources(mbox);
+	if (req == NULL)
+		return rc;
+	req->modify = true;
+	/* Attach 1 NIXLF, SSO HWS and SSO HWGRP */
+	req->nixlf = true;
+	req->ssow = 1;
+	req->sso = 1;
+	if (inl_dev->attach_cptlf) {
+		req->cptlfs = 1;
+		req->cpt_blkaddr = RVU_BLOCK_ADDR_CPT0;
+	}
+
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		return rc;
+
+	/* Get MSIX vector offsets */
+	mbox_alloc_msg_msix_offset(mbox);
+	rc = mbox_process_msg(dev->mbox, (void **)&msix_rsp);
+	if (rc)
+		return rc;
+
+	inl_dev->nix_msixoff = msix_rsp->nix_msixoff;
+	inl_dev->ssow_msixoff = msix_rsp->ssow_msixoff[0];
+	inl_dev->sso_msixoff = msix_rsp->sso_msixoff[0];
+	inl_dev->cpt_msixoff = msix_rsp->cptlf_msixoff[0];
+
+	nix_blkaddr = nix_get_blkaddr(dev);
+	inl_dev->is_nix1 = (nix_blkaddr == RVU_BLOCK_ADDR_NIX1);
+
+	/* Update base addresses for LFs */
+	inl_dev->nix_base = dev->bar2 + (nix_blkaddr << 20);
+	inl_dev->ssow_base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20);
+	inl_dev->sso_base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20);
+	inl_dev->cpt_base = dev->bar2 + (RVU_BLOCK_ADDR_CPT0 << 20);
+
+	return 0;
+}
+
+static int
+nix_inl_lf_detach(struct nix_inl_dev *inl_dev)
+{
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct rsrc_detach_req *req;
+	int rc = -ENOSPC;
+
+	req = mbox_alloc_msg_detach_resources(mbox);
+	if (req == NULL)
+		return rc;
+	req->partial = true;
+	req->nixlf = true;
+	req->ssow = true;
+	req->sso = true;
+	req->cptlfs = !!inl_dev->attach_cptlf;
+
+	return mbox_process(dev->mbox);
+}
+
+int
+roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct plt_pci_device *pci_dev;
+	struct nix_inl_dev *inl_dev;
+	struct idev_cfg *idev;
+	int rc;
+
+	pci_dev = roc_inl_dev->pci_dev;
+
+	/* Skip probe if already done */
+	idev = idev_get_cfg();
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	if (idev->nix_inl_dev) {
+		plt_info("Skipping device %s, inline device already probed",
+			 pci_dev->name);
+		return -EEXIST;
+	}
+
+	PLT_STATIC_ASSERT(sizeof(struct nix_inl_dev) <= ROC_NIX_INL_MEM_SZ);
+
+	inl_dev = (struct nix_inl_dev *)roc_inl_dev->reserved;
+	memset(inl_dev, 0, sizeof(*inl_dev));
+
+	inl_dev->pci_dev = pci_dev;
+	inl_dev->ipsec_in_max_spi = roc_inl_dev->ipsec_in_max_spi;
+	inl_dev->selftest = roc_inl_dev->selftest;
+	inl_dev->attach_cptlf = roc_inl_dev->attach_cptlf;
+
+	/* Initialize base device */
+	rc = dev_init(&inl_dev->dev, pci_dev);
+	if (rc) {
+		plt_err("Failed to init roc device");
+		goto error;
+	}
+
+	/* Attach LF resources */
+	rc = nix_inl_lf_attach(inl_dev);
+	if (rc) {
+		plt_err("Failed to attach LF resources, rc=%d", rc);
+		goto dev_cleanup;
+	}
+
+	/* Setup NIX LF */
+	rc = nix_inl_nix_setup(inl_dev);
+	if (rc)
+		goto lf_detach;
+
+	/* Setup SSO LF */
+	rc = nix_inl_sso_setup(inl_dev);
+	if (rc)
+		goto nix_release;
+
+	/* Setup CPT LF */
+	rc = nix_inl_cpt_setup(inl_dev);
+	if (rc)
+		goto sso_release;
+
+	/* Perform selftest if asked for */
+	if (inl_dev->selftest) {
+		rc = nix_inl_selftest();
+		if (rc)
+			goto cpt_release;
+	}
+
+	idev->nix_inl_dev = inl_dev;
+
+	return 0;
+cpt_release:
+	rc |= nix_inl_cpt_release(inl_dev);
+sso_release:
+	rc |= nix_inl_sso_release(inl_dev);
+nix_release:
+	rc |= nix_inl_nix_release(inl_dev);
+lf_detach:
+	rc |= nix_inl_lf_detach(inl_dev);
+dev_cleanup:
+	rc |= dev_fini(&inl_dev->dev, pci_dev);
+error:
+	return rc;
+}
+
+int
+roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct plt_pci_device *pci_dev;
+	struct nix_inl_dev *inl_dev;
+	struct idev_cfg *idev;
+	int rc;
+
+	idev = idev_get_cfg();
+	if (idev == NULL)
+		return 0;
+
+	if (!idev->nix_inl_dev ||
+	    PLT_PTR_DIFF(roc_inl_dev->reserved, idev->nix_inl_dev))
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	pci_dev = inl_dev->pci_dev;
+
+	/* Release SSO */
+	rc = nix_inl_sso_release(inl_dev);
+
+	/* Release NIX */
+	rc |= nix_inl_nix_release(inl_dev);
+
+	/* Detach LFs */
+	rc |= nix_inl_lf_detach(inl_dev);
+
+	/* Cleanup mbox */
+	rc |= dev_fini(&inl_dev->dev, pci_dev);
+	if (rc)
+		return rc;
+
+	idev->nix_inl_dev = NULL;
+	return 0;
+}
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index f424009..4729a38 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -15,11 +15,13 @@ struct nix_inl_dev {
 	uintptr_t nix_base;
 	uintptr_t ssow_base;
 	uintptr_t sso_base;
+	uintptr_t cpt_base;
 
 	/* MSIX vector offsets */
 	uint16_t nix_msixoff;
 	uint16_t ssow_msixoff;
 	uint16_t sso_msixoff;
+	uint16_t cpt_msixoff;
 
 	/* SSO data */
 	uint32_t xaq_buf_size;
@@ -43,9 +45,13 @@ struct nix_inl_dev {
 	void *inb_sa_base;
 	uint16_t inb_sa_sz;
 
+	/* CPT data */
+	struct roc_cpt_lf cpt_lf;
+
 	/* Device arguments */
 	uint8_t selftest;
 	uint16_t ipsec_in_max_spi;
+	bool attach_cptlf;
 };
 
 int nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev);
@@ -54,4 +60,6 @@ void nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev);
 int nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev);
 void nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev);
 
+uint16_t nix_inl_dev_pffunc_get(void);
+
 #endif /* _ROC_NIX_INL_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 177db3d..241655b 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -37,6 +37,7 @@
 #define PLT_MEMZONE_NAMESIZE	 RTE_MEMZONE_NAMESIZE
 #define PLT_STD_C11		 RTE_STD_C11
 #define PLT_PTR_ADD		 RTE_PTR_ADD
+#define PLT_PTR_DIFF		 RTE_PTR_DIFF
 #define PLT_MAX_RXTX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
 #define PLT_INTR_VEC_RXTX_OFFSET RTE_INTR_VEC_RXTX_OFFSET
 #define PLT_MIN			 RTE_MIN
@@ -77,6 +78,7 @@
 #define plt_cpu_to_be_64 rte_cpu_to_be_64
 #define plt_be_to_cpu_64 rte_be_to_cpu_64
 
+#define plt_align32pow2	    rte_align32pow2
 #define plt_align32prevpow2 rte_align32prevpow2
 
 #define plt_bitmap			rte_bitmap
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 1f9fe36..3256aeb 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -100,6 +100,9 @@ INTERNAL {
 	roc_nix_get_pf_func;
 	roc_nix_get_vf;
 	roc_nix_get_vwqe_interval;
+	roc_nix_inl_dev_dump;
+	roc_nix_inl_dev_fini;
+	roc_nix_inl_dev_init;
 	roc_nix_is_lbk;
 	roc_nix_is_pf;
 	roc_nix_is_sdp;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 07/28] common/cnxk: support NIX inline inbound and outbound setup
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (5 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 06/28] common/cnxk: support NIX inline device init and fini Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 08/28] common/cnxk: disable CQ drop when inline inbound is enabled Nithin Dabilpuram
                     ` (21 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev

Add APIs to set up NIX inline inbound and NIX inline outbound.
For inbound, the SA base is set up on the NIX PFFUNC; for outbound,
the required number of CPT LFs is attached to the NIX PFFUNC.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
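A minimal sketch of the intended call flow from an ethdev PMD; the
configure hook and the sizing values are illustrative only, while the
roc_nix fields and roc_nix_inl_* APIs are the ones added here:

/* Hypothetical security-setup step in dev_configure; the numbers are
 * placeholders a PMD would derive from devargs or capabilities.
 */
static int
nix_security_setup(struct roc_nix *roc_nix)
{
	uintptr_t sa;
	int rc;

	/* Inbound: allocate SA table and program SA base on NIX PFFUNC */
	roc_nix->ipsec_in_max_spi = 4096;
	rc = roc_nix_inl_inb_init(roc_nix);
	if (rc)
		return rc;

	/* Outbound: attach and init CPT LFs on the NIX PFFUNC */
	roc_nix->ipsec_out_max_sa = 1024;
	roc_nix->outb_nb_desc = 1024;
	roc_nix->outb_nb_crypto_qs = 1;
	rc = roc_nix_inl_outb_init(roc_nix);
	if (rc)
		goto inb_fini;

	/* SPI->SA lookup in the ethdev-owned inbound table */
	sa = roc_nix_inl_inb_sa_get(roc_nix, false, 1 /* spi */);
	if (!sa)
		plt_err("No inbound SA for SPI 1");

	return 0;
inb_fini:
	roc_nix_inl_inb_fini(roc_nix);
	return rc;
}

The SPI->SA mapping is direct for now (sa = sa_base + spi * sa_sz),
which is why the inbound table is sized by ipsec_in_max_spi and each
SA is padded to a power-of-two size.
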
 drivers/common/cnxk/hw/cpt.h         |   8 +
 drivers/common/cnxk/meson.build      |   1 +
 drivers/common/cnxk/roc_api.h        |  48 +--
 drivers/common/cnxk/roc_constants.h  |  58 +++
 drivers/common/cnxk/roc_io.h         |   9 +
 drivers/common/cnxk/roc_io_generic.h |   3 +-
 drivers/common/cnxk/roc_nix.h        |   5 +
 drivers/common/cnxk/roc_nix_debug.c  |  15 +
 drivers/common/cnxk/roc_nix_inl.c    | 778 +++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl.h    | 101 +++++
 drivers/common/cnxk/roc_nix_priv.h   |  15 +
 drivers/common/cnxk/roc_nix_queue.c  |  28 +-
 drivers/common/cnxk/roc_npc.c        |  27 +-
 drivers/common/cnxk/version.map      |  26 ++
 14 files changed, 1047 insertions(+), 75 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_constants.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl.c

diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 84ebf2d..975139f 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -40,6 +40,7 @@
 #define CPT_LF_CTX_ENC_PKT_CNT	(0x540ull)
 #define CPT_LF_CTX_DEC_BYTE_CNT (0x550ull)
 #define CPT_LF_CTX_DEC_PKT_CNT	(0x560ull)
+#define CPT_LF_CTX_RELOAD	(0x570ull)
 
 #define CPT_AF_LFX_CTL(a)  (0x27000ull | (uint64_t)(a) << 3)
 #define CPT_AF_LFX_CTL2(a) (0x29000ull | (uint64_t)(a) << 3)
@@ -68,6 +69,13 @@ union cpt_lf_ctx_flush {
 	} s;
 };
 
+union cpt_lf_ctx_reload {
+	uint64_t u;
+	struct {
+		uint64_t cptr : 46;
+	} s;
+};
+
 union cpt_lf_inprog {
 	uint64_t u;
 	struct cpt_lf_inprog_s {
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index e8940d7..cd19ad2 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl.c',
         'roc_nix_inl_dev.c',
         'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 53f4e4b..b8f3667 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -9,28 +9,21 @@
 #include <stdint.h>
 #include <string.h>
 
-/* Alignment */
-#define ROC_ALIGN 128
-
 /* Bits manipulation */
 #include "roc_bits.h"
 
 /* Bitfields manipulation */
 #include "roc_bitfield.h"
 
+/* ROC Constants */
+#include "roc_constants.h"
+
 /* Constants */
 #define PLT_ETHER_ADDR_LEN 6
 
 /* Platform definition */
 #include "roc_platform.h"
 
-#define ROC_LMT_LINE_SZ		    128
-#define ROC_NUM_LMT_LINES	    2048
-#define ROC_LMT_LINES_PER_CORE_LOG2 5
-#define ROC_LMT_LINE_SIZE_LOG2	    7
-#define ROC_LMT_BASE_PER_CORE_LOG2                                             \
-	(ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
-
 /* IO */
 #if defined(__aarch64__)
 #include "roc_io.h"
@@ -38,41 +31,6 @@
 #include "roc_io_generic.h"
 #endif
 
-/* PCI IDs */
-#define PCI_VENDOR_ID_CAVIUM	      0x177D
-#define PCI_DEVID_CNXK_RVU_PF	      0xA063
-#define PCI_DEVID_CNXK_RVU_VF	      0xA064
-#define PCI_DEVID_CNXK_RVU_AF	      0xA065
-#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9
-#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA
-#define PCI_DEVID_CNXK_RVU_NPA_PF     0xA0FB
-#define PCI_DEVID_CNXK_RVU_NPA_VF     0xA0FC
-#define PCI_DEVID_CNXK_RVU_AF_VF      0xA0f8
-#define PCI_DEVID_CNXK_DPI_VF	      0xA081
-#define PCI_DEVID_CNXK_EP_VF	      0xB203
-#define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
-#define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
-#define PCI_DEVID_CNXK_BPHY	      0xA089
-#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
-#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
-
-#define PCI_DEVID_CN9K_CGX  0xA059
-#define PCI_DEVID_CN10K_RPM 0xA060
-
-#define PCI_DEVID_CN9K_RVU_CPT_PF  0xA0FD
-#define PCI_DEVID_CN9K_RVU_CPT_VF  0xA0FE
-#define PCI_DEVID_CN10K_RVU_CPT_PF 0xA0F2
-#define PCI_DEVID_CN10K_RVU_CPT_VF 0xA0F3
-
-#define PCI_SUBSYSTEM_DEVID_CN10KA  0xB900
-#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900
-
-#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000
-#define PCI_SUBSYSTEM_DEVID_CN9KB 0xb400
-#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200
-#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200
-#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100
-
 /* HW structure definition */
 #include "hw/cpt.h"
 #include "hw/nix.h"
diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h
new file mode 100644
index 0000000..1e6427c
--- /dev/null
+++ b/drivers/common/cnxk/roc_constants.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_CONSTANTS_H_
+#define _ROC_CONSTANTS_H_
+
+/* Alignment */
+#define ROC_ALIGN 128
+
+/* LMTST constants */
+/* [CN10K, .) */
+#define ROC_LMT_LINE_SZ		    128
+#define ROC_NUM_LMT_LINES	    2048
+#define ROC_LMT_LINES_PER_CORE_LOG2 5
+#define ROC_LMT_LINE_SIZE_LOG2	    7
+#define ROC_LMT_BASE_PER_CORE_LOG2                                             \
+	(ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
+#define ROC_LMT_MAX_THREADS		42UL
+#define ROC_LMT_CPT_LINES_PER_CORE_LOG2 4
+#define ROC_LMT_CPT_BASE_ID_OFF                                                \
+	(ROC_LMT_MAX_THREADS << ROC_LMT_LINES_PER_CORE_LOG2)
+
+/* PCI IDs */
+#define PCI_VENDOR_ID_CAVIUM	      0x177D
+#define PCI_DEVID_CNXK_RVU_PF	      0xA063
+#define PCI_DEVID_CNXK_RVU_VF	      0xA064
+#define PCI_DEVID_CNXK_RVU_AF	      0xA065
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA
+#define PCI_DEVID_CNXK_RVU_NPA_PF     0xA0FB
+#define PCI_DEVID_CNXK_RVU_NPA_VF     0xA0FC
+#define PCI_DEVID_CNXK_RVU_AF_VF      0xA0f8
+#define PCI_DEVID_CNXK_DPI_VF	      0xA081
+#define PCI_DEVID_CNXK_EP_VF	      0xB203
+#define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
+#define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
+#define PCI_DEVID_CNXK_BPHY	      0xA089
+#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
+#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
+
+#define PCI_DEVID_CN9K_CGX  0xA059
+#define PCI_DEVID_CN10K_RPM 0xA060
+
+#define PCI_DEVID_CN9K_RVU_CPT_PF  0xA0FD
+#define PCI_DEVID_CN9K_RVU_CPT_VF  0xA0FE
+#define PCI_DEVID_CN10K_RVU_CPT_PF 0xA0F2
+#define PCI_DEVID_CN10K_RVU_CPT_VF 0xA0F3
+
+#define PCI_SUBSYSTEM_DEVID_CN10KA  0xB900
+#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900
+
+#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000
+#define PCI_SUBSYSTEM_DEVID_CN9KB 0xb400
+#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200
+#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200
+#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100
+
+#endif /* _ROC_CONSTANTS_H_ */
diff --git a/drivers/common/cnxk/roc_io.h b/drivers/common/cnxk/roc_io.h
index aee8c7f..fe5f7f4 100644
--- a/drivers/common/cnxk/roc_io.h
+++ b/drivers/common/cnxk/roc_io.h
@@ -13,6 +13,15 @@
 		(lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2);    \
 	} while (0)
 
+#define ROC_LMT_CPT_BASE_ID_GET(lmt_addr, lmt_id)                              \
+	do {                                                                   \
+		/* 16 Lines per core */                                        \
+		lmt_id = ROC_LMT_CPT_BASE_ID_OFF;                              \
+		lmt_id += (plt_lcore_id() << ROC_LMT_CPT_LINES_PER_CORE_LOG2); \
+		/* Each line is of 128B */                                     \
+		(lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2);    \
+	} while (0)
+
 #define roc_load_pair(val0, val1, addr)                                        \
 	({                                                                     \
 		asm volatile("ldp %x[x0], %x[x1], [%x[p1]]"                    \
diff --git a/drivers/common/cnxk/roc_io_generic.h b/drivers/common/cnxk/roc_io_generic.h
index 28cb096..ceaa3a3 100644
--- a/drivers/common/cnxk/roc_io_generic.h
+++ b/drivers/common/cnxk/roc_io_generic.h
@@ -5,7 +5,8 @@
 #ifndef _ROC_IO_GENERIC_H_
 #define _ROC_IO_GENERIC_H_
 
-#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0)
+#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id)	  (lmt_id = 0)
+#define ROC_LMT_CPT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0)
 
 #define roc_load_pair(val0, val1, addr)                                        \
 	do {                                                                   \
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index b0e6fab..ff8c93a 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -171,6 +171,7 @@ struct roc_nix_rq {
 	uint8_t spb_red_pass;
 	/* End of Input parameters */
 	struct roc_nix *roc_nix;
+	bool inl_dev_ref;
 };
 
 struct roc_nix_cq {
@@ -258,6 +259,10 @@ struct roc_nix {
 	bool enable_loop;
 	bool hw_vlan_ins;
 	uint8_t lock_rx_ctx;
+	uint32_t outb_nb_desc;
+	uint16_t outb_nb_crypto_qs;
+	uint16_t ipsec_in_max_spi;
+	uint16_t ipsec_out_max_sa;
 	/* End of input parameters */
 	/* LMT line base for "Per Core Tx LMT line" mode*/
 	uintptr_t lmt_base;
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 582f5a3..266935a 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -818,6 +818,7 @@ roc_nix_rq_dump(struct roc_nix_rq *rq)
 	nix_dump("  vwqe_wait_tmo = %ld", rq->vwqe_wait_tmo);
 	nix_dump("  vwqe_aura_handle = %ld", rq->vwqe_aura_handle);
 	nix_dump("  roc_nix = %p", rq->roc_nix);
+	nix_dump("  inl_dev_ref = %d", rq->inl_dev_ref);
 }
 
 void
@@ -1160,6 +1161,7 @@ roc_nix_dump(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct dev *dev = &nix->dev;
+	int i;
 
 	nix_dump("nix@%p", nix);
 	nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
@@ -1169,6 +1171,7 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  port_id = %d", roc_nix->port_id);
 	nix_dump("  rss_tag_as_xor = %d", roc_nix->rss_tag_as_xor);
 	nix_dump("  rss_tag_as_xor = %d", roc_nix->max_sqb_count);
+	nix_dump("  outb_nb_desc = %u", roc_nix->outb_nb_desc);
 
 	nix_dump("  \tpci_dev = %p", nix->pci_dev);
 	nix_dump("  \tbase = 0x%" PRIxPTR "", nix->base);
@@ -1206,12 +1209,24 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  \ttx_link = %d", nix->tx_link);
 	nix_dump("  \tsqb_size = %d", nix->sqb_size);
 	nix_dump("  \tmsixoff = %d", nix->msixoff);
+	for (i = 0; i < nix->nb_cpt_lf; i++)
+		nix_dump("  \tcpt_msixoff[%d] = %d", i, nix->cpt_msixoff[i]);
 	nix_dump("  \tcints = %d", nix->cints);
 	nix_dump("  \tqints = %d", nix->qints);
 	nix_dump("  \tsdp_link = %d", nix->sdp_link);
 	nix_dump("  \tptp_en = %d", nix->ptp_en);
 	nix_dump("  \trss_alg_idx = %d", nix->rss_alg_idx);
 	nix_dump("  \ttx_pause = %d", nix->tx_pause);
+	nix_dump("  \tinl_inb_ena = %d", nix->inl_inb_ena);
+	nix_dump("  \tinl_outb_ena = %d", nix->inl_outb_ena);
+	nix_dump("  \tinb_sa_base = 0x%p", nix->inb_sa_base);
+	nix_dump("  \tinb_sa_sz = %" PRIu64, nix->inb_sa_sz);
+	nix_dump("  \toutb_sa_base = 0x%p", nix->outb_sa_base);
+	nix_dump("  \toutb_sa_sz = %" PRIu64, nix->outb_sa_sz);
+	nix_dump("  \toutb_err_sso_pffunc = 0x%x", nix->outb_err_sso_pffunc);
+	nix_dump("  \tcpt_lf_base = 0x%p", nix->cpt_lf_base);
+	nix_dump("  \tnb_cpt_lf = %d", nix->nb_cpt_lf);
+	nix_dump("  \tinb_inl_dev = %d", nix->inb_inl_dev);
 }
 
 void
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
new file mode 100644
index 0000000..1d962e3
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -0,0 +1,778 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ ==
+		  1UL << ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ == 512);
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ ==
+		  1UL << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ ==
+		  1UL << ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ == 1024);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ ==
+		  1UL << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2);
+
+static int
+nix_inl_inb_sa_tbl_setup(struct roc_nix *roc_nix)
+{
+	uint16_t ipsec_in_max_spi = roc_nix->ipsec_in_max_spi;
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_nix_ipsec_cfg cfg;
+	size_t inb_sa_sz;
+	int rc;
+
+	/* CN9K SA size is different */
+	if (roc_model_is_cn9k())
+		inb_sa_sz = ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ;
+	else
+		inb_sa_sz = ROC_NIX_INL_OT_IPSEC_INB_SA_SZ;
+
+	/* Alloc contiguous memory for Inbound SAs */
+	nix->inb_sa_sz = inb_sa_sz;
+	nix->inb_sa_base = plt_zmalloc(inb_sa_sz * ipsec_in_max_spi,
+				       ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!nix->inb_sa_base) {
+		plt_err("Failed to allocate memory for Inbound SA");
+		return -ENOMEM;
+	}
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.sa_size = inb_sa_sz;
+	cfg.iova = (uintptr_t)nix->inb_sa_base;
+	cfg.max_sa = ipsec_in_max_spi + 1;
+	cfg.tt = SSO_TT_ORDERED;
+
+	/* Setup device specific inb SA table */
+	rc = roc_nix_lf_inl_ipsec_cfg(roc_nix, &cfg, true);
+	if (rc) {
+		plt_err("Failed to setup NIX Inbound SA conf, rc=%d", rc);
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	plt_free(nix->inb_sa_base);
+	nix->inb_sa_base = NULL;
+	return rc;
+}
+
+static int
+nix_inl_sa_tbl_release(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	int rc;
+
+	rc = roc_nix_lf_inl_ipsec_cfg(roc_nix, NULL, false);
+	if (rc) {
+		plt_err("Failed to disable Inbound inline ipsec, rc=%d", rc);
+		return rc;
+	}
+
+	plt_free(nix->inb_sa_base);
+	nix->inb_sa_base = NULL;
+	return 0;
+}
+
+struct roc_cpt_lf *
+roc_nix_inl_outb_lf_base_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* NIX Inline config needs to be done */
+	if (!nix->inl_outb_ena || !nix->cpt_lf_base)
+		return NULL;
+
+	return (struct roc_cpt_lf *)nix->cpt_lf_base;
+}
+
+uintptr_t
+roc_nix_inl_outb_sa_base_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return (uintptr_t)nix->outb_sa_base;
+}
+
+uintptr_t
+roc_nix_inl_inb_sa_base_get(struct roc_nix *roc_nix, bool inb_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inb_inl_dev) {
+		/* Return inline dev sa base */
+		if (inl_dev)
+			return (uintptr_t)inl_dev->inb_sa_base;
+		return 0;
+	}
+
+	return (uintptr_t)nix->inb_sa_base;
+}
+
+uint32_t
+roc_nix_inl_inb_sa_max_spi(struct roc_nix *roc_nix, bool inb_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inb_inl_dev) {
+		if (inl_dev)
+			return inl_dev->ipsec_in_max_spi;
+		return 0;
+	}
+
+	return roc_nix->ipsec_in_max_spi;
+}
+
+uint32_t
+roc_nix_inl_inb_sa_sz(struct roc_nix *roc_nix, bool inl_dev_sa)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!inl_dev_sa)
+		return nix->inb_sa_sz;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inl_dev_sa && inl_dev)
+		return inl_dev->inb_sa_sz;
+
+	/* On error */
+	return 0;
+}
+
+uintptr_t
+roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix, bool inb_inl_dev, uint32_t spi)
+{
+	uintptr_t sa_base;
+	uint32_t max_spi;
+	uint64_t sz;
+
+	sa_base = roc_nix_inl_inb_sa_base_get(roc_nix, inb_inl_dev);
+	/* Check if SA base exists */
+	if (!sa_base)
+		return 0;
+
+	/* Check if SPI is in range */
+	max_spi = roc_nix_inl_inb_sa_max_spi(roc_nix, inb_inl_dev);
+	if (spi > max_spi) {
+		plt_err("Inbound SA SPI %u exceeds max %u", spi, max_spi);
+		return 0;
+	}
+
+	/* Get SA size */
+	sz = roc_nix_inl_inb_sa_sz(roc_nix, inb_inl_dev);
+	if (!sz)
+		return 0;
+
+	/* Basic logic of SPI->SA for now */
+	return (sa_base + (spi * sz));
+}
+
+int
+roc_nix_inl_inb_init(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct roc_cpt *roc_cpt;
+	uint16_t param1;
+	int rc;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	/* Unless we have another mechanism to trigger
+	 * one-time inline config in the CPT PF, we cannot
+	 * support this without the CPT device being probed.
+	 */
+	roc_cpt = idev->cpt;
+	if (!roc_cpt) {
+		plt_err("Cannot support inline inbound, cryptodev not probed");
+		return -ENOTSUP;
+	}
+
+	if (roc_model_is_cn9k()) {
+		param1 = ROC_ONF_IPSEC_INB_MAX_L2_SZ;
+	} else {
+		union roc_ot_ipsec_inb_param1 u;
+
+		u.u16 = 0;
+		u.s.esp_trailer_disable = 1;
+		param1 = u.u16;
+	}
+
+	/* Do onetime Inbound Inline config in CPTPF */
+	rc = roc_cpt_inline_ipsec_inb_cfg(roc_cpt, param1, 0);
+	if (rc && rc != -EEXIST) {
+		plt_err("Failed to setup inbound lf, rc=%d", rc);
+		return rc;
+	}
+
+	/* Setup Inbound SA table */
+	rc = nix_inl_inb_sa_tbl_setup(roc_nix);
+	if (rc)
+		return rc;
+
+	nix->inl_inb_ena = true;
+	return 0;
+}
+
+int
+roc_nix_inl_inb_fini(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	nix->inl_inb_ena = false;
+
+	/* Disable Inbound SA */
+	return nix_inl_sa_tbl_release(roc_nix);
+}
+
+int
+roc_nix_inl_outb_init(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct roc_cpt_lf *lf_base, *lf;
+	struct dev *dev = &nix->dev;
+	struct msix_offset_rsp *rsp;
+	struct nix_inl_dev *inl_dev;
+	uint16_t sso_pffunc;
+	uint8_t eng_grpmask;
+	uint64_t blkaddr;
+	uint16_t nb_lf;
+	void *sa_base;
+	size_t sa_sz;
+	int i, j, rc;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	nb_lf = roc_nix->outb_nb_crypto_qs;
+	blkaddr = nix->is_nix1 ? RVU_BLOCK_ADDR_CPT1 : RVU_BLOCK_ADDR_CPT0;
+
+	/* Retrieve inline device if present */
+	inl_dev = idev->nix_inl_dev;
+	sso_pffunc = inl_dev ? inl_dev->dev.pf_func : idev_sso_pffunc_get();
+	if (!sso_pffunc) {
+		plt_err("Failed to setup inline outb, need either "
+			"inline device or sso device");
+		return -ENOTSUP;
+	}
+
+	/* Attach CPT LF for outbound */
+	rc = cpt_lfs_attach(dev, blkaddr, true, nb_lf);
+	if (rc) {
+		plt_err("Failed to attach CPT LF for inline outb, rc=%d", rc);
+		return rc;
+	}
+
+	/* Alloc CPT LF */
+	eng_grpmask = (1ULL << ROC_CPT_DFLT_ENG_GRP_SE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
+	rc = cpt_lfs_alloc(dev, eng_grpmask, blkaddr, true);
+	if (rc) {
+		plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
+		goto lf_detach;
+	}
+
+	/* Get msix offsets */
+	rc = cpt_get_msix_offset(dev, &rsp);
+	if (rc) {
+		plt_err("Failed to get CPT LF msix offset, rc=%d", rc);
+		goto lf_free;
+	}
+
+	mbox_memcpy(nix->cpt_msixoff,
+		    nix->is_nix1 ? rsp->cpt1_lf_msixoff : rsp->cptlf_msixoff,
+		    sizeof(nix->cpt_msixoff));
+
+	/* Alloc required num of cpt lfs */
+	lf_base = plt_zmalloc(nb_lf * sizeof(struct roc_cpt_lf), 0);
+	if (!lf_base) {
+		plt_err("Failed to alloc cpt lf memory");
+		rc = -ENOMEM;
+		goto lf_free;
+	}
+
+	/* Initialize CPT LFs */
+	for (i = 0; i < nb_lf; i++) {
+		lf = &lf_base[i];
+
+		lf->lf_id = i;
+		lf->nb_desc = roc_nix->outb_nb_desc;
+		lf->dev = &nix->dev;
+		lf->msixoff = nix->cpt_msixoff[i];
+		lf->pci_dev = nix->pci_dev;
+
+		/* Setup CPT LF instruction queue */
+		rc = cpt_lf_init(lf);
+		if (rc) {
+			plt_err("Failed to initialize CPT LF, rc=%d", rc);
+			goto lf_fini;
+		}
+
+		/* Associate this CPT LF with NIX PFFUNC */
+		rc = cpt_lf_outb_cfg(dev, sso_pffunc, nix->dev.pf_func, i,
+				     true);
+		if (rc) {
+			plt_err("Failed to setup CPT LF->(NIX,SSO) link, rc=%d",
+				rc);
+			goto lf_fini;
+		}
+
+		/* Enable IQ */
+		roc_cpt_iq_enable(lf);
+	}
+
+	if (!roc_nix->ipsec_out_max_sa)
+		goto skip_sa_alloc;
+
+	/* CN9K SA size is different */
+	if (roc_model_is_cn9k())
+		sa_sz = ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ;
+	else
+		sa_sz = ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ;
+	/* Alloc contiguous memory of outbound SA */
+	sa_base = plt_zmalloc(sa_sz * roc_nix->ipsec_out_max_sa,
+			      ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!sa_base) {
+		plt_err("Outbound SA base alloc failed");
+		rc = -ENOMEM;
+		goto lf_fini;
+	}
+	nix->outb_sa_base = sa_base;
+	nix->outb_sa_sz = sa_sz;
+
+skip_sa_alloc:
+
+	nix->cpt_lf_base = lf_base;
+	nix->nb_cpt_lf = nb_lf;
+	nix->outb_err_sso_pffunc = sso_pffunc;
+	nix->inl_outb_ena = true;
+	return 0;
+
+lf_fini:
+	for (j = i - 1; j >= 0; j--)
+		cpt_lf_fini(&lf_base[j]);
+	plt_free(lf_base);
+lf_free:
+	rc |= cpt_lfs_free(dev);
+lf_detach:
+	rc |= cpt_lfs_detach(dev);
+	return rc;
+}
+
+int
+roc_nix_inl_outb_fini(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_cpt_lf *lf_base = nix->cpt_lf_base;
+	struct dev *dev = &nix->dev;
+	int i, rc, ret = 0;
+
+	if (!nix->inl_outb_ena)
+		return 0;
+
+	nix->inl_outb_ena = false;
+
+	/* Cleanup CPT LF instruction queue */
+	for (i = 0; i < nix->nb_cpt_lf; i++)
+		cpt_lf_fini(&lf_base[i]);
+
+	/* Free LF resources */
+	rc = cpt_lfs_free(dev);
+	if (rc)
+		plt_err("Failed to free CPT LF resources, rc=%d", rc);
+	ret |= rc;
+
+	/* Detach LF */
+	rc = cpt_lfs_detach(dev);
+	if (rc)
+		plt_err("Failed to detach CPT LF, rc=%d", rc);
+
+	/* Free LF memory */
+	plt_free(lf_base);
+	nix->cpt_lf_base = NULL;
+	nix->nb_cpt_lf = 0;
+
+	/* Free outbound SA base */
+	plt_free(nix->outb_sa_base);
+	nix->outb_sa_base = NULL;
+
+	ret |= rc;
+	return ret;
+}
+
+bool
+roc_nix_inl_dev_is_probed(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev == NULL)
+		return 0;
+
+	return !!idev->nix_inl_dev;
+}
+
+bool
+roc_nix_inl_inb_is_enabled(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inl_inb_ena;
+}
+
+bool
+roc_nix_inl_outb_is_enabled(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inl_outb_ena;
+}
+
+int
+roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+	struct dev *dev;
+	int rc;
+
+	if (idev == NULL)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	/* Nothing to do if no inline device */
+	if (!inl_dev)
+		return 0;
+
+	/* Just take reference if already inited */
+	if (inl_dev->rq_refs) {
+		inl_dev->rq_refs++;
+		rq->inl_dev_ref = true;
+		return 0;
+	}
+
+	dev = &inl_dev->dev;
+	inl_rq = &inl_dev->rq;
+	memset(inl_rq, 0, sizeof(struct roc_nix_rq));
+
+	/* Take RQ pool attributes from the first ethdev RQ */
+	inl_rq->qid = 0;
+	inl_rq->aura_handle = rq->aura_handle;
+	inl_rq->first_skip = rq->first_skip;
+	inl_rq->later_skip = rq->later_skip;
+	inl_rq->lpb_size = rq->lpb_size;
+
+	if (!roc_model_is_cn9k()) {
+		uint64_t aura_limit =
+			roc_npa_aura_op_limit_get(inl_rq->aura_handle);
+		uint64_t aura_shift = plt_log2_u32(aura_limit);
+
+		if (aura_shift < 8)
+			aura_shift = 0;
+		else
+			aura_shift = aura_shift - 8;
+
+		/* Set first pass RQ to drop when half of the buffers are in
+		 * use, to avoid metabuf alloc failure. This is needed as long
+		 * as we cannot use a different aura for the inline device RQ.
+		 */
+		inl_rq->red_pass = (aura_limit / 2) >> aura_shift;
+		inl_rq->red_drop = ((aura_limit / 2) - 1) >> aura_shift;
+	}
+
+	/* Enable IPSec */
+	inl_rq->ipsech_ena = true;
+
+	inl_rq->flow_tag_width = 20;
+	/* Special tag mask */
+	inl_rq->tag_mask = 0xFFF00000;
+	inl_rq->tt = SSO_TT_ORDERED;
+	inl_rq->hwgrp = 0;
+	inl_rq->wqe_skip = 1;
+	inl_rq->sso_ena = true;
+
+	/* Prepare and send RQ init mbox */
+	if (roc_model_is_cn9k())
+		rc = nix_rq_cn9k_cfg(dev, inl_rq, inl_dev->qints, false, true);
+	else
+		rc = nix_rq_cfg(dev, inl_rq, inl_dev->qints, false, true);
+	if (rc) {
+		plt_err("Failed to prepare aq_enq msg, rc=%d", rc);
+		return rc;
+	}
+
+	rc = mbox_process(dev->mbox);
+	if (rc) {
+		plt_err("Failed to send aq_enq msg, rc=%d", rc);
+		return rc;
+	}
+
+	inl_dev->rq_refs++;
+	rq->inl_dev_ref = true;
+	return 0;
+}
+
+int
+roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+	struct dev *dev;
+	int rc;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!rq->inl_dev_ref)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	/* Inline device should be there if we have ref */
+	if (!inl_dev) {
+		plt_err("Failed to find inline device with refs");
+		return -EFAULT;
+	}
+
+	rq->inl_dev_ref = false;
+	inl_dev->rq_refs--;
+	if (inl_dev->rq_refs)
+		return 0;
+
+	dev = &inl_dev->dev;
+	inl_rq = &inl_dev->rq;
+	/* There are no more references, disable RQ */
+	rc = nix_rq_ena_dis(dev, inl_rq, false);
+	if (rc)
+		plt_err("Failed to disable inline device rq, rc=%d", rc);
+
+	/* Flush NIX LF for CN10K */
+	if (roc_model_is_cn10k())
+		plt_write64(0, inl_dev->nix_base + NIX_LF_OP_VWQE_FLUSH);
+
+	return rc;
+}
+
+uint64_t
+roc_nix_inl_dev_rq_limit_get(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+
+	if (!idev || !idev->nix_inl_dev)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev->rq_refs)
+		return 0;
+
+	inl_rq = &inl_dev->rq;
+
+	return roc_npa_aura_op_limit_get(inl_rq->aura_handle);
+}
+
+void
+roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* Info used by NPC flow rule add */
+	nix->inb_inl_dev = use_inl_dev;
+}
+
+bool
+roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inb_inl_dev;
+}
+
+struct roc_nix_rq *
+roc_nix_inl_dev_rq(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev != NULL) {
+		inl_dev = idev->nix_inl_dev;
+		if (inl_dev != NULL && inl_dev->rq_refs)
+			return &inl_dev->rq;
+	}
+
+	return NULL;
+}
+
+uint16_t __roc_api
+roc_nix_inl_outb_sso_pffunc_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->outb_err_sso_pffunc;
+}
+
+int
+roc_nix_inl_cb_register(roc_nix_inl_sso_work_cb_t cb, void *args)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return -EIO;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev)
+		return -EIO;
+
+	/* Be silent if registration called with same cb and args */
+	if (inl_dev->work_cb == cb && inl_dev->cb_args == args)
+		return 0;
+
+	/* Don't allow registration again if registered with different cb */
+	if (inl_dev->work_cb)
+		return -EBUSY;
+
+	inl_dev->work_cb = cb;
+	inl_dev->cb_args = args;
+	return 0;
+}
+
+int
+roc_nix_inl_cb_unregister(roc_nix_inl_sso_work_cb_t cb, void *args)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return -ENOENT;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev)
+		return -ENOENT;
+
+	if (inl_dev->work_cb != cb || inl_dev->cb_args != args)
+		return -EINVAL;
+
+	inl_dev->work_cb = NULL;
+	inl_dev->cb_args = NULL;
+	return 0;
+}
+
+int
+roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix, uint32_t tag_const,
+			   uint8_t tt)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_nix_ipsec_cfg cfg;
+
+	/* Be silent if inline inbound not enabled */
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.sa_size = nix->inb_sa_sz;
+	cfg.iova = (uintptr_t)nix->inb_sa_base;
+	cfg.max_sa = roc_nix->ipsec_in_max_spi + 1;
+	cfg.tt = tt;
+	cfg.tag_const = tag_const;
+
+	return roc_nix_lf_inl_ipsec_cfg(roc_nix, &cfg, true);
+}
+
+int
+roc_nix_inl_sa_sync(struct roc_nix *roc_nix, void *sa, bool inb,
+		    enum roc_nix_inl_sa_sync_op op)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_cpt_lf *outb_lf = nix->cpt_lf_base;
+	union cpt_lf_ctx_reload reload;
+	union cpt_lf_ctx_flush flush;
+	uintptr_t rbase;
+
+	/* Nothing much to do on cn9k */
+	if (roc_model_is_cn9k()) {
+		plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+		return 0;
+	}
+
+	if (!inb && !outb_lf)
+		return -EINVAL;
+
+	/* Performing op via outbound lf is enough
+	 * when inline dev is not in use.
+	 */
+	if (outb_lf && !nix->inb_inl_dev) {
+		rbase = outb_lf->rbase;
+
+		flush.u = 0;
+		reload.u = 0;
+		switch (op) {
+		case ROC_NIX_INL_SA_OP_FLUSH_INVAL:
+			flush.s.inval = 1;
+			/* fall through */
+		case ROC_NIX_INL_SA_OP_FLUSH:
+			flush.s.cptr = ((uintptr_t)sa) >> 7;
+			plt_write64(flush.u, rbase + CPT_LF_CTX_FLUSH);
+			break;
+		case ROC_NIX_INL_SA_OP_RELOAD:
+			reload.s.cptr = ((uintptr_t)sa) >> 7;
+			plt_write64(reload.u, rbase + CPT_LF_CTX_RELOAD);
+			break;
+		default:
+			return -EINVAL;
+		}
+		return 0;
+	}
+
+	return -ENOTSUP;
+}
+
+void
+roc_nix_inl_dev_lock(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev != NULL)
+		plt_spinlock_lock(&idev->nix_inl_dev_lock);
+}
+
+void
+roc_nix_inl_dev_unlock(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev != NULL)
+		plt_spinlock_unlock(&idev->nix_inl_dev_lock);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 1b3aab0..6b8c268 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -43,6 +43,62 @@
 /* Alignment of SA Base */
 #define ROC_NIX_INL_SA_BASE_ALIGN BIT_ULL(16)
 
+static inline struct roc_onf_ipsec_inb_sa *
+roc_nix_inl_onf_ipsec_inb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline struct roc_onf_ipsec_outb_sa *
+roc_nix_inl_onf_ipsec_outb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline void *
+roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ);
+}
+
+static inline void *
+roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ);
+}
+
+static inline struct roc_ot_ipsec_inb_sa *
+roc_nix_inl_ot_ipsec_inb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline struct roc_ot_ipsec_outb_sa *
+roc_nix_inl_ot_ipsec_outb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline void *
+roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_OT_IPSEC_INB_HW_SZ);
+}
+
+static inline void *
+roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ);
+}
+
 /* Inline device SSO Work callback */
 typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
 
@@ -62,5 +118,50 @@ struct roc_nix_inl_dev {
 int __roc_api roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev);
 int __roc_api roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev);
 void __roc_api roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev);
+bool __roc_api roc_nix_inl_dev_is_probed(void);
+void __roc_api roc_nix_inl_dev_lock(void);
+void __roc_api roc_nix_inl_dev_unlock(void);
+
+/* NIX Inline Inbound API */
+int __roc_api roc_nix_inl_inb_init(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_inb_fini(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_inl_inb_is_enabled(struct roc_nix *roc_nix);
+uintptr_t __roc_api roc_nix_inl_inb_sa_base_get(struct roc_nix *roc_nix,
+						bool inl_dev_sa);
+uint32_t __roc_api roc_nix_inl_inb_sa_max_spi(struct roc_nix *roc_nix,
+					      bool inl_dev_sa);
+uint32_t __roc_api roc_nix_inl_inb_sa_sz(struct roc_nix *roc_nix,
+					 bool inl_dev_sa);
+uintptr_t __roc_api roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix,
+					   bool inl_dev_sa, uint32_t spi);
+void __roc_api roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev);
+int __roc_api roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq);
+int __roc_api roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq);
+bool __roc_api roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix);
+struct roc_nix_rq *__roc_api roc_nix_inl_dev_rq(void);
+int __roc_api roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix,
+					 uint32_t tag_const, uint8_t tt);
+uint64_t __roc_api roc_nix_inl_dev_rq_limit_get(void);
+
+/* NIX Inline Outbound API */
+int __roc_api roc_nix_inl_outb_init(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_outb_fini(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_inl_outb_is_enabled(struct roc_nix *roc_nix);
+uintptr_t __roc_api roc_nix_inl_outb_sa_base_get(struct roc_nix *roc_nix);
+struct roc_cpt_lf *__roc_api
+roc_nix_inl_outb_lf_base_get(struct roc_nix *roc_nix);
+uint16_t __roc_api roc_nix_inl_outb_sso_pffunc_get(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_cb_register(roc_nix_inl_sso_work_cb_t cb, void *args);
+int __roc_api roc_nix_inl_cb_unregister(roc_nix_inl_sso_work_cb_t cb,
+					void *args);
+/* NIX Inline/Outbound API */
+enum roc_nix_inl_sa_sync_op {
+	ROC_NIX_INL_SA_OP_FLUSH,
+	ROC_NIX_INL_SA_OP_FLUSH_INVAL,
+	ROC_NIX_INL_SA_OP_RELOAD,
+};
+
+int __roc_api roc_nix_inl_sa_sync(struct roc_nix *roc_nix, void *sa, bool inb,
+				  enum roc_nix_inl_sa_sync_op op);
 
 #endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index 0cabcd2..13867b9 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -163,6 +163,21 @@ struct nix {
 	uint16_t tm_link_cfg_lvl;
 	uint16_t contig_rsvd[NIX_TXSCH_LVL_CNT];
 	uint16_t discontig_rsvd[NIX_TXSCH_LVL_CNT];
+
+	/* Ipsec info */
+	uint16_t cpt_msixoff[MAX_RVU_BLKLF_CNT];
+	bool inl_inb_ena;
+	bool inl_outb_ena;
+	void *inb_sa_base;
+	size_t inb_sa_sz;
+	void *outb_sa_base;
+	size_t outb_sa_sz;
+	uint16_t outb_err_sso_pffunc;
+	struct roc_cpt_lf *cpt_lf_base;
+	uint16_t nb_cpt_lf;
+	/* Mode provided by driver */
+	bool inb_inl_dev;
+
 } __plt_cache_aligned;
 
 enum nix_err_status {
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index cff0ec3..41e8f2c 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -131,11 +131,11 @@ nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
 
 	/* If RED enabled, then fill enable for all cases */
 	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
-		aq->rq.spb_aura_pass = rq->spb_red_pass;
-		aq->rq.lpb_aura_pass = rq->red_pass;
+		aq->rq.spb_pool_pass = rq->spb_red_pass;
+		aq->rq.lpb_pool_pass = rq->red_pass;
 
-		aq->rq.spb_aura_drop = rq->spb_red_drop;
-		aq->rq.lpb_aura_drop = rq->red_drop;
+		aq->rq.spb_pool_drop = rq->spb_red_drop;
+		aq->rq.lpb_pool_drop = rq->red_drop;
 	}
 
 	if (cfg) {
@@ -176,11 +176,11 @@ nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
 		aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena;
 
 		if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
-			aq->rq_mask.spb_aura_pass = ~aq->rq_mask.spb_aura_pass;
-			aq->rq_mask.lpb_aura_pass = ~aq->rq_mask.lpb_aura_pass;
+			aq->rq_mask.spb_pool_pass = ~aq->rq_mask.spb_pool_pass;
+			aq->rq_mask.lpb_pool_pass = ~aq->rq_mask.lpb_pool_pass;
 
-			aq->rq_mask.spb_aura_drop = ~aq->rq_mask.spb_aura_drop;
-			aq->rq_mask.lpb_aura_drop = ~aq->rq_mask.lpb_aura_drop;
+			aq->rq_mask.spb_pool_drop = ~aq->rq_mask.spb_pool_drop;
+			aq->rq_mask.lpb_pool_drop = ~aq->rq_mask.lpb_pool_drop;
 		}
 	}
 
@@ -276,17 +276,13 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 
 	/* If RED enabled, then fill enable for all cases */
 	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
-		aq->rq.spb_pool_pass = rq->red_pass;
-		aq->rq.spb_aura_pass = rq->red_pass;
+		aq->rq.spb_pool_pass = rq->spb_red_pass;
 		aq->rq.lpb_pool_pass = rq->red_pass;
-		aq->rq.lpb_aura_pass = rq->red_pass;
 		aq->rq.wqe_pool_pass = rq->red_pass;
 		aq->rq.xqe_pass = rq->red_pass;
 
-		aq->rq.spb_pool_drop = rq->red_drop;
-		aq->rq.spb_aura_drop = rq->red_drop;
+		aq->rq.spb_pool_drop = rq->spb_red_drop;
 		aq->rq.lpb_pool_drop = rq->red_drop;
-		aq->rq.lpb_aura_drop = rq->red_drop;
 		aq->rq.wqe_pool_drop = rq->red_drop;
 		aq->rq.xqe_drop = rq->red_drop;
 	}
@@ -346,16 +342,12 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 
 		if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
 			aq->rq_mask.spb_pool_pass = ~aq->rq_mask.spb_pool_pass;
-			aq->rq_mask.spb_aura_pass = ~aq->rq_mask.spb_aura_pass;
 			aq->rq_mask.lpb_pool_pass = ~aq->rq_mask.lpb_pool_pass;
-			aq->rq_mask.lpb_aura_pass = ~aq->rq_mask.lpb_aura_pass;
 			aq->rq_mask.wqe_pool_pass = ~aq->rq_mask.wqe_pool_pass;
 			aq->rq_mask.xqe_pass = ~aq->rq_mask.xqe_pass;
 
 			aq->rq_mask.spb_pool_drop = ~aq->rq_mask.spb_pool_drop;
-			aq->rq_mask.spb_aura_drop = ~aq->rq_mask.spb_aura_drop;
 			aq->rq_mask.lpb_pool_drop = ~aq->rq_mask.lpb_pool_drop;
-			aq->rq_mask.lpb_aura_drop = ~aq->rq_mask.lpb_aura_drop;
 			aq->rq_mask.wqe_pool_drop = ~aq->rq_mask.wqe_pool_drop;
 			aq->rq_mask.xqe_drop = ~aq->rq_mask.xqe_drop;
 		}
diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c
index 52a54b3..047b969 100644
--- a/drivers/common/cnxk/roc_npc.c
+++ b/drivers/common/cnxk/roc_npc.c
@@ -340,10 +340,11 @@ roc_npc_fini(struct roc_npc *roc_npc)
 }
 
 static int
-npc_parse_actions(struct npc *npc, const struct roc_npc_attr *attr,
+npc_parse_actions(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 		  const struct roc_npc_action actions[],
 		  struct roc_npc_flow *flow)
 {
+	struct npc *npc = roc_npc_to_npc_priv(roc_npc);
 	const struct roc_npc_action_mark *act_mark;
 	const struct roc_npc_action_queue *act_q;
 	const struct roc_npc_action_vf *vf_act;
@@ -425,15 +426,16 @@ npc_parse_actions(struct npc *npc, const struct roc_npc_attr *attr,
 			 *    NPC_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
 			 *  session_protocol ==
 			 *    NPC_SECURITY_PROTOCOL_IPSEC
-			 *
-			 * RSS is not supported with inline ipsec. Get the
-			 * rq from associated conf, or make
-			 * ROC_NPC_ACTION_TYPE_QUEUE compulsory with this
-			 * action.
-			 * Currently, rq = 0 is assumed.
 			 */
 			req_act |= ROC_NPC_ACTION_TYPE_SEC;
 			rq = 0;
+
+			/* Special processing when with inline device */
+			if (roc_nix_inb_is_with_inl_dev(roc_npc->roc_nix) &&
+			    roc_nix_inl_dev_is_probed()) {
+				rq = 0;
+				pf_func = nix_inl_dev_pffunc_get();
+			}
 			break;
 		case ROC_NPC_ACTION_TYPE_VLAN_STRIP:
 			req_act |= ROC_NPC_ACTION_TYPE_VLAN_STRIP;
@@ -677,11 +679,12 @@ npc_parse_attr(struct npc *npc, const struct roc_npc_attr *attr,
 }
 
 static int
-npc_parse_rule(struct npc *npc, const struct roc_npc_attr *attr,
+npc_parse_rule(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	       const struct roc_npc_item_info pattern[],
 	       const struct roc_npc_action actions[], struct roc_npc_flow *flow,
 	       struct npc_parse_state *pst)
 {
+	struct npc *npc = roc_npc_to_npc_priv(roc_npc);
 	int err;
 
 	/* Check attr */
@@ -695,7 +698,7 @@ npc_parse_rule(struct npc *npc, const struct roc_npc_attr *attr,
 		return err;
 
 	/* Check action */
-	err = npc_parse_actions(npc, attr, actions, flow);
+	err = npc_parse_actions(roc_npc, attr, actions, flow);
 	if (err)
 		return err;
 	return 0;
@@ -711,7 +714,8 @@ roc_npc_flow_parse(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	struct npc_parse_state parse_state = {0};
 	int rc;
 
-	rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state);
+	rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow,
+			    &parse_state);
 	if (rc)
 		return rc;
 
@@ -1191,7 +1195,8 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	}
 	memset(flow, 0, sizeof(*flow));
 
-	rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state);
+	rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow,
+			    &parse_state);
 	if (rc != 0) {
 		*errcode = rc;
 		goto err_exit;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 3256aeb..0ea3cbd 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -100,9 +100,35 @@ INTERNAL {
 	roc_nix_get_pf_func;
 	roc_nix_get_vf;
 	roc_nix_get_vwqe_interval;
+	roc_nix_inl_cb_register;
+	roc_nix_inl_cb_unregister;
 	roc_nix_inl_dev_dump;
 	roc_nix_inl_dev_fini;
 	roc_nix_inl_dev_init;
+	roc_nix_inl_dev_is_probed;
+	roc_nix_inl_dev_lock;
+	roc_nix_inl_dev_unlock;
+	roc_nix_inl_dev_rq;
+	roc_nix_inl_dev_rq_get;
+	roc_nix_inl_dev_rq_put;
+	roc_nix_inl_dev_rq_limit_get;
+	roc_nix_inl_inb_is_enabled;
+	roc_nix_inl_inb_init;
+	roc_nix_inl_inb_sa_base_get;
+	roc_nix_inl_inb_sa_get;
+	roc_nix_inl_inb_sa_max_spi;
+	roc_nix_inl_inb_sa_sz;
+	roc_nix_inl_inb_tag_update;
+	roc_nix_inl_inb_fini;
+	roc_nix_inb_is_with_inl_dev;
+	roc_nix_inb_mode_set;
+	roc_nix_inl_outb_fini;
+	roc_nix_inl_outb_init;
+	roc_nix_inl_outb_lf_base_get;
+	roc_nix_inl_outb_sa_base_get;
+	roc_nix_inl_outb_sso_pffunc_get;
+	roc_nix_inl_outb_is_enabled;
+	roc_nix_inl_sa_sync;
 	roc_nix_is_lbk;
 	roc_nix_is_pf;
 	roc_nix_is_sdp;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 08/28] common/cnxk: disable CQ drop when inline inbound is enabled
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (6 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 07/28] common/cnxk: support NIX inline inbound and outbound setup Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 09/28] common/cnxk: dump CPT LF registers on error intr Nithin Dabilpuram
                     ` (20 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Disable CQ drop when inline inbound is enabled. CQ drop
is not supported for second pass IPsec decrypted packets.
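
A minimal sketch of the resulting policy (mirroring the hunk below, with
roc_nix_inl_inb_is_enabled() reporting whether inline inbound was set up
on this NIX):

	if (!roc_nix_inl_inb_is_enabled(roc_nix)) {
		/* No second-pass traffic, safe to drop on CQ full */
		cq_ctx->drop = cq->drop_thresh;
		cq_ctx->drop_ena = 1;
	}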

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_queue.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 41e8f2c..41a1422 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -492,15 +492,20 @@ roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
 		cq->drop_thresh = min_rx_drop;
 	} else {
 		cq->drop_thresh = NIX_CQ_THRESH_LEVEL;
-		cq_ctx->drop = cq->drop_thresh;
-		cq_ctx->drop_ena = 1;
+		/* Drop processing or red drop cannot be enabled due to
+		 * packets coming for second pass from CPT.
+		 */
+		if (!roc_nix_inl_inb_is_enabled(roc_nix)) {
+			cq_ctx->drop = cq->drop_thresh;
+			cq_ctx->drop_ena = 1;
+		}
 	}
 
 	/* TX pause frames enable flow ctrl on RX side */
 	if (nix->tx_pause) {
 		/* Single BPID is allocated for all rx channels for now */
 		cq_ctx->bpid = nix->bpid[0];
-		cq_ctx->bp = cq_ctx->drop;
+		cq_ctx->bp = cq->drop_thresh;
 		cq_ctx->bp_ena = 1;
 	}
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 09/28] common/cnxk: dump CPT LF registers on error intr
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (7 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 08/28] common/cnxk: disable CQ drop when inline inbound is enabled Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 10/28] common/cnxk: align CPT LF enable/disable sequence Nithin Dabilpuram
                     ` (19 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Dump CPT LF registers on error interrupt for debugging
purposes.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_cpt.c       |  5 ++++-
 drivers/common/cnxk/roc_cpt_debug.c | 32 ++++++++++++++++++++++++++++++--
 drivers/common/cnxk/roc_cpt_priv.h  |  1 +
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 48a378b..6ddbaa2 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -51,6 +51,9 @@ cpt_lf_misc_irq(void *param)
 
 	plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
 
+	/* Dump lf registers */
+	cpt_lf_print(lf);
+
 	/* Clear interrupt */
 	plt_write64(intr, lf->rbase + CPT_LF_MISC_INT);
 }
@@ -203,7 +206,7 @@ cpt_lf_dump(struct roc_cpt_lf *lf)
 	plt_cpt_dbg("CPT LF REG:");
 	plt_cpt_dbg("LF_CTL[0x%016llx]: 0x%016" PRIx64, CPT_LF_CTL,
 		    plt_read64(lf->rbase + CPT_LF_CTL));
-	plt_cpt_dbg("Q_SIZE[0x%016llx]: 0x%016" PRIx64, CPT_LF_INPROG,
+	plt_cpt_dbg("LF_INPROG[0x%016llx]: 0x%016" PRIx64, CPT_LF_INPROG,
 		    plt_read64(lf->rbase + CPT_LF_INPROG));
 
 	plt_cpt_dbg("Q_BASE[0x%016llx]: 0x%016" PRIx64, CPT_LF_Q_BASE,
diff --git a/drivers/common/cnxk/roc_cpt_debug.c b/drivers/common/cnxk/roc_cpt_debug.c
index a6c9004..847d969 100644
--- a/drivers/common/cnxk/roc_cpt_debug.c
+++ b/drivers/common/cnxk/roc_cpt_debug.c
@@ -157,11 +157,40 @@ roc_cpt_afs_print(struct roc_cpt *roc_cpt)
 	return 0;
 }
 
-static void
+void
 cpt_lf_print(struct roc_cpt_lf *lf)
 {
 	uint64_t reg_val;
 
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_BASE);
+	plt_print("    CPT_LF_Q_BASE:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_SIZE);
+	plt_print("    CPT_LF_Q_SIZE:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_INST_PTR);
+	plt_print("    CPT_LF_Q_INST_PTR:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_GRP_PTR);
+	plt_print("    CPT_LF_Q_GRP_PTR:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_CTL);
+	plt_print("    CPT_LF_CTL:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_MISC_INT_ENA_W1S);
+	plt_print("    CPT_LF_MISC_INT_ENA_W1S:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_MISC_INT);
+	plt_print("    CPT_LF_MISC_INT:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_INPROG);
+	plt_print("    CPT_LF_INPROG:\t%016lx", reg_val);
+
+	if (roc_model_is_cn9k())
+		return;
+
+	plt_print("Count registers for CPT LF%d:", lf->lf_id);
+
 	reg_val = plt_read64(lf->rbase + CPT_LF_CTX_ENC_BYTE_CNT);
 	plt_print("    Encrypted byte count:\t%" PRIu64, reg_val);
 
@@ -190,7 +219,6 @@ roc_cpt_lfs_print(struct roc_cpt *roc_cpt)
 		if (lf == NULL)
 			continue;
 
-		plt_print("Count registers for CPT LF%d:", lf_id);
 		cpt_lf_print(lf);
 	}
 
diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h
index 21911e5..61dec9a 100644
--- a/drivers/common/cnxk/roc_cpt_priv.h
+++ b/drivers/common/cnxk/roc_cpt_priv.h
@@ -31,5 +31,6 @@ int cpt_lf_outb_cfg(struct dev *dev, uint16_t sso_pf_func, uint16_t nix_pf_func,
 		    uint8_t lf_id, bool ena);
 int cpt_get_msix_offset(struct dev *dev, struct msix_offset_rsp **msix_rsp);
 uint64_t cpt_get_blkaddr(struct dev *dev);
+void cpt_lf_print(struct roc_cpt_lf *lf);
 
 #endif /* _ROC_CPT_PRIV_H_ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 10/28] common/cnxk: align CPT LF enable/disable sequence
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (8 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 09/28] common/cnxk: dump CPT LF registers on error intr Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 11/28] common/cnxk: restore NIX sqb pool limit before destroy Nithin Dabilpuram
                     ` (18 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

For CPT LF IQ enable, set CPT_LF_CTL[ENA] before setting
CPT_LF_INPROG[EENA] to true.

For CPT LF IQ disable, align sequence to that of HRM.

This patch also aligns the CPT LF instruction queue memory
to ROC_ALIGN so that the complete memory is cache aligned,
and includes other minor fixes/additions.
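
A sketch of the resulting IQ enable order (per the HRM, as enforced by
this patch; register accessors as used elsewhere in roc_cpt.c):

	/* 1. Allow instructions to be enqueued first */
	lf_ctl.u = plt_read64(lf->rbase + CPT_LF_CTL);
	lf_ctl.s.ena = 1;
	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);

	/* 2. Only then enable execution of queued instructions */
	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
	lf_inprog.s.eena = 1;
	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);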

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/hw/cpt.h  | 11 +++++++++++
 drivers/common/cnxk/roc_cpt.c | 42 ++++++++++++++++++++++++++++++++++--------
 drivers/common/cnxk/roc_cpt.h |  8 ++++++++
 3 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 975139f..4d9df59 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -124,6 +124,17 @@ union cpt_lf_misc_int {
 	} s;
 };
 
+union cpt_lf_q_grp_ptr {
+	uint64_t u;
+	struct {
+		uint64_t dq_ptr : 15;
+		uint64_t reserved_31_15 : 17;
+		uint64_t nq_ptr : 15;
+		uint64_t reserved_47_62 : 16;
+		uint64_t xq_xor : 1;
+	} s;
+};
+
 union cpt_inst_w4 {
 	uint64_t u64;
 	struct {
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 6ddbaa2..68fdb27 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -437,8 +437,10 @@ cpt_lf_iq_mem_calc(uint32_t nb_desc)
 	len += CPT_IQ_FC_LEN;
 
 	/* For instruction queues */
-	len += CPT_IQ_NB_DESC_SIZE_DIV40(nb_desc) * CPT_IQ_NB_DESC_MULTIPLIER *
-	       sizeof(struct cpt_inst_s);
+	len += PLT_ALIGN(CPT_IQ_NB_DESC_SIZE_DIV40(nb_desc) *
+				 CPT_IQ_NB_DESC_MULTIPLIER *
+				 sizeof(struct cpt_inst_s),
+			 ROC_ALIGN);
 
 	return len;
 }
@@ -550,6 +552,7 @@ cpt_lf_init(struct roc_cpt_lf *lf)
 	iq_mem = plt_zmalloc(cpt_lf_iq_mem_calc(lf->nb_desc), ROC_ALIGN);
 	if (iq_mem == NULL)
 		return -ENOMEM;
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
 
 	blkaddr = cpt_get_blkaddr(dev);
 	lf->rbase = dev->bar2 + ((blkaddr << 20) | (lf->lf_id << 12));
@@ -634,7 +637,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
 	}
 
 	/* Reserve 1 CPT LF for inline inbound */
-	nb_lf_avail = PLT_MIN(nb_lf_avail, ROC_CPT_MAX_LFS - 1);
+	nb_lf_avail = PLT_MIN(nb_lf_avail, (uint16_t)(ROC_CPT_MAX_LFS - 1));
 
 	roc_cpt->nb_lf_avail = nb_lf_avail;
 
@@ -770,8 +773,10 @@ void
 roc_cpt_iq_disable(struct roc_cpt_lf *lf)
 {
 	union cpt_lf_ctl lf_ctl = {.u = 0x0};
+	union cpt_lf_q_grp_ptr grp_ptr;
 	union cpt_lf_inprog lf_inprog;
 	int timeout = 20;
+	int cnt;
 
 	/* Disable instructions enqueuing */
 	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);
@@ -795,6 +800,27 @@ roc_cpt_iq_disable(struct roc_cpt_lf *lf)
 	 */
 	lf_inprog.s.eena = 0x0;
 	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
+
+	/* Wait for instruction queue to become empty */
+	cnt = 0;
+	do {
+		lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+		if (lf_inprog.s.grb_partial)
+			cnt = 0;
+		else
+			cnt++;
+		grp_ptr.u = plt_read64(lf->rbase + CPT_LF_Q_GRP_PTR);
+	} while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr));
+
+	cnt = 0;
+	do {
+		lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+		if ((lf_inprog.s.inflight == 0) && (lf_inprog.s.gwb_cnt < 40) &&
+		    ((lf_inprog.s.grb_cnt == 0) || (lf_inprog.s.grb_cnt == 40)))
+			cnt++;
+		else
+			cnt = 0;
+	} while (cnt < 10);
 }
 
 void
@@ -806,11 +832,6 @@ roc_cpt_iq_enable(struct roc_cpt_lf *lf)
 	/* Disable command queue */
 	roc_cpt_iq_disable(lf);
 
-	/* Enable command queue execution */
-	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
-	lf_inprog.s.eena = 1;
-	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
-
 	/* Enable instruction queue enqueuing */
 	lf_ctl.u = plt_read64(lf->rbase + CPT_LF_CTL);
 	lf_ctl.s.ena = 1;
@@ -819,6 +840,11 @@ roc_cpt_iq_enable(struct roc_cpt_lf *lf)
 	lf_ctl.s.fc_hyst_bits = lf->fc_hyst_bits;
 	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);
 
+	/* Enable command queue execution */
+	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+	lf_inprog.s.eena = 1;
+	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
+
 	cpt_lf_dump(lf);
 }
 
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index c80a8e0..06277d1 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -76,6 +76,14 @@
 #define ROC_CPT_TUNNEL_IPV4_HDR_LEN 20
 #define ROC_CPT_TUNNEL_IPV6_HDR_LEN 40
 
+#define ROC_CPT_CCM_AAD_DATA 1
+#define ROC_CPT_CCM_MSG_LEN  4
+#define ROC_CPT_CCM_ICV_LEN  16
+#define ROC_CPT_CCM_FLAGS                                                      \
+	((ROC_CPT_CCM_AAD_DATA << 6) |                                         \
+	 (((ROC_CPT_CCM_ICV_LEN - 2) / 2) << 3) | (ROC_CPT_CCM_MSG_LEN - 1))
+#define ROC_CPT_CCM_SALT_LEN 3
+
 struct roc_cpt_lmtline {
 	uint64_t io_addr;
 	uint64_t *fc_addr;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 11/28] common/cnxk: restore NIX sqb pool limit before destroy
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (9 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 10/28] common/cnxk: align CPT LF enable/disable sequence Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 12/28] common/cnxk: add CQ enable support in NIX Tx path Nithin Dabilpuram
                     ` (17 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Restore the SQB AURA/POOL limit before destroying the SQB
pool so that all the buffers can be drained from the aura.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_queue.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 41a1422..a8a713a 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -934,6 +934,11 @@ roc_nix_sq_fini(struct roc_nix_sq *sq)
 		rc |= NIX_ERR_NDC_SYNC;
 
 	rc |= nix_tm_sq_flush_post(sq);
+
+	/* Restore limit to max SQB count that the pool was created
+	 * for aura drain to succeed.
+	 */
+	roc_npa_aura_limit_modify(sq->aura_handle, NIX_MAX_SQB);
 	rc |= roc_npa_pool_destroy(sq->aura_handle);
 	plt_free(sq->fc);
 	plt_free(sq->sqe_mem);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 12/28] common/cnxk: add CQ enable support in NIX Tx path
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (10 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 11/28] common/cnxk: restore NIX sqb pool limit before destroy Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 13/28] common/cnxk: setup aura BP conf based on nix Nithin Dabilpuram
                     ` (16 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Kommula Shiva Shankar

From: Kommula Shiva Shankar <kshankar@marvell.com>

This patch allows applications to enable CQ support in
the Tx path, so that packet completion events are posted
on the CQ for the requested packets.
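
A usage sketch from the consumer side (illustrative values; cq_ena and
cqid are new input parameters of struct roc_nix_sq, consumed at SQ
context init by the roc_nix_sq_init() counterpart of roc_nix_sq_fini()):

	struct roc_nix_sq sq;

	memset(&sq, 0, sizeof(sq));
	sq.qid = qid;
	sq.nb_desc = nb_desc;
	sq.cq_ena = true;	/* Post completion events on a CQ */
	sq.cqid = cqid;		/* CQ that receives Tx completions */
	rc = roc_nix_sq_init(roc_nix, &sq);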

Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
---
 drivers/common/cnxk/roc_nix.h       | 2 ++
 drivers/common/cnxk/roc_nix_queue.c | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index ff8c93a..9d3f338 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -194,7 +194,9 @@ struct roc_nix_sq {
 	enum roc_nix_sq_max_sqe_sz max_sqe_sz;
 	uint32_t nb_desc;
 	uint16_t qid;
+	uint16_t cqid;
 	bool sso_ena;
+	bool cq_ena;
 	/* End of Input parameters */
 	uint16_t sqes_per_sqb_log2;
 	struct roc_nix *roc_nix;
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index a8a713a..cba1294 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -661,6 +661,8 @@ sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
 	aq->sq.sqe_stype = NIX_STYPE_STF;
 	aq->sq.ena = 1;
 	aq->sq.sso_ena = !!sq->sso_ena;
+	aq->sq.cq_ena = !!sq->cq_ena;
+	aq->sq.cq = sq->cqid;
 	if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
 		aq->sq.sqe_stype = NIX_STYPE_STP;
 	aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
@@ -759,6 +761,8 @@ sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
 	aq->sq.sqe_stype = NIX_STYPE_STF;
 	aq->sq.ena = 1;
 	aq->sq.sso_ena = !!sq->sso_ena;
+	aq->sq.cq_ena = !!sq->cq_ena;
+	aq->sq.cq = sq->cqid;
 	if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
 		aq->sq.sqe_stype = NIX_STYPE_STP;
 	aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 13/28] common/cnxk: setup aura BP conf based on nix
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (11 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 12/28] common/cnxk: add CQ enable support in NIX Tx path Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 14/28] common/cnxk: support anti-replay check in SW for cn9k Nithin Dabilpuram
                     ` (15 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Currently only the NIX0 configuration is set up in the AURA
for backpressure. This patch adds support for NIX1 as well.
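
The aura context encodes the target NIX in a two-bit bp_ena field (bit 0
backpressures NIX0, bit 1 NIX1), so the bit and the matching bpid are
selected from the nix state, as in this sketch of the hunk below:

	/* nix->is_nix1 is 0 for NIX0, 1 for NIX1 */
	req->aura.bp_ena = (!!ena << nix->is_nix1);
	if (nix->is_nix1)
		req->aura.nix1_bpid = nix->bpid[0];
	else
		req->aura.nix0_bpid = nix->bpid[0];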

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_fc.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index f17eba4..7eac7d0 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -284,8 +284,18 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 	limit = rsp->aura.limit;
 	/* BP is already enabled. */
 	if (rsp->aura.bp_ena) {
+		uint16_t bpid;
+		bool nix1;
+
+		nix1 = !!(rsp->aura.bp_ena & 0x2);
+		if (nix1)
+			bpid = rsp->aura.nix1_bpid;
+		else
+			bpid = rsp->aura.nix0_bpid;
+
 		/* If BP ids don't match disable BP. */
-		if ((rsp->aura.nix0_bpid != nix->bpid[0]) && !force) {
+		if (((nix1 != nix->is_nix1) || (bpid != nix->bpid[0])) &&
+		    !force) {
 			req = mbox_alloc_msg_npa_aq_enq(mbox);
 			if (req == NULL)
 				return;
@@ -315,14 +325,19 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 	req->op = NPA_AQ_INSTOP_WRITE;
 
 	if (ena) {
-		req->aura.nix0_bpid = nix->bpid[0];
-		req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
+		if (nix->is_nix1) {
+			req->aura.nix1_bpid = nix->bpid[0];
+			req->aura_mask.nix1_bpid = ~(req->aura_mask.nix1_bpid);
+		} else {
+			req->aura.nix0_bpid = nix->bpid[0];
+			req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
+		}
 		req->aura.bp = NIX_RQ_AURA_THRESH(
 			limit > 128 ? 256 : limit); /* 95% of size*/
 		req->aura_mask.bp = ~(req->aura_mask.bp);
 	}
 
-	req->aura.bp_ena = !!ena;
+	req->aura.bp_ena = (!!ena << nix->is_nix1);
 	req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
 
 	mbox_process(mbox);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 14/28] common/cnxk: support anti-replay check in SW for cn9k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (12 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 13/28] common/cnxk: setup aura BP conf based on nix Nithin Dabilpuram
@ 2021-09-30 17:00   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 15/28] common/cnxk: support inline IPsec rte flow action Nithin Dabilpuram
                     ` (14 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:00 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Add an anti-replay SW implementation for the cn9k platform.
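
A usage sketch of the checker added below (hypothetical caller; state is
kept per inbound SA, winsz must not exceed CNXK_ON_AR_WIN_SIZE_MAX, and
the caller serializes with the embedded lock):

	struct cnxk_on_ipsec_ar ar;
	uint32_t winsz = 64;	/* anti-replay window size */

	memset(&ar, 0, sizeof(ar));
	rte_spinlock_init(&ar.lock);

	rte_spinlock_lock(&ar.lock);
	if (cnxk_on_anti_replay_check(seq, &ar, winsz) ==
	    IPSEC_ANTI_REPLAY_FAILED)
		rc = -1;	/* replayed or outside the window */
	rte_spinlock_unlock(&ar.lock);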

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/common/cnxk/cnxk_security_ar.h | 184 +++++++++++++++++++++++++++++++++
 1 file changed, 184 insertions(+)
 create mode 100644 drivers/common/cnxk/cnxk_security_ar.h

diff --git a/drivers/common/cnxk/cnxk_security_ar.h b/drivers/common/cnxk/cnxk_security_ar.h
new file mode 100644
index 0000000..6bc517c
--- /dev/null
+++ b/drivers/common/cnxk/cnxk_security_ar.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_SECURITY_AR_H__
+#define __CNXK_SECURITY_AR_H__
+
+#include <rte_mbuf.h>
+
+#include "cnxk_security.h"
+
+#define CNXK_ON_AR_WIN_SIZE_MAX 1024
+
+/* u64 array size to fit anti replay window bits */
+#define AR_WIN_ARR_SZ                                                          \
+	(PLT_ALIGN_CEIL(CNXK_ON_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) /        \
+	 BITS_PER_LONG_LONG)
+
+#define WORD_SHIFT 6
+#define WORD_SIZE  (1 << WORD_SHIFT)
+#define WORD_MASK  (WORD_SIZE - 1)
+
+#define IPSEC_ANTI_REPLAY_FAILED (-1)
+
+struct cnxk_on_ipsec_ar {
+	rte_spinlock_t lock;
+	uint32_t winb;
+	uint32_t wint;
+	uint64_t base;			/**< base of the anti-replay window */
+	uint64_t window[AR_WIN_ARR_SZ]; /**< anti-replay window */
+};
+
+static inline int
+cnxk_on_anti_replay_check(uint64_t seq, struct cnxk_on_ipsec_ar *ar,
+			  uint32_t winsz)
+{
+	uint64_t ex_winsz = winsz + WORD_SIZE;
+	uint64_t *window = &ar->window[0];
+	uint64_t seqword, shiftwords;
+	uint64_t base = ar->base;
+	uint32_t winb = ar->winb;
+	uint32_t wint = ar->wint;
+	uint64_t winwords;
+	uint64_t bit_pos;
+	uint64_t shift;
+	uint64_t *wptr;
+	uint64_t tmp;
+
+	winwords = ex_winsz >> WORD_SHIFT;
+	if (winsz > 64)
+		goto slow_shift;
+	/* Check if the seq is the biggest one yet */
+	if (likely(seq > base)) {
+		shift = seq - base;
+		if (shift < winsz) { /* In window */
+			/*
+			 * winsz <= 64 here: shift the window by the
+			 * distance advanced and mark seq as seen
+			 */
+			wptr = window + (shift >> WORD_SHIFT);
+			*wptr <<= shift;
+			*wptr |= 1ull;
+		} else {
+			/* No special handling of window size > 64 */
+			wptr = window + ((winsz - 1) >> WORD_SHIFT);
+			/*
+			 * Zero out the whole window (especially for
+			 * bigger than 64b window) till the last 64b word
+			 * as the incoming sequence number minus
+			 * base sequence is more than the window size.
+			 */
+			while (window != wptr)
+				*window++ = 0ull;
+			/*
+			 * Set the last bit (of the window) to 1
+			 * as that corresponds to the base sequence number.
+			 * Now any incoming sequence number which is
+			 * (base - window size - 1) will pass anti-replay check
+			 */
+			*wptr = 1ull;
+		}
+		/*
+		 * Set the base to incoming sequence number as
+		 * that is the biggest sequence number seen yet
+		 */
+		ar->base = seq;
+		return 0;
+	}
+
+	bit_pos = base - seq;
+
+	/* If seq falls behind the window, return failure */
+	if (bit_pos >= winsz)
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* seq is within anti-replay window */
+	wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT);
+	bit_pos &= WORD_MASK;
+
+	/* Check if this is a replayed packet */
+	if (*wptr & ((1ull) << bit_pos))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* mark as seen */
+	*wptr |= ((1ull) << bit_pos);
+	return 0;
+
+slow_shift:
+	if (likely(seq > base)) {
+		uint32_t i;
+
+		shift = seq - base;
+		if (unlikely(shift >= winsz)) {
+			/*
+			 * shift is bigger than the window,
+			 * so just zero out everything
+			 */
+			for (i = 0; i < winwords; i++)
+				window[i] = 0;
+winupdate:
+			/* Find out the word */
+			seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
+
+			/* Find out the bit in the word */
+			bit_pos = (seq - 1) & WORD_MASK;
+
+			/*
+			 * Set the bit corresponding to sequence number
+			 * in window to mark it as received
+			 */
+			window[seqword] |= (1ull << (63 - bit_pos));
+
+			/* wint and winb range from 1 to ex_winsz */
+			ar->wint = ((wint + shift - 1) % ex_winsz) + 1;
+			ar->winb = ((winb + shift - 1) % ex_winsz) + 1;
+
+			ar->base = seq;
+			return 0;
+		}
+
+		/*
+		 * New sequence number is bigger than the base but
+		 * it's not bigger than base + window size
+		 */
+
+		shiftwords = ((wint + shift - 1) >> WORD_SHIFT) -
+			     ((wint - 1) >> WORD_SHIFT);
+		if (unlikely(shiftwords)) {
+			tmp = (wint + WORD_SIZE - 1) / WORD_SIZE;
+			for (i = 0; i < shiftwords; i++) {
+				tmp %= winwords;
+				window[tmp++] = 0;
+			}
+		}
+
+		goto winupdate;
+	}
+
+	/* Sequence number is before the window */
+	if (unlikely((seq + winsz) <= base))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* Sequence number is within the window */
+
+	/* Find out the word */
+	seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
+
+	/* Find out the bit in the word */
+	bit_pos = (seq - 1) & WORD_MASK;
+
+	/* Check if this is a replayed packet */
+	if (window[seqword] & (1ull << (63 - bit_pos)))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/*
+	 * Set the bit corresponding to sequence number
+	 * in window to mark it as received
+	 */
+	window[seqword] |= (1ull << (63 - bit_pos));
+
+	return 0;
+}
+
+#endif /* __CNXK_SECURITY_AR_H__ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 15/28] common/cnxk: support inline IPsec rte flow action
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (13 preceding siblings ...)
  2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 14/28] common/cnxk: support anti-replay check in SW for cn9k Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 16/28] net/cnxk: support inline security setup for cn9k Nithin Dabilpuram
                     ` (13 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Satheesh Paul

From: Satheesh Paul <psatheesh@marvell.com>

Add support to configure flow rules with inline IPsec action.
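
On the application side this corresponds to a flow rule whose fate
action is RTE_FLOW_ACTION_TYPE_SECURITY (sketch with an illustrative
ESP/SPI match; sess is an already created inline IPsec rte_security
session):

	struct rte_flow_item_esp esp = {
		.hdr.spi = rte_cpu_to_be_32(spi),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP, .spec = &esp,
		  .mask = &rte_flow_item_esp_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = sess },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};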

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
---
 drivers/common/cnxk/roc_nix_inl.h      |  3 +++
 drivers/common/cnxk/roc_nix_inl_dev.c  |  3 +++
 drivers/common/cnxk/roc_nix_inl_priv.h |  3 +++
 drivers/common/cnxk/roc_npc_mcam.c     | 28 ++++++++++++++++++++++++++--
 4 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 6b8c268..ae5e022 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -107,6 +107,9 @@ struct roc_nix_inl_dev {
 	struct plt_pci_device *pci_dev;
 	uint16_t ipsec_in_max_spi;
 	bool selftest;
+	bool is_multi_channel;
+	uint16_t channel;
+	uint16_t chan_mask;
 	bool attach_cptlf;
 	/* End of input parameters */
 
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index 0789f99..495dd19 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -543,6 +543,9 @@ roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
 	inl_dev->pci_dev = pci_dev;
 	inl_dev->ipsec_in_max_spi = roc_inl_dev->ipsec_in_max_spi;
 	inl_dev->selftest = roc_inl_dev->selftest;
+	inl_dev->is_multi_channel = roc_inl_dev->is_multi_channel;
+	inl_dev->channel = roc_inl_dev->channel;
+	inl_dev->chan_mask = roc_inl_dev->chan_mask;
 	inl_dev->attach_cptlf = roc_inl_dev->attach_cptlf;
 
 	/* Initialize base device */
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index 4729a38..3dc526f 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -50,6 +50,9 @@ struct nix_inl_dev {
 
 	/* Device arguments */
 	uint8_t selftest;
+	uint16_t channel;
+	uint16_t chan_mask;
+	bool is_multi_channel;
 	uint16_t ipsec_in_max_spi;
 	bool attach_cptlf;
 };
diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c
index 8ccaaad..4985d22 100644
--- a/drivers/common/cnxk/roc_npc_mcam.c
+++ b/drivers/common/cnxk/roc_npc_mcam.c
@@ -503,8 +503,11 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
 {
 	int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
 	struct npc_mcam_write_entry_req *req;
+	struct nix_inl_dev *inl_dev = NULL;
 	struct mbox *mbox = npc->mbox;
 	struct mbox_msghdr *rsp;
+	struct idev_cfg *idev;
+	uint16_t pf_func = 0;
 	uint16_t ctr = ~(0);
 	int rc, idx;
 	int entry;
@@ -553,9 +556,30 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
 		req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
 	}
 
+	idev = idev_get_cfg();
+	if (idev)
+		inl_dev = idev->nix_inl_dev;
+
 	if (flow->nix_intf == NIX_INTF_RX) {
-		req->entry_data.kw[0] |= (uint64_t)npc->channel;
-		req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+		if (inl_dev && inl_dev->is_multi_channel &&
+		    (flow->npc_action & NIX_RX_ACTIONOP_UCAST_IPSEC)) {
+			req->entry_data.kw[0] |= (uint64_t)inl_dev->channel;
+			req->entry_data.kw_mask[0] |=
+				(uint64_t)inl_dev->chan_mask;
+			pf_func = nix_inl_dev_pffunc_get();
+			req->entry_data.action &= ~(GENMASK(19, 4));
+			req->entry_data.action |= (uint64_t)pf_func << 4;
+
+			flow->npc_action &= ~(GENMASK(19, 4));
+			flow->npc_action |= (uint64_t)pf_func << 4;
+			flow->mcam_data[0] |= (uint64_t)inl_dev->channel;
+			flow->mcam_mask[0] |= (uint64_t)inl_dev->chan_mask;
+		} else {
+			req->entry_data.kw[0] |= (uint64_t)npc->channel;
+			req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+			flow->mcam_data[0] |= (uint64_t)npc->channel;
+			flow->mcam_mask[0] |= (BIT_ULL(12) - 1);
+		}
 	} else {
 		uint16_t pf_func = (flow->npc_action >> 4) & 0xffff;
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 16/28] net/cnxk: support inline security setup for cn9k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (14 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 15/28] common/cnxk: support inline IPsec rte flow action Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 17/28] net/cnxk: support inline security setup for cn10k Nithin Dabilpuram
                     ` (12 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella, Anatoly Burakov
  Cc: dev

Add support for inline inbound and outbound IPsec for SA create,
destroy and other NIX / CPT LF configurations.
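
From the application, each SA maps to an rte_security session created on
the ethdev security context (sketch; the four-argument session create
signature is assumed for this release, and all values are illustrative):

	struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
	struct rte_security_session_conf conf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.spi = 0x100,
			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
		},
		.crypto_xform = &aead_xform,	/* e.g. AES-GCM xform */
	};
	struct rte_security_session *sess;

	sess = rte_security_session_create(ctx, &conf, sess_mp, priv_mp);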

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/cnxk/cn9k_ethdev.c         |  23 +++
 drivers/net/cnxk/cn9k_ethdev.h         |  61 +++++++
 drivers/net/cnxk/cn9k_ethdev_sec.c     | 313 +++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_rx.h             |   1 +
 drivers/net/cnxk/cn9k_tx.h             |   1 +
 drivers/net/cnxk/cnxk_ethdev.c         | 230 +++++++++++++++++++++++-
 drivers/net/cnxk/cnxk_ethdev.h         | 121 ++++++++++++-
 drivers/net/cnxk/cnxk_ethdev_devargs.c |  88 ++++++++-
 drivers/net/cnxk/cnxk_ethdev_sec.c     | 278 +++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_lookup.c         |  50 +++++-
 drivers/net/cnxk/meson.build           |   2 +
 drivers/net/cnxk/version.map           |   5 +
 12 files changed, 1162 insertions(+), 11 deletions(-)
 create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c

diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678..08c86f9 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		flags |= NIX_RX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+		flags |= NIX_TX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -179,8 +185,10 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			const struct rte_eth_txconf *tx_conf)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_cpt_lf *inl_lf;
 	struct cn9k_eth_txq *txq;
 	struct roc_nix_sq *sq;
+	uint16_t crypto_qid;
 	int rc;
 
 	RTE_SET_USED(socket);
@@ -200,6 +208,19 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
 
+	/* Fetch CPT LF info for outbound if present */
+	if (dev->outb.lf_base) {
+		crypto_qid = qid % dev->outb.nb_crypto_qs;
+		inl_lf = dev->outb.lf_base + crypto_qid;
+
+		txq->cpt_io_addr = inl_lf->io_addr;
+		txq->cpt_fc = inl_lf->fc_addr;
+		txq->cpt_desc = inl_lf->nb_desc * 0.7;
+		txq->sa_base = (uint64_t)dev->outb.sa_base;
+		txq->sa_base |= eth_dev->data->port_id;
+		PLT_STATIC_ASSERT(BIT_ULL(16) == ROC_NIX_INL_SA_BASE_ALIGN);
+	}
+
 	nix_form_default_desc(dev, txq, qid);
 	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	return 0;
@@ -508,6 +529,8 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	nix_eth_dev_ops_override();
 	npc_flow_ops_override();
 
+	cn9k_eth_sec_ops_override();
+
 	/* Common probe */
 	rc = cnxk_nix_probe(pci_drv, pci_dev);
 	if (rc)
diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index 3d4a206..f8818b8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -5,6 +5,7 @@
 #define __CN9K_ETHDEV_H__
 
 #include <cnxk_ethdev.h>
+#include <cnxk_security.h>
 
 struct cn9k_eth_txq {
 	uint64_t cmd[8];
@@ -15,6 +16,10 @@ struct cn9k_eth_txq {
 	uint64_t lso_tun_fmt;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	rte_iova_t cpt_io_addr;
+	uint64_t sa_base;
+	uint64_t *cpt_fc;
+	uint16_t cpt_desc;
 } __plt_cache_aligned;
 
 struct cn9k_eth_rxq {
@@ -32,8 +37,64 @@ struct cn9k_eth_rxq {
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
+/* Private data in sw rsvd area of struct roc_onf_ipsec_inb_sa */
+struct cn9k_inb_priv_data {
+	void *userdata;
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+/* Private data in sw rsvd area of struct roc_onf_ipsec_outb_sa */
+struct cn9k_outb_priv_data {
+	union {
+		uint64_t esn;
+		struct {
+			uint32_t seq;
+			uint32_t esn_hi;
+		};
+	};
+
+	/* Rlen computation data */
+	struct cnxk_ipsec_outb_rlens rlens;
+
+	/* IP identifier */
+	uint16_t ip_id;
+
+	/* SA index */
+	uint32_t sa_idx;
+
+	/* Flags */
+	uint16_t copy_salt : 1;
+
+	/* Salt */
+	uint32_t nonce;
+
+	/* User data pointer */
+	void *userdata;
+
+	/* Back pointer to eth sec session */
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+struct cn9k_sec_sess_priv {
+	union {
+		struct {
+			uint32_t sa_idx;
+			uint8_t inb_sa : 1;
+			uint8_t rsvd1 : 2;
+			uint8_t roundup_byte : 5;
+			uint8_t roundup_len;
+			uint16_t partial_len;
+		};
+
+		uint64_t u64;
+	};
+} __rte_packed;
+
 /* Rx and Tx routines */
 void cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
 void cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
+/* Security context setup */
+void cn9k_eth_sec_ops_override(void);
+
 #endif /* __CN9K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
new file mode 100644
index 0000000..3ec7497
--- /dev/null
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -0,0 +1,313 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include <cn9k_ethdev.h>
+#include <cnxk_security.h>
+
+static struct rte_cryptodev_capabilities cn9k_eth_sec_crypto_caps[] = {
+	{	/* AES GCM */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+			{.aead = {
+				.algo = RTE_CRYPTO_AEAD_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability cn9k_eth_sec_capabilities[] = {
+	{	/* IPsec Inline Protocol ESP Tunnel Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn9k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Tunnel Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn9k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static int
+cn9k_eth_sec_session_create(void *device,
+			    struct rte_security_session_conf *conf,
+			    struct rte_security_session *sess,
+			    struct rte_mempool *mempool)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ipsec_xform *ipsec;
+	struct cn9k_sec_sess_priv sess_priv;
+	struct rte_crypto_sym_xform *crypto;
+	struct cnxk_eth_sec_sess *eth_sec;
+	bool inbound;
+	int rc = 0;
+
+	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+		return -ENOTSUP;
+
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
+	if (rte_security_dynfield_register() < 0)
+		return -ENOTSUP;
+
+	ipsec = &conf->ipsec;
+	crypto = conf->crypto_xform;
+	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
+
+	/* Search if a session already exists */
+	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
+		plt_err("%s SA with SPI %u already in use",
+			inbound ? "Inbound" : "Outbound", ipsec->spi);
+		return -EEXIST;
+	}
+
+	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
+	sess_priv.u64 = 0;
+
+	if (inbound) {
+		struct cn9k_inb_priv_data *inb_priv;
+		struct roc_onf_ipsec_inb_sa *inb_sa;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn9k_inb_priv_data) <
+				  ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD);
+
+		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE. The inline
+		 * device is never used on CN9K.
+		 */
+		inb_sa = (struct roc_onf_ipsec_inb_sa *)
+			roc_nix_inl_inb_sa_get(&dev->nix, false, ipsec->spi);
+		if (!inb_sa) {
+			plt_err("Failed to create ingress sa");
+			rc = -EFAULT;
+			goto mempool_put;
+		}
+
+		/* Check if SA is already in use */
+		if (inb_sa->ctl.valid) {
+			plt_err("Inbound SA with SPI %u already in use",
+				ipsec->spi);
+			rc = -EBUSY;
+			goto mempool_put;
+		}
+
+		memset(inb_sa, 0, sizeof(struct roc_onf_ipsec_inb_sa));
+
+		/* Fill inbound sa params */
+		rc = cnxk_onf_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init inbound sa, rc=%d", rc);
+			goto mempool_put;
+		}
+
+		inb_priv = roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(inb_sa);
+		/* Back pointer to get eth_sec */
+		inb_priv->eth_sec = eth_sec;
+
+		/* Save userdata in inb private area */
+		inb_priv->userdata = conf->userdata;
+
+		sess_priv.inb_sa = 1;
+		sess_priv.sa_idx = ipsec->spi;
+
+		/* Pointer from eth_sec -> inb_sa */
+		eth_sec->sa = inb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = ipsec->spi;
+		eth_sec->spi = ipsec->spi;
+		eth_sec->inb = true;
+
+		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess++;
+	} else {
+		struct cn9k_outb_priv_data *outb_priv;
+		struct roc_onf_ipsec_outb_sa *outb_sa;
+		uintptr_t sa_base = dev->outb.sa_base;
+		struct cnxk_ipsec_outb_rlens *rlens;
+		uint32_t sa_idx;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn9k_outb_priv_data) <
+				  ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD);
+
+		/* Alloc an sa index */
+		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
+		if (rc)
+			goto mempool_put;
+
+		outb_sa = roc_nix_inl_onf_ipsec_outb_sa(sa_base, sa_idx);
+		outb_priv = roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(outb_sa);
+		rlens = &outb_priv->rlens;
+
+		memset(outb_sa, 0, sizeof(struct roc_onf_ipsec_outb_sa));
+
+		/* Fill outbound sa params */
+		rc = cnxk_onf_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init outbound sa, rc=%d", rc);
+			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
+			goto mempool_put;
+		}
+
+		/* Save userdata */
+		outb_priv->userdata = conf->userdata;
+		outb_priv->sa_idx = sa_idx;
+		outb_priv->eth_sec = eth_sec;
+		/* Start sequence number with 1 */
+		outb_priv->seq = 1;
+
+		memcpy(&outb_priv->nonce, outb_sa->nonce, 4);
+		if (outb_sa->ctl.enc_type == ROC_IE_ON_SA_ENC_AES_GCM)
+			outb_priv->copy_salt = 1;
+
+		/* Save rlen info */
+		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+
+		sess_priv.sa_idx = outb_priv->sa_idx;
+		sess_priv.roundup_byte = rlens->roundup_byte;
+		sess_priv.roundup_len = rlens->roundup_len;
+		sess_priv.partial_len = rlens->partial_len;
+
+		/* Pointer from eth_sec -> outb_sa */
+		eth_sec->sa = outb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = sa_idx;
+		eth_sec->spi = ipsec->spi;
+
+		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess++;
+	}
+
+	/* Sync SA content */
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u",
+		    inbound ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx);
+	/*
+	 * Update fast path info in priv area.
+	 */
+	set_sec_session_private_data(sess, (void *)sess_priv.u64);
+
+	return 0;
+mempool_put:
+	rte_mempool_put(mempool, eth_sec);
+	return rc;
+}
+
+static int
+cn9k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_onf_ipsec_outb_sa *outb_sa;
+	struct roc_onf_ipsec_inb_sa *inb_sa;
+	struct cnxk_eth_sec_sess *eth_sec;
+	struct rte_mempool *mp;
+
+	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+	if (!eth_sec)
+		return -ENOENT;
+
+	if (eth_sec->inb) {
+		inb_sa = eth_sec->sa;
+		/* Disable SA */
+		inb_sa->ctl.valid = 0;
+
+		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess--;
+	} else {
+		outb_sa = eth_sec->sa;
+		/* Disable SA */
+		outb_sa->ctl.valid = 0;
+
+		/* Release Outbound SA index */
+		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
+		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess--;
+	}
+
+	/* Sync SA content */
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u",
+		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx);
+
+	/* Put eth_sec object back to pool */
+	mp = rte_mempool_from_obj(eth_sec);
+	set_sec_session_private_data(sess, NULL);
+	rte_mempool_put(mp, eth_sec);
+	return 0;
+}
+
+static const struct rte_security_capability *
+cn9k_eth_sec_capabilities_get(void *device __rte_unused)
+{
+	return cn9k_eth_sec_capabilities;
+}
+
+void
+cn9k_eth_sec_ops_override(void)
+{
+	static int init_once;
+
+	if (init_once)
+		return;
+	init_once = 1;
+
+	/* Update platform specific ops */
+	cnxk_eth_sec_ops.session_create = cn9k_eth_sec_session_create;
+	cnxk_eth_sec_ops.session_destroy = cn9k_eth_sec_session_destroy;
+	cnxk_eth_sec_ops.capabilities_get = cn9k_eth_sec_capabilities_get;
+}
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index a3bf4e0..59545af 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -17,6 +17,7 @@
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
 #define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
 #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index ed65cd3..a27ff76 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -13,6 +13,7 @@
 #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)
 #define NIX_TX_OFFLOAD_TSO_F	      BIT(4)
 #define NIX_TX_OFFLOAD_TSTAMP_F	      BIT(5)
+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)
 
 /* Flags to control xmit_prepare function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 7152dcd..5a64691 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -38,6 +38,162 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	return speed_capa;
 }
 
+int
+cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
+{
+	struct roc_nix *nix = &dev->nix;
+
+	if (dev->inb.inl_dev == use_inl_dev)
+		return 0;
+
+	plt_nix_dbg("Security sessions(%u) still active, inl=%u!!!",
+		    dev->inb.nb_sess, !!dev->inb.inl_dev);
+
+	/* Change the mode */
+	dev->inb.inl_dev = use_inl_dev;
+
+	/* Update RoC for NPC rule insertion */
+	roc_nix_inb_mode_set(nix, use_inl_dev);
+
+	/* Setup lookup mem */
+	return cnxk_nix_lookup_mem_sa_base_set(dev);
+}
+
+static int
+nix_security_setup(struct cnxk_eth_dev *dev)
+{
+	struct roc_nix *nix = &dev->nix;
+	int i, rc = 0;
+
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Setup Inline Inbound */
+		rc = roc_nix_inl_inb_init(nix);
+		if (rc) {
+			plt_err("Failed to initialize nix inline inb, rc=%d",
+				rc);
+			return rc;
+		}
+
+		/* By default, pick the inline device for poll mode.
+		 * This is overridden when event mode RQs are set up.
+		 */
+		cnxk_nix_inb_mode_set(dev, true);
+	}
+
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		struct plt_bitmap *bmap;
+		size_t bmap_sz;
+		void *mem;
+
+		/* Setup enough descriptors for all tx queues */
+		nix->outb_nb_desc = dev->outb.nb_desc;
+		nix->outb_nb_crypto_qs = dev->outb.nb_crypto_qs;
+
+		/* Setup Inline Outbound */
+		rc = roc_nix_inl_outb_init(nix);
+		if (rc) {
+			plt_err("Failed to initialize nix inline outb, rc=%d",
+				rc);
+			goto cleanup;
+		}
+
+		dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
+
+		/* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
+		if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+			goto done;
+
+		rc = -ENOMEM;
+		/* Allocate a bitmap to alloc and free sa indexes */
+		bmap_sz = plt_bitmap_get_memory_footprint(dev->outb.max_sa);
+		mem = plt_zmalloc(bmap_sz, PLT_CACHE_LINE_SIZE);
+		if (mem == NULL) {
+			plt_err("Outbound SA bmap alloc failed");
+
+			rc |= roc_nix_inl_outb_fini(nix);
+			goto cleanup;
+		}
+
+		rc = -EIO;
+		bmap = plt_bitmap_init(dev->outb.max_sa, mem, bmap_sz);
+		if (!bmap) {
+			plt_err("Outbound SA bmap init failed");
+
+			rc |= roc_nix_inl_outb_fini(nix);
+			plt_free(mem);
+			goto cleanup;
+		}
+
+		for (i = 0; i < dev->outb.max_sa; i++)
+			plt_bitmap_set(bmap, i);
+
+		dev->outb.sa_base = roc_nix_inl_outb_sa_base_get(nix);
+		dev->outb.sa_bmap_mem = mem;
+		dev->outb.sa_bmap = bmap;
+	}
+
+done:
+	return 0;
+cleanup:
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		rc |= roc_nix_inl_inb_fini(nix);
+	return rc;
+}
+
+static int
+nix_security_release(struct cnxk_eth_dev *dev)
+{
+	struct rte_eth_dev *eth_dev = dev->eth_dev;
+	struct cnxk_eth_sec_sess *eth_sec, *tvar;
+	struct roc_nix *nix = &dev->nix;
+	int rc, ret = 0;
+
+	/* Cleanup Inline inbound */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Destroy inbound sessions */
+		tvar = NULL;
+		TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
+			cnxk_eth_sec_ops.session_destroy(eth_dev,
+							 eth_sec->sess);
+
+		/* Clear lookup mem */
+		cnxk_nix_lookup_mem_sa_base_clear(dev);
+
+		rc = roc_nix_inl_inb_fini(nix);
+		if (rc)
+			plt_err("Failed to cleanup nix inline inb, rc=%d", rc);
+		ret |= rc;
+	}
+
+	/* Cleanup Inline outbound */
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Destroy outbound sessions */
+		tvar = NULL;
+		TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
+			cnxk_eth_sec_ops.session_destroy(eth_dev,
+							 eth_sec->sess);
+
+		rc = roc_nix_inl_outb_fini(nix);
+		if (rc)
+			plt_err("Failed to cleanup nix inline outb, rc=%d", rc);
+		ret |= rc;
+
+		plt_bitmap_free(dev->outb.sa_bmap);
+		plt_free(dev->outb.sa_bmap_mem);
+		dev->outb.sa_bmap = NULL;
+		dev->outb.sa_bmap_mem = NULL;
+	}
+
+	dev->inb.inl_dev = false;
+	roc_nix_inb_mode_set(nix, false);
+	dev->nb_rxq_sso = 0;
+	dev->inb.nb_sess = 0;
+	dev->outb.nb_sess = 0;
+	return ret;
+}
+
 static void
 nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 {
@@ -194,6 +350,12 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		eth_dev->data->tx_queues[qid] = NULL;
 	}
 
+	/* When Tx Security offload is enabled, increase tx desc count by
+	 * max possible outbound desc count.
+	 */
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+		nb_desc += dev->outb.nb_desc;
+
 	/* Setup ROC SQ */
 	sq = &dev->sqs[qid];
 	sq->qid = qid;
@@ -266,6 +428,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			struct rte_mempool *mp)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
 	struct cnxk_eth_rxq_sp *rxq_sp;
 	struct rte_mempool_ops *ops;
 	const char *platform_ops;
@@ -303,6 +466,19 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		eth_dev->data->rx_queues[qid] = NULL;
 	}
 
+	/* Clamp up CQ limit to the packet pool aura size for LBK
+	 * to avoid meta packet drop as LBK does not currently support
+	 * backpressure.
+	 */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+		uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
+
+		/* Use current RQ's aura limit if inl rq is not available */
+		if (!pkt_pool_limit)
+			pkt_pool_limit = roc_npa_aura_op_limit_get(mp->pool_id);
+		nb_desc = RTE_MAX(nb_desc, pkt_pool_limit);
+	}
+
 	/* Setup ROC CQ */
 	cq = &dev->cqs[qid];
 	cq->qid = qid;
@@ -328,6 +504,10 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rq->later_skip = sizeof(struct rte_mbuf);
 	rq->lpb_size = mp->elt_size;
 
+	/* Enable Inline IPSec on RQ, will not be used for Poll mode */
+	if (roc_nix_inl_inb_is_enabled(nix))
+		rq->ipsech_ena = true;
+
 	rc = roc_nix_rq_init(&dev->nix, rq, !!eth_dev->data->dev_started);
 	if (rc) {
 		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
@@ -350,6 +530,13 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Setup rq reference for inline dev if present */
+		rc = roc_nix_inl_dev_rq_get(rq);
+		if (rc)
+			goto free_mem;
+	}
+
 	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, mp->name, nb_desc,
 		    cq->nb_desc);
 
@@ -370,6 +557,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	}
 
 	return 0;
+free_mem:
+	plt_free(rxq_sp);
 rq_fini:
 	rc |= roc_nix_rq_fini(rq);
 cq_fini:
@@ -394,11 +583,15 @@ cnxk_nix_rx_queue_release(void *rxq)
 	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
 	dev = rxq_sp->dev;
 	qid = rxq_sp->qid;
+	rq = &dev->rqs[qid];
 
 	plt_nix_dbg("Releasing rxq %u", qid);
 
+	/* Release rq reference for inline dev if present */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		roc_nix_inl_dev_rq_put(rq);
+
 	/* Cleanup ROC RQ */
-	rq = &dev->rqs[qid];
 	rc = roc_nix_rq_fini(rq);
 	if (rc)
 		plt_err("Failed to cleanup rq, rc=%d", rc);
@@ -804,6 +997,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		rc = nix_store_queue_cfg_and_then_release(eth_dev);
 		if (rc)
 			goto fail_configure;
+
+		/* Cleanup security support */
+		rc = nix_security_release(dev);
+		if (rc)
+			goto fail_configure;
+
 		roc_nix_tm_fini(nix);
 		roc_nix_lf_free(nix);
 	}
@@ -958,6 +1157,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		plt_err("Failed to initialize flow control rc=%d", rc);
 		goto cq_fini;
 	}
+
+	/* Setup Inline security support */
+	rc = nix_security_setup(dev);
+	if (rc)
+		goto cq_fini;
+
 	/*
 	 * Restore queue config when reconfigure followed by
 	 * reconfigure and no queue configure invoked from application case.
@@ -965,7 +1170,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	if (dev->configured == 1) {
 		rc = nix_restore_queue_cfg(eth_dev);
 		if (rc)
-			goto cq_fini;
+			goto sec_release;
 	}
 
 	/* Update the mac address */
@@ -987,6 +1192,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	dev->nb_txq = data->nb_tx_queues;
 	return 0;
 
+sec_release:
+	rc |= nix_security_release(dev);
 cq_fini:
 	roc_nix_unregister_cq_irqs(nix);
 q_irq_fini:
@@ -1282,12 +1489,25 @@ static int
 cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ctx *sec_ctx;
 	struct roc_nix *nix = &dev->nix;
 	struct rte_pci_device *pci_dev;
 	int rc, max_entries;
 
 	eth_dev->dev_ops = &cnxk_eth_dev_ops;
 
+	/* Alloc security context */
+	sec_ctx = plt_zmalloc(sizeof(struct rte_security_ctx), 0);
+	if (!sec_ctx)
+		return -ENOMEM;
+	sec_ctx->device = eth_dev;
+	sec_ctx->ops = &cnxk_eth_sec_ops;
+	sec_ctx->flags =
+		(RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);
+	eth_dev->security_ctx = sec_ctx;
+	TAILQ_INIT(&dev->inb.list);
+	TAILQ_INIT(&dev->outb.list);
+
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -1404,6 +1624,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 	struct roc_nix *nix = &dev->nix;
 	int rc, i;
 
+	plt_free(eth_dev->security_ctx);
+	eth_dev->security_ctx = NULL;
+
 	/* Nothing to be done for secondary processes */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -1438,6 +1661,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 	}
 	eth_dev->data->nb_rx_queues = 0;
 
+	/* Free security resources */
+	nix_security_release(dev);
+
 	/* Free tm resources */
 	roc_nix_tm_fini(nix);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 27920c8..b233010 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -13,6 +13,9 @@
 #include <rte_mbuf.h>
 #include <rte_mbuf_pool_ops.h>
 #include <rte_mempool.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+#include <rte_tailq.h>
 #include <rte_time.h>
 
 #include "roc_api.h"
@@ -70,14 +73,14 @@
 	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
 	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
 	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM)
+	 DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
 	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
 	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
 	 DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |         \
 	 DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |                  \
-	 DEV_RX_OFFLOAD_VLAN_STRIP)
+	 DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
 	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
@@ -112,6 +115,11 @@
 #define PTYPE_TUNNEL_ARRAY_SZ	  BIT(PTYPE_TUNNEL_WIDTH)
 #define PTYPE_ARRAY_SZ                                                         \
 	((PTYPE_NON_TUNNEL_ARRAY_SZ + PTYPE_TUNNEL_ARRAY_SZ) * sizeof(uint16_t))
+
+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
+#define ERRCODE_ERRLEN_WIDTH 12
+#define ERR_ARRAY_SZ	     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))
+
 /* Fastpath lookup */
 #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
 
@@ -119,6 +127,9 @@
 	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \
 	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
 
+/* Subtype from inline outbound error event */
+#define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
+
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
 	uint8_t rx_pause;
@@ -144,6 +155,82 @@ struct cnxk_timesync_info {
 	uint64_t *tx_tstamp;
 } __plt_cache_aligned;
 
+/* Security session private data */
+struct cnxk_eth_sec_sess {
+	/* List entry */
+	TAILQ_ENTRY(cnxk_eth_sec_sess) entry;
+
+	/* Inbound SA is from NIX_RX_IPSEC_SA_BASE or
+	 * Outbound SA from roc_nix_inl_outb_sa_base_get()
+	 */
+	void *sa;
+
+	/* SA index */
+	uint32_t sa_idx;
+
+	/* SPI */
+	uint32_t spi;
+
+	/* Back pointer to session */
+	struct rte_security_session *sess;
+
+	/* Inbound */
+	bool inb;
+
+	/* Inbound session on inl dev */
+	bool inl_dev;
+};
+
+TAILQ_HEAD(cnxk_eth_sec_sess_list, cnxk_eth_sec_sess);
+
+/* Inbound security data */
+struct cnxk_eth_dev_sec_inb {
+	/* IPSec inbound max SPI */
+	uint16_t max_spi;
+
+	/* Using inbound with inline device */
+	bool inl_dev;
+
+	/* Device argument to force inline device for inb */
+	bool force_inl_dev;
+
+	/* Active sessions */
+	uint16_t nb_sess;
+
+	/* List of sessions */
+	struct cnxk_eth_sec_sess_list list;
+};
+
+/* Outbound security data */
+struct cnxk_eth_dev_sec_outb {
+	/* IPSec outbound max SA */
+	uint16_t max_sa;
+
+	/* Per CPT LF descriptor count */
+	uint32_t nb_desc;
+
+	/* SA Bitmap */
+	struct plt_bitmap *sa_bmap;
+
+	/* SA bitmap memory */
+	void *sa_bmap_mem;
+
+	/* SA base */
+	uint64_t sa_base;
+
+	/* CPT LF base */
+	struct roc_cpt_lf *lf_base;
+
+	/* Crypto queues => CPT lf count */
+	uint16_t nb_crypto_qs;
+
+	/* Active sessions */
+	uint16_t nb_sess;
+
+	/* List of sessions */
+	struct cnxk_eth_sec_sess_list list;
+};
+
 struct cnxk_eth_dev {
 	/* ROC NIX */
 	struct roc_nix nix;
@@ -159,6 +246,7 @@ struct cnxk_eth_dev {
 	/* Configured queue count */
 	uint16_t nb_rxq;
 	uint16_t nb_txq;
+	uint16_t nb_rxq_sso;
 	uint8_t configured;
 
 	/* Max macfilter entries */
@@ -223,6 +311,10 @@ struct cnxk_eth_dev {
 	/* Per queue statistics counters */
 	uint32_t txq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 	uint32_t rxq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+
+	/* Security data */
+	struct cnxk_eth_dev_sec_inb inb;
+	struct cnxk_eth_dev_sec_outb outb;
 };
 
 struct cnxk_eth_rxq_sp {
@@ -261,6 +353,9 @@ extern struct eth_dev_ops cnxk_eth_dev_ops;
 /* Common flow ops */
 extern struct rte_flow_ops cnxk_flow_ops;
 
+/* Common security ops */
+extern struct rte_security_ops cnxk_eth_sec_ops;
+
 /* Ops */
 int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
 		   struct rte_pci_device *pci_dev);
@@ -385,6 +480,18 @@ int cnxk_ethdev_parse_devargs(struct rte_devargs *devargs,
 /* Debug */
 int cnxk_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
+/* Security */
+int cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p);
+int cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx);
+int cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev);
+int cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev);
+__rte_internal
+int cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev);
+struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,
+						       uint32_t spi, bool inb);
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+			      struct rte_security_session *sess);
 
 /* Other private functions */
 int nix_recalc_mtu(struct rte_eth_dev *eth_dev);
@@ -495,4 +602,14 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 	}
 }
 
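+/* Fast path lookup_mem layout (populated in cnxk_lookup.c): ptype tables,
+ * followed by the errlev/errcode table (ERR_ARRAY_SZ), followed by a
+ * per-port table of inbound SA base addresses with the SA width (log2 of
+ * the SPI range) packed into the low bits.
+ */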
+static __rte_always_inline uintptr_t
+cnxk_nix_sa_base_get(uint16_t port, const void *lookup_mem)
+{
+	uintptr_t sa_base_tbl;
+
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	return *((const uintptr_t *)sa_base_tbl + port);
+}
+
 #endif /* __CNXK_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 37720fb..c0b949e 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -8,6 +8,61 @@
 #include "cnxk_ethdev.h"
 
 static int
+parse_outb_nb_desc(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_outb_nb_crypto_qs(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	if (val < 1 || val > 64)
+		return -EINVAL;
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_ipsec_out_max_sa(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
 parse_flow_max_priority(const char *key, const char *value, void *extra_args)
 {
 	RTE_SET_USED(key);
@@ -117,15 +172,25 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
 #define CNXK_SWITCH_HEADER_TYPE "switch_header"
 #define CNXK_RSS_TAG_AS_XOR	"tag_as_xor"
 #define CNXK_LOCK_RX_CTX	"lock_rx_ctx"
+#define CNXK_IPSEC_IN_MAX_SPI	"ipsec_in_max_spi"
+#define CNXK_IPSEC_OUT_MAX_SA	"ipsec_out_max_sa"
+#define CNXK_OUTB_NB_DESC	"outb_nb_desc"
+#define CNXK_FORCE_INB_INL_DEV	"force_inb_inl_dev"
+#define CNXK_OUTB_NB_CRYPTO_QS	"outb_nb_crypto_qs"
 
 int
 cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 {
 	uint16_t reta_sz = ROC_NIX_RSS_RETA_SZ_64;
 	uint16_t sqb_count = CNXK_NIX_TX_MAX_SQB;
+	uint16_t ipsec_in_max_spi = BIT(8) - 1;
+	uint16_t ipsec_out_max_sa = BIT(12);
 	uint16_t flow_prealloc_size = 1;
 	uint16_t switch_header_type = 0;
 	uint16_t flow_max_priority = 3;
+	uint16_t force_inb_inl_dev = 0;
+	uint16_t outb_nb_crypto_qs = 1;
+	uint16_t outb_nb_desc = 8200;
 	uint16_t rss_tag_as_xor = 0;
 	uint16_t scalar_enable = 0;
 	uint8_t lock_rx_ctx = 0;
@@ -153,10 +218,27 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 	rte_kvargs_process(kvlist, CNXK_RSS_TAG_AS_XOR, &parse_flag,
 			   &rss_tag_as_xor);
 	rte_kvargs_process(kvlist, CNXK_LOCK_RX_CTX, &parse_flag, &lock_rx_ctx);
+	rte_kvargs_process(kvlist, CNXK_IPSEC_IN_MAX_SPI,
+			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_process(kvlist, CNXK_IPSEC_OUT_MAX_SA,
+			   &parse_ipsec_out_max_sa, &ipsec_out_max_sa);
+	rte_kvargs_process(kvlist, CNXK_OUTB_NB_DESC, &parse_outb_nb_desc,
+			   &outb_nb_desc);
+	rte_kvargs_process(kvlist, CNXK_OUTB_NB_CRYPTO_QS,
+			   &parse_outb_nb_crypto_qs, &outb_nb_crypto_qs);
+	rte_kvargs_process(kvlist, CNXK_FORCE_INB_INL_DEV, &parse_flag,
+			   &force_inb_inl_dev);
 	rte_kvargs_free(kvlist);
 
 null_devargs:
 	dev->scalar_ena = !!scalar_enable;
+	dev->inb.force_inl_dev = !!force_inb_inl_dev;
+	dev->inb.max_spi = ipsec_in_max_spi;
+	dev->outb.max_sa = ipsec_out_max_sa;
+	dev->outb.nb_desc = outb_nb_desc;
+	dev->outb.nb_crypto_qs = outb_nb_crypto_qs;
+	dev->nix.ipsec_in_max_spi = ipsec_in_max_spi;
+	dev->nix.ipsec_out_max_sa = ipsec_out_max_sa;
 	dev->nix.rss_tag_as_xor = !!rss_tag_as_xor;
 	dev->nix.max_sqb_count = sqb_count;
 	dev->nix.reta_sz = reta_sz;
@@ -177,4 +259,8 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,
 			      CNXK_FLOW_PREALLOC_SIZE "=<1-32>"
 			      CNXK_FLOW_MAX_PRIORITY "=<1-32>"
 			      CNXK_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b>"
-			      CNXK_RSS_TAG_AS_XOR "=1");
+			      CNXK_RSS_TAG_AS_XOR "=1"
+			      CNXK_IPSEC_IN_MAX_SPI "=<1-65535>"
+			      CNXK_IPSEC_OUT_MAX_SA "=<1-65535>"
+			      CNXK_OUTB_NB_DESC "=<1-65535>"
+			      CNXK_OUTB_NB_CRYPTO_QS "=<1-64>"
+			      CNXK_FORCE_INB_INL_DEV "=1");
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
new file mode 100644
index 0000000..c76e230
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <cnxk_ethdev.h>
+
+#define CNXK_NIX_INL_SELFTEST	      "selftest"
+#define CNXK_NIX_INL_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
+
+#define CNXK_NIX_INL_DEV_NAME RTE_STR(cnxk_nix_inl_dev_)
+#define CNXK_NIX_INL_DEV_NAME_LEN                                              \
+	(sizeof(CNXK_NIX_INL_DEV_NAME) + PCI_PRI_STR_SIZE)
+
+static inline int
+bitmap_ctzll(uint64_t slab)
+{
+	if (slab == 0)
+		return 0;
+
+	return __builtin_ctzll(slab);
+}
+
+int
+cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p)
+{
+	uint32_t pos, idx;
+	uint64_t slab;
+	int rc;
+
+	if (!dev->outb.sa_bmap)
+		return -ENOTSUP;
+
+	pos = 0;
+	slab = 0;
+	/* Scan from the beginning */
+	plt_bitmap_scan_init(dev->outb.sa_bmap);
+	/* Scan bitmap to get the free sa index */
+	rc = plt_bitmap_scan(dev->outb.sa_bmap, &pos, &slab);
+	/* Empty bitmap */
+	if (rc == 0) {
+		plt_err("Outbound SAs exhausted, use 'ipsec_out_max_sa' "
+			"devargs to increase");
+		return -ERANGE;
+	}
+
+	/* Get free SA index */
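+	/* pos is the bit offset of the slab start; the first set bit within
+	 * the slab gives the lowest free index.
+	 */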
+	idx = pos + bitmap_ctzll(slab);
+	plt_bitmap_clear(dev->outb.sa_bmap, idx);
+	*idx_p = idx;
+	return 0;
+}
+
+int
+cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx)
+{
+	if (idx >= dev->outb.max_sa)
+		return -EINVAL;
+
+	/* Check if it is already free */
+	if (plt_bitmap_get(dev->outb.sa_bmap, idx))
+		return -EINVAL;
+
+	/* Mark index as free */
+	plt_bitmap_set(dev->outb.sa_bmap, idx);
+	return 0;
+}
+
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev, uint32_t spi, bool inb)
+{
+	struct cnxk_eth_sec_sess_list *list;
+	struct cnxk_eth_sec_sess *eth_sec;
+
+	list = inb ? &dev->inb.list : &dev->outb.list;
+	TAILQ_FOREACH(eth_sec, list, entry) {
+		if (eth_sec->spi == spi)
+			return eth_sec;
+	}
+
+	return NULL;
+}
+
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+			      struct rte_security_session *sess)
+{
+	struct cnxk_eth_sec_sess *eth_sec = NULL;
+
+	/* Search in inbound list */
+	TAILQ_FOREACH(eth_sec, &dev->inb.list, entry) {
+		if (eth_sec->sess == sess)
+			return eth_sec;
+	}
+
+	/* Search in outbound list */
+	TAILQ_FOREACH(eth_sec, &dev->outb.list, entry) {
+		if (eth_sec->sess == sess)
+			return eth_sec;
+	}
+
+	return NULL;
+}
+
+static unsigned int
+cnxk_eth_sec_session_get_size(void *device __rte_unused)
+{
+	return sizeof(struct cnxk_eth_sec_sess);
+}
+
+struct rte_security_ops cnxk_eth_sec_ops = {
+	.session_get_size = cnxk_eth_sec_session_get_size
+};
+
+static int
+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_selftest(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint8_t *)extra_args = !!(val == 1);
+	return 0;
+}
+
+static int
+nix_inl_parse_devargs(struct rte_devargs *devargs,
+		      struct roc_nix_inl_dev *inl_dev)
+{
+	uint32_t ipsec_in_max_spi = BIT(8) - 1;
+	struct rte_kvargs *kvlist;
+	uint8_t selftest = 0;
+
+	if (devargs == NULL)
+		goto null_devargs;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, CNXK_NIX_INL_SELFTEST, &parse_selftest,
+			   &selftest);
+	rte_kvargs_process(kvlist, CNXK_NIX_INL_IPSEC_IN_MAX_SPI,
+			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_free(kvlist);
+
+null_devargs:
+	inl_dev->ipsec_in_max_spi = ipsec_in_max_spi;
+	inl_dev->selftest = selftest;
+	return 0;
+exit:
+	return -EINVAL;
+}
+
+static inline char *
+nix_inl_dev_to_name(struct rte_pci_device *pci_dev, char *name)
+{
+	snprintf(name, CNXK_NIX_INL_DEV_NAME_LEN,
+		 CNXK_NIX_INL_DEV_NAME PCI_PRI_FMT, pci_dev->addr.domain,
+		 pci_dev->addr.bus, pci_dev->addr.devid,
+		 pci_dev->addr.function);
+
+	return name;
+}
+
+static int
+cnxk_nix_inl_dev_remove(struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NIX_INL_DEV_NAME_LEN];
+	const struct rte_memzone *mz;
+	struct roc_nix_inl_dev *dev;
+	int rc;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	mz = rte_memzone_lookup(nix_inl_dev_to_name(pci_dev, name));
+	if (!mz)
+		return 0;
+
+	dev = mz->addr;
+
+	/* Cleanup inline dev */
+	rc = roc_nix_inl_dev_fini(dev);
+	if (rc) {
+		plt_err("Failed to cleanup inl dev, rc=%d(%s)", rc,
+			roc_error_msg_get(rc));
+		return rc;
+	}
+
+	rte_memzone_free(mz);
+	return 0;
+}
+
+static int
+cnxk_nix_inl_dev_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NIX_INL_DEV_NAME_LEN];
+	struct roc_nix_inl_dev *inl_dev;
+	const struct rte_memzone *mz;
+	int rc = -ENOMEM;
+
+	RTE_SET_USED(pci_drv);
+
+	rc = roc_plt_init();
+	if (rc) {
+		plt_err("Failed to initialize platform model, rc=%d", rc);
+		return rc;
+	}
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	mz = rte_memzone_reserve_aligned(nix_inl_dev_to_name(pci_dev, name),
+					 sizeof(*inl_dev), SOCKET_ID_ANY, 0,
+					 RTE_CACHE_LINE_SIZE);
+	if (mz == NULL)
+		return -ENOMEM;
+
+	inl_dev = mz->addr;
+	inl_dev->pci_dev = pci_dev;
+
+	/* Parse devargs string */
+	rc = nix_inl_parse_devargs(pci_dev->device.devargs, inl_dev);
+	if (rc) {
+		plt_err("Failed to parse devargs rc=%d", rc);
+		goto free_mem;
+	}
+
+	rc = roc_nix_inl_dev_init(inl_dev);
+	if (rc) {
+		plt_err("Failed to init nix inl device, rc=%d(%s)", rc,
+			roc_error_msg_get(rc));
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	rte_memzone_free(mz);
+	return rc;
+}
+
+static const struct rte_pci_id cnxk_nix_inl_pci_map[] = {
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_PF)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_VF)},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_pci_driver cnxk_nix_inl_pci = {
+	.id_table = cnxk_nix_inl_pci_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+	.probe = cnxk_nix_inl_dev_probe,
+	.remove = cnxk_nix_inl_dev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(cnxk_nix_inl, cnxk_nix_inl_pci);
+RTE_PMD_REGISTER_PCI_TABLE(cnxk_nix_inl, cnxk_nix_inl_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(cnxk_nix_inl, "vfio-pci");
+
+RTE_PMD_REGISTER_PARAM_STRING(cnxk_nix_inl,
+			      CNXK_NIX_INL_SELFTEST "=1"
+			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>");
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index 0152ad9..f6ec768 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -7,12 +7,8 @@
 
 #include "cnxk_ethdev.h"
 
-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
-#define ERRCODE_ERRLEN_WIDTH 12
-#define ERR_ARRAY_SZ	     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))
-
-#define SA_TBL_SZ	(RTE_MAX_ETHPORTS * sizeof(uint64_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_TBL_SZ)
+#define SA_BASE_TBL_SZ	(RTE_MAX_ETHPORTS * sizeof(uintptr_t))
+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ)
 const uint32_t *
 cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 {
@@ -324,3 +320,45 @@ cnxk_nix_fastpath_lookup_mem_get(void)
 	}
 	return NULL;
 }
+
+int
+cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev)
+{
+	void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+	uint16_t port = dev->eth_dev->data->port_id;
+	uintptr_t sa_base_tbl;
+	uintptr_t sa_base;
+	uint8_t sa_w;
+
+	if (!lookup_mem)
+		return -EIO;
+
+	sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix, dev->inb.inl_dev);
+	if (!sa_base)
+		return -ENOTSUP;
+
+	sa_w = plt_log2_u32(dev->nix.ipsec_in_max_spi + 1);
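+	/* SA base is 64KB aligned (ROC_NIX_INL_SA_BASE_ALIGN), so the SA
+	 * width value fits in its low bits and is recovered together with
+	 * the base in a single fast path load.
+	 */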
+
+	/* Set SA Base in lookup mem */
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	*((uintptr_t *)sa_base_tbl + port) = sa_base | sa_w;
+	return 0;
+}
+
+int
+cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev)
+{
+	void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+	uint16_t port = dev->eth_dev->data->port_id;
+	uintptr_t sa_base_tbl;
+
+	if (!lookup_mem)
+		return -EIO;
+
+	/* Set SA Base in lookup mem */
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	*((uintptr_t *)sa_base_tbl + port) = 0;
+	return 0;
+}
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index d4cdd17..6cc30c3 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files(
         'cnxk_ethdev.c',
         'cnxk_ethdev_devargs.c',
         'cnxk_ethdev_ops.c',
+        'cnxk_ethdev_sec.c',
         'cnxk_link.c',
         'cnxk_lookup.c',
         'cnxk_ptp.c',
@@ -22,6 +23,7 @@ sources = files(
 # CN9K
 sources += files(
         'cn9k_ethdev.c',
+        'cn9k_ethdev_sec.c',
         'cn9k_rte_flow.c',
         'cn9k_rx.c',
         'cn9k_rx_mseg.c',
diff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map
index c2e0723..b9da6b1 100644
--- a/drivers/net/cnxk/version.map
+++ b/drivers/net/cnxk/version.map
@@ -1,3 +1,8 @@
 DPDK_22 {
 	local: *;
 };
+
+INTERNAL {
+	global:
+	cnxk_nix_inb_mode_set;
+};
-- 
2.8.4



* [dpdk-dev] [PATCH v2 17/28] net/cnxk: support inline security setup for cn10k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (15 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 16/28] net/cnxk: support inline security setup for cn9k Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 18/28] net/cnxk: support Rx security offload on cn9k Nithin Dabilpuram
                     ` (11 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov
  Cc: dev

Add support for inline inbound and outbound IPsec, covering SA create,
destroy and other NIX / CPT LF configurations.

This patch also changes dpdk-devbind.py to list the new inline
device as a misc device.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 doc/guides/nics/cnxk.rst                 | 102 ++++++++
 doc/guides/nics/features/cnxk.ini        |   1 +
 doc/guides/nics/features/cnxk_vec.ini    |   1 +
 doc/guides/nics/features/cnxk_vf.ini     |   1 +
 doc/guides/rel_notes/release_21_11.rst   |   2 +
 drivers/event/cnxk/cnxk_eventdev_adptr.c |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.c          |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.h          |  43 ++++
 drivers/net/cnxk/cn10k_ethdev_sec.c      | 426 +++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn10k_rx.h              |   1 +
 drivers/net/cnxk/cn10k_tx.h              |   1 +
 drivers/net/cnxk/meson.build             |   1 +
 usertools/dpdk-devbind.py                |   8 +-
 13 files changed, 654 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 90d27db..b542437 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -34,6 +34,7 @@ Features of the CNXK Ethdev PMD are:
 - Vector Poll mode driver
 - Debug utilities - Context dump and error interrupt support
 - Support Rx interrupt
+- Inline IPsec processing support
 
 Prerequisites
 -------------
@@ -185,6 +186,74 @@ Runtime Config Options
 
       -a 0002:02:00.0,tag_as_xor=1
 
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+   Max SPI supported for inbound inline IPsec processing can be specified by
+   ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 inbound SAs (SPI 0-127).
+
+- ``Max SAs for outbound inline IPsec`` (default ``4096``)
+
+   Max number of SAs supported for outbound inline IPsec processing can be
+   specified by ``ipsec_out_max_sa`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_out_max_sa=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 outbound SAs.
+
+- ``Outbound CPT LF queue size`` (default ``8200``)
+
+   Size of Outbound CPT LF queue in number of descriptors can be specified by
+   ``outb_nb_desc`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_desc=16384
+
+   With the above configuration, the outbound CPT LF will be created to
+   accommodate at most 16384 descriptors at any given time.
+
+- ``Outbound CPT LF count`` (default ``1``)
+
+   Number of CPT LFs to attach for outbound processing can be specified by
+   ``outb_nb_crypto_qs`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_crypto_qs=2
+
+   With the above configuration, two CPT LFs are set up and distributed among
+   all the Tx queues for outbound processing.
+
+- ``Force using inline ipsec device for inbound`` (default ``0``)
+
+   In CN10K event mode, the driver can work in one of two modes:
+
+   1. Inbound encrypted traffic is received by the probed ipsec inline device,
+      while plain traffic post decryption is received by the ethdev.
+
+   2. Both inbound encrypted traffic and plain traffic post decryption are
+      received by the ethdev.
+
+   By default, event mode works without using the inline device, i.e. mode
+   ``2``. This behaviour can be changed to pick mode ``1`` by using the
+   ``force_inb_inl_dev`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,force_inb_inl_dev=1 -a 0002:03:00.0,force_inb_inl_dev=1
+
+   With the above configuration, inbound encrypted traffic from both the ports
+   is received by the ipsec inline device.
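+
+Inline IPsec processing is requested through the standard security offloads
+at device configure time. A minimal sketch (error checks omitted; ``port_id``
+and the queue counts are assumed to be set up elsewhere)::
+
+   struct rte_eth_conf port_conf = {0};
+
+   port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SECURITY;
+   port_conf.txmode.offloads |= DEV_TX_OFFLOAD_SECURITY;
+   rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);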
 
 .. note::
 
@@ -250,6 +319,39 @@ Example usage in testpmd::
    testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
           spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
 
+Inline device support for CN10K
+-------------------------------
+
+CN10K HW provides a misc device, the Inline device, that supports ethernet
+devices in providing the following features:
+
+  - Aggregate all the inline IPsec inbound traffic from all the CN10K ethernet
+    devices to be processed by a single inline IPsec device. This allows a
+    single rte security session to accept traffic from multiple ports.
+
+  - Support for event generation on outbound inline IPsec processing errors.
+
+  - Support CN106xx poll mode of operation for inline IPSec inbound processing.
+
+Inline IPsec device is identified by PCI PF vendor:device ID ``177D:A0F0`` or
+VF ``177D:A0F1``.
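+
+For example, assuming the inline device PF BDF used later in this section, it
+can be bound to ``vfio-pci`` with::
+
+   ./usertools/dpdk-devbind.py -b vfio-pci 0002:1d:00.0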
+
+Runtime Config Options for inline device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+   Max SPI supported for inbound inline IPsec processing can be specified by
+   ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 inbound SAs (SPI 0-127) for traffic aggregated on inline device.
+
+
 Debugging Options
 -----------------
 
diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 5d45625..1ced3ee 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -27,6 +27,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Flow control         = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index abf2b8d..12ca0a5 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -26,6 +26,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Flow control         = Y
 Jumbo frame          = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini
index 7b4299f..139d9b9 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -22,6 +22,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f85dc99..8116f1e 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -65,6 +65,8 @@ New Features
 * **Updated Marvell cnxk ethdev driver.**
 
   * Added rte_flow support for dual VLAN insert and strip actions.
+  * Added support for inline IPsec in CN9K event mode and in CN10K
+    poll mode and event mode.
 
 * **Updated Marvell cnxk crypto PMD.**
 
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index baf2f2a..a34efbb 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -123,7 +123,9 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		    uint16_t port_id, const struct rte_event *ev,
 		    uint8_t custom_flowid)
 {
+	struct roc_nix *nix = &cnxk_eth_dev->nix;
 	struct roc_nix_rq *rq;
+	int rc;
 
 	rq = &cnxk_eth_dev->rqs[rq_id];
 	rq->sso_ena = 1;
@@ -140,7 +142,24 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		rq->tag_mask |= ev->flow_id;
 	}
 
-	return roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	rc = roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	if (rc)
+		return rc;
+
+	if (rq_id == 0 && roc_nix_inl_inb_is_enabled(nix)) {
+		uint32_t sec_tag_const;
+
+		/* IPSec tag const is 8-bit left shifted value of tag_mask
+		 * as it applies to bit 32:8 of tag only.
+		/* The tag const applies only to bits 32:8 of the tag, hence
+		 * it is programmed as tag_mask right shifted by 8 bits.
+		 */
+						ev->sched_type);
+		if (rc)
+			plt_err("Failed to set tag conf for ipsec, rc=%d", rc);
+	}
+
+	return rc;
 }
 
 static int
@@ -186,6 +205,7 @@ cnxk_sso_rx_adapter_queue_add(
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, true,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso++;
 	}
 
 	if (rc < 0) {
@@ -196,6 +216,14 @@ cnxk_sso_rx_adapter_queue_add(
 
 	dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;
 
+	/* Switch to use PF/VF's NIX LF instead of inline device for inbound
+	 * when all the RQs are switched to event dev mode. We do this only
+	 * when use of the inline device is not forced via devargs.
+	 */
+	if (!cnxk_eth_dev->inb.force_inl_dev &&
+	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
+		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);
+
 	return 0;
 }
 
@@ -220,12 +248,18 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, false,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso--;
 	}
 
 	if (rc < 0)
 		plt_err("Failed to clear Rx adapter config port=%d, q=%d",
 			eth_dev->data->port_id, rx_queue_id);
 
+	/* Removing RQ from Rx adapter implies need to use
+	 * inline device for CQ/Poll mode.
+	 */
+	cnxk_nix_inb_mode_set(cnxk_eth_dev, true);
+
 	return rc;
 }
 
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6c..fa2343c 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		flags |= NIX_RX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
+	if (conf & DEV_TX_OFFLOAD_SECURITY)
+		flags |= NIX_TX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -181,8 +187,11 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 const struct rte_eth_txconf *tx_conf)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
+	struct roc_cpt_lf *inl_lf;
 	struct cn10k_eth_txq *txq;
 	struct roc_nix_sq *sq;
+	uint16_t crypto_qid;
 	int rc;
 
 	RTE_SET_USED(socket);
@@ -198,11 +207,24 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq = eth_dev->data->tx_queues[qid];
 	txq->fc_mem = sq->fc;
 	/* Store lmt base in tx queue for easy access */
-	txq->lmt_base = dev->nix.lmt_base;
+	txq->lmt_base = nix->lmt_base;
 	txq->io_addr = sq->io_addr;
 	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
 
+	/* Fetch CPT LF info for outbound if present */
+	if (dev->outb.lf_base) {
+		crypto_qid = qid % dev->outb.nb_crypto_qs;
+		inl_lf = dev->outb.lf_base + crypto_qid;
+
+		txq->cpt_io_addr = inl_lf->io_addr;
+		txq->cpt_fc = inl_lf->fc_addr;
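+		/* Budget ~70% of the LF descriptors for this SQ; several
+		 * SQs may map to the same CPT LF (see crypto_qid above).
+		 */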
+		txq->cpt_desc = inl_lf->nb_desc * 0.7;
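+		/* SA base is 64KB aligned (asserted below), so the port id
+		 * can be packed into its low bits for fast path recovery.
+		 */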
+		txq->sa_base = (uint64_t)dev->outb.sa_base;
+		txq->sa_base |= eth_dev->data->port_id;
+		PLT_STATIC_ASSERT(ROC_NIX_INL_SA_BASE_ALIGN == BIT_ULL(16));
+	}
+
 	nix_form_default_desc(dev, txq, qid);
 	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	return 0;
@@ -215,6 +237,7 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 struct rte_mempool *mp)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct cnxk_eth_rxq_sp *rxq_sp;
 	struct cn10k_eth_rxq *rxq;
 	struct roc_nix_rq *rq;
 	struct roc_nix_cq *cq;
@@ -250,6 +273,15 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq->data_off = rq->first_skip;
 	rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
 
+	/* Setup security related info */
+	if (dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		rxq->lmt_base = dev->nix.lmt_base;
+		rxq->sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix,
+							   dev->inb.inl_dev);
+	}
+	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
+	rxq->aura_handle = rxq_sp->qconf.mp->pool_id;
+
 	/* Lookup mem */
 	rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
 	return 0;
@@ -500,6 +532,8 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	nix_eth_dev_ops_override();
 	npc_flow_ops_override();
 
+	cn10k_eth_sec_ops_override();
+
 	/* Common probe */
 	rc = cnxk_nix_probe(pci_drv, pci_dev);
 	if (rc)
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 8b6e0f2..a888364 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -5,6 +5,7 @@
 #define __CN10K_ETHDEV_H__
 
 #include <cnxk_ethdev.h>
+#include <cnxk_security.h>
 
 struct cn10k_eth_txq {
 	uint64_t send_hdr_w0;
@@ -15,6 +16,10 @@ struct cn10k_eth_txq {
 	rte_iova_t io_addr;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	rte_iova_t cpt_io_addr;
+	uint64_t sa_base;
+	uint64_t *cpt_fc;
+	uint16_t cpt_desc;
 	uint64_t cmd[4];
 	uint64_t lso_tun_fmt;
 } __plt_cache_aligned;
@@ -30,12 +35,50 @@ struct cn10k_eth_rxq {
 	uint32_t qmask;
 	uint32_t available;
 	uint16_t data_off;
+	uint64_t sa_base;
+	uint64_t lmt_base;
+	uint64_t aura_handle;
 	uint16_t rq;
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
+/* Private data in sw rsvd area of struct roc_ot_ipsec_inb_sa */
+struct cn10k_inb_priv_data {
+	void *userdata;
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+/* Private data in sw rsvd area of struct roc_ot_ipsec_outb_sa */
+struct cn10k_outb_priv_data {
+	void *userdata;
+	/* Rlen computation data */
+	struct cnxk_ipsec_outb_rlens rlens;
+	/* Back pointer to eth sec session */
+	struct cnxk_eth_sec_sess *eth_sec;
+	/* SA index */
+	uint32_t sa_idx;
+};
+
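+/* Fast path security session info; the whole union is stashed as the
+ * 64-bit session private data via set_sec_session_private_data().
+ */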
+struct cn10k_sec_sess_priv {
+	union {
+		struct {
+			uint32_t sa_idx;
+			uint8_t inb_sa : 1;
+			uint8_t rsvd1 : 2;
+			uint8_t roundup_byte : 5;
+			uint8_t roundup_len;
+			uint16_t partial_len;
+		};
+
+		uint64_t u64;
+	};
+} __rte_packed;
+
 /* Rx and Tx routines */
 void cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
 void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
+/* Security context setup */
+void cn10k_eth_sec_ops_override(void);
+
 #endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
new file mode 100644
index 0000000..3ffd824
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_eventdev.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include <cn10k_ethdev.h>
+#include <cnxk_security.h>
+
+static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
+	{	/* AES GCM */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+			{.aead = {
+				.algo = RTE_CRYPTO_AEAD_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
+	{	/* IPsec Inline Protocol ESP Tunnel Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Tunnel Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static void
+cn10k_eth_sec_sso_work_cb(uint64_t *gw, void *args)
+{
+	struct rte_eth_event_ipsec_desc desc;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct cn10k_outb_priv_data *priv;
+	struct roc_ot_ipsec_outb_sa *sa;
+	struct cpt_cn10k_res_s *res;
+	struct rte_eth_dev *eth_dev;
+	struct cnxk_eth_dev *dev;
+	uint16_t dlen_adj, rlen;
+	struct rte_mbuf *mbuf;
+	uintptr_t sa_base;
+	uintptr_t nixtx;
+	uint8_t port;
+
+	RTE_SET_USED(args);
+
+	switch ((gw[0] >> 28) & 0xF) {
+	case RTE_EVENT_TYPE_ETHDEV:
+		/* Event from inbound inline dev due to IPsec packet with bad L4 */
+		mbuf = (struct rte_mbuf *)(gw[1] - sizeof(struct rte_mbuf));
+		plt_nix_dbg("Received mbuf %p from inline dev inbound", mbuf);
+		rte_pktmbuf_free(mbuf);
+		return;
+	case RTE_EVENT_TYPE_CPU:
+		/* Check for subtype */
+		if (((gw[0] >> 20) & 0xFF) == CNXK_ETHDEV_SEC_OUTB_EV_SUB) {
+			/* Event from outbound inline error */
+			mbuf = (struct rte_mbuf *)gw[1];
+			break;
+		}
+		/* Fall through */
+	default:
+		plt_err("Unknown event gw[0] = 0x%016lx, gw[1] = 0x%016lx",
+			gw[0], gw[1]);
+		return;
+	}
+
+	/* Get ethdev port from tag */
+	port = gw[0] & 0xFF;
+	eth_dev = &rte_eth_devices[port];
+	dev = cnxk_eth_pmd_priv(eth_dev);
+
+	sess_priv.u64 = *rte_security_dynfield(mbuf);
+	/* Calculate dlen adj */
+	dlen_adj = mbuf->pkt_len - mbuf->l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
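+	/* i.e. (dlen_adj + roundup_len) is rounded up to a multiple of
+	 * roundup_byte and partial_len is added; dlen_adj then holds the
+	 * number of bytes the packet grows by after IPsec processing.
+	 */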
+
+	/* Find the res area residing on next cacheline after end of data */
+	nixtx = rte_pktmbuf_mtod(mbuf, uintptr_t) + mbuf->pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+	res = (struct cpt_cn10k_res_s *)nixtx;
+
+	plt_nix_dbg("Outbound error, mbuf %p, sa_index %u, compcode %x uc %x",
+		    mbuf, sess_priv.sa_idx, res->compcode, res->uc_compcode);
+
+	sa_base = dev->outb.sa_base;
+	sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(sa);
+
+	memset(&desc, 0, sizeof(desc));
+
+	switch (res->uc_compcode) {
+	case ROC_IE_OT_UCC_ERR_SA_OVERFLOW:
+		desc.subtype = RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW;
+		break;
+	default:
+		plt_warn("Outbound error, mbuf %p, sa_index %u, "
+			 "compcode %x uc %x", mbuf, sess_priv.sa_idx,
+			 res->compcode, res->uc_compcode);
+		desc.subtype = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+		break;
+	}
+
+	desc.metadata = (uint64_t)priv->userdata;
+	rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_IPSEC, &desc);
+	rte_pktmbuf_free(mbuf);
+}
+
+static int
+cn10k_eth_sec_session_create(void *device,
+			     struct rte_security_session_conf *conf,
+			     struct rte_security_session *sess,
+			     struct rte_mempool *mempool)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ipsec_xform *ipsec;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct rte_crypto_sym_xform *crypto;
+	struct cnxk_eth_sec_sess *eth_sec;
+	bool inbound, inl_dev;
+	int rc = 0;
+
+	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+		return -ENOTSUP;
+
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
+	if (rte_security_dynfield_register() < 0)
+		return -ENOTSUP;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		roc_nix_inl_cb_register(cn10k_eth_sec_sso_work_cb, NULL);
+
+	ipsec = &conf->ipsec;
+	crypto = conf->crypto_xform;
+	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
+	inl_dev = !!dev->inb.inl_dev;
+
+	/* Search if a session already exists */
+	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
+		plt_err("%s SA with SPI %u already in use",
+			inbound ? "Inbound" : "Outbound", ipsec->spi);
+		return -EEXIST;
+	}
+
+	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
+	sess_priv.u64 = 0;
+
+	/* Acquire lock on inline dev for inbound */
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (inbound) {
+		struct cn10k_inb_priv_data *inb_priv;
+		struct roc_ot_ipsec_inb_sa *inb_sa;
+		uintptr_t sa;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_inb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD);
+
+		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE */
+		sa = roc_nix_inl_inb_sa_get(&dev->nix, inl_dev, ipsec->spi);
+		if (!sa && dev->inb.inl_dev) {
+			plt_err("Failed to create ingress sa, inline dev "
+				"not found or spi not in range");
+			rc = -ENOTSUP;
+			goto mempool_put;
+		} else if (!sa) {
+			plt_err("Failed to create ingress sa");
+			rc = -EFAULT;
+			goto mempool_put;
+		}
+
+		inb_sa = (struct roc_ot_ipsec_inb_sa *)sa;
+
+		/* Check if SA is already in use */
+		if (inb_sa->w2.s.valid) {
+			plt_err("Inbound SA with SPI %u already in use",
+				ipsec->spi);
+			rc = -EBUSY;
+			goto mempool_put;
+		}
+
+		memset(inb_sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
+
+		/* Fill inbound sa params */
+		rc = cnxk_ot_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init inbound sa, rc=%d", rc);
+			goto mempool_put;
+		}
+
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+		/* Back pointer to get eth_sec */
+		inb_priv->eth_sec = eth_sec;
+		/* Save userdata in inb private area */
+		inb_priv->userdata = conf->userdata;
+
+		/* Save SA index/SPI in cookie for now */
+		inb_sa->w1.s.cookie = rte_cpu_to_be_32(ipsec->spi);
+
+		/* Prepare session priv */
+		sess_priv.inb_sa = 1;
+		sess_priv.sa_idx = ipsec->spi;
+
+		/* Pointer from eth_sec -> inb_sa */
+		eth_sec->sa = inb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = ipsec->spi;
+		eth_sec->spi = ipsec->spi;
+		eth_sec->inl_dev = !!dev->inb.inl_dev;
+		eth_sec->inb = true;
+
+		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess++;
+	} else {
+		struct cn10k_outb_priv_data *outb_priv;
+		struct roc_ot_ipsec_outb_sa *outb_sa;
+		struct cnxk_ipsec_outb_rlens *rlens;
+		uint64_t sa_base = dev->outb.sa_base;
+		uint32_t sa_idx;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_outb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD);
+
+		/* Alloc an sa index */
+		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
+		if (rc)
+			goto mempool_put;
+
+		outb_sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sa_idx);
+		outb_priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(outb_sa);
+		rlens = &outb_priv->rlens;
+
+		memset(outb_sa, 0, sizeof(struct roc_ot_ipsec_outb_sa));
+
+		/* Fill outbound sa params */
+		rc = cnxk_ot_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init outbound sa, rc=%d", rc);
+			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
+			goto mempool_put;
+		}
+
+		/* Save userdata */
+		outb_priv->userdata = conf->userdata;
+		outb_priv->sa_idx = sa_idx;
+		outb_priv->eth_sec = eth_sec;
+
+		/* Save rlen info */
+		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+
+		/* Prepare session priv */
+		sess_priv.sa_idx = outb_priv->sa_idx;
+		sess_priv.roundup_byte = rlens->roundup_byte;
+		sess_priv.roundup_len = rlens->roundup_len;
+		sess_priv.partial_len = rlens->partial_len;
+
+		/* Pointer from eth_sec -> outb_sa */
+		eth_sec->sa = outb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = sa_idx;
+		eth_sec->spi = ipsec->spi;
+
+		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess++;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u inl_dev=%u",
+		    inbound ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+	/*
+	 * Update fast path info in priv area.
+	 */
+	set_sec_session_private_data(sess, (void *)sess_priv.u64);
+
+	return 0;
+mempool_put:
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+	rte_mempool_put(mempool, eth_sec);
+	return rc;
+}
+
+static int
+cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_ot_ipsec_inb_sa *inb_sa;
+	struct roc_ot_ipsec_outb_sa *outb_sa;
+	struct cnxk_eth_sec_sess *eth_sec;
+	struct rte_mempool *mp;
+
+	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+	if (!eth_sec)
+		return -ENOENT;
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (eth_sec->inb) {
+		inb_sa = eth_sec->sa;
+		/* Disable SA */
+		inb_sa->w2.s.valid = 0;
+
+		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess--;
+	} else {
+		outb_sa = eth_sec->sa;
+		/* Disable SA */
+		outb_sa->w2.s.valid = 0;
+
+		/* Release Outbound SA index */
+		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
+		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess--;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u, inl_dev=%u",
+		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+
+	/* Put eth_sec object back to pool */
+	mp = rte_mempool_from_obj(eth_sec);
+	set_sec_session_private_data(sess, NULL);
+	rte_mempool_put(mp, eth_sec);
+	return 0;
+}
+
+static const struct rte_security_capability *
+cn10k_eth_sec_capabilities_get(void *device __rte_unused)
+{
+	return cn10k_eth_sec_capabilities;
+}
+
+void
+cn10k_eth_sec_ops_override(void)
+{
+	static int init_once;
+
+	if (init_once)
+		return;
+	init_once = 1;
+
+	/* Update platform specific ops */
+	cnxk_eth_sec_ops.session_create = cn10k_eth_sec_session_create;
+	cnxk_eth_sec_ops.session_destroy = cn10k_eth_sec_session_destroy;
+	cnxk_eth_sec_ops.capabilities_get = cn10k_eth_sec_capabilities_get;
+}
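
For reference, the application-side flow that exercises the above ops is
roughly the following minimal sketch (not part of this patch); the crypto
transform, IPsec tunnel options and mempool setup are elided, and
create_outb_session() is an illustrative helper, not a DPDK API:

	#include <rte_ethdev.h>
	#include <rte_security.h>

	static struct rte_security_session *
	create_outb_session(uint16_t port_id, uint32_t spi,
			    struct rte_crypto_sym_xform *xform,
			    struct rte_mempool *sess_mp,
			    struct rte_mempool *priv_mp)
	{
		struct rte_security_session_conf conf = {
			.action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
			.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
			.ipsec = {
				.spi = spi,
				.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
				.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
				.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
				/* Tunnel endpoint addresses elided */
			},
			.crypto_xform = xform,
		};

		/* Resolves to cn10k_eth_sec_session_create() above */
		return rte_security_session_create(
			rte_eth_dev_get_sec_ctx(port_id), &conf,
			sess_mp, priv_mp);
	}

	/* Teardown resolves to cn10k_eth_sec_session_destroy():
	 *   rte_security_session_destroy(ctx, sess);
	 */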
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 68219b8..d27a231 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -16,6 +16,7 @@
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
 #define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
 #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index f75cae0..8577a7b 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -13,6 +13,7 @@
 #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)
 #define NIX_TX_OFFLOAD_TSO_F	      BIT(4)
 #define NIX_TX_OFFLOAD_TSTAMP_F	      BIT(5)
+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)
 
 /* Flags to control xmit_prepare function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index 6cc30c3..d1d4b4e 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -37,6 +37,7 @@ sources += files(
 # CN10K
 sources += files(
         'cn10k_ethdev.c',
+        'cn10k_ethdev_sec.c',
         'cn10k_rte_flow.c',
         'cn10k_rx.c',
         'cn10k_rx_mseg.c',
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 74d16e4..5f0e817 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -49,6 +49,8 @@
              'SVendor': None, 'SDevice': None}
 cnxk_bphy_cgx = {'Class': '08', 'Vendor': '177d', 'Device': 'a059,a060',
                  'SVendor': None, 'SDevice': None}
+cnxk_inl_dev = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f0,a0f1',
+                'SVendor': None, 'SDevice': None}
 
 intel_dlb = {'Class': '0b', 'Vendor': '8086', 'Device': '270b,2710,2714',
              'SVendor': None, 'SDevice': None}
@@ -73,9 +75,9 @@
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
 regex_devices = [octeontx2_ree]
-misc_devices = [cnxk_bphy, cnxk_bphy_cgx, intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr,
-                intel_ntb_skx, intel_ntb_icx,
-                octeontx2_dma]
+misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev, intel_ioat_bdw,
+                intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx,
+                intel_ntb_icx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 18/28] net/cnxk: support Rx security offload on cn9k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (16 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 17/28] net/cnxk: support inline security setup for cn10k Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 19/28] net/cnxk: support Tx " Nithin Dabilpuram
                     ` (10 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to receive CPT-processed (inline IPsec) packets on the
Rx path for CN9K.
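
Below is a minimal application-side sketch (not part of this patch) of
how the inline-processed packets surface on Rx; it assumes the port was
configured with DEV_RX_OFFLOAD_SECURITY and that an inline inbound
rte_security session already exists for the SA:

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>

	static void
	rx_poll_inline_ipsec(uint16_t port, uint16_t queue)
	{
		struct rte_mbuf *pkts[32];
		uint16_t nb, i;

		nb = rte_eth_rx_burst(port, queue, pkts, 32);
		for (i = 0; i < nb; i++) {
			if (!(pkts[i]->ol_flags & PKT_RX_SEC_OFFLOAD))
				continue; /* Non-IPsec traffic */
			if (pkts[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) {
				/* CPT reported an IPsec error */
				rte_pktmbuf_free(pkts[i]);
				continue;
			}
			/* Decrypted inline; data offset and length were
			 * already adjusted by nix_rx_sec_mbuf_update().
			 */
		}
	}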

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn9k_eventdev.c              | 153 ++++----
 drivers/event/cnxk/cn9k_worker.h                |   7 +-
 drivers/event/cnxk/cn9k_worker_deq.c            |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_burst.c      |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_ca.c         |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_tmo.c        |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq.c       |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_burst.c |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c    |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c   |   2 +-
 drivers/net/cnxk/cn9k_rx.c                      |  31 +-
 drivers/net/cnxk/cn9k_rx.h                      | 440 +++++++++++++++++++-----
 drivers/net/cnxk/cn9k_rx_mseg.c                 |   2 +-
 drivers/net/cnxk/cn9k_rx_vec.c                  |   2 +-
 drivers/net/cnxk/cn9k_rx_vec_mseg.c             |   2 +-
 drivers/net/cnxk/cnxk_ethdev.h                  |   3 +
 16 files changed, 461 insertions(+), 195 deletions(-)

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 59a3dc2..64d9ded 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -10,7 +10,8 @@
 #define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
 
 #define CN9K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops)                            \
-	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
+	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]    \
+			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]      \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]    \
@@ -330,178 +331,184 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
 	/* Single WS modes */
-	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_##name,
+	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name,
+	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_##name,
+	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
+	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
+	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
+		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
+		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	/* Dual WS modes */
-	const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
+	const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_dual_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_dual_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_##name,
+	const event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
+		sso_hws_dual_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
+	const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
+		sso_hws_dual_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name,
+		sso_hws_dual_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_burst_##name,
+		sso_hws_dual_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
+		sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 3e8f214..f1d2e47 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -5,6 +5,9 @@
 #ifndef __CN9K_WORKER_H__
 #define __CN9K_WORKER_H__
 
+#include <rte_eventdev.h>
+#include <rte_vect.h>
+
 #include "cnxk_ethdev.h"
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
@@ -380,7 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
 uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
 					    uint16_t nb_events);
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_##name(                            \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name(                      \
@@ -415,7 +418,7 @@ uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
 NIX_RX_FASTPATH_MODES
 #undef R
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name(                 \
diff --git a/drivers/event/cnxk/cn9k_worker_deq.c b/drivers/event/cnxk/cn9k_worker_deq.c
index 51ccaf4..d65c72a 100644
--- a/drivers/event/cnxk/cn9k_worker_deq.c
+++ b/drivers/event/cnxk/cn9k_worker_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_##name(                            \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_burst.c b/drivers/event/cnxk/cn9k_worker_deq_burst.c
index 4e28014..42dc59b 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_burst.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name(                      \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_ca.c b/drivers/event/cnxk/cn9k_worker_deq_ca.c
index dbdbba1..b5d0263 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_ca.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name(                         \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_deq_tmo.c
index 9713d1e..b41a590 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_tmo.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_tmo_##name(                        \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq.c b/drivers/event/cnxk/cn9k_worker_dual_deq.c
index 709fa2d..440b66e 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
index d50e1cf..4d913f9 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name(                 \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
index dc9191f..b66e2cf 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name(                    \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
index a0508fd..78a4b3d 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_##name(                   \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 7d9f1bd..5c4387e 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_##name(	       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -17,12 +17,13 @@ NIX_RX_FASTPATH_MODES
 
 static inline void
 pick_rx_func(struct rte_eth_dev *eth_dev,
-	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TSP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+	/* [R_SEC] [RX_VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
 	eth_dev->rx_pkt_burst = rx_burst
+		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
@@ -38,33 +39,33 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_##name,
+	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
@@ -73,7 +74,7 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 	/* Copy multi seg version with no offload for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
+			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 59545af..bdedeab 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -166,24 +166,104 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	mbuf->next = NULL;
 }
 
+static __rte_always_inline uint64_t
+nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
+		       uintptr_t sa_base, uint64_t *rearm_val, uint16_t *len)
+{
+	uintptr_t res_sg0 = ((uintptr_t)cq + ROC_ONF_IPSEC_INB_RES_OFF - 8);
+	const union nix_rx_parse_u *rx =
+		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
+	struct cn9k_inb_priv_data *sa_priv;
+	struct roc_onf_ipsec_inb_sa *sa;
+	uint8_t lcptr = rx->lcptr;
+	struct rte_ipv4_hdr *ipv4;
+	uint16_t data_off, res;
+	uint32_t spi_mask;
+	uint32_t spi;
+	uintptr_t data;
+	__uint128_t dw;
+	uint8_t sa_w;
+
+	res = *(uint64_t *)(res_sg0 + 8);
+	data_off = *rearm_val & (BIT_ULL(16) - 1);
+	data = (uintptr_t)m->buf_addr;
+	data += data_off;
+
+	rte_prefetch0((void *)data);
+
+	if (unlikely(res != (CPT_COMP_GOOD | ROC_IE_ONF_UCC_SUCCESS << 8)))
+		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+
+	data += lcptr;
+	/* 20 bits of tag would have the SPI */
+	spi = cq->tag & CNXK_ETHDEV_SPI_TAG_MASK;
+
+	/* Get SA; table width in SPI bits is encoded in sa_base low bits */
+	sa_w = sa_base & (ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	spi_mask = (1ULL << sa_w) - 1;
+	sa = roc_nix_inl_onf_ipsec_inb_sa(sa_base, spi & spi_mask);
+
+	/* Update dynamic field with userdata */
+	sa_priv = roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(sa);
+	dw = *(__uint128_t *)sa_priv;
+	*rte_security_dynfield(m) = (uint64_t)dw;
+
+	/* Get total length from the IPv4 header; only IPv4 inner is assumed */
+	ipv4 = (struct rte_ipv4_hdr *)(data + ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
+				       ROC_ONF_IPSEC_INB_MAX_L2_SZ);
+
+	/* Update data offset */
+	data_off += (ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
+		     ROC_ONF_IPSEC_INB_MAX_L2_SZ);
+	*rearm_val = *rearm_val & ~(BIT_ULL(16) - 1);
+	*rearm_val |= data_off;
+
+	*len = rte_be_to_cpu_16(ipv4->total_length) + lcptr;
+	return PKT_RX_SEC_OFFLOAD;
+}
+
 static __rte_always_inline void
 cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		     struct rte_mbuf *mbuf, const void *lookup_mem,
-		     const uint64_t val, const uint16_t flag)
+		     uint64_t val, const uint16_t flag)
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
-	const uint16_t len = rx->cn9k.pkt_lenm1 + 1;
+	uint16_t len = rx->cn9k.pkt_lenm1 + 1;
 	const uint64_t w1 = *(const uint64_t *)rx;
+	uint32_t packet_type;
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
 	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
-		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
+		packet_type = nix_ptype_get(lookup_mem, w1);
 	else
-		mbuf->packet_type = 0;
+		packet_type = 0;
+
+	if ((flag & NIX_RX_OFFLOAD_SECURITY_F) &&
+	    cq->cqe_type == NIX_XQE_TYPE_RX_IPSECH) {
+		uint16_t port = val >> 48;
+		uintptr_t sa_base;
+
+		/* Get SA Base from lookup mem */
+		sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+
+		ol_flags |= nix_rx_sec_mbuf_update(cq, mbuf, sa_base, &val,
+						   &len);
+
+		/* Only tunnel mode with an IPv4 inner header is supported */
+		packet_type = (packet_type &
+			       ~(RTE_PTYPE_L3_MASK | RTE_PTYPE_TUNNEL_MASK));
+		packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+		mbuf->packet_type = packet_type;
+		goto skip_parse;
+	}
+
+	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
+		mbuf->packet_type = packet_type;
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
@@ -193,6 +273,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
 		ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
 
+skip_parse:
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->cn9k.vtag0_gone) {
 			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
@@ -208,11 +289,12 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		ol_flags =
 			nix_update_match_id(rx->cn9k.match_id, ol_flags, mbuf);
 
-	mbuf->ol_flags = ol_flags;
 	mbuf->pkt_len = len;
 	mbuf->data_len = len;
 	*(uint64_t *)(&mbuf->rearm_data) = val;
 
+	mbuf->ol_flags = ol_flags;
+
 	if (flag & NIX_RX_MULTI_SEG_F)
 		nix_cqe_xtract_mseg(rx, mbuf, val, flag);
 	else
@@ -670,98 +752,268 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 #define MARK_F	  NIX_RX_OFFLOAD_MARK_UPDATE_F
 #define TS_F	  NIX_RX_OFFLOAD_TSTAMP_F
 #define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define R_SEC_F   NIX_RX_OFFLOAD_SECURITY_F
 
-/* [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
+/* [R_SEC_F] [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
 #define NIX_RX_FASTPATH_MODES						       \
-R(no_offload,			0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE)	       \
-R(rss,				0, 0, 0, 0, 0, 1, RSS_F)		       \
-R(ptype,			0, 0, 0, 0, 1, 0, PTYPE_F)		       \
-R(ptype_rss,			0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F)	       \
-R(cksum,			0, 0, 0, 1, 0, 0, CKSUM_F)		       \
-R(cksum_rss,			0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F)	       \
-R(cksum_ptype,			0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F)	       \
-R(cksum_ptype_rss,		0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)   \
-R(mark,				0, 0, 1, 0, 0, 0, MARK_F)		       \
-R(mark_rss,			0, 0, 1, 0, 0, 1, MARK_F | RSS_F)	       \
-R(mark_ptype,			0, 0, 1, 0, 1, 0, MARK_F | PTYPE_F)	       \
-R(mark_ptype_rss,		0, 0, 1, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)    \
-R(mark_cksum,			0, 0, 1, 1, 0, 0, MARK_F | CKSUM_F)	       \
-R(mark_cksum_rss,		0, 0, 1, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)    \
-R(mark_cksum_ptype,		0, 0, 1, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)  \
-R(mark_cksum_ptype_rss,		0, 0, 1, 1, 1, 1,			       \
-			MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts,				0, 1, 0, 0, 0, 0, TS_F)			       \
-R(ts_rss,			0, 1, 0, 0, 0, 1, TS_F | RSS_F)		       \
-R(ts_ptype,			0, 1, 0, 0, 1, 0, TS_F | PTYPE_F)	       \
-R(ts_ptype_rss,			0, 1, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)      \
-R(ts_cksum,			0, 1, 0, 1, 0, 0, TS_F | CKSUM_F)	       \
-R(ts_cksum_rss,			0, 1, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)      \
-R(ts_cksum_ptype,		0, 1, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)    \
-R(ts_cksum_ptype_rss,		0, 1, 0, 1, 1, 1,			       \
-			TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts_mark,			0, 1, 1, 0, 0, 0, TS_F | MARK_F)	       \
-R(ts_mark_rss,			0, 1, 1, 0, 0, 1, TS_F | MARK_F | RSS_F)       \
-R(ts_mark_ptype,		0, 1, 1, 0, 1, 0, TS_F | MARK_F | PTYPE_F)     \
-R(ts_mark_ptype_rss,		0, 1, 1, 0, 1, 1,			       \
-			TS_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(ts_mark_cksum,		0, 1, 1, 1, 0, 0, TS_F | MARK_F | CKSUM_F)     \
-R(ts_mark_cksum_rss,		0, 1, 1, 1, 0, 1,			       \
-			TS_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(ts_mark_cksum_ptype,		0, 1, 1, 1, 1, 0,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan,				1, 0, 0, 0, 0, 0, RX_VLAN_F)		       \
-R(vlan_rss,			1, 0, 0, 0, 0, 1, RX_VLAN_F | RSS_F)	       \
-R(vlan_ptype,			1, 0, 0, 0, 1, 0, RX_VLAN_F | PTYPE_F)	       \
-R(vlan_ptype_rss,		1, 0, 0, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum,			1, 0, 0, 1, 0, 0, RX_VLAN_F | CKSUM_F)	       \
-R(vlan_cksum_rss,		1, 0, 0, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype,		1, 0, 0, 1, 1, 0,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F)			       \
-R(vlan_cksum_ptype_rss,		1, 0, 0, 1, 1, 1,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark,			1, 0, 1, 0, 0, 0, RX_VLAN_F | MARK_F)	       \
-R(vlan_mark_rss,		1, 0, 1, 0, 0, 1, RX_VLAN_F | MARK_F | RSS_F)  \
-R(vlan_mark_ptype,		1, 0, 1, 0, 1, 0, RX_VLAN_F | MARK_F | PTYPE_F)\
-R(vlan_mark_ptype_rss,		1, 0, 1, 0, 1, 1,			       \
-			RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark_cksum,		1, 0, 1, 1, 0, 0, RX_VLAN_F | MARK_F | CKSUM_F)\
-R(vlan_mark_cksum_rss,		1, 0, 1, 1, 0, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(vlan_mark_cksum_ptype,	1, 0, 1, 1, 1, 0,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts,			1, 1, 0, 0, 0, 0, RX_VLAN_F | TS_F)	       \
-R(vlan_ts_rss,			1, 1, 0, 0, 0, 1, RX_VLAN_F | TS_F | RSS_F)    \
-R(vlan_ts_ptype,		1, 1, 0, 0, 1, 0, RX_VLAN_F | TS_F | PTYPE_F)  \
-R(vlan_ts_ptype_rss,		1, 1, 0, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
-R(vlan_ts_cksum,		1, 1, 0, 1, 0, 0, RX_VLAN_F | TS_F | CKSUM_F)  \
-R(vlan_ts_cksum_rss,		1, 1, 0, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
-R(vlan_ts_cksum_ptype,		1, 1, 0, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_ts_cksum_ptype_rss,	1, 1, 0, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark,			1, 1, 1, 0, 0, 0, RX_VLAN_F | TS_F | MARK_F)   \
-R(vlan_ts_mark_rss,		1, 1, 1, 0, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
-R(vlan_ts_mark_ptype,		1, 1, 1, 0, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
-R(vlan_ts_mark_ptype_rss,	1, 1, 1, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark_cksum,		1, 1, 1, 1, 0, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
-R(vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
-R(vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)	       \
-R(vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
+R(no_offload,			0, 0, 0, 0, 0, 0, 0,			       \
+		NIX_RX_OFFLOAD_NONE)					       \
+R(rss,				0, 0, 0, 0, 0, 0, 1,			       \
+		RSS_F)							       \
+R(ptype,			0, 0, 0, 0, 0, 1, 0,			       \
+		PTYPE_F)						       \
+R(ptype_rss,			0, 0, 0, 0, 0, 1, 1,			       \
+		PTYPE_F | RSS_F)					       \
+R(cksum,			0, 0, 0, 0, 1, 0, 0,			       \
+		CKSUM_F)						       \
+R(cksum_rss,			0, 0, 0, 0, 1, 0, 1,			       \
+		CKSUM_F | RSS_F)					       \
+R(cksum_ptype,			0, 0, 0, 0, 1, 1, 0,			       \
+		CKSUM_F | PTYPE_F)					       \
+R(cksum_ptype_rss,		0, 0, 0, 0, 1, 1, 1,			       \
+		CKSUM_F | PTYPE_F | RSS_F)				       \
+R(mark,				0, 0, 0, 1, 0, 0, 0,			       \
+		MARK_F)							       \
+R(mark_rss,			0, 0, 0, 1, 0, 0, 1,			       \
+		MARK_F | RSS_F)						       \
+R(mark_ptype,			0, 0, 0, 1, 0, 1, 0,			       \
+		MARK_F | PTYPE_F)					       \
+R(mark_ptype_rss,		0, 0, 0, 1, 0, 1, 1,			       \
+		MARK_F | PTYPE_F | RSS_F)				       \
+R(mark_cksum,			0, 0, 0, 1, 1, 0, 0,			       \
+		MARK_F | CKSUM_F)					       \
+R(mark_cksum_rss,		0, 0, 0, 1, 1, 0, 1,			       \
+		MARK_F | CKSUM_F | RSS_F)				       \
+R(mark_cksum_ptype,		0, 0, 0, 1, 1, 1, 0,			       \
+		MARK_F | CKSUM_F | PTYPE_F)				       \
+R(mark_cksum_ptype_rss,		0, 0, 0, 1, 1, 1, 1,			       \
+		MARK_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts,				0, 0, 1, 0, 0, 0, 0,			       \
+		TS_F)							       \
+R(ts_rss,			0, 0, 1, 0, 0, 0, 1,			       \
+		TS_F | RSS_F)						       \
+R(ts_ptype,			0, 0, 1, 0, 0, 1, 0,			       \
+		TS_F | PTYPE_F)						       \
+R(ts_ptype_rss,			0, 0, 1, 0, 0, 1, 1,			       \
+		TS_F | PTYPE_F | RSS_F)					       \
+R(ts_cksum,			0, 0, 1, 0, 1, 0, 0,			       \
+		TS_F | CKSUM_F)						       \
+R(ts_cksum_rss,			0, 0, 1, 0, 1, 0, 1,			       \
+		TS_F | CKSUM_F | RSS_F)					       \
+R(ts_cksum_ptype,		0, 0, 1, 0, 1, 1, 0,			       \
+		TS_F | CKSUM_F | PTYPE_F)				       \
+R(ts_cksum_ptype_rss,		0, 0, 1, 0, 1, 1, 1,			       \
+		TS_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts_mark,			0, 0, 1, 1, 0, 0, 0,			       \
+		TS_F | MARK_F)						       \
+R(ts_mark_rss,			0, 0, 1, 1, 0, 0, 1,			       \
+		TS_F | MARK_F | RSS_F)					       \
+R(ts_mark_ptype,		0, 0, 1, 1, 0, 1, 0,			       \
+		TS_F | MARK_F | PTYPE_F)				       \
+R(ts_mark_ptype_rss,		0, 0, 1, 1, 0, 1, 1,			       \
+		TS_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(ts_mark_cksum,		0, 0, 1, 1, 1, 0, 0,			       \
+		TS_F | MARK_F | CKSUM_F)				       \
+R(ts_mark_cksum_rss,		0, 0, 1, 1, 1, 0, 1,			       \
+		TS_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(ts_mark_cksum_ptype,		0, 0, 1, 1, 1, 1, 0,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(ts_mark_cksum_ptype_rss,	0, 0, 1, 1, 1, 1, 1,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan,				0, 1, 0, 0, 0, 0, 0,			       \
+		RX_VLAN_F)						       \
+R(vlan_rss,			0, 1, 0, 0, 0, 0, 1,			       \
+		RX_VLAN_F | RSS_F)					       \
+R(vlan_ptype,			0, 1, 0, 0, 0, 1, 0,			       \
+		RX_VLAN_F | PTYPE_F)					       \
+R(vlan_ptype_rss,		0, 1, 0, 0, 0, 1, 1,			       \
+		RX_VLAN_F | PTYPE_F | RSS_F)				       \
+R(vlan_cksum,			0, 1, 0, 0, 1, 0, 0,			       \
+		RX_VLAN_F | CKSUM_F)					       \
+R(vlan_cksum_rss,		0, 1, 0, 0, 1, 0, 1,			       \
+		RX_VLAN_F | CKSUM_F | RSS_F)				       \
+R(vlan_cksum_ptype,		0, 1, 0, 0, 1, 1, 0,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F)				       \
+R(vlan_cksum_ptype_rss,		0, 1, 0, 0, 1, 1, 1,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark,			0, 1, 0, 1, 0, 0, 0,			       \
+		RX_VLAN_F | MARK_F)					       \
+R(vlan_mark_rss,		0, 1, 0, 1, 0, 0, 1,			       \
+		RX_VLAN_F | MARK_F | RSS_F)				       \
+R(vlan_mark_ptype,		0, 1, 0, 1, 0, 1, 0,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F)				       \
+R(vlan_mark_ptype_rss,		0, 1, 0, 1, 0, 1, 1,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark_cksum,		0, 1, 0, 1, 1, 0, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F)				       \
+R(vlan_mark_cksum_rss,		0, 1, 0, 1, 1, 0, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(vlan_mark_cksum_ptype,	0, 1, 0, 1, 1, 1, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_mark_cksum_ptype_rss,	0, 1, 0, 1, 1, 1, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts,			0, 1, 1, 0, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F)					       \
+R(vlan_ts_rss,			0, 1, 1, 0, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | RSS_F)				       \
+R(vlan_ts_ptype,		0, 1, 1, 0, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | PTYPE_F)				       \
+R(vlan_ts_ptype_rss,		0, 1, 1, 0, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | PTYPE_F | RSS_F)			       \
+R(vlan_ts_cksum,		0, 1, 1, 0, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F)				       \
+R(vlan_ts_cksum_rss,		0, 1, 1, 0, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | RSS_F)			       \
+R(vlan_ts_cksum_ptype,		0, 1, 1, 0, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_ts_cksum_ptype_rss,	0, 1, 1, 0, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark,			0, 1, 1, 1, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F)				       \
+R(vlan_ts_mark_rss,		0, 1, 1, 1, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | RSS_F)			       \
+R(vlan_ts_mark_ptype,		0, 1, 1, 1, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F)			       \
+R(vlan_ts_mark_ptype_rss,	0, 1, 1, 1, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark_cksum,		0, 1, 1, 1, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F)			       \
+R(vlan_ts_mark_cksum_rss,	0, 1, 1, 1, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(vlan_ts_mark_cksum_ptype,	0, 1, 1, 1, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(vlan_ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec,				1, 0, 0, 0, 0, 0, 0,			       \
+		R_SEC_F)						       \
+R(sec_rss,			1, 0, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RSS_F)					       \
+R(sec_ptype,			1, 0, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | PTYPE_F)					       \
+R(sec_ptype_rss,		1, 0, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | PTYPE_F | RSS_F)				       \
+R(sec_cksum,			1, 0, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | CKSUM_F)					       \
+R(sec_cksum_rss,		1, 0, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | CKSUM_F | RSS_F)				       \
+R(sec_cksum_ptype,		1, 0, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F)				       \
+R(sec_cksum_ptype_rss,		1, 0, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(sec_mark,			1, 0, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | MARK_F)					       \
+R(sec_mark_rss,			1, 0, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | MARK_F | RSS_F)				       \
+R(sec_mark_ptype,		1, 0, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | MARK_F | PTYPE_F)				       \
+R(sec_mark_ptype_rss,		1, 0, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(sec_mark_cksum,		1, 0, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F)				       \
+R(sec_mark_cksum_rss,		1, 0, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(sec_mark_cksum_ptype,		1, 0, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(sec_mark_cksum_ptype_rss,	1, 0, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts,			1, 0, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | TS_F)						       \
+R(sec_ts_rss,			1, 0, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | TS_F | RSS_F)					       \
+R(sec_ts_ptype,			1, 0, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | TS_F | PTYPE_F)				       \
+R(sec_ts_ptype_rss,		1, 0, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | TS_F | PTYPE_F | RSS_F)			       \
+R(sec_ts_cksum,			1, 0, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F)				       \
+R(sec_ts_cksum_rss,		1, 0, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | RSS_F)			       \
+R(sec_ts_cksum_ptype,		1, 0, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(sec_ts_cksum_ptype_rss,	1, 0, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark,			1, 0, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F)				       \
+R(sec_ts_mark_rss,		1, 0, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | RSS_F)			       \
+R(sec_ts_mark_ptype,		1, 0, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F)			       \
+R(sec_ts_mark_ptype_rss,	1, 0, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark_cksum,		1, 0, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F)			       \
+R(sec_ts_mark_cksum_rss,	1, 0, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_ts_mark_cksum_ptype,	1, 0, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(sec_ts_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan,			1, 1, 0, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F)					       \
+R(sec_vlan_rss,			1, 1, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | RSS_F)				       \
+R(sec_vlan_ptype,		1, 1, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F)				       \
+R(sec_vlan_ptype_rss,		1, 1, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F)			       \
+R(sec_vlan_cksum,		1, 1, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F)				       \
+R(sec_vlan_cksum_rss,		1, 1, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F)			       \
+R(sec_vlan_cksum_ptype,		1, 1, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_cksum_ptype_rss,	1, 1, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_mark,		1, 1, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F)				       \
+R(sec_vlan_mark_rss,		1, 1, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | RSS_F)			       \
+R(sec_vlan_mark_ptype,		1, 1, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F)			       \
+R(sec_vlan_mark_ptype_rss,	1, 1, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_mark_cksum,		1, 1, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F)			       \
+R(sec_vlan_mark_cksum_rss,	1, 1, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_mark_cksum_ptype,	1, 1, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)	       \
+R(sec_vlan_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)      \
+R(sec_vlan_ts,			1, 1, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F)				       \
+R(sec_vlan_ts_rss,		1, 1, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | RSS_F)			       \
+R(sec_vlan_ts_ptype,		1, 1, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F)			       \
+R(sec_vlan_ts_ptype_rss,	1, 1, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_ts_cksum,		1, 1, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F)			       \
+R(sec_vlan_ts_cksum_rss,	1, 1, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_ts_cksum_ptype,	1, 1, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_ts_cksum_ptype_rss,	1, 1, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark,		1, 1, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F)			       \
+R(sec_vlan_ts_mark_rss,		1, 1, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
+R(sec_vlan_ts_mark_ptype,	1, 1, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
+R(sec_vlan_ts_mark_ptype_rss,	1, 1, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum,	1, 1, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
+R(sec_vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)       \
+R(sec_vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1, 1,		       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_##name(           \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn9k_rx_mseg.c b/drivers/net/cnxk/cn9k_rx_mseg.c
index d7e19b1..06509e8 100644
--- a/drivers/net/cnxk/cn9k_rx_mseg.c
+++ b/drivers/net/cnxk/cn9k_rx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_mseg_##name(      \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx_vec.c b/drivers/net/cnxk/cn9k_rx_vec.c
index ef5f771..c96f61c 100644
--- a/drivers/net/cnxk/cn9k_rx_vec.c
+++ b/drivers/net/cnxk/cn9k_rx_vec.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_vec_##name(       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx_vec_mseg.c b/drivers/net/cnxk/cn9k_rx_vec_mseg.c
index e46d8a4..938b1c0 100644
--- a/drivers/net/cnxk/cn9k_rx_vec_mseg.c
+++ b/drivers/net/cnxk/cn9k_rx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_vec_mseg_##name(  \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index b233010..a2bcea2 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -130,6 +130,9 @@
 /* Subtype from inline outbound error event */
 #define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
 
+/* SPI will be in 20 bits of tag */
+#define CNXK_ETHDEV_SPI_TAG_MASK 0xFFFFFUL
+
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
 	uint8_t rx_pause;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 19/28] net/cnxk: support Tx security offload on cn9k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (17 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 18/28] net/cnxk: support Rx security offload on cn9k Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 20/28] net/cnxk: support Rx security offload on cn10k Nithin Dabilpuram
                     ` (9 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to create and submit CPT instructions on the Tx path of
the CN9K SoC.
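
Below is a minimal application-side sketch (not part of this patch) of
the Tx half; it assumes DEV_TX_OFFLOAD_SECURITY is enabled on the port
and that sess is an inline outbound rte_security session created
earlier (tx_send_inline() is an illustrative helper, not a DPDK API):

	#include <rte_ethdev.h>
	#include <rte_mbuf.h>
	#include <rte_security.h>

	static uint16_t
	tx_send_inline(uint16_t port, uint16_t queue,
		       struct rte_security_ctx *sec_ctx,
		       struct rte_security_session *sess,
		       struct rte_mbuf *m)
	{
		/* Attach the session; the fast path reads the session
		 * private data (sa_idx, roundup/partial lengths) back
		 * from the rte_security dynfield to build the CPT
		 * instruction (see cn9k_sso_hws_xmit_sec_one()).
		 */
		rte_security_set_pkt_metadata(sec_ctx, sess, m, NULL);
		m->ol_flags |= PKT_TX_SEC_OFFLOAD;
		m->l2_len = RTE_ETHER_HDR_LEN;

		return rte_eth_tx_burst(port, queue, &m, 1);
	}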

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 drivers/event/cnxk/cn9k_eventdev.c               |  29 +-
 drivers/event/cnxk/cn9k_worker.h                 | 163 +++++++++-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |   2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq.c          |   2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |   2 +-
 drivers/net/cnxk/cn9k_tx.c                       |  29 +-
 drivers/net/cnxk/cn9k_tx.h                       | 392 +++++++++++++++--------
 drivers/net/cnxk/cn9k_tx_mseg.c                  |   2 +-
 drivers/net/cnxk/cn9k_tx_vec.c                   |   2 +-
 drivers/net/cnxk/cn9k_tx_vec_mseg.c              |   2 +-
 11 files changed, 459 insertions(+), 168 deletions(-)

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 64d9ded..806dcb0 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -19,8 +19,8 @@
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)])
 
 #define CN9K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops)                            \
-	(enq_op =                                                              \
-		 enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
+	(enq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]    \
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]          \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]    \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]    \
@@ -515,33 +515,34 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	/* Tx modes */
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
+		sso_hws_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
+		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_dual_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
+		sso_hws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
+		sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index f1d2e47..6be9be0 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -478,6 +478,145 @@ cn9k_sso_hws_prepare_pkt(const struct cn9k_eth_txq *txq, struct rte_mbuf *m,
 	cn9k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline void
+cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
+			  struct rte_mbuf *m, uint64_t *cmd,
+			  uint32_t flags)
+{
+	struct cn9k_outb_priv_data *outb_priv;
+	rte_iova_t io_addr = txq->cpt_io_addr;
+	uint64_t *lmt_addr = txq->lmt_addr;
+	struct cn9k_sec_sess_priv mdata;
+	struct nix_send_hdr_s *send_hdr;
+	uint64_t sa_base = txq->sa_base;
+	uint32_t pkt_len, dlen_adj, rlen;
+	uint64x2_t cmd01, cmd23;
+	uint64_t lmt_status, sa;
+	union nix_send_sg_s *sg;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t esn, *iv;
+	uint8_t l2_len;
+
+	mdata.u64 = *rte_security_dynfield(m);
+	send_hdr = (struct nix_send_hdr_s *)cmd;
+	if (flags & NIX_TX_NEED_EXT_HDR)
+		sg = (union nix_send_sg_s *)&cmd[4];
+	else
+		sg = (union nix_send_sg_s *)&cmd[2];
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = cmd[1] & 0xFF;
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = *(uint64_t *)(sg + 1);
+	pkt_len = send_hdr->w0.total;
+
+	/* Calculate rlen */
+	rlen = pkt_len - l2_len;
+	rlen = (rlen + mdata.roundup_len) + (mdata.roundup_byte - 1);
+	rlen &= ~(uint64_t)(mdata.roundup_byte - 1);
+	rlen += mdata.partial_len;
+	dlen_adj = rlen - pkt_len + l2_len;
+
+	/* Update send descriptors. Security is single segment only */
+	send_hdr->w0.total = pkt_len + dlen_adj;
+	sg->seg1_size = pkt_len + dlen_adj;
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	roc_lmt_mov((void *)(nixtx + 16), cmd, cn9k_nix_tx_ext_subs(flags));
+
+	/* Opcode and cptr were already prepared when pkt metadata was set */
+	pkt_len -= l2_len;
+	pkt_len += sizeof(struct roc_onf_ipsec_outb_hdr) +
+		    ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ;
+	sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+
+	sa = (uintptr_t)roc_nix_inl_onf_ipsec_outb_sa(sa_base, mdata.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | sa);
+	ucode_cmd[0] = (ROC_IE_ONF_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 |
+			0x40UL << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1 */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn9k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len - ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ -
+		sizeof(struct roc_onf_ipsec_outb_hdr);
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Update IV to zero and l2 sz */
+	*(uint16_t *)(dptr + sizeof(struct roc_onf_ipsec_outb_hdr)) =
+		rte_cpu_to_be_16(ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ);
+	iv = (uint64_t *)(dptr + 8);
+	iv[0] = 0;
+	iv[1] = 0;
+
+	/* Head wait if needed */
+	if (base)
+		roc_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);
+
+	/* ESN */
+	outb_priv = roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd((void *)sa);
+	esn = outb_priv->esn;
+	outb_priv->esn = esn + 1;
+
+	ucode_cmd[0] |= (esn >> 32) << 16;
+	esn = rte_cpu_to_be_32(esn & (BIT_ULL(32) - 1));
+
+	/* Update ESN, IPID and IV */
+	*(uint64_t *)dptr = esn << 32 | esn;
+
+	rte_io_wmb();
+	cn9k_sso_txq_fc_wait(txq);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(lmt_addr, cmd01);
+	vst1q_u64(lmt_addr + 2, cmd23);
+
+	roc_lmt_mov_seg(lmt_addr + 4, ucode_cmd, 2);
+
+	if (roc_lmt_submit_ldeor(io_addr) == 0) {
+		do {
+			vst1q_u64(lmt_addr, cmd01);
+			vst1q_u64(lmt_addr + 2, cmd23);
+			roc_lmt_mov_seg(lmt_addr + 4, ucode_cmd, 2);
+
+			lmt_status = roc_lmt_submit_ldeor(io_addr);
+		} while (lmt_status == 0);
+	}
+}
+#else
+
+static inline void
+cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
+			  struct rte_mbuf *m, uint64_t *cmd,
+			  uint32_t flags)
+{
+	RTE_SET_USED(txq);
+	RTE_SET_USED(base);
+	RTE_SET_USED(m);
+	RTE_SET_USED(cmd);
+	RTE_SET_USED(flags);
+}
+#endif
+
 static __rte_always_inline uint16_t
 cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 		      const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
@@ -494,11 +633,30 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	 * When fast free is not set, both cn9k_nix_prepare_mseg() and
 	 * cn9k_nix_xmit_prepare() have a barrier after the refcnt update.
 	 */
-	if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+	if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
+	    !(flags & NIX_TX_OFFLOAD_SECURITY_F))
 		rte_io_wmb();
 	txq = cn9k_sso_hws_xtract_meta(m, txq_data);
 	cn9k_sso_hws_prepare_pkt(txq, m, cmd, flags);
 
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		uint64_t ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+			uintptr_t ssow_base = base;
+
+			if (ev->sched_type)
+				ssow_base = 0;
+
+			cn9k_sso_hws_xmit_sec_one(txq, ssow_base, m, cmd,
+						  flags);
+			goto done;
+		}
+
+		if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+			rte_io_wmb();
+	}
+
 	if (flags & NIX_TX_MULTI_SEG_F) {
 		const uint16_t segdw = cn9k_nix_prepare_mseg(m, cmd, flags);
 		if (!CNXK_TT_FROM_EVENT(ev->event)) {
@@ -526,6 +684,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 		}
 	}
 
+done:
 	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
 		if (ref_cnt > 1)
 			return 1;
@@ -537,7 +696,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	return 1;
 }
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
 		void *port, struct rte_event ev[], uint16_t nb_events);        \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
index 92e2981..db045d0 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name(              \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
index dfb574c..95d711f 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name(          \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq.c b/drivers/event/cnxk/cn9k_worker_tx_enq.c
index 3df649c..026cef8 100644
--- a/drivers/event/cnxk/cn9k_worker_tx_enq.c
+++ b/drivers/event/cnxk/cn9k_worker_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
index 0efe291..97cd7c7 100644
--- a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 763f9a1..e5691a2 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(	       \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -23,12 +23,13 @@ NIX_TX_FASTPATH_MODES
 
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
-	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2])
+	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TS] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+	/* [SEC] [TS] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
 	eth_dev->tx_pkt_burst = tx_burst
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
@@ -42,33 +43,33 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_##name,
+	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index a27ff76..44273ec 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -1819,139 +1819,269 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 #define NOFF_F	     NIX_TX_OFFLOAD_MBUF_NOFF_F
 #define TSO_F	     NIX_TX_OFFLOAD_TSO_F
 #define TSP_F	     NIX_TX_OFFLOAD_TSTAMP_F
+#define T_SEC_F      NIX_TX_OFFLOAD_SECURITY_F
 
-/* [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
-#define NIX_TX_FASTPATH_MODES						       \
-T(no_offload,				0, 0, 0, 0, 0, 0,	4,	       \
-		NIX_TX_OFFLOAD_NONE)					       \
-T(l3l4csum,				0, 0, 0, 0, 0, 1,	4,	       \
-		L3L4CSUM_F)						       \
-T(ol3ol4csum,				0, 0, 0, 0, 1, 0,	4,	       \
-		OL3OL4CSUM_F)						       \
-T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 1, 1,	4,	       \
-		OL3OL4CSUM_F | L3L4CSUM_F)				       \
-T(vlan,					0, 0, 0, 1, 0, 0,	6,	       \
-		VLAN_F)							       \
-T(vlan_l3l4csum,			0, 0, 0, 1, 0, 1,	6,	       \
-		VLAN_F | L3L4CSUM_F)					       \
-T(vlan_ol3ol4csum,			0, 0, 0, 1, 1, 0,	6,	       \
-		VLAN_F | OL3OL4CSUM_F)					       \
-T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 1, 1,	6,	       \
-		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			       \
-T(noff,					0, 0, 1, 0, 0, 0,	4,	       \
-		NOFF_F)							       \
-T(noff_l3l4csum,			0, 0, 1, 0, 0, 1,	4,	       \
-		NOFF_F | L3L4CSUM_F)					       \
-T(noff_ol3ol4csum,			0, 0, 1, 0, 1, 0,	4,	       \
-		NOFF_F | OL3OL4CSUM_F)					       \
-T(noff_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1,	4,	       \
-		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			       \
-T(noff_vlan,				0, 0, 1, 1, 0, 0,	6,	       \
-		NOFF_F | VLAN_F)					       \
-T(noff_vlan_l3l4csum,			0, 0, 1, 1, 0, 1,	6,	       \
-		NOFF_F | VLAN_F | L3L4CSUM_F)				       \
-T(noff_vlan_ol3ol4csum,			0, 0, 1, 1, 1, 0,	6,	       \
-		NOFF_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1,	6,	       \
-		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		       \
-T(tso,					0, 1, 0, 0, 0, 0,	6,	       \
-		TSO_F)							       \
-T(tso_l3l4csum,				0, 1, 0, 0, 0, 1,	6,	       \
-		TSO_F | L3L4CSUM_F)					       \
-T(tso_ol3ol4csum,			0, 1, 0, 0, 1, 0,	6,	       \
-		TSO_F | OL3OL4CSUM_F)					       \
-T(tso_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1,	6,	       \
-		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			       \
-T(tso_vlan,				0, 1, 0, 1, 0, 0,	6,	       \
-		TSO_F | VLAN_F)						       \
-T(tso_vlan_l3l4csum,			0, 1, 0, 1, 0, 1,	6,	       \
-		TSO_F | VLAN_F | L3L4CSUM_F)				       \
-T(tso_vlan_ol3ol4csum,			0, 1, 0, 1, 1, 0,	6,	       \
-		TSO_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(tso_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 1, 1,	6,	       \
-		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(tso_noff,				0, 1, 1, 0, 0, 0,	6,	       \
-		TSO_F | NOFF_F)						       \
-T(tso_noff_l3l4csum,			0, 1, 1, 0, 0, 1,	6,	       \
-		TSO_F | NOFF_F | L3L4CSUM_F)				       \
-T(tso_noff_ol3ol4csum,			0, 1, 1, 0, 1, 0,	6,	       \
-		TSO_F | NOFF_F | OL3OL4CSUM_F)				       \
-T(tso_noff_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 1, 1,	6,	       \
-		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(tso_noff_vlan,			0, 1, 1, 1, 0, 0,	6,	       \
-		TSO_F | NOFF_F | VLAN_F)				       \
-T(tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 0, 1,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			       \
-T(tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 0,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	       \
-T(ts,					1, 0, 0, 0, 0, 0,	8,	       \
-		TSP_F)							       \
-T(ts_l3l4csum,				1, 0, 0, 0, 0, 1,	8,	       \
-		TSP_F | L3L4CSUM_F)					       \
-T(ts_ol3ol4csum,			1, 0, 0, 0, 1, 0,	8,	       \
-		TSP_F | OL3OL4CSUM_F)					       \
-T(ts_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1,	8,	       \
-		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			       \
-T(ts_vlan,				1, 0, 0, 1, 0, 0,	8,	       \
-		TSP_F | VLAN_F)						       \
-T(ts_vlan_l3l4csum,			1, 0, 0, 1, 0, 1,	8,	       \
-		TSP_F | VLAN_F | L3L4CSUM_F)				       \
-T(ts_vlan_ol3ol4csum,			1, 0, 0, 1, 1, 0,	8,	       \
-		TSP_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(ts_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 1, 1,	8,	       \
-		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(ts_noff,				1, 0, 1, 0, 0, 0,	8,	       \
-		TSP_F | NOFF_F)						       \
-T(ts_noff_l3l4csum,			1, 0, 1, 0, 0, 1,	8,	       \
-		TSP_F | NOFF_F | L3L4CSUM_F)				       \
-T(ts_noff_ol3ol4csum,			1, 0, 1, 0, 1, 0,	8,	       \
-		TSP_F | NOFF_F | OL3OL4CSUM_F)				       \
-T(ts_noff_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 1, 1,	8,	       \
-		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(ts_noff_vlan,				1, 0, 1, 1, 0, 0,	8,	       \
-		TSP_F | NOFF_F | VLAN_F)				       \
-T(ts_noff_vlan_l3l4csum,		1, 0, 1, 1, 0, 1,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			       \
-T(ts_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 0,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 1, 1,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	       \
-T(ts_tso,				1, 1, 0, 0, 0, 0,	8,	       \
-		TSP_F | TSO_F)						       \
-T(ts_tso_l3l4csum,			1, 1, 0, 0, 0, 1,	8,	       \
-		TSP_F | TSO_F | L3L4CSUM_F)				       \
-T(ts_tso_ol3ol4csum,			1, 1, 0, 0, 1, 0,	8,	       \
-		TSP_F | TSO_F | OL3OL4CSUM_F)				       \
-T(ts_tso_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 1, 1,	8,	       \
-		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		       \
-T(ts_tso_vlan,				1, 1, 0, 1, 0, 0,	8,	       \
-		TSP_F | TSO_F | VLAN_F)					       \
-T(ts_tso_vlan_l3l4csum,			1, 1, 0, 1, 0, 1,	8,	       \
-		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			       \
-T(ts_tso_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 0,	8,	       \
-		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1,	8,	       \
-		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	       \
-T(ts_tso_noff,				1, 1, 1, 0, 0, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F)					       \
-T(ts_tso_noff_l3l4csum,			1, 1, 1, 0, 0, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			       \
-T(ts_tso_noff_ol3ol4csum,		1, 1, 1, 0, 1, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			       \
-T(ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	       \
-T(ts_tso_noff_vlan,			1, 1, 1, 1, 0, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F)			       \
-T(ts_tso_noff_vlan_l3l4csum,		1, 1, 1, 1, 0, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		       \
-T(ts_tso_noff_vlan_ol3ol4csum,		1, 1, 1, 1, 1, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		       \
-T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 1, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+/* [T_SEC_F] [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+#define NIX_TX_FASTPATH_MODES						\
+T(no_offload,				0, 0, 0, 0, 0, 0, 0,	4,	\
+		NIX_TX_OFFLOAD_NONE)					\
+T(l3l4csum,				0, 0, 0, 0, 0, 0, 1,	4,	\
+		L3L4CSUM_F)						\
+T(ol3ol4csum,				0, 0, 0, 0, 0, 1, 0,	4,	\
+		OL3OL4CSUM_F)						\
+T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 0, 1, 1,	4,	\
+		OL3OL4CSUM_F | L3L4CSUM_F)				\
+T(vlan,					0, 0, 0, 0, 1, 0, 0,	6,	\
+		VLAN_F)							\
+T(vlan_l3l4csum,			0, 0, 0, 0, 1, 0, 1,	6,	\
+		VLAN_F | L3L4CSUM_F)					\
+T(vlan_ol3ol4csum,			0, 0, 0, 0, 1, 1, 0,	6,	\
+		VLAN_F | OL3OL4CSUM_F)					\
+T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 0, 1, 1, 1,	6,	\
+		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
+T(noff,					0, 0, 0, 1, 0, 0, 0,	4,	\
+		NOFF_F)							\
+T(noff_l3l4csum,			0, 0, 0, 1, 0, 0, 1,	4,	\
+		NOFF_F | L3L4CSUM_F)					\
+T(noff_ol3ol4csum,			0, 0, 0, 1, 0, 1, 0,	4,	\
+		NOFF_F | OL3OL4CSUM_F)					\
+T(noff_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 0, 1, 1,	4,	\
+		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
+T(noff_vlan,				0, 0, 0, 1, 1, 0, 0,	6,	\
+		NOFF_F | VLAN_F)					\
+T(noff_vlan_l3l4csum,			0, 0, 0, 1, 1, 0, 1,	6,	\
+		NOFF_F | VLAN_F | L3L4CSUM_F)				\
+T(noff_vlan_ol3ol4csum,			0, 0, 0, 1, 1, 1, 0,	6,	\
+		NOFF_F | VLAN_F | OL3OL4CSUM_F)				\
+T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 0, 1, 1, 1, 1,	6,	\
+		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(tso,					0, 0, 1, 0, 0, 0, 0,	6,	\
+		TSO_F)							\
+T(tso_l3l4csum,				0, 0, 1, 0, 0, 0, 1,	6,	\
+		TSO_F | L3L4CSUM_F)					\
+T(tso_ol3ol4csum,			0, 0, 1, 0, 0, 1, 0,	6,	\
+		TSO_F | OL3OL4CSUM_F)					\
+T(tso_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 0, 1, 1,	6,	\
+		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(tso_vlan,				0, 0, 1, 0, 1, 0, 0,	6,	\
+		TSO_F | VLAN_F)						\
+T(tso_vlan_l3l4csum,			0, 0, 1, 0, 1, 0, 1,	6,	\
+		TSO_F | VLAN_F | L3L4CSUM_F)				\
+T(tso_vlan_ol3ol4csum,			0, 0, 1, 0, 1, 1, 0,	6,	\
+		TSO_F | VLAN_F | OL3OL4CSUM_F)				\
+T(tso_vlan_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1, 1,	6,	\
+		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(tso_noff,				0, 0, 1, 1, 0, 0, 0,	6,	\
+		TSO_F | NOFF_F)						\
+T(tso_noff_l3l4csum,			0, 0, 1, 1, 0, 0, 1,	6,	\
+		TSO_F | NOFF_F | L3L4CSUM_F)				\
+T(tso_noff_ol3ol4csum,			0, 0, 1, 1, 0, 1, 0,	6,	\
+		TSO_F | NOFF_F | OL3OL4CSUM_F)				\
+T(tso_noff_ol3ol4csum_l3l4csum,		0, 0, 1, 1, 0, 1, 1,	6,	\
+		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(tso_noff_vlan,			0, 0, 1, 1, 1, 0, 0,	6,	\
+		TSO_F | NOFF_F | VLAN_F)				\
+T(tso_noff_vlan_l3l4csum,		0, 0, 1, 1, 1, 0, 1,	6,	\
+		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(tso_noff_vlan_ol3ol4csum,		0, 0, 1, 1, 1, 1, 0,	6,	\
+		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
+T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1, 1,	6,	\
+		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(ts,					0, 1, 0, 0, 0, 0, 0,	8,	\
+		TSP_F)							\
+T(ts_l3l4csum,				0, 1, 0, 0, 0, 0, 1,	8,	\
+		TSP_F | L3L4CSUM_F)					\
+T(ts_ol3ol4csum,			0, 1, 0, 0, 0, 1, 0,	8,	\
+		TSP_F | OL3OL4CSUM_F)					\
+T(ts_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 0, 1, 1,	8,	\
+		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(ts_vlan,				0, 1, 0, 0, 1, 0, 0,	8,	\
+		TSP_F | VLAN_F)						\
+T(ts_vlan_l3l4csum,			0, 1, 0, 0, 1, 0, 1,	8,	\
+		TSP_F | VLAN_F | L3L4CSUM_F)				\
+T(ts_vlan_ol3ol4csum,			0, 1, 0, 0, 1, 1, 0,	8,	\
+		TSP_F | VLAN_F | OL3OL4CSUM_F)				\
+T(ts_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1, 1,	8,	\
+		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(ts_noff,				0, 1, 0, 1, 0, 0, 0,	8,	\
+		TSP_F | NOFF_F)						\
+T(ts_noff_l3l4csum,			0, 1, 0, 1, 0, 0, 1,	8,	\
+		TSP_F | NOFF_F | L3L4CSUM_F)				\
+T(ts_noff_ol3ol4csum,			0, 1, 0, 1, 0, 1, 0,	8,	\
+		TSP_F | NOFF_F | OL3OL4CSUM_F)				\
+T(ts_noff_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 0, 1, 1,	8,	\
+		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(ts_noff_vlan,				0, 1, 0, 1, 1, 0, 0,	8,	\
+		TSP_F | NOFF_F | VLAN_F)				\
+T(ts_noff_vlan_l3l4csum,		0, 1, 0, 1, 1, 0, 1,	8,	\
+		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(ts_noff_vlan_ol3ol4csum,		0, 1, 0, 1, 1, 1, 0,	8,	\
+		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
+T(ts_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 0, 1, 1, 1, 1,	8,	\
+		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(ts_tso,				0, 1, 1, 0, 0, 0, 0,	8,	\
+		TSP_F | TSO_F)						\
+T(ts_tso_l3l4csum,			0, 1, 1, 0, 0, 0, 1,	8,	\
+		TSP_F | TSO_F | L3L4CSUM_F)				\
+T(ts_tso_ol3ol4csum,			0, 1, 1, 0, 0, 1, 0,	8,	\
+		TSP_F | TSO_F | OL3OL4CSUM_F)				\
+T(ts_tso_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 0, 1, 1,	8,	\
+		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(ts_tso_vlan,				0, 1, 1, 0, 1, 0, 0,	8,	\
+		TSP_F | TSO_F | VLAN_F)					\
+T(ts_tso_vlan_l3l4csum,			0, 1, 1, 0, 1, 0, 1,	8,	\
+		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(ts_tso_vlan_ol3ol4csum,		0, 1, 1, 0, 1, 1, 0,	8,	\
+		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			\
+T(ts_tso_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 0, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(ts_tso_noff,				0, 1, 1, 1, 0, 0, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F)					\
+T(ts_tso_noff_l3l4csum,			0, 1, 1, 1, 0, 0, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(ts_tso_noff_ol3ol4csum,		0, 1, 1, 1, 0, 1, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			\
+T(ts_tso_noff_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 0, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(ts_tso_noff_vlan,			0, 1, 1, 1, 1, 0, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F)			\
+T(ts_tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 1, 0, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(ts_tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 1, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec,					1, 0, 0, 0, 0, 0, 0,	4,	\
+		T_SEC_F)						\
+T(sec_l3l4csum,				1, 0, 0, 0, 0, 0, 1,	4,	\
+		T_SEC_F | L3L4CSUM_F)					\
+T(sec_ol3ol4csum,			1, 0, 0, 0, 0, 1, 0,	4,	\
+		T_SEC_F | OL3OL4CSUM_F)					\
+T(sec_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 0, 1, 1,	4,	\
+		T_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(sec_vlan,				1, 0, 0, 0, 1, 0, 0,	6,	\
+		T_SEC_F | VLAN_F)					\
+T(sec_vlan_l3l4csum,			1, 0, 0, 0, 1, 0, 1,	6,	\
+		T_SEC_F | VLAN_F | L3L4CSUM_F)				\
+T(sec_vlan_ol3ol4csum,			1, 0, 0, 0, 1, 1, 0,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F)			\
+T(sec_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1, 1,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff,				1, 0, 0, 1, 0, 0, 0,	4,	\
+		T_SEC_F | NOFF_F)					\
+T(sec_noff_l3l4csum,			1, 0, 0, 1, 0, 0, 1,	4,	\
+		T_SEC_F | NOFF_F | L3L4CSUM_F)				\
+T(sec_noff_ol3ol4csum,			1, 0, 0, 1, 0, 1, 0,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F)			\
+T(sec_noff_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 0, 1, 1,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff_vlan,			1, 0, 0, 1, 1, 0, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F)				\
+T(sec_noff_vlan_l3l4csum,		1, 0, 0, 1, 1, 0, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_noff_vlan_ol3ol4csum,		1, 0, 0, 1, 1, 1, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 0, 1, 1, 1, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso,				1, 0, 1, 0, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F)					\
+T(sec_tso_l3l4csum,			1, 0, 1, 0, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | L3L4CSUM_F)				\
+T(sec_tso_ol3ol4csum,			1, 0, 1, 0, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F)				\
+T(sec_tso_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_tso_vlan,				1, 0, 1, 0, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F)				\
+T(sec_tso_vlan_l3l4csum,		1, 0, 1, 0, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_tso_vlan_ol3ol4csum,		1, 0, 1, 0, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_tso_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 0, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff,				1, 0, 1, 1, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F)				\
+T(sec_tso_noff_l3l4csum,		1, 0, 1, 1, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_tso_noff_ol3ol4csum,		1, 0, 1, 1, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_tso_noff_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff_vlan,			1, 0, 1, 1, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F)			\
+T(sec_tso_noff_vlan_l3l4csum,		1, 0, 1, 1, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_tso_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts,				1, 1, 0, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F)					\
+T(sec_ts_l3l4csum,			1, 1, 0, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | L3L4CSUM_F)				\
+T(sec_ts_ol3ol4csum,			1, 1, 0, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F)				\
+T(sec_ts_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_ts_vlan,				1, 1, 0, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F)				\
+T(sec_ts_vlan_l3l4csum,			1, 1, 0, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_ts_vlan_ol3ol4csum,		1, 1, 0, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_ts_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff,				1, 1, 0, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F)				\
+T(sec_ts_noff_l3l4csum,			1, 1, 0, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_ts_noff_ol3ol4csum,		1, 1, 0, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_ts_noff_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff_vlan,			1, 1, 0, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F)			\
+T(sec_ts_noff_vlan_l3l4csum,		1, 1, 0, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_noff_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso,				1, 1, 1, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F)				\
+T(sec_ts_tso_l3l4csum,			1, 1, 1, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | L3L4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum,		1, 1, 1, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_tso_vlan,			1, 1, 1, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F)			\
+T(sec_ts_tso_vlan_l3l4csum,		1, 1, 1, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_tso_vlan_ol3ol4csum,		1, 1, 1, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff,			1, 1, 1, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F)			\
+T(sec_ts_tso_noff_l3l4csum,		1, 1, 1, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)		\
+T(sec_ts_tso_noff_ol3ol4csum,		1, 1, 1, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff_vlan,			1, 1, 1, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F)		\
+T(sec_ts_tso_noff_vlan_l3l4csum,	1, 1, 1, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)	\
+T(sec_ts_tso_noff_vlan_ol3ol4csum,	1, 1, 1, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(sec_ts_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 1, 1, 8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
+		L3L4CSUM_F)
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(           \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn9k_tx_mseg.c b/drivers/net/cnxk/cn9k_tx_mseg.c
index f3c427c..37cba78 100644
--- a/drivers/net/cnxk/cn9k_tx_mseg.c
+++ b/drivers/net/cnxk/cn9k_tx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn9k_nix_xmit_pkts_mseg_##name(void *tx_queue,                 \
 					       struct rte_mbuf **tx_pkts,      \
diff --git a/drivers/net/cnxk/cn9k_tx_vec.c b/drivers/net/cnxk/cn9k_tx_vec.c
index 56a3e25..b424f95 100644
--- a/drivers/net/cnxk/cn9k_tx_vec.c
+++ b/drivers/net/cnxk/cn9k_tx_vec.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn9k_nix_xmit_pkts_vec_##name(void *tx_queue,                  \
 					      struct rte_mbuf **tx_pkts,       \
diff --git a/drivers/net/cnxk/cn9k_tx_vec_mseg.c b/drivers/net/cnxk/cn9k_tx_vec_mseg.c
index 0256efd..5fdf0a9 100644
--- a/drivers/net/cnxk/cn9k_tx_vec_mseg.c
+++ b/drivers/net/cnxk/cn9k_tx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_vec_mseg_##name(  \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 20/28] net/cnxk: support Rx security offload on cn10k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (18 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 19/28] net/cnxk: support Tx " Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 21/28] net/cnxk: support Tx " Nithin Dabilpuram
                     ` (8 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to receive CPT-processed packets on Rx via the second
pass on CN10K. Inline-decrypted packets arrive as meta packets; the
Rx path now translates each meta to its inner mbuf and batch-frees
the meta buffers back to their aura.
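
For reference, the per-packet decision added to the Rx path can be
sketched as below. This is a simplified stand-in, not driver code:
the flag and completion-code values are assumptions, and only the
CQE word1 bit-11 test and the completion-code comparison mirror the
patch.

  #include <stdint.h>

  #define CQ_W1_SEC_PKT (1ULL << 11) /* BIT(11): CPT-processed pkt */
  #define SEC_OFFLOAD   (1ULL << 0)  /* stand-in: PKT_RX_SEC_OFFLOAD */
  #define SEC_FAILED    (1ULL << 1)  /* stand-in: ..._OFFLOAD_FAILED */
  #define COMP_WARN     0x2          /* stand-in: CPT_COMP_WARN */

  static uint64_t
  rx_sec_olflags(uint64_t cq_w1, uint64_t cpt_res_w1)
  {
          /* Packet bypassed CPT: caller applies the normal
           * checksum flag lookup instead.
           */
          if (!(cq_w1 & CQ_W1_SEC_PKT))
                  return 0;
          /* Microcode completion code is in res_w1[7:0]; the WARN
           * code denotes successful inline processing here.
           */
          return ((cpt_res_w1 & 0xFF) == COMP_WARN) ?
                  SEC_OFFLOAD : (SEC_OFFLOAD | SEC_FAILED);
  }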

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn10k_eventdev.c         |  80 ++--
 drivers/event/cnxk/cn10k_worker.h           |  73 +++-
 drivers/event/cnxk/cn10k_worker_deq.c       |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_burst.c |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_ca.c    |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_tmo.c   |   2 +-
 drivers/net/cnxk/cn10k_ethdev.h             |   4 +
 drivers/net/cnxk/cn10k_rx.c                 |  31 +-
 drivers/net/cnxk/cn10k_rx.h                 | 648 +++++++++++++++++++++++-----
 drivers/net/cnxk/cn10k_rx_mseg.c            |   2 +-
 drivers/net/cnxk/cn10k_rx_vec.c             |   4 +-
 drivers/net/cnxk/cn10k_rx_vec_mseg.c        |   4 +-
 drivers/net/cnxk/cn10k_tx.h                 |   3 -
 13 files changed, 688 insertions(+), 169 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 8af273a..9c0d84b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -7,7 +7,8 @@
 #include "cnxk_worker.h"
 
 #define CN10K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops)                           \
-	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
+	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]    \
+			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]      \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]    \
@@ -288,88 +289,91 @@ static void
 cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
+	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                            \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
+	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name,
+	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_##name,
+	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
+	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name,
+		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_burst_##name,
+		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
@@ -385,7 +389,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	const event_tx_adapter_enqueue
 		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                            \
 	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index e5ed043..b79bd90 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -106,12 +106,17 @@ cn10k_wqe_to_mbuf(uint64_t wqe, const uint64_t mbuf, uint8_t port_id,
 
 static __rte_always_inline void
 cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
-		   void *lookup_mem, void *tstamp)
+		   void *lookup_mem, void *tstamp, uintptr_t lbase)
 {
 	uint64_t mbuf_init = 0x100010000ULL | RTE_PKTMBUF_HEADROOM |
 			     (flags & NIX_RX_OFFLOAD_TSTAMP_F ? 8 : 0);
 	struct rte_event_vector *vec;
+	uint64_t aura_handle, laddr;
 	uint16_t nb_mbufs, non_vec;
+	uint16_t lmt_id, d_off;
+	struct rte_mbuf *mbuf;
+	uint8_t loff = 0;
+	uint64_t sa_base;
 	uint64_t **wqe;
 
 	mbuf_init |= ((uint64_t)port_id) << 48;
@@ -121,17 +126,41 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
 	nb_mbufs = RTE_ALIGN_FLOOR(vec->nb_elem, NIX_DESCS_PER_LOOP);
 	nb_mbufs = cn10k_nix_recv_pkts_vector(&mbuf_init, vec->mbufs, nb_mbufs,
 					      flags | NIX_RX_VWQE_F, lookup_mem,
-					      tstamp);
+					      tstamp, lbase);
 	wqe += nb_mbufs;
 	non_vec = vec->nb_elem - nb_mbufs;
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && non_vec) {
+		mbuf = (struct rte_mbuf *)((uintptr_t)wqe[0] -
+					   sizeof(struct rte_mbuf));
+		/* Pick the first mbuf's aura handle, assuming all
+		 * mbufs in the vector come from the same RQ.
+		 */
+		aura_handle = mbuf->pool->pool_id;
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+		d_off = ((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+		d_off += (mbuf_init & 0xFFFF);
+		sa_base = cnxk_nix_sa_base_get(mbuf_init >> 48, lookup_mem);
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	}
+
 	while (non_vec) {
 		struct nix_cqe_hdr_s *cqe = (struct nix_cqe_hdr_s *)wqe[0];
-		struct rte_mbuf *mbuf;
 		uint64_t tstamp_ptr;
 
 		mbuf = (struct rte_mbuf *)((char *)cqe -
 					   sizeof(struct rte_mbuf));
+
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cqe + 1);
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, sa_base, laddr,
+						       &loff, mbuf, d_off);
+		}
+
 		cn10k_nix_cqe_to_mbuf(cqe, cqe->tag, mbuf, lookup_mem,
 				      mbuf_init, flags);
 		/* Extracting tstamp, if PTP enabled*/
@@ -145,6 +174,12 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
 		non_vec--;
 		wqe++;
 	}
+
+	/* Free remaining meta buffers if any */
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id, loff, aura_handle);
+		plt_io_wmb();
+	}
 }
 
 static __rte_always_inline uint16_t
@@ -188,6 +223,34 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev,
 			   RTE_EVENT_TYPE_ETHDEV) {
 			uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
 
+			if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+				struct rte_mbuf *m;
+				uintptr_t sa_base;
+				uint64_t iova = 0;
+				uint8_t loff = 0;
+				uint16_t d_off;
+				uint64_t cq_w1;
+
+				m = (struct rte_mbuf *)mbuf;
+				d_off = (uintptr_t)(m->buf_addr) - (uintptr_t)m;
+				d_off += RTE_PKTMBUF_HEADROOM;
+
+				cq_w1 = *(uint64_t *)(gw.u64[1] + 8);
+
+				sa_base = cnxk_nix_sa_base_get(port,
+							       lookup_mem);
+				sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+
+				mbuf = (uint64_t)nix_sec_meta_to_mbuf_sc(cq_w1,
+						sa_base, (uintptr_t)&iova,
+						&loff, (struct rte_mbuf *)mbuf,
+						d_off);
+				if (loff)
+					roc_npa_aura_op_free(m->pool->pool_id,
+							     0, iova);
+
+			}
+
 			gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
 			cn10k_wqe_to_mbuf(gw.u64[1], mbuf, port,
 					  gw.u64[0] & 0xFFFFF, flags,
@@ -212,7 +275,7 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev,
 				   ((uint64_t)port << 32);
 			*(uint64_t *)gw.u64[1] = (uint64_t)vwqe_hdr;
 			cn10k_process_vwqe(gw.u64[1], port, flags, lookup_mem,
-					   ws->tstamp);
+					   ws->tstamp, ws->lmt_base);
 		}
 	}
 
@@ -290,7 +353,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
 uint16_t __rte_hot cn10k_sso_hws_ca_enq(void *port, struct rte_event ev[],
 					uint16_t nb_events);
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_##name(                           \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name(                     \
diff --git a/drivers/event/cnxk/cn10k_worker_deq.c b/drivers/event/cnxk/cn10k_worker_deq.c
index 36ec454..6083f69 100644
--- a/drivers/event/cnxk/cn10k_worker_deq.c
+++ b/drivers/event/cnxk/cn10k_worker_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_##name(                           \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_burst.c b/drivers/event/cnxk/cn10k_worker_deq_burst.c
index 29ecc55..8539d5d 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_burst.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name(                     \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_ca.c b/drivers/event/cnxk/cn10k_worker_deq_ca.c
index c90f6a9..15c698e 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_ca.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_ca_##name(                        \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_tmo.c b/drivers/event/cnxk/cn10k_worker_deq_tmo.c
index c8524a2..537ae37 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_tmo.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_hot cn10k_sso_hws_deq_tmo_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index a888364..200cd93 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -81,4 +81,8 @@ void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 /* Security context setup */
 void cn10k_eth_sec_ops_override(void);
 
+#define LMT_OFF(lmt_addr, lmt_num, offset)                                     \
+	(void *)((uintptr_t)(lmt_addr) +                                       \
+		 ((uint64_t)(lmt_num) << ROC_LMT_LINE_SIZE_LOG2) + (offset))
+
 #endif /* __CN10K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 69e767a..d6af54b 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_##name(	       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -17,12 +17,13 @@ NIX_RX_FASTPATH_MODES
 
 static inline void
 pick_rx_func(struct rte_eth_dev *eth_dev,
-	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
+	/* [SEC] [VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
 	eth_dev->rx_pkt_burst = rx_burst
+		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
@@ -38,33 +39,33 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_##name,
+	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                            \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
@@ -73,7 +74,7 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 	/* Copy multi seg version with no offload for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
+			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index d27a231..fcc451a 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -65,6 +65,130 @@ nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
 	return (struct rte_mbuf *)(buff - data_off);
 }
 
+static __rte_always_inline void
+nix_sec_flush_meta(uintptr_t laddr, uint16_t lmt_id, uint8_t loff,
+		   uintptr_t aura_handle)
+{
+	uint64_t pa;
+
+	/* laddr is pointing to first pointer */
+	laddr -= 8;
+
+	/* Trigger free either on lmtline full or different aura handle */
+	pa = roc_npa_aura_handle_to_base(aura_handle) + NPA_LF_AURA_BATCH_FREE0;
+
+	/* Update aura handle */
+	*(uint64_t *)laddr = (((uint64_t)(loff & 0x1) << 32) |
+			      roc_npa_aura_handle_to_aura(aura_handle));
+
+	pa |= ((loff >> 1) << 4);
+	roc_lmt_submit_steorl(lmt_id, pa);
+}
+
+static __rte_always_inline struct rte_mbuf *
+nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, const uint64_t sa_base, uintptr_t laddr,
+			uint8_t *loff, struct rte_mbuf *mbuf, uint16_t data_off)
+{
+	const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off);
+	const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
+	struct cn10k_inb_priv_data *inb_priv;
+	struct rte_mbuf *inner;
+	uint32_t sa_idx;
+	void *inb_sa;
+	uint64_t w0;
+
+	if (cq_w1 & BIT(11)) {
+		inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
+					    sizeof(struct rte_mbuf));
+
+		/* Get SPI from CPT_PARSE_S's cookie (already swapped) */
+		w0 = hdr->w0.u64;
+		sa_idx = w0 >> 32;
+
+		inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+
+		/* Update dynamic field with userdata */
+		*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
+
+		/* Store L2 hdr len in pkt_len for now; rlen is added later */
+		inner->pkt_len = (hdr->w2.il3_off -
+				  sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7));
+
+		/* Store meta in lmtline to free.
+		 * Assume all metas are from the same aura.
+		 */
+		*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
+		*loff = *loff + 1;
+
+		return inner;
+	}
+	return mbuf;
+}
+
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline struct rte_mbuf *
+nix_sec_meta_to_mbuf(uint64_t cq_w1, uintptr_t sa_base, uintptr_t laddr,
+		     uint8_t *loff, struct rte_mbuf *mbuf, uint16_t data_off,
+		     uint8x16_t *rx_desc_field1, uint64_t *ol_flags)
+{
+	const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off);
+	const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
+	struct cn10k_inb_priv_data *inb_priv;
+	struct rte_mbuf *inner;
+	uint64_t *sg, res_w1;
+	uint32_t sa_idx;
+	void *inb_sa;
+	uint16_t len;
+	uint64_t w0;
+
+	if (cq_w1 & BIT(11)) {
+		inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
+					    sizeof(struct rte_mbuf));
+		/* Get SPI from CPT_PARSE_S's cookie (already swapped) */
+		w0 = hdr->w0.u64;
+		sa_idx = w0 >> 32;
+
+		inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+
+		/* Update dynamic field with userdata */
+		*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
+
+		/* CPT result (struct cpt_cn10k_res_s) is just
+		 * after the first IOVA in meta
+		 */
+		sg = (uint64_t *)(inner + 1);
+		res_w1 = sg[10];
+
+		/* Clear checksum flags and update security flag */
+		*ol_flags &= ~(PKT_RX_L4_CKSUM_MASK | PKT_RX_IP_CKSUM_MASK);
+		*ol_flags |= (((res_w1 & 0xFF) == CPT_COMP_WARN) ?
+			      PKT_RX_SEC_OFFLOAD :
+			      (PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED));
+		/* Calculate inner packet length */
+		len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
+			sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
+		/* Update pkt_len and data_len */
+		*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 2);
+		*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 4);
+
+		/* Store meta in lmtline to free.
+		 * Assumes all metas are from the same aura.
+		 */
+		*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
+		*loff = *loff + 1;
+
+		/* Return inner mbuf */
+		return inner;
+	}
+
+	/* Return same mbuf as it is not a decrypted pkt */
+	return mbuf;
+}
+#endif
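
When reading the two vsetq_lane_u16() stores above, the assumed u16 lane
layout of rx_desc_field1 (the mbuf's rx_descriptor_fields1 region) is:

	/* 0-1: packet_type   2-3: pkt_len   4: data_len
	 * 5:   vlan_tci      6-7: rss hash
	 * Writing the low pkt_len lane alone suffices since the inner
	 * frame length is always below 64K.
	 */
	*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 2); /* pkt_len  */
	*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 4); /* data_len */
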
+
 static __rte_always_inline uint32_t
 nix_ptype_get(const void *const lookup_mem, const uint64_t in)
 {
@@ -177,8 +301,8 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
-	const uint16_t len = rx->pkt_lenm1 + 1;
 	const uint64_t w1 = *(const uint64_t *)rx;
+	uint16_t len = rx->pkt_lenm1 + 1;
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
@@ -194,8 +318,30 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		ol_flags |= PKT_RX_RSS_HASH;
 	}
 
-	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
-		ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+	/* Process Security packets */
+	if (flag & NIX_RX_OFFLOAD_SECURITY_F) {
+		if (w1 & BIT(11)) {
+			/* CPT result (struct cpt_cn10k_res_s) is just
+			 * after the first IOVA in meta
+			 */
+			const uint64_t *sg = (const uint64_t *)(mbuf + 1);
+			const uint64_t res_w1 = sg[10];
+			const uint16_t uc_cc = res_w1 & 0xFF;
+
+			/* Rlen: add decrypted inner length to L2 hdr len */
+			len = ((res_w1 >> 16) & 0xFFFF) + mbuf->pkt_len;
+			ol_flags |= ((uc_cc == CPT_COMP_WARN) ?
+				     PKT_RX_SEC_OFFLOAD :
+				     (PKT_RX_SEC_OFFLOAD |
+				      PKT_RX_SEC_OFFLOAD_FAILED));
+		} else {
+			if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+				ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+		}
+	} else {
+		if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+			ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+	}
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
@@ -263,13 +409,28 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	const uintptr_t desc = rxq->desc;
 	const uint64_t wdata = rxq->wdata;
 	const uint32_t qmask = rxq->qmask;
+	uint64_t lbase = rxq->lmt_base;
 	uint16_t packets = 0, nb_pkts;
+	uint8_t loff = 0, lnum = 0;
 	uint32_t head = rxq->head;
 	struct nix_cqe_hdr_s *cq;
 	struct rte_mbuf *mbuf;
+	uint64_t aura_handle;
+	uint64_t sa_base;
+	uint16_t lmt_id;
+	uint64_t laddr;
 
 	nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		aura_handle = rxq->aura_handle;
+		sa_base = rxq->sa_base;
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+	}
+
 	while (packets < nb_pkts) {
 		/* Prefetch N desc ahead */
 		rte_prefetch_non_temporal(
@@ -278,6 +439,14 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 
 		mbuf = nix_get_mbuf_from_cqe(cq, data_off);
 
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cq + 1);
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, sa_base, laddr,
+						       &loff, mbuf, data_off);
+		}
+
 		cn10k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
 				      flags);
 		cnxk_nix_mbuf_to_tstamp(mbuf, rxq->tstamp,
@@ -289,6 +458,20 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 		roc_prefetch_store_keep(mbuf);
 		head++;
 		head &= qmask;
+
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Flush when we don't have space for one more meta */
+			if ((15 - loff) < 1) {
+				nix_sec_flush_meta(laddr, lmt_id + lnum, loff,
+						   aura_handle);
+				lnum++;
+				lnum &= BIT_ULL(ROC_LMT_LINES_PER_CORE_LOG2) -
+					1;
+				/* First pointer starts at 8B offset */
+				laddr = (uintptr_t)LMT_OFF(lbase, lnum, 8);
+				loff = 0;
+			}
+		}
 	}
 
 	rxq->head = head;
@@ -297,6 +480,12 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	/* Free all the CQs that we've processed */
 	plt_write64((wdata | nb_pkts), rxq->cq_door);
 
+	/* Free remaining meta buffers if any */
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id + lnum, loff, aura_handle);
+		plt_io_wmb();
+	}
+
 	return nb_pkts;
 }
 
@@ -327,7 +516,8 @@ nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 static __rte_always_inline uint16_t
 cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			   const uint16_t flags, void *lookup_mem,
-			   struct cnxk_timesync_info *tstamp)
+			   struct cnxk_timesync_info *tstamp,
+			   uintptr_t lmt_base)
 {
 	struct cn10k_eth_rxq *rxq = args;
 	const uint64_t mbuf_initializer = (flags & NIX_RX_VWQE_F) ?
@@ -346,9 +536,13 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 	uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
 	uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
 	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint64_t aura_handle, lbase, laddr;
+	uint8_t loff = 0, lnum = 0;
 	uint8x16_t f0, f1, f2, f3;
+	uint16_t lmt_id, d_off;
 	uint16_t packets = 0;
 	uint16_t pkts_left;
+	uintptr_t sa_base;
 	uint32_t head;
 	uintptr_t cq0;
 
@@ -366,6 +560,38 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		RTE_SET_USED(head);
 	}
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		if (flags & NIX_RX_VWQE_F) {
+			uint16_t port;
+
+			mbuf0 = (struct rte_mbuf *)((uintptr_t)mbufs[0] -
+						    sizeof(struct rte_mbuf));
+			/* Pick the first mbuf's aura handle, assuming all
+			 * mbufs in the vector are from the same RQ.
+			 */
+			aura_handle = mbuf0->pool->pool_id;
+			/* Calculate offset from mbuf to actual data area */
+			d_off = ((uintptr_t)mbuf0->buf_addr - (uintptr_t)mbuf0);
+			d_off += (mbuf_initializer & 0xFFFF);
+
+			/* Get SA Base from lookup tbl using port_id */
+			port = mbuf_initializer >> 48;
+			sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+
+			lbase = lmt_base;
+		} else {
+			aura_handle = rxq->aura_handle;
+			d_off = rxq->data_off;
+			sa_base = rxq->sa_base;
+			lbase = rxq->lmt_base;
+		}
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		lnum = 0;
+		laddr = lbase;
+		laddr += 8;
+	}
+
 	while (packets < pkts) {
 		if (!(flags & NIX_RX_VWQE_F)) {
 			/* Exit loop if head is about to wrap and become
@@ -428,6 +654,14 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
 		f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
 
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Prefetch probable CPT parse header area */
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf0, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf1, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf2, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf3, d_off));
+		}
+
 		/* Load CQE word0 and word 1 */
 		const uint64_t cq0_w0 = *CQE_PTR_OFF(cq0, 0, 0, flags);
 		const uint64_t cq0_w1 = *CQE_PTR_OFF(cq0, 0, 8, flags);
@@ -474,6 +708,30 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
 		}
 
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Checksum ol_flags will be cleared if mbuf is meta */
+			mbuf0 = nix_sec_meta_to_mbuf(cq0_w1, sa_base, laddr,
+						     &loff, mbuf0, d_off, &f0,
+						     &ol_flags0);
+			mbuf01 = vsetq_lane_u64((uint64_t)mbuf0, mbuf01, 0);
+
+			mbuf1 = nix_sec_meta_to_mbuf(cq1_w1, sa_base, laddr,
+						     &loff, mbuf1, d_off, &f1,
+						     &ol_flags1);
+			mbuf01 = vsetq_lane_u64((uint64_t)mbuf1, mbuf01, 1);
+
+			mbuf2 = nix_sec_meta_to_mbuf(cq2_w1, sa_base, laddr,
+						     &loff, mbuf2, d_off, &f2,
+						     &ol_flags2);
+			mbuf23 = vsetq_lane_u64((uint64_t)mbuf2, mbuf23, 0);
+
+			mbuf3 = nix_sec_meta_to_mbuf(cq3_w1, sa_base, laddr,
+						     &loff, mbuf3, d_off, &f3,
+						     &ol_flags3);
+			mbuf23 = vsetq_lane_u64((uint64_t)mbuf3, mbuf23, 1);
+		}
+
 		if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 			uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
 			uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
@@ -659,6 +917,26 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			head += NIX_DESCS_PER_LOOP;
 			head &= qmask;
 		}
+
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Flush when we don't have space for 4 meta */
+			if ((15 - loff) < 4) {
+				nix_sec_flush_meta(laddr, lmt_id + lnum, loff,
+						   aura_handle);
+				lnum++;
+				lnum &= BIT_ULL(ROC_LMT_LINES_PER_CORE_LOG2) -
+					1;
+				/* First pointer starts at 8B offset */
+				laddr = (uintptr_t)LMT_OFF(lbase, lnum, 8);
+				loff = 0;
+			}
+		}
+	}
+
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id + lnum, loff, aura_handle);
+		if (flags & NIX_RX_VWQE_F)
+			plt_io_wmb();
 	}
 
 	if (flags & NIX_RX_VWQE_F)
@@ -681,16 +959,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 #else
 
 static inline uint16_t
-cn10k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
-			   uint16_t pkts, const uint16_t flags,
-			   void *lookup_mem, void *tstamp)
+cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
+			   const uint16_t flags, void *lookup_mem,
+			   struct cnxk_timesync_info *tstamp,
+			   uintptr_t lmt_base)
 {
-	RTE_SET_USED(lookup_mem);
-	RTE_SET_USED(rx_queue);
-	RTE_SET_USED(rx_pkts);
+	RTE_SET_USED(args);
+	RTE_SET_USED(mbufs);
 	RTE_SET_USED(pkts);
 	RTE_SET_USED(flags);
+	RTE_SET_USED(lookup_mem);
 	RTE_SET_USED(tstamp);
+	RTE_SET_USED(lmt_base);
 
 	return 0;
 }
@@ -704,98 +984,268 @@ cn10k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 #define MARK_F	  NIX_RX_OFFLOAD_MARK_UPDATE_F
 #define TS_F      NIX_RX_OFFLOAD_TSTAMP_F
 #define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define R_SEC_F   NIX_RX_OFFLOAD_SECURITY_F
 
-/* [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
+/* [R_SEC_F] [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
 #define NIX_RX_FASTPATH_MODES						       \
-R(no_offload,			0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE)	       \
-R(rss,				0, 0, 0, 0, 0, 1, RSS_F)		       \
-R(ptype,			0, 0, 0, 0, 1, 0, PTYPE_F)		       \
-R(ptype_rss,			0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F)	       \
-R(cksum,			0, 0, 0, 1, 0, 0, CKSUM_F)		       \
-R(cksum_rss,			0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F)	       \
-R(cksum_ptype,			0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F)	       \
-R(cksum_ptype_rss,		0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)   \
-R(mark,				0, 0, 1, 0, 0, 0, MARK_F)		       \
-R(mark_rss,			0, 0, 1, 0, 0, 1, MARK_F | RSS_F)	       \
-R(mark_ptype,			0, 0, 1, 0, 1, 0, MARK_F | PTYPE_F)	       \
-R(mark_ptype_rss,		0, 0, 1, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)    \
-R(mark_cksum,			0, 0, 1, 1, 0, 0, MARK_F | CKSUM_F)	       \
-R(mark_cksum_rss,		0, 0, 1, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)    \
-R(mark_cksum_ptype,		0, 0, 1, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)  \
-R(mark_cksum_ptype_rss,		0, 0, 1, 1, 1, 1,			       \
-			MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts,				0, 1, 0, 0, 0, 0, TS_F)			       \
-R(ts_rss,			0, 1, 0, 0, 0, 1, TS_F | RSS_F)		       \
-R(ts_ptype,			0, 1, 0, 0, 1, 0, TS_F | PTYPE_F)	       \
-R(ts_ptype_rss,			0, 1, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)      \
-R(ts_cksum,			0, 1, 0, 1, 0, 0, TS_F | CKSUM_F)	       \
-R(ts_cksum_rss,			0, 1, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)      \
-R(ts_cksum_ptype,		0, 1, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)    \
-R(ts_cksum_ptype_rss,		0, 1, 0, 1, 1, 1,			       \
-			TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts_mark,			0, 1, 1, 0, 0, 0, TS_F | MARK_F)	       \
-R(ts_mark_rss,			0, 1, 1, 0, 0, 1, TS_F | MARK_F | RSS_F)       \
-R(ts_mark_ptype,		0, 1, 1, 0, 1, 0, TS_F | MARK_F | PTYPE_F)     \
-R(ts_mark_ptype_rss,		0, 1, 1, 0, 1, 1,			       \
-			TS_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(ts_mark_cksum,		0, 1, 1, 1, 0, 0, TS_F | MARK_F | CKSUM_F)     \
-R(ts_mark_cksum_rss,		0, 1, 1, 1, 0, 1,			       \
-			TS_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(ts_mark_cksum_ptype,		0, 1, 1, 1, 1, 0,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan,				1, 0, 0, 0, 0, 0, RX_VLAN_F)		       \
-R(vlan_rss,			1, 0, 0, 0, 0, 1, RX_VLAN_F | RSS_F)	       \
-R(vlan_ptype,			1, 0, 0, 0, 1, 0, RX_VLAN_F | PTYPE_F)	       \
-R(vlan_ptype_rss,		1, 0, 0, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum,			1, 0, 0, 1, 0, 0, RX_VLAN_F | CKSUM_F)	       \
-R(vlan_cksum_rss,		1, 0, 0, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype,		1, 0, 0, 1, 1, 0,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F)			       \
-R(vlan_cksum_ptype_rss,		1, 0, 0, 1, 1, 1,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark,			1, 0, 1, 0, 0, 0, RX_VLAN_F | MARK_F)	       \
-R(vlan_mark_rss,		1, 0, 1, 0, 0, 1, RX_VLAN_F | MARK_F | RSS_F)  \
-R(vlan_mark_ptype,		1, 0, 1, 0, 1, 0, RX_VLAN_F | MARK_F | PTYPE_F)\
-R(vlan_mark_ptype_rss,		1, 0, 1, 0, 1, 1,			       \
-			RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark_cksum,		1, 0, 1, 1, 0, 0, RX_VLAN_F | MARK_F | CKSUM_F)\
-R(vlan_mark_cksum_rss,		1, 0, 1, 1, 0, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(vlan_mark_cksum_ptype,	1, 0, 1, 1, 1, 0,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts,			1, 1, 0, 0, 0, 0, RX_VLAN_F | TS_F)	       \
-R(vlan_ts_rss,			1, 1, 0, 0, 0, 1, RX_VLAN_F | TS_F | RSS_F)    \
-R(vlan_ts_ptype,		1, 1, 0, 0, 1, 0, RX_VLAN_F | TS_F | PTYPE_F)  \
-R(vlan_ts_ptype_rss,		1, 1, 0, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
-R(vlan_ts_cksum,		1, 1, 0, 1, 0, 0, RX_VLAN_F | TS_F | CKSUM_F)  \
-R(vlan_ts_cksum_rss,		1, 1, 0, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
-R(vlan_ts_cksum_ptype,		1, 1, 0, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_ts_cksum_ptype_rss,	1, 1, 0, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark,			1, 1, 1, 0, 0, 0, RX_VLAN_F | TS_F | MARK_F)   \
-R(vlan_ts_mark_rss,		1, 1, 1, 0, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
-R(vlan_ts_mark_ptype,		1, 1, 1, 0, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
-R(vlan_ts_mark_ptype_rss,	1, 1, 1, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark_cksum,		1, 1, 1, 1, 0, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
-R(vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
-R(vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)	       \
-R(vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
+R(no_offload,			0, 0, 0, 0, 0, 0, 0,			       \
+		NIX_RX_OFFLOAD_NONE)					       \
+R(rss,				0, 0, 0, 0, 0, 0, 1,			       \
+		RSS_F)							       \
+R(ptype,			0, 0, 0, 0, 0, 1, 0,			       \
+		PTYPE_F)						       \
+R(ptype_rss,			0, 0, 0, 0, 0, 1, 1,			       \
+		PTYPE_F | RSS_F)					       \
+R(cksum,			0, 0, 0, 0, 1, 0, 0,			       \
+		CKSUM_F)						       \
+R(cksum_rss,			0, 0, 0, 0, 1, 0, 1,			       \
+		CKSUM_F | RSS_F)					       \
+R(cksum_ptype,			0, 0, 0, 0, 1, 1, 0,			       \
+		CKSUM_F | PTYPE_F)					       \
+R(cksum_ptype_rss,		0, 0, 0, 0, 1, 1, 1,			       \
+		CKSUM_F | PTYPE_F | RSS_F)				       \
+R(mark,				0, 0, 0, 1, 0, 0, 0,			       \
+		MARK_F)							       \
+R(mark_rss,			0, 0, 0, 1, 0, 0, 1,			       \
+		MARK_F | RSS_F)						       \
+R(mark_ptype,			0, 0, 0, 1, 0, 1, 0,			       \
+		MARK_F | PTYPE_F)					       \
+R(mark_ptype_rss,		0, 0, 0, 1, 0, 1, 1,			       \
+		MARK_F | PTYPE_F | RSS_F)				       \
+R(mark_cksum,			0, 0, 0, 1, 1, 0, 0,			       \
+		MARK_F | CKSUM_F)					       \
+R(mark_cksum_rss,		0, 0, 0, 1, 1, 0, 1,			       \
+		MARK_F | CKSUM_F | RSS_F)				       \
+R(mark_cksum_ptype,		0, 0, 0, 1, 1, 1, 0,			       \
+		MARK_F | CKSUM_F | PTYPE_F)				       \
+R(mark_cksum_ptype_rss,		0, 0, 0, 1, 1, 1, 1,			       \
+		MARK_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts,				0, 0, 1, 0, 0, 0, 0,			       \
+		TS_F)							       \
+R(ts_rss,			0, 0, 1, 0, 0, 0, 1,			       \
+		TS_F | RSS_F)						       \
+R(ts_ptype,			0, 0, 1, 0, 0, 1, 0,			       \
+		TS_F | PTYPE_F)						       \
+R(ts_ptype_rss,			0, 0, 1, 0, 0, 1, 1,			       \
+		TS_F | PTYPE_F | RSS_F)					       \
+R(ts_cksum,			0, 0, 1, 0, 1, 0, 0,			       \
+		TS_F | CKSUM_F)						       \
+R(ts_cksum_rss,			0, 0, 1, 0, 1, 0, 1,			       \
+		TS_F | CKSUM_F | RSS_F)					       \
+R(ts_cksum_ptype,		0, 0, 1, 0, 1, 1, 0,			       \
+		TS_F | CKSUM_F | PTYPE_F)				       \
+R(ts_cksum_ptype_rss,		0, 0, 1, 0, 1, 1, 1,			       \
+		TS_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts_mark,			0, 0, 1, 1, 0, 0, 0,			       \
+		TS_F | MARK_F)						       \
+R(ts_mark_rss,			0, 0, 1, 1, 0, 0, 1,			       \
+		TS_F | MARK_F | RSS_F)					       \
+R(ts_mark_ptype,		0, 0, 1, 1, 0, 1, 0,			       \
+		TS_F | MARK_F | PTYPE_F)				       \
+R(ts_mark_ptype_rss,		0, 0, 1, 1, 0, 1, 1,			       \
+		TS_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(ts_mark_cksum,		0, 0, 1, 1, 1, 0, 0,			       \
+		TS_F | MARK_F | CKSUM_F)				       \
+R(ts_mark_cksum_rss,		0, 0, 1, 1, 1, 0, 1,			       \
+		TS_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(ts_mark_cksum_ptype,		0, 0, 1, 1, 1, 1, 0,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(ts_mark_cksum_ptype_rss,	0, 0, 1, 1, 1, 1, 1,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan,				0, 1, 0, 0, 0, 0, 0,			       \
+		RX_VLAN_F)						       \
+R(vlan_rss,			0, 1, 0, 0, 0, 0, 1,			       \
+		RX_VLAN_F | RSS_F)					       \
+R(vlan_ptype,			0, 1, 0, 0, 0, 1, 0,			       \
+		RX_VLAN_F | PTYPE_F)					       \
+R(vlan_ptype_rss,		0, 1, 0, 0, 0, 1, 1,			       \
+		RX_VLAN_F | PTYPE_F | RSS_F)				       \
+R(vlan_cksum,			0, 1, 0, 0, 1, 0, 0,			       \
+		RX_VLAN_F | CKSUM_F)					       \
+R(vlan_cksum_rss,		0, 1, 0, 0, 1, 0, 1,			       \
+		RX_VLAN_F | CKSUM_F | RSS_F)				       \
+R(vlan_cksum_ptype,		0, 1, 0, 0, 1, 1, 0,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F)				       \
+R(vlan_cksum_ptype_rss,		0, 1, 0, 0, 1, 1, 1,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark,			0, 1, 0, 1, 0, 0, 0,			       \
+		RX_VLAN_F | MARK_F)					       \
+R(vlan_mark_rss,		0, 1, 0, 1, 0, 0, 1,			       \
+		RX_VLAN_F | MARK_F | RSS_F)				       \
+R(vlan_mark_ptype,		0, 1, 0, 1, 0, 1, 0,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F)				       \
+R(vlan_mark_ptype_rss,		0, 1, 0, 1, 0, 1, 1,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark_cksum,		0, 1, 0, 1, 1, 0, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F)				       \
+R(vlan_mark_cksum_rss,		0, 1, 0, 1, 1, 0, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(vlan_mark_cksum_ptype,	0, 1, 0, 1, 1, 1, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_mark_cksum_ptype_rss,	0, 1, 0, 1, 1, 1, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts,			0, 1, 1, 0, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F)					       \
+R(vlan_ts_rss,			0, 1, 1, 0, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | RSS_F)				       \
+R(vlan_ts_ptype,		0, 1, 1, 0, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | PTYPE_F)				       \
+R(vlan_ts_ptype_rss,		0, 1, 1, 0, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | PTYPE_F | RSS_F)			       \
+R(vlan_ts_cksum,		0, 1, 1, 0, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F)				       \
+R(vlan_ts_cksum_rss,		0, 1, 1, 0, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | RSS_F)			       \
+R(vlan_ts_cksum_ptype,		0, 1, 1, 0, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_ts_cksum_ptype_rss,	0, 1, 1, 0, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark,			0, 1, 1, 1, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F)				       \
+R(vlan_ts_mark_rss,		0, 1, 1, 1, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | RSS_F)			       \
+R(vlan_ts_mark_ptype,		0, 1, 1, 1, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F)			       \
+R(vlan_ts_mark_ptype_rss,	0, 1, 1, 1, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark_cksum,		0, 1, 1, 1, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F)			       \
+R(vlan_ts_mark_cksum_rss,	0, 1, 1, 1, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(vlan_ts_mark_cksum_ptype,	0, 1, 1, 1, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(vlan_ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec,				1, 0, 0, 0, 0, 0, 0,			       \
+		R_SEC_F)						       \
+R(sec_rss,			1, 0, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RSS_F)					       \
+R(sec_ptype,			1, 0, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | PTYPE_F)					       \
+R(sec_ptype_rss,		1, 0, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | PTYPE_F | RSS_F)				       \
+R(sec_cksum,			1, 0, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | CKSUM_F)					       \
+R(sec_cksum_rss,		1, 0, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | CKSUM_F | RSS_F)				       \
+R(sec_cksum_ptype,		1, 0, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F)				       \
+R(sec_cksum_ptype_rss,		1, 0, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(sec_mark,			1, 0, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | MARK_F)					       \
+R(sec_mark_rss,			1, 0, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | MARK_F | RSS_F)				       \
+R(sec_mark_ptype,		1, 0, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | MARK_F | PTYPE_F)				       \
+R(sec_mark_ptype_rss,		1, 0, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(sec_mark_cksum,		1, 0, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F)				       \
+R(sec_mark_cksum_rss,		1, 0, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(sec_mark_cksum_ptype,		1, 0, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(sec_mark_cksum_ptype_rss,	1, 0, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts,			1, 0, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | TS_F)						       \
+R(sec_ts_rss,			1, 0, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | TS_F | RSS_F)					       \
+R(sec_ts_ptype,			1, 0, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | TS_F | PTYPE_F)				       \
+R(sec_ts_ptype_rss,		1, 0, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | TS_F | PTYPE_F | RSS_F)			       \
+R(sec_ts_cksum,			1, 0, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F)				       \
+R(sec_ts_cksum_rss,		1, 0, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | RSS_F)			       \
+R(sec_ts_cksum_ptype,		1, 0, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(sec_ts_cksum_ptype_rss,	1, 0, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark,			1, 0, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F)				       \
+R(sec_ts_mark_rss,		1, 0, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | RSS_F)			       \
+R(sec_ts_mark_ptype,		1, 0, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F)			       \
+R(sec_ts_mark_ptype_rss,	1, 0, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark_cksum,		1, 0, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F)			       \
+R(sec_ts_mark_cksum_rss,	1, 0, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_ts_mark_cksum_ptype,	1, 0, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(sec_ts_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan,			1, 1, 0, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F)					       \
+R(sec_vlan_rss,			1, 1, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | RSS_F)				       \
+R(sec_vlan_ptype,		1, 1, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F)				       \
+R(sec_vlan_ptype_rss,		1, 1, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F)			       \
+R(sec_vlan_cksum,		1, 1, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F)				       \
+R(sec_vlan_cksum_rss,		1, 1, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F)			       \
+R(sec_vlan_cksum_ptype,		1, 1, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_cksum_ptype_rss,	1, 1, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_mark,		1, 1, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F)				       \
+R(sec_vlan_mark_rss,		1, 1, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | RSS_F)			       \
+R(sec_vlan_mark_ptype,		1, 1, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F)			       \
+R(sec_vlan_mark_ptype_rss,	1, 1, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_mark_cksum,		1, 1, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F)			       \
+R(sec_vlan_mark_cksum_rss,	1, 1, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_mark_cksum_ptype,	1, 1, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)	       \
+R(sec_vlan_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)      \
+R(sec_vlan_ts,			1, 1, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F)				       \
+R(sec_vlan_ts_rss,		1, 1, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | RSS_F)			       \
+R(sec_vlan_ts_ptype,		1, 1, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F)			       \
+R(sec_vlan_ts_ptype_rss,	1, 1, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_ts_cksum,		1, 1, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F)			       \
+R(sec_vlan_ts_cksum_rss,	1, 1, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_ts_cksum_ptype,	1, 1, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_ts_cksum_ptype_rss,	1, 1, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark,		1, 1, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F)			       \
+R(sec_vlan_ts_mark_rss,		1, 1, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
+R(sec_vlan_ts_mark_ptype,	1, 1, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
+R(sec_vlan_ts_mark_ptype_rss,	1, 1, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum,	1, 1, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
+R(sec_vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)       \
+R(sec_vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1, 1,		       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_##name(          \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn10k_rx_mseg.c b/drivers/net/cnxk/cn10k_rx_mseg.c
index 3340771..e7c2321 100644
--- a/drivers/net/cnxk/cn10k_rx_mseg.c
+++ b/drivers/net/cnxk/cn10k_rx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_mseg_##name(     \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_rx_vec.c b/drivers/net/cnxk/cn10k_rx_vec.c
index 166735a..0ccc4df 100644
--- a/drivers/net/cnxk/cn10k_rx_vec.c
+++ b/drivers/net/cnxk/cn10k_rx_vec.c
@@ -5,14 +5,14 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_recv_pkts_vec_##name(void *rx_queue,                 \
 					       struct rte_mbuf **rx_pkts,      \
 					       uint16_t pkts)                  \
 	{                                                                      \
 		return cn10k_nix_recv_pkts_vector(rx_queue, rx_pkts, pkts,     \
-						  (flags), NULL, NULL);        \
+						  (flags), NULL, NULL, 0);     \
 	}
 
 NIX_RX_FASTPATH_MODES
diff --git a/drivers/net/cnxk/cn10k_rx_vec_mseg.c b/drivers/net/cnxk/cn10k_rx_vec_mseg.c
index 1f44ddd..38e0ec3 100644
--- a/drivers/net/cnxk/cn10k_rx_vec_mseg.c
+++ b/drivers/net/cnxk/cn10k_rx_vec_mseg.c
@@ -5,13 +5,13 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_vec_mseg_##name( \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
 		return cn10k_nix_recv_pkts_vector(                             \
 			rx_queue, rx_pkts, pkts, (flags) | NIX_RX_MULTI_SEG_F, \
-			NULL, NULL);                                           \
+			NULL, NULL, 0);                                        \
 	}
 
 NIX_RX_FASTPATH_MODES
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 8577a7b..c81a612 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -51,9 +51,6 @@
 
 #define NIX_NB_SEGS_TO_SEGDW(x) ((NIX_SEGDW_MAGIC >> ((x) << 2)) & 0xF)
 
-#define LMT_OFF(lmt_addr, lmt_num, offset)                                     \
-	(void *)((lmt_addr) + ((lmt_num) << ROC_LMT_LINE_SIZE_LOG2) + (offset))
-
 /* Function to determine no of tx subdesc required in case ext
  * sub desc is enabled.
  */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 21/28] net/cnxk: support Tx security offload on cn10k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (19 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 20/28] net/cnxk: support Rx security offload on cn10k Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 22/28] net/cnxk: support IPsec anti replay in cn9k Nithin Dabilpuram
                     ` (7 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to create and submit CPT instructions on the Tx
path of CN10K.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn10k_eventdev.c          |  15 +-
 drivers/event/cnxk/cn10k_worker.h            |  74 +-
 drivers/event/cnxk/cn10k_worker_tx_enq.c     |   2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq_seg.c |   2 +-
 drivers/net/cnxk/cn10k_tx.c                  |  31 +-
 drivers/net/cnxk/cn10k_tx.h                  | 981 +++++++++++++++++++++++----
 drivers/net/cnxk/cn10k_tx_mseg.c             |   2 +-
 drivers/net/cnxk/cn10k_tx_vec.c              |   2 +-
 drivers/net/cnxk/cn10k_tx_vec_mseg.c         |   2 +-
 9 files changed, 929 insertions(+), 182 deletions(-)
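
In brief, the per-packet Tx flow after this patch is roughly the following
(helper names as introduced below; condensed, not authoritative):

	cn10k_nix_xmit_prepare(m, cmd, flags, lso_tun_fmt, &sec);
	if ((flags & NIX_TX_OFFLOAD_SECURITY_F) && sec)
		/* CPT instruction lands on a CPT lmtline; the NIX send
		 * descriptor is parked in the NIXTX area after the packet
		 * data, so laddr is redirected there.
		 */
		cn10k_nix_prep_sec(m, cmd, &laddr, c_lbase, &c_lnum,
				   &c_loff, &c_shft, sa_base, flags);
	/* NIX descriptor goes either to the lmtline or to NIXTX */
	cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
	/* Plain packets ring the NIX doorbell; secured ones are steorl'd
	 * to CPT, which submits the parked NIX descriptor after encrypt.
	 */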

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9c0d84b..dec1653 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -17,7 +17,8 @@
 
 #define CN10K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops)                           \
 	(enq_op =                                                              \
-		 enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
+		 enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]     \
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]          \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]    \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]    \
@@ -380,17 +381,17 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	/* Tx modes */
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
+		sso_hws_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                            \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
+		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b79bd90..1255662 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -423,7 +423,11 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,
 		    ((queue[0] ^ queue[1]) & (queue[2] ^ queue[3]))) {
 
 			for (j = 0; j < 4; j++) {
+				uint8_t lnum = 0, loff = 0, shft = 0;
 				struct rte_mbuf *m = mbufs[i + j];
+				uintptr_t laddr;
+				uint16_t segdw;
+				bool sec;
 
 				txq = (struct cn10k_eth_txq *)
 					txq_data[port[j]][queue[j]];
@@ -434,19 +438,35 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,
 				if (flags & NIX_TX_OFFLOAD_TSO_F)
 					cn10k_nix_xmit_prepare_tso(m, flags);
 
-				cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags,
-						       txq->lso_tun_fmt);
+				cn10k_nix_xmit_prepare(m, cmd, flags,
+						       txq->lso_tun_fmt, &sec);
+
+				laddr = lmt_addr;
+				/* If the pkt is bound for CPT, prepare the
+				 * CPT instruction and get the nixtx addr,
+				 * using the same lmtline.
+				 */
+				if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+					cn10k_nix_prep_sec(m, cmd, &laddr,
+							   lmt_addr, &lnum,
+							   &loff, &shft,
+							   txq->sa_base, flags);
+
+				/* Move NIX desc to LMT/NIXTX area */
+				cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+
 				if (flags & NIX_TX_MULTI_SEG_F) {
-					const uint16_t segdw =
-						cn10k_nix_prepare_mseg(
-							m, (uint64_t *)lmt_addr,
-							flags);
-					pa = txq->io_addr | ((segdw - 1) << 4);
+					segdw = cn10k_nix_prepare_mseg(m,
+						(uint64_t *)laddr, flags);
 				} else {
-					pa = txq->io_addr |
-					     (cn10k_nix_tx_ext_subs(flags) + 1)
-						     << 4;
+					segdw = cn10k_nix_tx_ext_subs(flags) +
+						2;
 				}
+
+				if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+					pa = txq->cpt_io_addr | 3 << 4;
+				else
+					pa = txq->io_addr | ((segdw - 1) << 4);
+
 				if (!sched_type)
 					roc_sso_hws_head_wait(base +
 							      SSOW_LF_GWS_TAG);
@@ -469,15 +489,19 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 		       const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
 		       const uint32_t flags)
 {
+	uint8_t lnum = 0, loff = 0, shft = 0;
 	struct cn10k_eth_txq *txq;
+	uint16_t ref_cnt, segdw;
 	struct rte_mbuf *m;
 	uintptr_t lmt_addr;
-	uint16_t ref_cnt;
+	uintptr_t c_laddr;
 	uint16_t lmt_id;
 	uintptr_t pa;
+	bool sec;
 
 	lmt_addr = ws->lmt_base;
 	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	c_laddr = lmt_addr;
 
 	if (ev->event_type & RTE_EVENT_TYPE_VECTOR) {
 		struct rte_mbuf **mbufs = ev->vec->mbufs;
@@ -508,14 +532,28 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 	if (flags & NIX_TX_OFFLOAD_TSO_F)
 		cn10k_nix_xmit_prepare_tso(m, flags);
 
-	cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags, txq->lso_tun_fmt);
+	cn10k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt, &sec);
+
+	/* If the pkt is bound for CPT, prepare the CPT instruction
+	 * and get the nixtx addr, using the same lmtline.
+	 */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+		cn10k_nix_prep_sec(m, cmd, &lmt_addr, c_laddr, &lnum, &loff,
+				   &shft, txq->sa_base, flags);
+
+	/* Move NIX desc to LMT/NIXTX area */
+	cn10k_nix_xmit_mv_lmt_base(lmt_addr, cmd, flags);
 	if (flags & NIX_TX_MULTI_SEG_F) {
-		const uint16_t segdw =
-			cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+		segdw = cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+	} else {
+		segdw = cn10k_nix_tx_ext_subs(flags) + 2;
+	}
+
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+		pa = txq->cpt_io_addr | 3 << 4;
+	else
 		pa = txq->io_addr | ((segdw - 1) << 4);
-	} else {
-		pa = txq->io_addr | (cn10k_nix_tx_ext_subs(flags) + 1) << 4;
-	}
+
 	if (!ev->sched_type)
 		roc_sso_hws_head_wait(ws->tx_base + SSOW_LF_GWS_TAG);
 
@@ -531,7 +569,7 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 	return 1;
 }
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
 		void *port, struct rte_event ev[], uint16_t nb_events);        \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq.c b/drivers/event/cnxk/cn10k_worker_tx_enq.c
index f9968ac..f14c7fc 100644
--- a/drivers/event/cnxk/cn10k_worker_tx_enq.c
+++ b/drivers/event/cnxk/cn10k_worker_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn10k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
index a24fc42..2ea61e5 100644
--- a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn10k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0e1276c..eb962ef 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(	       \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -24,12 +24,13 @@ NIX_TX_FASTPATH_MODES
 
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
-	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2])
+	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+	/* [SEC] [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
 	eth_dev->tx_pkt_burst = tx_burst
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
@@ -43,33 +44,33 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,
+	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c81a612..52bb71d 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -6,6 +6,8 @@
 
 #include <rte_vect.h>
 
+#include <rte_eventdev.h>
+
 #define NIX_TX_OFFLOAD_NONE	      (0)
 #define NIX_TX_OFFLOAD_L3_L4_CSUM_F   BIT(0)
 #define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
@@ -57,12 +59,22 @@
 static __rte_always_inline int
 cn10k_nix_tx_ext_subs(const uint16_t flags)
 {
-	return (flags & NIX_TX_OFFLOAD_TSTAMP_F)
-		       ? 2
-		       : ((flags &
-			   (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F))
-				  ? 1
-				  : 0);
+	return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ?
+			     2 :
+			     ((flags &
+			 (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?
+				      1 :
+				      0);
+}
+
+static __rte_always_inline uint8_t
+cn10k_nix_tx_dwords(const uint16_t flags, const uint8_t segdw)
+{
+	if (!(flags & NIX_TX_MULTI_SEG_F))
+		return cn10k_nix_tx_ext_subs(flags) + 2;
+
+	/* Everything is already accounted for in segdw */
+	return segdw;
 }
 
 static __rte_always_inline uint8_t
@@ -144,6 +156,34 @@ cn10k_nix_tx_steor_vec_data(const uint16_t flags)
 	return data;
 }
 
+static __rte_always_inline uint64_t
+cn10k_cpt_tx_steor_data(void)
+{
+	/* We have two CPT instructions per LMTLine */
+	const uint64_t dw_m1 = ROC_CN10K_TWO_CPT_INST_DW_M1;
+	uint64_t data;
+
+	/* The first line's 3-bit slot is later moved to the addr area */
+	data = dw_m1 << 16;
+	data |= dw_m1 << 19;
+	data |= dw_m1 << 22;
+	data |= dw_m1 << 25;
+	data |= dw_m1 << 28;
+	data |= dw_m1 << 31;
+	data |= dw_m1 << 34;
+	data |= dw_m1 << 37;
+	data |= dw_m1 << 40;
+	data |= dw_m1 << 43;
+	data |= dw_m1 << 46;
+	data |= dw_m1 << 49;
+	data |= dw_m1 << 52;
+	data |= dw_m1 << 55;
+	data |= dw_m1 << 58;
+	data |= dw_m1 << 61;
+
+	return data;
+}
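
The unrolled chain above packs the same 3-bit "DWs minus one" value into
one slot per LMT line, sixteen lines starting at bit 16. An equivalent
loop form, for readability (illustration only):

	static inline uint64_t
	cpt_tx_steor_data_loop(void)
	{
		const uint64_t dw_m1 = ROC_CN10K_TWO_CPT_INST_DW_M1;
		uint64_t data = 0;
		int i;

		/* Slots at bits 16, 19, ..., 61: one per LMT line */
		for (i = 0; i < 16; i++)
			data |= dw_m1 << (16 + i * 3);
		return data;
	}
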
+
 static __rte_always_inline void
 cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,
 		      const uint16_t flags)
@@ -165,6 +205,236 @@ cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,
 }
 
 static __rte_always_inline void
+cn10k_nix_sec_steorl(uintptr_t io_addr, uint32_t lmt_id, uint8_t lnum,
+		     uint8_t loff, uint8_t shft)
+{
+	uint64_t data;
+	uintptr_t pa;
+
+	/* Check if there is any CPT instruction to submit */
+	if (!lnum && !loff)
+		return;
+
+	data = cn10k_cpt_tx_steor_data();
+	/* Update lmtline usage for a partially filled last line */
+	if (loff) {
+		data &= ~(0x7ULL << shft);
+		/* Mark it half full, i.e., 64B */
+		data |= (0x3UL << shft);
+	}
+
+	pa = io_addr | ((data >> 16) & 0x7) << 4;
+	data &= ~(0x7ULL << 16);
+	/* Update lines - 1 that contain valid data */
+	data |= ((uint64_t)(lnum + loff - 1)) << 12;
+	data |= lmt_id;
+
+	/* STEOR */
+	roc_lmt_submit_steorl(data, pa);
+}
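
A worked example with hypothetical counts: five CPT instructions queued
gives lnum = 2 full lines, loff = 1 half line, shft = 16 + 2 * 3 = 22,
and the helper above then computes:

	data &= ~(0x7ULL << 22);	/* clear slot of the partial line */
	data |= 0x3UL << 22;		/* mark it half full, i.e. 64B    */
	pa = io_addr | ((data >> 16) & 0x7) << 4; /* line 0's DW count    */
	data &= ~(0x7ULL << 16);
	data |= (uint64_t)(2 + 1 - 1) << 12;	/* lines - 1 = 2          */
	data |= lmt_id;
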
+
+#if defined(RTE_ARCH_ARM64)
+static __rte_always_inline void
+cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
+		       uintptr_t *nixtx_addr, uintptr_t lbase, uint8_t *lnum,
+		       uint8_t *loff, uint8_t *shft, uint64_t sa_base,
+		       const uint16_t flags)
+{
+	struct cn10k_sec_sess_priv sess_priv;
+	uint32_t pkt_len, dlen_adj, rlen;
+	uint64x2_t cmd01, cmd23;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t *laddr;
+	uint8_t l2_len;
+	uint16_t tag;
+	uint64_t sa;
+
+	sess_priv.u64 = *rte_security_dynfield(m);
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = vgetq_lane_u8(*cmd0, 8);
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = vgetq_lane_u64(*cmd1, 1);
+	pkt_len = vgetq_lane_u16(*cmd0, 0);
+
+	/* Calculate dlen adj */
+	dlen_adj = pkt_len - l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Update send descriptors. Security is single segment only */
+	*cmd0 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd0, 0);
+	*cmd1 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd1, 0);
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	/* Return nixtx addr */
+	*nixtx_addr = (nixtx + 16);
+
+	/* DLEN passed is excluding L2HDR */
+	pkt_len -= l2_len;
+	tag = sa_base & 0xFFFFUL;
+	sa_base &= ~0xFFFFUL;
+	sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
+	ucode_cmd[0] =
+		(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1 */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len;
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Move to our line */
+	laddr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(laddr, cmd01);
+	vst1q_u64((laddr + 2), cmd23);
+
+	*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;
+	*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);
+
+	/* Move to next line for every other CPT inst */
+	*loff = !(*loff);
+	*lnum = *lnum + (*loff ? 0 : 1);
+	*shft = *shft + (*loff ? 0 : 3);
+}
+
+static __rte_always_inline void
+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+		   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,
+		   uint64_t sa_base, const uint16_t flags)
+{
+	struct cn10k_sec_sess_priv sess_priv;
+	uint32_t pkt_len, dlen_adj, rlen;
+	struct nix_send_hdr_s *send_hdr;
+	uint64x2_t cmd01, cmd23;
+	union nix_send_sg_s *sg;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t *laddr;
+	uint8_t l2_len;
+	uint16_t tag;
+	uint64_t sa;
+
+	/* Retrieve session private data from the security dynfield */
+	sess_priv.u64 = *rte_security_dynfield(m);
+	send_hdr = (struct nix_send_hdr_s *)cmd;
+	if (flags & NIX_TX_NEED_EXT_HDR)
+		sg = (union nix_send_sg_s *)&cmd[4];
+	else
+		sg = (union nix_send_sg_s *)&cmd[2];
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = cmd[1] & 0xFF;
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = *(uint64_t *)(sg + 1);
+	pkt_len = send_hdr->w0.total;
+
+	/* Calculate dlen adj */
+	dlen_adj = pkt_len - l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Update send descriptors. Security is single segment only */
+	send_hdr->w0.total = pkt_len + dlen_adj;
+	sg->seg1_size = pkt_len + dlen_adj;
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	/* Return nixtx addr */
+	*nixtx_addr = (nixtx + 16);
+
+	/* DLEN passed is excluding L2HDR */
+	pkt_len -= l2_len;
+	tag = sa_base & 0xFFFFUL;
+	sa_base &= ~0xFFFFUL;
+	sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
+	ucode_cmd[0] =
+		(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1. Assume no multi-seg support */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len;
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Move to our line */
+	laddr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(laddr, cmd01);
+	vst1q_u64((laddr + 2), cmd23);
+
+	*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;
+	*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);
+
+	/* Move to next line for every other CPT inst */
+	*loff = !(*loff);
+	*lnum = *lnum + (*loff ? 0 : 1);
+	*shft = *shft + (*loff ? 0 : 3);
+}
+
+#else
+
+static __rte_always_inline void
+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+		   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,
+		   uint64_t sa_base, const uint16_t flags)
+{
+	RTE_SET_USED(m);
+	RTE_SET_USED(cmd);
+	RTE_SET_USED(nixtx_addr);
+	RTE_SET_USED(lbase);
+	RTE_SET_USED(lnum);
+	RTE_SET_USED(loff);
+	RTE_SET_USED(shft);
+	RTE_SET_USED(sa_base);
+	RTE_SET_USED(flags);
+}
+#endif
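
Both cn10k_nix_prep_sec*() variants grow the frame to its post-encryption
size with the same rlen arithmetic. With hypothetical session values
(roundup_byte = 16 for an AES block, roundup_len = 2, partial_len = 30)
and a 100B post-L2 payload:

	uint32_t dlen_adj = 100, rlen;

	rlen = (dlen_adj + 2) + (16 - 1);	/* 117                       */
	rlen &= ~(uint64_t)(16 - 1);		/* 112: cipher-block aligned */
	rlen += 30;				/* 142: fixed ESP overhead   */
	dlen_adj = rlen - dlen_adj;		/* frame grows by 42 bytes   */
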
+
+static __rte_always_inline void
 cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
@@ -217,8 +487,8 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 }
 
 static __rte_always_inline void
-cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
-		       const uint16_t flags, const uint64_t lso_tun_fmt)
+cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
+		       const uint64_t lso_tun_fmt, bool *sec)
 {
 	struct nix_send_ext_s *send_hdr_ext;
 	struct nix_send_hdr_s *send_hdr;
@@ -237,16 +507,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		sg = (union nix_send_sg_s *)(cmd + 2);
 	}
 
-	if (flags & NIX_TX_NEED_SEND_HDR_W1) {
+	if (flags & (NIX_TX_NEED_SEND_HDR_W1 | NIX_TX_OFFLOAD_SECURITY_F)) {
 		ol_flags = m->ol_flags;
 		w1.u = 0;
 	}
 
-	if (!(flags & NIX_TX_MULTI_SEG_F)) {
+	if (!(flags & NIX_TX_MULTI_SEG_F))
 		send_hdr->w0.total = m->data_len;
-		send_hdr->w0.aura =
-			roc_npa_aura_handle_to_aura(m->pool->pool_id);
-	}
+	else
+		send_hdr->w0.total = m->pkt_len;
+	send_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
 
 	/*
 	 * L3type:  2 => IPV4
@@ -376,7 +646,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		send_hdr->w1.u = w1.u;
 
 	if (!(flags & NIX_TX_MULTI_SEG_F)) {
-		sg->seg1_size = m->data_len;
+		sg->seg1_size = send_hdr->w0.total;
 		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
 
 		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
@@ -389,17 +659,38 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
 			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	} else {
+		sg->seg1_size = m->data_len;
+		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
+
+		/* NOFF is handled later for multi-seg */
 	}
 
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
+		*sec = !!(ol_flags & PKT_TX_SEC_OFFLOAD);
+}
+
+static __rte_always_inline void
+cn10k_nix_xmit_mv_lmt_base(uintptr_t lmt_addr, uint64_t *cmd,
+			   const uint16_t flags)
+{
+	struct nix_send_ext_s *send_hdr_ext;
+	union nix_send_sg_s *sg;
+
 	/* With minimal offloads, 'cmd' being local could be optimized out to
 	 * registers. In other cases, 'cmd' will be in stack. Intent is
 	 * 'cmd' stores content from txq->cmd which is copied only once.
 	 */
-	*((struct nix_send_hdr_s *)lmt_addr) = *send_hdr;
+	*((struct nix_send_hdr_s *)lmt_addr) = *(struct nix_send_hdr_s *)cmd;
 	lmt_addr += 16;
 	if (flags & NIX_TX_NEED_EXT_HDR) {
+		send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
 		*((struct nix_send_ext_s *)lmt_addr) = *send_hdr_ext;
 		lmt_addr += 16;
+
+		sg = (union nix_send_sg_s *)(cmd + 4);
+	} else {
+		sg = (union nix_send_sg_s *)(cmd + 2);
 	}
 	/* In case of multi-seg, sg template is stored here */
 	*((union nix_send_sg_s *)lmt_addr) = *sg;
@@ -414,7 +705,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
 		struct nix_send_ext_s *send_hdr_ext =
-					(struct nix_send_ext_s *)lmt_addr + 16;
+			(struct nix_send_ext_s *)lmt_addr + 16;
 		uint64_t *lmt = (uint64_t *)lmt_addr;
 		uint16_t off = (no_segdw - 1) << 1;
 		struct nix_send_mem_s *send_mem;
@@ -457,8 +748,6 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 	uint8_t off, i;
 
 	send_hdr = (struct nix_send_hdr_s *)cmd;
-	send_hdr->w0.total = m->pkt_len;
-	send_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
 
 	if (flags & NIX_TX_NEED_EXT_HDR)
 		off = 2;
@@ -466,13 +755,27 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		off = 0;
 
 	sg = (union nix_send_sg_s *)&cmd[2 + off];
-	/* Clear sg->u header before use */
-	sg->u &= 0xFC00000000000000;
+
+	/* Start from second segment, first segment is already there */
+	i = 1;
 	sg_u = sg->u;
-	slist = &cmd[3 + off];
+	nb_segs = m->nb_segs - 1;
+	m_next = m->next;
+	slist = &cmd[3 + off + 1];
 
-	i = 0;
-	nb_segs = m->nb_segs;
+	/* Set invert df if buffer is not to be freed by H/W */
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+		sg_u |= (cnxk_nix_prefree_seg(m) << 55);
+
+		/* Mark mempool object as "put" since it is freed by NIX */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (!(sg_u & (1ULL << 55)))
+		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	rte_io_wmb();
+#endif
+	m = m_next;
+	if (!m)
+		goto done;
 
 	/* Fill mbuf segments */
 	do {
@@ -504,6 +807,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		m = m_next;
 	} while (nb_segs);
 
+done:
 	sg->u = sg_u;
 	sg->segs = i;
 	segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
@@ -522,10 +826,17 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 {
 	struct cn10k_eth_txq *txq = tx_queue;
 	const rte_iova_t io_addr = txq->io_addr;
-	uintptr_t pa, lmt_addr = txq->lmt_base;
+	uint8_t lnum, c_lnum, c_shft, c_loff;
+	uintptr_t pa, lbase = txq->lmt_base;
 	uint16_t lmt_id, burst, left, i;
+	uintptr_t c_lbase = lbase;
+	rte_iova_t c_io_addr;
 	uint64_t lso_tun_fmt;
+	uint16_t c_lmt_id;
+	uint64_t sa_base;
+	uintptr_t laddr;
 	uint64_t data;
+	bool sec;
 
 	if (!(flags & NIX_TX_VWQE_F)) {
 		NIX_XMIT_FC_OR_RETURN(txq, pkts);
@@ -540,10 +851,24 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		lso_tun_fmt = txq->lso_tun_fmt;
 
 	/* Get LMT base address and LMT ID as lcore id */
-	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	burst = left > 32 ? 32 : left;
+
+	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		c_lnum = 0;
+		c_loff = 0;
+		c_shft = 16;
+	}
+
 	for (i = 0; i < burst; i++) {
 		/* Perform header writes for TSO, barrier at
 		 * lmt steorl will suffice.
@@ -551,16 +876,39 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		if (flags & NIX_TX_OFFLOAD_TSO_F)
 			cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
 
-		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,
-				       lso_tun_fmt);
-		cn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],
+		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,
+				       &sec);
+
+		laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
+
+		/* Prepare CPT instruction and get nixtx addr */
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+			cn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,
+					   &c_lnum, &c_loff, &c_shft, sa_base,
+					   flags);
+
+		/* Move NIX desc to LMT/NIXTX area */
+		cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+		cn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],
 					      tx_pkts[i]->ol_flags, 4, flags);
-		lmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);
+		if (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec)
+			lnum++;
 	}
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+	tx_pkts += burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		/* Reduce pkts to be sent to CPT */
+		burst -= ((c_lnum << 1) + c_loff);
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+	}
+
 	/* Trigger LMTST */
 	if (burst > 16) {
 		data = cn10k_nix_tx_steor_data(flags);
@@ -591,16 +939,9 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		roc_lmt_submit_steorl(data, pa);
 	}
 
-	left -= burst;
 	rte_io_wmb();
-	if (left) {
-		/* Start processing another burst */
-		tx_pkts += burst;
-		/* Reset lmt base addr */
-		lmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		lmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));
+	if (left)
 		goto again;
-	}
 
 	return pkts;
 }
@@ -611,13 +952,20 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 const uint16_t flags)
 {
 	struct cn10k_eth_txq *txq = tx_queue;
-	uintptr_t pa0, pa1, lmt_addr = txq->lmt_base;
+	uintptr_t pa0, pa1, lbase = txq->lmt_base;
 	const rte_iova_t io_addr = txq->io_addr;
 	uint16_t segdw, lmt_id, burst, left, i;
+	uint8_t lnum, c_lnum, c_loff;
+	uintptr_t c_lbase = lbase;
 	uint64_t data0, data1;
+	rte_iova_t c_io_addr;
 	uint64_t lso_tun_fmt;
+	uint8_t shft, c_shft;
 	__uint128_t data128;
-	uint16_t shft;
+	uint16_t c_lmt_id;
+	uint64_t sa_base;
+	uintptr_t laddr;
+	bool sec;
 
 	NIX_XMIT_FC_OR_RETURN(txq, pkts);
 
@@ -630,12 +978,26 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		lso_tun_fmt = txq->lso_tun_fmt;
 
 	/* Get LMT base address and LMT ID as lcore id */
-	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	burst = left > 32 ? 32 : left;
 	shft = 16;
 	data128 = 0;
+
+	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		c_lnum = 0;
+		c_loff = 0;
+		c_shft = 16;
+	}
+
 	for (i = 0; i < burst; i++) {
 		/* Perform header writes for TSO, barrier at
 		 * lmt steorl will suffice.
@@ -643,22 +1005,47 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (flags & NIX_TX_OFFLOAD_TSO_F)
 			cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
 
-		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,
-				       lso_tun_fmt);
+		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,
+				       &sec);
+
+		laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
+
+		/* Prepare CPT instruction and get nixtx addr */
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+			cn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,
+					   &c_lnum, &c_loff, &c_shft, sa_base,
+					   flags);
+
+		/* Move NIX desc to LMT/NIXTX area */
+		cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+
 		/* Store sg list directly on lmt line */
-		segdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)lmt_addr,
+		segdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)laddr,
 					       flags);
-		cn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],
+		cn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],
 					      tx_pkts[i]->ol_flags, segdw,
 					      flags);
-		lmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		data128 |= (((__uint128_t)(segdw - 1)) << shft);
-		shft += 3;
+		if (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec) {
+			lnum++;
+			data128 |= (((__uint128_t)(segdw - 1)) << shft);
+			shft += 3;
+		}
 	}
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+	tx_pkts += burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		/* Reduce pkts to be sent to CPT */
+		burst -= ((c_lnum << 1) + c_loff);
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+	}
+
 	data0 = (uint64_t)data128;
 	data1 = (uint64_t)(data128 >> 64);
 	/* Make data0 similar to data1 */
@@ -695,16 +1082,9 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		roc_lmt_submit_steorl(data0, pa0);
 	}
 
-	left -= burst;
 	rte_io_wmb();
-	if (left) {
-		/* Start processing another burst */
-		tx_pkts += burst;
-		/* Reset lmt base addr */
-		lmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		lmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));
+	if (left)
 		goto again;
-	}
 
 	return pkts;
 }
@@ -989,6 +1369,90 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,
 	return lmt_used;
 }
 
+static __rte_always_inline void
+cn10k_nix_lmt_next(uint8_t dw, uintptr_t laddr, uint8_t *lnum, uint8_t *loff,
+		   uint8_t *shift, __uint128_t *data128, uintptr_t *next)
+{
+	/* Go to next line if we are out of space */
+	if ((*loff + (dw << 4)) > 128) {
+		*data128 = *data128 |
+			   (((__uint128_t)((*loff >> 4) - 1)) << *shift);
+		*shift = *shift + 3;
+		*loff = 0;
+		*lnum = *lnum + 1;
+	}
+
+	*next = (uintptr_t)LMT_OFF(laddr, *lnum, *loff);
+	*loff = *loff + (dw << 4);
+}
+
+static __rte_always_inline void
+cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
+		     uint64x2_t cmd0, uint64x2_t cmd1, uint64x2_t cmd2,
+		     uint64x2_t cmd3, const uint16_t flags)
+{
+	uint8_t off;
+
+	/* Handle no fast free when security is enabled without mseg */
+	if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
+	    (flags & NIX_TX_OFFLOAD_SECURITY_F) &&
+	    !(flags & NIX_TX_MULTI_SEG_F)) {
+		union nix_send_sg_s sg;
+
+		sg.u = vgetq_lane_u64(cmd1, 0);
+		sg.u |= (cnxk_nix_prefree_seg(mbuf) << 55);
+		cmd1 = vsetq_lane_u64(sg.u, cmd1, 0);
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+		sg.u = vgetq_lane_u64(cmd1, 0);
+		if (!(sg.u & (1ULL << 55)))
+			__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1,
+						0);
+		rte_io_wmb();
+#endif
+	}
+	if (flags & NIX_TX_MULTI_SEG_F) {
+		if ((flags & NIX_TX_NEED_EXT_HDR) &&
+		    (flags & NIX_TX_OFFLOAD_TSTAMP_F)) {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+			off = segdw - 4;
+			off <<= 4;
+			vst1q_u64(LMT_OFF(laddr, 0, 48 + off), cmd3);
+		} else if (flags & NIX_TX_NEED_EXT_HDR) {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+		} else {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 32),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);
+		}
+	} else if (flags & NIX_TX_NEED_EXT_HDR) {
+		/* Store the prepared send desc to LMT lines */
+		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+			vst1q_u64(LMT_OFF(laddr, 0, 48), cmd3);
+		} else {
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+		}
+	} else {
+		/* Store the prepared send desc to LMT lines */
+		vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+		vst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);
+	}
+}
+
 static __rte_always_inline uint16_t
 cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			   uint16_t pkts, uint64_t *cmd, uintptr_t base,
@@ -998,10 +1462,10 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
 	uint64x2_t cmd0[NIX_DESCS_PER_LOOP], cmd1[NIX_DESCS_PER_LOOP],
 		cmd2[NIX_DESCS_PER_LOOP], cmd3[NIX_DESCS_PER_LOOP];
+	uint16_t left, scalar, burst, i, lmt_id, c_lmt_id;
 	uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3, pa;
 	uint64x2_t senddesc01_w0, senddesc23_w0;
 	uint64x2_t senddesc01_w1, senddesc23_w1;
-	uint16_t left, scalar, burst, i, lmt_id;
 	uint64x2_t sendext01_w0, sendext23_w0;
 	uint64x2_t sendext01_w1, sendext23_w1;
 	uint64x2_t sendmem01_w0, sendmem23_w0;
@@ -1010,12 +1474,16 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64x2_t sgdesc01_w0, sgdesc23_w0;
 	uint64x2_t sgdesc01_w1, sgdesc23_w1;
 	struct cn10k_eth_txq *txq = tx_queue;
-	uintptr_t laddr = txq->lmt_base;
 	rte_iova_t io_addr = txq->io_addr;
+	uintptr_t laddr = txq->lmt_base;
+	uint8_t c_lnum, c_shft, c_loff;
 	uint64x2_t ltypes01, ltypes23;
 	uint64x2_t xtmp128, ytmp128;
 	uint64x2_t xmask01, xmask23;
-	uint8_t lnum, shift;
+	uintptr_t c_laddr = laddr;
+	uint8_t lnum, shift, loff;
+	rte_iova_t c_io_addr;
+	uint64_t sa_base;
 	union wdata {
 		__uint128_t data128;
 		uint64_t data[2];
@@ -1061,19 +1529,36 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	/* Get LMT base address and LMT ID as lcore id */
 	ROC_LMT_BASE_ID_GET(laddr, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_laddr, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	/* Number of packets to prepare depends on offloads enabled. */
 	burst = left > cn10k_nix_pkts_per_vec_brst(flags) ?
 			      cn10k_nix_pkts_per_vec_brst(flags) :
 			      left;
-	if (flags & NIX_TX_MULTI_SEG_F) {
+	if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)) {
 		wd.data128 = 0;
 		shift = 16;
 	}
 	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		loff = 0;
+		c_loff = 0;
+		c_lnum = 0;
+		c_shft = 16;
+	}
 
 	for (i = 0; i < burst; i += NIX_DESCS_PER_LOOP) {
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && c_lnum + 2 > 16) {
+			burst = i;
+			break;
+		}
+
 		if (flags & NIX_TX_MULTI_SEG_F) {
 			uint8_t j;
 
@@ -1833,7 +2318,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
-		    !(flags & NIX_TX_MULTI_SEG_F)) {
+		    !(flags & NIX_TX_MULTI_SEG_F) &&
+		    !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
 			/* Set don't free bit if reference count > 1 */
 			xmask01 = vdupq_n_u64(0);
 			xmask23 = xmask01;
@@ -1873,7 +2359,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
 			senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-		} else if (!(flags & NIX_TX_MULTI_SEG_F)) {
+		} else if (!(flags & NIX_TX_MULTI_SEG_F) &&
+			   !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
 			/* Move mbufs to iova */
 			mbuf0 = (uint64_t *)tx_pkts[0];
 			mbuf1 = (uint64_t *)tx_pkts[1];
@@ -1918,7 +2405,84 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			cmd2[3] = vzip2q_u64(sendext23_w0, sendext23_w1);
 		}
 
-		if (flags & NIX_TX_MULTI_SEG_F) {
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+			const uint64x2_t olf = {PKT_TX_SEC_OFFLOAD,
+						PKT_TX_SEC_OFFLOAD};
+			uintptr_t next;
+			uint8_t dw;
+
+			/* Extract ol_flags. */
+			xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+			ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+			xtmp128 = vtstq_u64(olf, xtmp128);
+			ytmp128 = vtstq_u64(olf, ytmp128);
+
+			/* Process mbuf0 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[0]);
+			if (vgetq_lane_u64(xtmp128, 0))
+				cn10k_nix_prep_sec_vec(tx_pkts[0], &cmd0[0],
+						       &cmd1[0], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf0 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[0], segdw[0], next,
+					     cmd0[0], cmd1[0], cmd2[0], cmd3[0],
+					     flags);
+
+			/* Process mbuf1 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[1]);
+			if (vgetq_lane_u64(xtmp128, 1))
+				cn10k_nix_prep_sec_vec(tx_pkts[1], &cmd0[1],
+						       &cmd1[1], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf1 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[1], segdw[1], next,
+					     cmd0[1], cmd1[1], cmd2[1], cmd3[1],
+					     flags);
+
+			/* Process mbuf2 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[2]);
+			if (vgetq_lane_u64(ytmp128, 0))
+				cn10k_nix_prep_sec_vec(tx_pkts[2], &cmd0[2],
+						       &cmd1[2], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf2 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[2], segdw[2], next,
+					     cmd0[2], cmd1[2], cmd2[2], cmd3[2],
+					     flags);
+
+			/* Process mbuf3 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[3]);
+			if (vgetq_lane_u64(ytmp128, 1))
+				cn10k_nix_prep_sec_vec(tx_pkts[3], &cmd0[3],
+						       &cmd1[3], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf3 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[3], segdw[3], next,
+					     cmd0[3], cmd1[3], cmd2[3], cmd3[3],
+					     flags);
+
+		} else if (flags & NIX_TX_MULTI_SEG_F) {
 			uint8_t j;
 
 			segdw[4] = 8;
@@ -1982,21 +2546,35 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
 	}
 
-	if (flags & NIX_TX_MULTI_SEG_F)
+	/* Roundup lnum to last line if it is partial */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		lnum = lnum + !!loff;
+		wd.data128 = wd.data128 |
+			(((__uint128_t)(((loff >> 4) - 1) & 0x7) << shift));
+	}
+
+	if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 		wd.data[0] >>= 16;
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+
 	/* Trigger LMTST */
 	if (lnum > 16) {
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[0] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[0] & 0x7) << 4;
 		wd.data[0] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[0] <<= 16;
 
 		wd.data[0] |= (15ULL << 12);
@@ -2005,13 +2583,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* STEOR0 */
 		roc_lmt_submit_steorl(wd.data[0], pa);
 
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[1] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[1] & 0x7) << 4;
 		wd.data[1] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[1] <<= 16;
 
 		wd.data[1] |= ((uint64_t)(lnum - 17)) << 12;
@@ -2020,13 +2598,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* STEOR1 */
 		roc_lmt_submit_steorl(wd.data[1], pa);
 	} else if (lnum) {
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[0] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[0] & 0x7) << 4;
 		wd.data[0] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[0] <<= 16;
 
 		wd.data[0] |= ((uint64_t)(lnum - 1)) << 12;
@@ -2036,7 +2614,6 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		roc_lmt_submit_steorl(wd.data[0], pa);
 	}
 
-	left -= burst;
 	rte_io_wmb();
 	if (left)
 		goto again;
@@ -2076,139 +2653,269 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 #define NOFF_F	     NIX_TX_OFFLOAD_MBUF_NOFF_F
 #define TSO_F	     NIX_TX_OFFLOAD_TSO_F
 #define TSP_F	     NIX_TX_OFFLOAD_TSTAMP_F
+#define T_SEC_F      NIX_TX_OFFLOAD_SECURITY_F
 
-/* [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+/* [T_SEC_F] [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
 #define NIX_TX_FASTPATH_MODES						\
-T(no_offload,				0, 0, 0, 0, 0, 0,	4,	\
+T(no_offload,				0, 0, 0, 0, 0, 0, 0,	4,	\
 		NIX_TX_OFFLOAD_NONE)					\
-T(l3l4csum,				0, 0, 0, 0, 0, 1,	4,	\
+T(l3l4csum,				0, 0, 0, 0, 0, 0, 1,	4,	\
 		L3L4CSUM_F)						\
-T(ol3ol4csum,				0, 0, 0, 0, 1, 0,	4,	\
+T(ol3ol4csum,				0, 0, 0, 0, 0, 1, 0,	4,	\
 		OL3OL4CSUM_F)						\
-T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 1, 1,	4,	\
+T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 0, 1, 1,	4,	\
 		OL3OL4CSUM_F | L3L4CSUM_F)				\
-T(vlan,					0, 0, 0, 1, 0, 0,	6,	\
+T(vlan,					0, 0, 0, 0, 1, 0, 0,	6,	\
 		VLAN_F)							\
-T(vlan_l3l4csum,			0, 0, 0, 1, 0, 1,	6,	\
+T(vlan_l3l4csum,			0, 0, 0, 0, 1, 0, 1,	6,	\
 		VLAN_F | L3L4CSUM_F)					\
-T(vlan_ol3ol4csum,			0, 0, 0, 1, 1, 0,	6,	\
+T(vlan_ol3ol4csum,			0, 0, 0, 0, 1, 1, 0,	6,	\
 		VLAN_F | OL3OL4CSUM_F)					\
-T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 1, 1,	6,	\
+T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 0, 1, 1, 1,	6,	\
 		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
-T(noff,					0, 0, 1, 0, 0, 0,	4,	\
+T(noff,					0, 0, 0, 1, 0, 0, 0,	4,	\
 		NOFF_F)							\
-T(noff_l3l4csum,			0, 0, 1, 0, 0, 1,	4,	\
+T(noff_l3l4csum,			0, 0, 0, 1, 0, 0, 1,	4,	\
 		NOFF_F | L3L4CSUM_F)					\
-T(noff_ol3ol4csum,			0, 0, 1, 0, 1, 0,	4,	\
+T(noff_ol3ol4csum,			0, 0, 0, 1, 0, 1, 0,	4,	\
 		NOFF_F | OL3OL4CSUM_F)					\
-T(noff_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1,	4,	\
+T(noff_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 0, 1, 1,	4,	\
 		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
-T(noff_vlan,				0, 0, 1, 1, 0, 0,	6,	\
+T(noff_vlan,				0, 0, 0, 1, 1, 0, 0,	6,	\
 		NOFF_F | VLAN_F)					\
-T(noff_vlan_l3l4csum,			0, 0, 1, 1, 0, 1,	6,	\
+T(noff_vlan_l3l4csum,			0, 0, 0, 1, 1, 0, 1,	6,	\
 		NOFF_F | VLAN_F | L3L4CSUM_F)				\
-T(noff_vlan_ol3ol4csum,			0, 0, 1, 1, 1, 0,	6,	\
+T(noff_vlan_ol3ol4csum,			0, 0, 0, 1, 1, 1, 0,	6,	\
 		NOFF_F | VLAN_F | OL3OL4CSUM_F)				\
-T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1,	6,	\
+T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 0, 1, 1, 1, 1,	6,	\
 		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
-T(tso,					0, 1, 0, 0, 0, 0,	6,	\
+T(tso,					0, 0, 1, 0, 0, 0, 0,	6,	\
 		TSO_F)							\
-T(tso_l3l4csum,				0, 1, 0, 0, 0, 1,	6,	\
+T(tso_l3l4csum,				0, 0, 1, 0, 0, 0, 1,	6,	\
 		TSO_F | L3L4CSUM_F)					\
-T(tso_ol3ol4csum,			0, 1, 0, 0, 1, 0,	6,	\
+T(tso_ol3ol4csum,			0, 0, 1, 0, 0, 1, 0,	6,	\
 		TSO_F | OL3OL4CSUM_F)					\
-T(tso_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1,	6,	\
+T(tso_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 0, 1, 1,	6,	\
 		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
-T(tso_vlan,				0, 1, 0, 1, 0, 0,	6,	\
+T(tso_vlan,				0, 0, 1, 0, 1, 0, 0,	6,	\
 		TSO_F | VLAN_F)						\
-T(tso_vlan_l3l4csum,			0, 1, 0, 1, 0, 1,	6,	\
+T(tso_vlan_l3l4csum,			0, 0, 1, 0, 1, 0, 1,	6,	\
 		TSO_F | VLAN_F | L3L4CSUM_F)				\
-T(tso_vlan_ol3ol4csum,			0, 1, 0, 1, 1, 0,	6,	\
+T(tso_vlan_ol3ol4csum,			0, 0, 1, 0, 1, 1, 0,	6,	\
 		TSO_F | VLAN_F | OL3OL4CSUM_F)				\
-T(tso_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 1, 1,	6,	\
+T(tso_vlan_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1, 1,	6,	\
 		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(tso_noff,				0, 1, 1, 0, 0, 0,	6,	\
+T(tso_noff,				0, 0, 1, 1, 0, 0, 0,	6,	\
 		TSO_F | NOFF_F)						\
-T(tso_noff_l3l4csum,			0, 1, 1, 0, 0, 1,	6,	\
+T(tso_noff_l3l4csum,			0, 0, 1, 1, 0, 0, 1,	6,	\
 		TSO_F | NOFF_F | L3L4CSUM_F)				\
-T(tso_noff_ol3ol4csum,			0, 1, 1, 0, 1, 0,	6,	\
+T(tso_noff_ol3ol4csum,			0, 0, 1, 1, 0, 1, 0,	6,	\
 		TSO_F | NOFF_F | OL3OL4CSUM_F)				\
-T(tso_noff_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 1, 1,	6,	\
+T(tso_noff_ol3ol4csum_l3l4csum,		0, 0, 1, 1, 0, 1, 1,	6,	\
 		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(tso_noff_vlan,			0, 1, 1, 1, 0, 0,	6,	\
+T(tso_noff_vlan,			0, 0, 1, 1, 1, 0, 0,	6,	\
 		TSO_F | NOFF_F | VLAN_F)				\
-T(tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 0, 1,	6,	\
+T(tso_noff_vlan_l3l4csum,		0, 0, 1, 1, 1, 0, 1,	6,	\
 		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
-T(tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 0,	6,	\
+T(tso_noff_vlan_ol3ol4csum,		0, 0, 1, 1, 1, 1, 0,	6,	\
 		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
-T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1,	6,	\
+T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1, 1,	6,	\
 		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
-T(ts,					1, 0, 0, 0, 0, 0,	8,	\
+T(ts,					0, 1, 0, 0, 0, 0, 0,	8,	\
 		TSP_F)							\
-T(ts_l3l4csum,				1, 0, 0, 0, 0, 1,	8,	\
+T(ts_l3l4csum,				0, 1, 0, 0, 0, 0, 1,	8,	\
 		TSP_F | L3L4CSUM_F)					\
-T(ts_ol3ol4csum,			1, 0, 0, 0, 1, 0,	8,	\
+T(ts_ol3ol4csum,			0, 1, 0, 0, 0, 1, 0,	8,	\
 		TSP_F | OL3OL4CSUM_F)					\
-T(ts_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1,	8,	\
+T(ts_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 0, 1, 1,	8,	\
 		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
-T(ts_vlan,				1, 0, 0, 1, 0, 0,	8,	\
+T(ts_vlan,				0, 1, 0, 0, 1, 0, 0,	8,	\
 		TSP_F | VLAN_F)						\
-T(ts_vlan_l3l4csum,			1, 0, 0, 1, 0, 1,	8,	\
+T(ts_vlan_l3l4csum,			0, 1, 0, 0, 1, 0, 1,	8,	\
 		TSP_F | VLAN_F | L3L4CSUM_F)				\
-T(ts_vlan_ol3ol4csum,			1, 0, 0, 1, 1, 0,	8,	\
+T(ts_vlan_ol3ol4csum,			0, 1, 0, 0, 1, 1, 0,	8,	\
 		TSP_F | VLAN_F | OL3OL4CSUM_F)				\
-T(ts_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 1, 1,	8,	\
+T(ts_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1, 1,	8,	\
 		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(ts_noff,				1, 0, 1, 0, 0, 0,	8,	\
+T(ts_noff,				0, 1, 0, 1, 0, 0, 0,	8,	\
 		TSP_F | NOFF_F)						\
-T(ts_noff_l3l4csum,			1, 0, 1, 0, 0, 1,	8,	\
+T(ts_noff_l3l4csum,			0, 1, 0, 1, 0, 0, 1,	8,	\
 		TSP_F | NOFF_F | L3L4CSUM_F)				\
-T(ts_noff_ol3ol4csum,			1, 0, 1, 0, 1, 0,	8,	\
+T(ts_noff_ol3ol4csum,			0, 1, 0, 1, 0, 1, 0,	8,	\
 		TSP_F | NOFF_F | OL3OL4CSUM_F)				\
-T(ts_noff_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 1, 1,	8,	\
+T(ts_noff_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 0, 1, 1,	8,	\
 		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(ts_noff_vlan,				1, 0, 1, 1, 0, 0,	8,	\
+T(ts_noff_vlan,				0, 1, 0, 1, 1, 0, 0,	8,	\
 		TSP_F | NOFF_F | VLAN_F)				\
-T(ts_noff_vlan_l3l4csum,		1, 0, 1, 1, 0, 1,	8,	\
+T(ts_noff_vlan_l3l4csum,		0, 1, 0, 1, 1, 0, 1,	8,	\
 		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
-T(ts_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 0,	8,	\
+T(ts_noff_vlan_ol3ol4csum,		0, 1, 0, 1, 1, 1, 0,	8,	\
 		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
-T(ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 1, 1,	8,	\
+T(ts_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 0, 1, 1, 1, 1,	8,	\
 		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
-T(ts_tso,				1, 1, 0, 0, 0, 0,	8,	\
+T(ts_tso,				0, 1, 1, 0, 0, 0, 0,	8,	\
 		TSP_F | TSO_F)						\
-T(ts_tso_l3l4csum,			1, 1, 0, 0, 0, 1,	8,	\
+T(ts_tso_l3l4csum,			0, 1, 1, 0, 0, 0, 1,	8,	\
 		TSP_F | TSO_F | L3L4CSUM_F)				\
-T(ts_tso_ol3ol4csum,			1, 1, 0, 0, 1, 0,	8,	\
+T(ts_tso_ol3ol4csum,			0, 1, 1, 0, 0, 1, 0,	8,	\
 		TSP_F | TSO_F | OL3OL4CSUM_F)				\
-T(ts_tso_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 1, 1,	8,	\
+T(ts_tso_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 0, 1, 1,	8,	\
 		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
-T(ts_tso_vlan,				1, 1, 0, 1, 0, 0,	8,	\
+T(ts_tso_vlan,				0, 1, 1, 0, 1, 0, 0,	8,	\
 		TSP_F | TSO_F | VLAN_F)					\
-T(ts_tso_vlan_l3l4csum,			1, 1, 0, 1, 0, 1,	8,	\
+T(ts_tso_vlan_l3l4csum,			0, 1, 1, 0, 1, 0, 1,	8,	\
 		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
-T(ts_tso_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 0,	8,	\
+T(ts_tso_vlan_ol3ol4csum,		0, 1, 1, 0, 1, 1, 0,	8,	\
 		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			\
-T(ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1,	8,	\
+T(ts_tso_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 0, 1, 1, 1,	8,	\
 		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
-T(ts_tso_noff,				1, 1, 1, 0, 0, 0,	8,	\
+T(ts_tso_noff,				0, 1, 1, 1, 0, 0, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F)					\
-T(ts_tso_noff_l3l4csum,			1, 1, 1, 0, 0, 1,	8,	\
+T(ts_tso_noff_l3l4csum,			0, 1, 1, 1, 0, 0, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
-T(ts_tso_noff_ol3ol4csum,		1, 1, 1, 0, 1, 0,	8,	\
+T(ts_tso_noff_ol3ol4csum,		0, 1, 1, 1, 0, 1, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			\
-T(ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1,	8,	\
+T(ts_tso_noff_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 0, 1, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
-T(ts_tso_noff_vlan,			1, 1, 1, 1, 0, 0,	8,	\
+T(ts_tso_noff_vlan,			0, 1, 1, 1, 1, 0, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F)			\
-T(ts_tso_noff_vlan_l3l4csum,		1, 1, 1, 1, 0, 1,	8,	\
+T(ts_tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 1, 0, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
-T(ts_tso_noff_vlan_ol3ol4csum,		1, 1, 1, 1, 1, 0,	8,	\
+T(ts_tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 1, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
-T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 1, 1,	8,	\
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec,					1, 0, 0, 0, 0, 0, 0,	4,	\
+		T_SEC_F)						\
+T(sec_l3l4csum,				1, 0, 0, 0, 0, 0, 1,	4,	\
+		T_SEC_F | L3L4CSUM_F)					\
+T(sec_ol3ol4csum,			1, 0, 0, 0, 0, 1, 0,	4,	\
+		T_SEC_F | OL3OL4CSUM_F)					\
+T(sec_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 0, 1, 1,	4,	\
+		T_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(sec_vlan,				1, 0, 0, 0, 1, 0, 0,	6,	\
+		T_SEC_F | VLAN_F)					\
+T(sec_vlan_l3l4csum,			1, 0, 0, 0, 1, 0, 1,	6,	\
+		T_SEC_F | VLAN_F | L3L4CSUM_F)				\
+T(sec_vlan_ol3ol4csum,			1, 0, 0, 0, 1, 1, 0,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F)			\
+T(sec_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1, 1,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff,				1, 0, 0, 1, 0, 0, 0,	4,	\
+		T_SEC_F | NOFF_F)					\
+T(sec_noff_l3l4csum,			1, 0, 0, 1, 0, 0, 1,	4,	\
+		T_SEC_F | NOFF_F | L3L4CSUM_F)				\
+T(sec_noff_ol3ol4csum,			1, 0, 0, 1, 0, 1, 0,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F)			\
+T(sec_noff_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 0, 1, 1,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff_vlan,			1, 0, 0, 1, 1, 0, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F)				\
+T(sec_noff_vlan_l3l4csum,		1, 0, 0, 1, 1, 0, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_noff_vlan_ol3ol4csum,		1, 0, 0, 1, 1, 1, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 0, 1, 1, 1, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso,				1, 0, 1, 0, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F)					\
+T(sec_tso_l3l4csum,			1, 0, 1, 0, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | L3L4CSUM_F)				\
+T(sec_tso_ol3ol4csum,			1, 0, 1, 0, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F)				\
+T(sec_tso_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_tso_vlan,				1, 0, 1, 0, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F)				\
+T(sec_tso_vlan_l3l4csum,		1, 0, 1, 0, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_tso_vlan_ol3ol4csum,		1, 0, 1, 0, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_tso_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 0, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff,				1, 0, 1, 1, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F)				\
+T(sec_tso_noff_l3l4csum,		1, 0, 1, 1, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_tso_noff_ol3ol4csum,		1, 0, 1, 1, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_tso_noff_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff_vlan,			1, 0, 1, 1, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F)			\
+T(sec_tso_noff_vlan_l3l4csum,		1, 0, 1, 1, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_tso_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts,				1, 1, 0, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F)					\
+T(sec_ts_l3l4csum,			1, 1, 0, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | L3L4CSUM_F)				\
+T(sec_ts_ol3ol4csum,			1, 1, 0, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F)				\
+T(sec_ts_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_ts_vlan,				1, 1, 0, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F)				\
+T(sec_ts_vlan_l3l4csum,			1, 1, 0, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_ts_vlan_ol3ol4csum,		1, 1, 0, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_ts_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff,				1, 1, 0, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F)				\
+T(sec_ts_noff_l3l4csum,			1, 1, 0, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_ts_noff_ol3ol4csum,		1, 1, 0, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_ts_noff_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff_vlan,			1, 1, 0, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F)			\
+T(sec_ts_noff_vlan_l3l4csum,		1, 1, 0, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_noff_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso,				1, 1, 1, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F)				\
+T(sec_ts_tso_l3l4csum,			1, 1, 1, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | L3L4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum,		1, 1, 1, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_tso_vlan,			1, 1, 1, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F)			\
+T(sec_ts_tso_vlan_l3l4csum,		1, 1, 1, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_tso_vlan_ol3ol4csum,		1, 1, 1, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff,			1, 1, 1, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F)			\
+T(sec_ts_tso_noff_l3l4csum,		1, 1, 1, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)		\
+T(sec_ts_tso_noff_ol3ol4csum,		1, 1, 1, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso_noff_vlan,			1, 1, 1, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F)		\
+T(sec_ts_tso_noff_vlan_l3l4csum,	1, 1, 1, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)	\
+T(sec_ts_tso_noff_vlan_ol3ol4csum,	1, 1, 1, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(sec_ts_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 1, 1, 8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
+		L3L4CSUM_F)
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(          \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn10k_tx_mseg.c b/drivers/net/cnxk/cn10k_tx_mseg.c
index 4ea4c8a..2b83409 100644
--- a/drivers/net/cnxk/cn10k_tx_mseg.c
+++ b/drivers/net/cnxk/cn10k_tx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_xmit_pkts_mseg_##name(void *tx_queue,                \
 						struct rte_mbuf **tx_pkts,     \
diff --git a/drivers/net/cnxk/cn10k_tx_vec.c b/drivers/net/cnxk/cn10k_tx_vec.c
index a035049..2789b13 100644
--- a/drivers/net/cnxk/cn10k_tx_vec.c
+++ b/drivers/net/cnxk/cn10k_tx_vec.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_xmit_pkts_vec_##name(void *tx_queue,                 \
 					       struct rte_mbuf **tx_pkts,      \
diff --git a/drivers/net/cnxk/cn10k_tx_vec_mseg.c b/drivers/net/cnxk/cn10k_tx_vec_mseg.c
index 7f98f79..98000df 100644
--- a/drivers/net/cnxk/cn10k_tx_vec_mseg.c
+++ b/drivers/net/cnxk/cn10k_tx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_vec_mseg_##name( \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread
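
In the Tx fast path above, when NIX_TX_OFFLOAD_SECURITY_F is enabled each burst
is split between plain NIX LMT lines and CPT LMT lines, packing two CPT
instructions per line (c_lnum full lines plus c_loff entries in a partial one);
that is why the burst handed to the NIX steorl is first reduced by
(c_lnum << 1) + c_loff. Below is a minimal standalone sketch of that
bookkeeping, with a hypothetical secure[] array standing in for the per-packet
PKT_TX_SEC_OFFLOAD test:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Count how many packets of a burst are submitted via NIX LMT lines; the
 * rest go to CPT. Mirrors the (c_lnum << 1) + c_loff accounting in
 * cn10k_nix_xmit_pkts(), where two CPT instructions fit per LMT line.
 */
static uint16_t
nix_submitted(const bool *secure, uint16_t burst)
{
	uint8_t c_lnum = 0; /* full CPT LMT lines (2 instructions each) */
	uint8_t c_loff = 0; /* instructions in the current partial line */
	uint16_t i;

	for (i = 0; i < burst; i++) {
		if (!secure[i])
			continue; /* plain packet: goes to a NIX LMT line */
		if (++c_loff == 2) {
			c_loff = 0;
			c_lnum++;
		}
	}

	return burst - ((c_lnum << 1) + c_loff);
}

int
main(void)
{
	const bool secure[6] = {true, false, true, true, false, true};

	printf("NIX-submitted packets: %u\n",
	       (unsigned)nix_submitted(secure, 6));
	return 0;
}

The NIX_TX_FASTPATH_MODES table in the patch above is an X-macro: every T(...)
row expands into a dedicated cn10k_nix_xmit_pkts_##name() specialization, so
adding the T_SEC_F column doubles the number of generated fast-path functions.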

* [dpdk-dev] [PATCH v2 22/28] net/cnxk: support IPsec anti replay in cn9k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (20 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 21/28] net/cnxk: support Tx " Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 23/28] net/cnxk: support IPsec transport mode in cn10k Nithin Dabilpuram
                     ` (6 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Adds anti-replay support for the cn9k platform using a
software (SW) anti-replay check.
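
For orientation, below is a minimal sketch of the classic sliding-window check
that such a SW anti-replay scheme performs. The driver's real implementation is
cnxk_on_anti_replay_check() with a spinlock-protected window of up to
CNXK_ON_AR_WIN_SIZE_MAX entries; the 64-bit bitmap and helper name here are
illustrative simplifications (valid only for window sizes up to 64):

#include <stdbool.h>
#include <stdint.h>

struct ar_window {
	uint64_t bitmap; /* bit i set => sequence (top - i) already seen */
	uint64_t top;    /* highest sequence number accepted so far */
};

/* Accept seq if it is new and inside the window; update state on success. */
static bool
ar_check_and_update(struct ar_window *w, uint64_t seq, uint32_t win_sz)
{
	if (seq == 0)
		return false; /* ESP sequence number 0 is never valid */

	if (seq > w->top) {
		uint64_t shift = seq - w->top;

		/* Slide the window forward; old entries fall off the end */
		w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
		w->bitmap |= 1; /* mark the new top as seen */
		w->top = seq;
		return true;
	}

	if (w->top - seq >= win_sz)
		return false; /* older than the window: reject */

	if (w->bitmap & (1ULL << (w->top - seq)))
		return false; /* already seen: replay */

	w->bitmap |= 1ULL << (w->top - seq);
	return true;
}

With ESN enabled, the 64-bit sequence number is first reassembled from the
packet's seq-lo/seq-hi words, and on success the SA's esn_hi/esn_low are
advanced, as ipsec_antireplay_check() below does under the ar->lock spinlock.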

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn9k_ethdev.h     |  3 +++
 drivers/net/cnxk/cn9k_ethdev_sec.c | 29 ++++++++++++++++++++
 drivers/net/cnxk/cn9k_rx.h         | 54 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index f8818b8..2b452fe 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -6,6 +6,7 @@
 
 #include <cnxk_ethdev.h>
 #include <cnxk_security.h>
+#include <cnxk_security_ar.h>
 
 struct cn9k_eth_txq {
 	uint64_t cmd[8];
@@ -40,6 +41,8 @@ struct cn9k_eth_rxq {
 /* Private data in sw rsvd area of struct roc_onf_ipsec_inb_sa */
 struct cn9k_inb_priv_data {
 	void *userdata;
+	uint32_t replay_win_sz;
+	struct cnxk_on_ipsec_ar ar;
 	struct cnxk_eth_sec_sess *eth_sec;
 };
 
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
index 3ec7497..deb1daf 100644
--- a/drivers/net/cnxk/cn9k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -73,6 +73,27 @@ static const struct rte_security_capability cn9k_eth_sec_capabilities[] = {
 	}
 };
 
+static inline int
+ar_window_init(struct cn9k_inb_priv_data *inb_priv)
+{
+	if (inb_priv->replay_win_sz > CNXK_ON_AR_WIN_SIZE_MAX) {
+		plt_err("Replay window size:%u is not supported",
+			inb_priv->replay_win_sz);
+		return -ENOTSUP;
+	}
+
+	rte_spinlock_init(&inb_priv->ar.lock);
+	/*
+	 * Set window bottom to 1, base and top to size of
+	 * window
+	 */
+	inb_priv->ar.winb = 1;
+	inb_priv->ar.wint = inb_priv->replay_win_sz;
+	inb_priv->ar.base = inb_priv->replay_win_sz;
+
+	return 0;
+}
+
 static int
 cn9k_eth_sec_session_create(void *device,
 			    struct rte_security_session_conf *conf,
@@ -158,6 +179,14 @@ cn9k_eth_sec_session_create(void *device,
 		/* Save userdata in inb private area */
 		inb_priv->userdata = conf->userdata;
 
+		inb_priv->replay_win_sz = ipsec->replay_win_sz;
+		if (inb_priv->replay_win_sz) {
+			rc = ar_window_init(inb_priv);
+			if (rc)
+				goto mempool_put;
+		}
+
+		/* Prepare session priv */
 		sess_priv.inb_sa = 1;
 		sess_priv.sa_idx = ipsec->spi;
 
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index bdedeab..7ab415a 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -31,6 +31,9 @@
 #define CQE_CAST(x)	     ((struct nix_cqe_hdr_s *)(x))
 #define CQE_SZ(x)	     ((x) * CNXK_NIX_CQ_ENTRY_SZ)
 
+#define IPSEC_SQ_LO_IDX 4
+#define IPSEC_SQ_HI_IDX 8
+
 union mbuf_initializer {
 	struct {
 		uint16_t data_off;
@@ -166,6 +169,48 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	mbuf->next = NULL;
 }
 
+static inline int
+ipsec_antireplay_check(struct roc_onf_ipsec_inb_sa *sa,
+		       struct cn9k_inb_priv_data *priv, uintptr_t data,
+		       uint32_t win_sz)
+{
+	struct cnxk_on_ipsec_ar *ar = &priv->ar;
+	uint64_t seq_in_sa;
+	uint32_t seqh = 0;
+	uint32_t seql;
+	uint64_t seq;
+	uint8_t esn;
+	int rc;
+
+	esn = sa->ctl.esn_en;
+	seql = rte_be_to_cpu_32(*((uint32_t *)(data + IPSEC_SQ_LO_IDX)));
+
+	if (!esn) {
+		seq = (uint64_t)seql;
+	} else {
+		seqh = rte_be_to_cpu_32(*((uint32_t *)(data +
+					IPSEC_SQ_HI_IDX)));
+		seq = ((uint64_t)seqh << 32) | seql;
+	}
+
+	if (unlikely(seq == 0))
+		return -1;
+
+	rte_spinlock_lock(&ar->lock);
+	rc = cnxk_on_anti_replay_check(seq, ar, win_sz);
+	if (esn && !rc) {
+		seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) |
+			    rte_be_to_cpu_32(sa->esn_low);
+		if (seq > seq_in_sa) {
+			sa->esn_low = rte_cpu_to_be_32(seql);
+			sa->esn_hi = rte_cpu_to_be_32(seqh);
+		}
+	}
+	rte_spinlock_unlock(&ar->lock);
+
+	return rc;
+}
+
 static __rte_always_inline uint64_t
 nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 		       uintptr_t sa_base, uint64_t *rearm_val, uint16_t *len)
@@ -178,8 +223,8 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	uint8_t lcptr = rx->lcptr;
 	struct rte_ipv4_hdr *ipv4;
 	uint16_t data_off, res;
+	uint32_t spi, win_sz;
 	uint32_t spi_mask;
-	uint32_t spi;
 	uintptr_t data;
 	__uint128_t dw;
 	uint8_t sa_w;
@@ -209,6 +254,13 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	dw = *(__uint128_t *)sa_priv;
 	*rte_security_dynfield(m) = (uint64_t)dw;
 
+	/* Check if anti-replay is enabled */
+	win_sz = (uint32_t)(dw >> 64);
+	if (win_sz) {
+		if (ipsec_antireplay_check(sa, sa_priv, data, win_sz) < 0)
+			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+	}
+
 	/* Get total length from IPv4 header. We can assume only IPv4 */
 	ipv4 = (struct rte_ipv4_hdr *)(data + ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
 				       ROC_ONF_IPSEC_INB_MAX_L2_SZ);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 23/28] net/cnxk: support IPsec transport mode in cn10k
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (21 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 22/28] net/cnxk: support IPsec anti replay in cn9k Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
                     ` (5 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Adds the IPsec transport mode capability to the advertised
rte_security capabilities.
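
As an illustration of how an application would discover these entries, here is
a sketch using the standard rte_security capability walk (the array is
terminated by an RTE_SECURITY_ACTION_TYPE_NONE entry, as above);
port_has_esp_transport() is a hypothetical helper:

#include <stdbool.h>

#include <rte_ethdev.h>
#include <rte_security.h>

static bool
port_has_esp_transport(uint16_t port_id,
		       enum rte_security_ipsec_sa_direction dir)
{
	struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
	const struct rte_security_capability *cap;

	if (ctx == NULL)
		return false;

	cap = rte_security_capabilities_get(ctx);
	if (cap == NULL)
		return false;

	/* Walk until the NONE terminator entry */
	for (; cap->action != RTE_SECURITY_ACTION_TYPE_NONE; cap++) {
		if (cap->action == RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
		    cap->protocol == RTE_SECURITY_PROTOCOL_IPSEC &&
		    cap->ipsec.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP &&
		    cap->ipsec.mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT &&
		    cap->ipsec.direction == dir)
			return true;
	}
	return false;
}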

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev_sec.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 3ffd824..dae5ea7 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -69,6 +69,30 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
 		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
 		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
 	},
+	{	/* IPsec Inline Protocol ESP Transport Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Transport Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
 	{
 		.action = RTE_SECURITY_ACTION_TYPE_NONE
 	}
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (22 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 23/28] net/cnxk: support IPsec transport mode in cn10k Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 25/28] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
                     ` (4 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Adds support to update the ethertype for mixed IPsec tunnel
versions (IPv4 inner with IPv6 outer and vice versa), and also
sets et_ovrwr for inbound IPsec.
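
The core of the change is a two-byte ethertype patch-up just before the outer
IP header, after the CPT command's data pointer has been advanced past L2. A
minimal sketch under the same assumptions (l2_len and the session's outer IP
version are known; fixup_ethertype() is a hypothetical helper):

#include <arpa/inet.h> /* htons */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static void
fixup_ethertype(uint8_t *pkt, uint8_t l2_len, bool outer_ip_v4)
{
	/* 0x0800 = RTE_ETHER_TYPE_IPV4, 0x86DD = RTE_ETHER_TYPE_IPV6 */
	uint16_t et = htons(outer_ip_v4 ? 0x0800 : 0x86DD);

	/* The ethertype occupies the last 2 bytes of the L2 header */
	memcpy(pkt + l2_len - 2, &et, sizeof(et));
}

In the patch this is done in-place at (dptr - 2), and only for tunnel-mode
sessions: transport mode keeps the original IP header, so the original
ethertype already matches.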

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/common/cnxk/cnxk_security.c |  1 +
 drivers/net/cnxk/cn10k_ethdev.h     |  3 ++-
 drivers/net/cnxk/cn10k_ethdev_sec.c |  2 ++
 drivers/net/cnxk/cn10k_tx.h         | 19 +++++++++++++++++++
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index c117fa7..0039a9d 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -344,6 +344,7 @@ cnxk_ot_ipsec_inb_sa_fill(struct roc_ot_ipsec_inb_sa *sa,
 	/* There are two words of CPT_CTX_HW_S for ucode to skip */
 	sa->w0.s.ctx_hdr_size = 1;
 	sa->w0.s.aop_valid = 1;
+	sa->w0.s.et_ovrwr = 1;
 
 	rte_wmb();
 
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 200cd93..c2a46ad 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -64,7 +64,8 @@ struct cn10k_sec_sess_priv {
 		struct {
 			uint32_t sa_idx;
 			uint8_t inb_sa : 1;
-			uint8_t rsvd1 : 2;
+			uint8_t outer_ip_ver : 1;
+			uint8_t mode : 1;
 			uint8_t roundup_byte : 5;
 			uint8_t roundup_len;
 			uint16_t partial_len;
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index dae5ea7..c66730a 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -341,6 +341,8 @@ cn10k_eth_sec_session_create(void *device,
 		sess_priv.roundup_byte = rlens->roundup_byte;
 		sess_priv.roundup_len = rlens->roundup_len;
 		sess_priv.partial_len = rlens->partial_len;
+		sess_priv.mode = outb_sa->w2.s.ipsec_mode;
+		sess_priv.outer_ip_ver = outb_sa->w2.s.outer_ip_ver;
 
 		/* Pointer from eth_sec -> outb_sa */
 		eth_sec->sa = outb_sa;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 52bb71d..ad84464 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -302,6 +302,16 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
 	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
 
 	dptr += l2_len;
+
+	if (sess_priv.mode == ROC_IE_SA_MODE_TUNNEL) {
+		if (sess_priv.outer_ip_ver == ROC_IE_SA_IP_VERSION_4)
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		else
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	}
+
 	ucode_cmd[1] = dptr;
 	ucode_cmd[2] = dptr;
 
@@ -396,6 +406,15 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
 	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
 
 	dptr += l2_len;
+
+	if (sess_priv.mode == ROC_IE_SA_MODE_TUNNEL) {
+		if (sess_priv.outer_ip_ver == ROC_IE_SA_IP_VERSION_4)
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		else
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	}
 	ucode_cmd[1] = dptr;
 	ucode_cmd[2] = dptr;
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 25/28] net/cnxk: allow zero udp6 checksum for non inline device
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (23 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
                     ` (3 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Sets IP6_UDP_OPT in the NIX RX config to allow an optional
(zero) UDP checksum for IPv6 in case of security offload.
Also disables drop_re when inline inbound is enabled and the
platform does not support it.
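
For context: UDP over IPv6 normally mandates a non-zero checksum, but
UDP-encapsulated ESP can legitimately arrive with a zero checksum (the tunnel
exception of RFC 6935), so the check is relaxed only when the security offload
owns the packet. A tiny sketch of that receive-side rule, with hypothetical
names (csum_valid stands for the normal UDP/IPv6 checksum verification result):

#include <stdbool.h>
#include <stdint.h>

static bool
udp6_csum_ok(uint16_t csum, bool csum_valid, bool security_offload)
{
	if (csum == 0)
		return security_offload; /* zero allowed only for inline IPsec */
	return csum_valid;
}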

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev.c | 5 +++++
 drivers/net/cnxk/cnxk_ethdev.c  | 9 +++++++++
 drivers/net/cnxk/cnxk_ethdev.h  | 1 +
 3 files changed, 15 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index fa2343c..9dfea99 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -553,6 +553,11 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 
 	dev = cnxk_eth_pmd_priv(eth_dev);
 
+	/* DROP_RE is not supported with inline IPSec for CN10K A0 */
+	if (roc_model_is_cn10ka_a0() || roc_model_is_cnf10ka_a0() ||
+	    roc_model_is_cnf10kb_a0())
+		dev->ipsecd_drop_re_dis = 1;
+
 	/* Register up msg callbacks for PTP information */
 	roc_nix_ptp_info_cb_register(&dev->nix, cn10k_nix_ptp_info_update_cb);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 5a64691..2367d5c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1021,6 +1021,15 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		   ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
 		   ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
+		/* Disable drop re if rx offload security is enabled and
+		 * platform does not support it.
+		 */
+		if (dev->ipsecd_drop_re_dis)
+			rx_cfg &= ~(ROC_NIX_LF_RX_CFG_DROP_RE);
+	}
+
 	nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
 	nb_txq = RTE_MAX(data->nb_tx_queues, 1);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index a2bcea2..6fd50da 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -269,6 +269,7 @@ struct cnxk_eth_dev {
 	union {
 		struct {
 			uint64_t cq_min_4k : 1;
+			uint64_t ipsecd_drop_re_dis : 1;
 		};
 		uint64_t hwcap;
 	};
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (24 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 25/28] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 27/28] net/cnxk: support configuring channel mask via devargs Nithin Dabilpuram
                     ` (2 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Adds crypto capabilities for AES-CBC and HMAC-SHA1 to the
cn9k and cn10k security offload.
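
Below is a sketch of the transform chain an application could build against
these capability ranges (AES-CBC key of 16-32 bytes in steps of 8 with a
16-byte IV; HMAC-SHA1 key of 20-64 bytes with a 12-byte truncated digest). The
chain order shown follows the usual outbound convention (cipher then auth);
the key buffers are placeholders:

#include <rte_crypto_sym.h>

static uint8_t cipher_key[16]; /* AES-128-CBC */
static uint8_t auth_key[20];   /* HMAC-SHA1 */

static struct rte_crypto_sym_xform auth_xform = {
	.type = RTE_CRYPTO_SYM_XFORM_AUTH,
	.auth = {
		.op = RTE_CRYPTO_AUTH_OP_GENERATE,
		.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
		.key = { .data = auth_key, .length = sizeof(auth_key) },
		.digest_length = 12, /* truncated ICV, per the caps above */
	},
};

static struct rte_crypto_sym_xform cipher_xform = {
	.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
	.next = &auth_xform, /* cipher -> auth for egress */
	.cipher = {
		.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
		.algo = RTE_CRYPTO_CIPHER_AES_CBC,
		.key = { .data = cipher_key, .length = sizeof(cipher_key) },
		.iv = { .offset = 0, .length = 16 }, /* offset is a placeholder */
	},
};

cipher_xform would then be passed as the crypto_xform of the
rte_security_session_conf used to create the inline-protocol session.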

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev_sec.c | 40 +++++++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_ethdev_sec.c  | 40 +++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index c66730a..82dc636 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -41,6 +41,46 @@ static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
 			}, }
 		}, }
 	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 20,
+					.max = 64,
+					.increment = 1
+				},
+				.digest_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
index deb1daf..b070ad5 100644
--- a/drivers/net/cnxk/cn9k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -40,6 +40,46 @@ static struct rte_cryptodev_capabilities cn9k_eth_sec_crypto_caps[] = {
 			}, }
 		}, }
 	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 20,
+					.max = 64,
+					.increment = 1
+				},
+				.digest_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 27/28] net/cnxk: support configuring channel mask via devargs
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (25 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 28/28] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
  2021-10-01  5:37   ` [dpdk-dev] [PATCH v2 00/28] net/cnxk: support for inline ipsec Jerin Jacob
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Satheesh Paul

From: Satheesh Paul <psatheesh@marvell.com>

This patch adds support to configure the channel and channel
mask that rte_flow will use when adding flow rules with the
inline IPsec action.
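
The devarg value is a hex pair of the form <chan>/<mask> (for example,
0x100/0xf00 in the documentation below). Here is a standalone sketch of the
parsing and bounds check, which additionally validates the '/' separator
before dereferencing past it; the 0x1fff bound corresponds to GENMASK(12, 0)
in the patch:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int
parse_chan_mask(const char *value, uint16_t *chan, uint16_t *mask)
{
	char *sep = NULL;
	unsigned long c, m;

	c = strtoul(value, &sep, 16);
	if (sep == NULL || *sep != '/')
		return -EINVAL; /* the separator is mandatory */
	m = strtoul(sep + 1, NULL, 16);

	if (c > 0x1fff || m > 0x1fff)
		return -EINVAL; /* channel and mask are at most 13 bits */

	*chan = (uint16_t)c;
	*mask = (uint16_t)m;
	return 0;
}

int
main(void)
{
	uint16_t chan, mask;

	if (parse_chan_mask("0x100/0xf00", &chan, &mask) == 0)
		printf("channel 0x%x mask 0x%x\n", chan, mask);
	return 0;
}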

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
---
 doc/guides/nics/cnxk.rst           | 20 +++++++++++++++++++
 drivers/net/cnxk/cnxk_ethdev_sec.c | 39 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index b542437..dd955d3 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -255,6 +255,26 @@ Runtime Config Options
    With the above configuration, inbound encrypted traffic from both the ports
    is received by ipsec inline device.
 
+- ``Inline IPsec device channel and mask`` (default ``none``)
+
+   Set the channel and channel mask configuration for the inline IPsec device.
+   This is used when creating flow rules with the
+   RTE_FLOW_ACTION_TYPE_SECURITY action.
+
+   By default, RTE Flow API sets the channel number of the port on which the
+   rule is created in the MCAM entry and matches it exactly. This behaviour can
+   be modified using the ``inl_cpt_channel`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,inl_cpt_channel=0x100/0xf00
+
+   With the above configuration, the RTE flow API sets the channel and channel
+   mask to 0x100 and 0xF00 in the MCAM entries of flow rules created with the
+   RTE_FLOW_ACTION_TYPE_SECURITY action. Since the channel number is matched
+   with this custom mask, inbound encrypted traffic from all ports with a
+   matching channel number pattern is directed to the inline IPsec device.
+
 .. note::
 
    Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index c76e230..ae3e49c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -6,6 +6,13 @@
 
 #define CNXK_NIX_INL_SELFTEST	      "selftest"
 #define CNXK_NIX_INL_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
+#define CNXK_INL_CPT_CHANNEL	      "inl_cpt_channel"
+
+struct inl_cpt_channel {
+	bool is_multi_channel;
+	uint16_t channel;
+	uint16_t mask;
+};
 
 #define CNXK_NIX_INL_DEV_NAME RTE_STR(cnxk_nix_inl_dev_)
 #define CNXK_NIX_INL_DEV_NAME_LEN                                              \
@@ -137,13 +144,39 @@ parse_selftest(const char *key, const char *value, void *extra_args)
 }
 
 static int
+parse_inl_cpt_channel(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint16_t chan = 0, mask = 0;
+	char *next = 0;
+
+	/* next will point to the separator '/' */
+	chan = strtol(value, &next, 16);
+	if (*next != '/')
+		return -EINVAL;
+	mask = strtol(++next, NULL, 16);
+
+	if (chan > GENMASK(12, 0) || mask > GENMASK(12, 0))
+		return -EINVAL;
+
+	((struct inl_cpt_channel *)extra_args)->channel = chan;
+	((struct inl_cpt_channel *)extra_args)->mask = mask;
+	((struct inl_cpt_channel *)extra_args)->is_multi_channel = true;
+
+	return 0;
+}
+
+static int
 nix_inl_parse_devargs(struct rte_devargs *devargs,
 		      struct roc_nix_inl_dev *inl_dev)
 {
 	uint32_t ipsec_in_max_spi = BIT(8) - 1;
+	struct inl_cpt_channel cpt_channel;
 	struct rte_kvargs *kvlist;
 	uint8_t selftest = 0;
 
+	memset(&cpt_channel, 0, sizeof(cpt_channel));
+
 	if (devargs == NULL)
 		goto null_devargs;
 
@@ -155,11 +186,16 @@ nix_inl_parse_devargs(struct rte_devargs *devargs,
 			   &selftest);
 	rte_kvargs_process(kvlist, CNXK_NIX_INL_IPSEC_IN_MAX_SPI,
 			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_process(kvlist, CNXK_INL_CPT_CHANNEL, &parse_inl_cpt_channel,
+			   &cpt_channel);
 	rte_kvargs_free(kvlist);
 
 null_devargs:
 	inl_dev->ipsec_in_max_spi = ipsec_in_max_spi;
 	inl_dev->selftest = selftest;
+	inl_dev->channel = cpt_channel.channel;
+	inl_dev->chan_mask = cpt_channel.mask;
+	inl_dev->is_multi_channel = cpt_channel.is_multi_channel;
 	return 0;
 exit:
 	return -EINVAL;
@@ -275,4 +311,5 @@ RTE_PMD_REGISTER_KMOD_DEP(cnxk_nix_inl, "vfio-pci");
 
 RTE_PMD_REGISTER_PARAM_STRING(cnxk_nix_inl,
 			      CNXK_NIX_INL_SELFTEST "=1"
-			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>");
+			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>"
+			      CNXK_INL_CPT_CHANNEL "=<1-4095>/<1-4095>");
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v2 28/28] net/cnxk: reflect globally enabled offloads in queue conf
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (26 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 27/28] net/cnxk: support configuring channel mask via devargs Nithin Dabilpuram
@ 2021-09-30 17:01   ` Nithin Dabilpuram
  2021-10-01  5:37   ` [dpdk-dev] [PATCH v2 00/28] net/cnxk: support for inline ipsec Jerin Jacob
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-09-30 17:01 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, stable

Reflect globally enabled Rx and Tx offloads in queue conf.
Also fix an issue with LMT data preparation for multi-seg packets.

Fixes: a24af6361e37 ("net/cnxk: add Tx queue setup and release")
Fixes: a86144cd9ded ("net/cnxk: add Rx queue setup and release")
Fixes: 305ca2c4c382 ("net/cnxk: support multi-segment vector Tx")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/cnxk/cn10k_tx.h    | 2 +-
 drivers/net/cnxk/cnxk_ethdev.c | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)
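
As a quick illustration of the Rx side of this change (editor's sketch, not
part of the patch; assumes port_id/queue_id were already set up by the
application), the per-queue info now reports port-level offloads as well:

	#include <inttypes.h>
	#include <stdio.h>
	#include <rte_ethdev.h>

	static void
	dump_rxq_offloads(uint16_t port_id, uint16_t queue_id)
	{
		struct rte_eth_rxq_info qinfo;

		/* conf.offloads now also carries globally enabled Rx
		 * offloads, not just the per-queue ones.
		 */
		if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
			printf("rxq %u offloads: 0x%" PRIx64 "\n",
			       queue_id, qinfo.conf.offloads);
	}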

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index ad84464..c6f349b 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -1280,7 +1280,7 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,
 			vst1q_u64(lmt_addr + 14, cmd1[3]);
 
 			*data128 |= ((__uint128_t)7) << *shift;
-			shift += 3;
+			*shift += 3;
 
 			return 1;
 		}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 2367d5c..bdceac8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -380,6 +380,8 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq_sp->dev = dev;
 	txq_sp->qid = qid;
 	txq_sp->qconf.conf.tx = *tx_conf;
+	/* Queue config should reflect global offloads */
+	txq_sp->qconf.conf.tx.offloads = dev->tx_offloads;
 	txq_sp->qconf.nb_desc = nb_desc;
 
 	plt_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " lmt_addr=%p"
@@ -527,6 +529,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->dev = dev;
 	rxq_sp->qid = qid;
 	rxq_sp->qconf.conf.rx = *rx_conf;
+	/* Queue config should reflect global offloads */
+	rxq_sp->qconf.conf.rx.offloads = dev->rx_offloads;
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/28] net/cnxk: support for inline ipsec
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
                     ` (27 preceding siblings ...)
  2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 28/28] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
@ 2021-10-01  5:37   ` Jerin Jacob
  28 siblings, 0 replies; 91+ messages in thread
From: Jerin Jacob @ 2021-10-01  5:37 UTC (permalink / raw)
  To: Nithin Dabilpuram; +Cc: Jerin Jacob, dpdk-dev

On Thu, Sep 30, 2021 at 10:32 PM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> Support for inline IPsec in CN9K event mode and in CN10K event mode and
> poll mode.
>
> Kommula Shiva Shankar (1):
>   common/cnxk: add CQ enable support in NIX Tx path
>
> Nithin Dabilpuram (18):
>   common/cnxk: support CPT parse header dump
>   common/cnxk: allow reuse of SSO API for inline dev
>   common/cnxk: change NIX debug API and queue API interface
>   common/cnxk: support NIX inline device IRQ
>   common/cnxk: support NIX inline device init and fini
>   common/cnxk: support NIX inline inbound and outbound setup
>   common/cnxk: disable CQ drop when inline inbound is enabled
>   common/cnxk: dump CPT LF registers on error intr
>   common/cnxk: align CPT LF enable/disable sequence
>   common/cnxk: restore NIX sqb pool limit before destroy
>   common/cnxk: setup aura BP conf based on nix
>   net/cnxk: support inline security setup for cn9k
>   net/cnxk: support inline security setup for cn10k
>   net/cnxk: support Rx security offload on cn9k
>   net/cnxk: support Tx security offload on cn9k
>   net/cnxk: support Rx security offload on cn10k
>   net/cnxk: support Tx security offload on cn10k
>   net/cnxk: reflect globally enabled offloads in queue conf
>
> Satheesh Paul (2):
>   common/cnxk: support inline IPsec rte flow action
>   net/cnxk: support configuring channel mask via devargs
>
> Srujana Challa (7):
>   common/cnxk: support cn9k fast path security session
>   common/cnxk: support anti-replay check in SW for cn9k
>   net/cnxk: support IPsec anti replay in cn9k
>   net/cnxk: support IPsec transport mode in cn10k
>   net/cnxk: update ethertype for mixed IPsec tunnel versions
>   net/cnxk: allow zero udp6 checksum for non inline device
>   net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1


Could you please rebase on top of next-net-mrvl?


[for-next-net]dell[dpdk-next-net-mrvl] $ git pw series apply 19307
Applying: common/cnxk: support cn9k fast path security session
Applying: common/cnxk: support CPT parse header dump
Applying: common/cnxk: allow reuse of SSO API for inline dev
Applying: common/cnxk: change NIX debug API and queue API interface
error: sha1 information is lacking or useless
(drivers/common/cnxk/roc_nix_priv.h).
error: could not build fake ancestor
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0004 common/cnxk: change NIX debug API and queue API interface
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

> v2:
> - Included bug fixes for second pass packets
> - Updated .ini files.
> - Reworded commit messages with additional description
>   and abbreviation fixes
>
>  doc/guides/nics/cnxk.rst                         |  122 +++
>  doc/guides/nics/features/cnxk.ini                |    1 +
>  doc/guides/nics/features/cnxk_vec.ini            |    1 +
>  doc/guides/nics/features/cnxk_vf.ini             |    1 +
>  doc/guides/rel_notes/release_21_11.rst           |    2 +
>  drivers/common/cnxk/cnxk_security.c              |  212 +++++
>  drivers/common/cnxk/cnxk_security.h              |   12 +
>  drivers/common/cnxk/cnxk_security_ar.h           |  184 ++++
>  drivers/common/cnxk/hw/cpt.h                     |   19 +
>  drivers/common/cnxk/meson.build                  |    3 +
>  drivers/common/cnxk/roc_api.h                    |   49 +-
>  drivers/common/cnxk/roc_constants.h              |   58 ++
>  drivers/common/cnxk/roc_cpt.c                    |   54 +-
>  drivers/common/cnxk/roc_cpt.h                    |   10 +
>  drivers/common/cnxk/roc_cpt_debug.c              |   63 +-
>  drivers/common/cnxk/roc_cpt_priv.h               |    1 +
>  drivers/common/cnxk/roc_idev.c                   |    2 +
>  drivers/common/cnxk/roc_idev_priv.h              |    3 +
>  drivers/common/cnxk/roc_io.h                     |    9 +
>  drivers/common/cnxk/roc_io_generic.h             |    3 +-
>  drivers/common/cnxk/roc_irq.c                    |    7 +-
>  drivers/common/cnxk/roc_nix.c                    |    2 +-
>  drivers/common/cnxk/roc_nix.h                    |    7 +
>  drivers/common/cnxk/roc_nix_debug.c              |  168 +++-
>  drivers/common/cnxk/roc_nix_fc.c                 |   23 +-
>  drivers/common/cnxk/roc_nix_inl.c                |  778 +++++++++++++++++
>  drivers/common/cnxk/roc_nix_inl.h                |  170 ++++
>  drivers/common/cnxk/roc_nix_inl_dev.c            |  639 ++++++++++++++
>  drivers/common/cnxk/roc_nix_inl_dev_irq.c        |  359 ++++++++
>  drivers/common/cnxk/roc_nix_inl_priv.h           |   68 ++
>  drivers/common/cnxk/roc_nix_priv.h               |   31 +
>  drivers/common/cnxk/roc_nix_queue.c              |  137 +--
>  drivers/common/cnxk/roc_npc.c                    |   27 +-
>  drivers/common/cnxk/roc_npc_mcam.c               |   28 +-
>  drivers/common/cnxk/roc_platform.h               |   11 +-
>  drivers/common/cnxk/roc_priv.h                   |    3 +
>  drivers/common/cnxk/roc_sso.c                    |   52 +-
>  drivers/common/cnxk/roc_sso_priv.h               |    9 +
>  drivers/common/cnxk/version.map                  |   34 +
>  drivers/event/cnxk/cn10k_eventdev.c              |   93 +-
>  drivers/event/cnxk/cn10k_worker.h                |  147 +++-
>  drivers/event/cnxk/cn10k_worker_deq.c            |    2 +-
>  drivers/event/cnxk/cn10k_worker_deq_burst.c      |    2 +-
>  drivers/event/cnxk/cn10k_worker_deq_ca.c         |    2 +-
>  drivers/event/cnxk/cn10k_worker_deq_tmo.c        |    2 +-
>  drivers/event/cnxk/cn10k_worker_tx_enq.c         |    2 +-
>  drivers/event/cnxk/cn10k_worker_tx_enq_seg.c     |    2 +-
>  drivers/event/cnxk/cn9k_eventdev.c               |  182 ++--
>  drivers/event/cnxk/cn9k_worker.h                 |  170 +++-
>  drivers/event/cnxk/cn9k_worker_deq.c             |    2 +-
>  drivers/event/cnxk/cn9k_worker_deq_burst.c       |    2 +-
>  drivers/event/cnxk/cn9k_worker_deq_ca.c          |    2 +-
>  drivers/event/cnxk/cn9k_worker_deq_tmo.c         |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq.c        |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq_burst.c  |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq_ca.c     |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c    |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |    2 +-
>  drivers/event/cnxk/cn9k_worker_tx_enq.c          |    2 +-
>  drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |    2 +-
>  drivers/event/cnxk/cnxk_eventdev_adptr.c         |   36 +-
>  drivers/net/cnxk/cn10k_ethdev.c                  |   41 +-
>  drivers/net/cnxk/cn10k_ethdev.h                  |   48 ++
>  drivers/net/cnxk/cn10k_ethdev_sec.c              |  492 +++++++++++
>  drivers/net/cnxk/cn10k_rx.c                      |   31 +-
>  drivers/net/cnxk/cn10k_rx.h                      |  649 +++++++++++---
>  drivers/net/cnxk/cn10k_rx_mseg.c                 |    2 +-
>  drivers/net/cnxk/cn10k_rx_vec.c                  |    4 +-
>  drivers/net/cnxk/cn10k_rx_vec_mseg.c             |    4 +-
>  drivers/net/cnxk/cn10k_tx.c                      |   31 +-
>  drivers/net/cnxk/cn10k_tx.h                      | 1006 +++++++++++++++++++---
>  drivers/net/cnxk/cn10k_tx_mseg.c                 |    2 +-
>  drivers/net/cnxk/cn10k_tx_vec.c                  |    2 +-
>  drivers/net/cnxk/cn10k_tx_vec_mseg.c             |    2 +-
>  drivers/net/cnxk/cn9k_ethdev.c                   |   23 +
>  drivers/net/cnxk/cn9k_ethdev.h                   |   64 ++
>  drivers/net/cnxk/cn9k_ethdev_sec.c               |  382 ++++++++
>  drivers/net/cnxk/cn9k_rx.c                       |   31 +-
>  drivers/net/cnxk/cn9k_rx.h                       |  493 +++++++++--
>  drivers/net/cnxk/cn9k_rx_mseg.c                  |    2 +-
>  drivers/net/cnxk/cn9k_rx_vec.c                   |    2 +-
>  drivers/net/cnxk/cn9k_rx_vec_mseg.c              |    2 +-
>  drivers/net/cnxk/cn9k_tx.c                       |   29 +-
>  drivers/net/cnxk/cn9k_tx.h                       |  393 ++++++---
>  drivers/net/cnxk/cn9k_tx_mseg.c                  |    2 +-
>  drivers/net/cnxk/cn9k_tx_vec.c                   |    2 +-
>  drivers/net/cnxk/cn9k_tx_vec_mseg.c              |    2 +-
>  drivers/net/cnxk/cnxk_ethdev.c                   |  243 +++++-
>  drivers/net/cnxk/cnxk_ethdev.h                   |  125 ++-
>  drivers/net/cnxk/cnxk_ethdev_devargs.c           |   88 +-
>  drivers/net/cnxk/cnxk_ethdev_sec.c               |  315 +++++++
>  drivers/net/cnxk/cnxk_lookup.c                   |   50 +-
>  drivers/net/cnxk/meson.build                     |    3 +
>  drivers/net/cnxk/version.map                     |    5 +
>  usertools/dpdk-devbind.py                        |    8 +-
>  96 files changed, 7686 insertions(+), 918 deletions(-)
>  create mode 100644 drivers/common/cnxk/cnxk_security_ar.h
>  create mode 100644 drivers/common/cnxk/roc_constants.h
>  create mode 100644 drivers/common/cnxk/roc_nix_inl.c
>  create mode 100644 drivers/common/cnxk/roc_nix_inl.h
>  create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c
>  create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
>  create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h
>  create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c
>  create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
>  create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c
>
> --
> 2.8.4
>

^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 00/28] net/cnxk: support for inline ipsec
  2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
                   ` (28 preceding siblings ...)
  2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
@ 2021-10-01 13:39 ` Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
                     ` (28 more replies)
  29 siblings, 29 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:39 UTC (permalink / raw)
  To: jerinj; +Cc: dev, Nithin Dabilpuram

Support for inline IPsec in CN9K event mode and in CN10K event mode and
poll mode.

Kommula Shiva Shankar (1):
  common/cnxk: add CQ enable support in NIX Tx path

Nithin Dabilpuram (18):
  common/cnxk: support CPT parse header dump
  common/cnxk: allow reuse of SSO API for inline dev
  common/cnxk: change NIX debug API and queue API interface
  common/cnxk: support NIX inline device IRQ
  common/cnxk: support NIX inline device init and fini
  common/cnxk: support NIX inline inbound and outbound setup
  common/cnxk: disable CQ drop when inline inbound is enabled
  common/cnxk: dump CPT LF registers on error intr
  common/cnxk: align CPT LF enable/disable sequence
  common/cnxk: restore NIX sqb pool limit before destroy
  common/cnxk: setup aura BP conf based on nix
  net/cnxk: support inline security setup for cn9k
  net/cnxk: support inline security setup for cn10k
  net/cnxk: support Rx security offload on cn9k
  net/cnxk: support Tx security offload on cn9k
  net/cnxk: support Rx security offload on cn10k
  net/cnxk: support Tx security offload on cn10k
  net/cnxk: reflect globally enabled offloads in queue conf

Satheesh Paul (2):
  common/cnxk: support inline IPsec rte flow action
  net/cnxk: support configuring channel mask via devargs

Srujana Challa (7):
  common/cnxk: support cn9k fast path security session
  common/cnxk: support anti-replay check in SW for cn9k
  net/cnxk: support IPsec anti replay in cn9k
  net/cnxk: support IPsec transport mode in cn10k
  net/cnxk: update ethertype for mixed IPsec tunnel versions
  net/cnxk: allow zero udp6 checksum for non inline device
  net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1

v3:
- Rebased and fixed conflicts

v2:
- Included bug fixes for second pass packets
- Updated .ini files.
- Reworded commit messages with additional description
  and abbreviation fixes

 doc/guides/nics/cnxk.rst                         |  122 +++
 doc/guides/nics/features/cnxk.ini                |    1 +
 doc/guides/nics/features/cnxk_vec.ini            |    1 +
 doc/guides/nics/features/cnxk_vf.ini             |    1 +
 doc/guides/rel_notes/release_21_11.rst           |    2 +
 drivers/common/cnxk/cnxk_security.c              |  212 +++++
 drivers/common/cnxk/cnxk_security.h              |   12 +
 drivers/common/cnxk/cnxk_security_ar.h           |  184 ++++
 drivers/common/cnxk/hw/cpt.h                     |   19 +
 drivers/common/cnxk/meson.build                  |    3 +
 drivers/common/cnxk/roc_api.h                    |   49 +-
 drivers/common/cnxk/roc_constants.h              |   58 ++
 drivers/common/cnxk/roc_cpt.c                    |   54 +-
 drivers/common/cnxk/roc_cpt.h                    |   10 +
 drivers/common/cnxk/roc_cpt_debug.c              |   63 +-
 drivers/common/cnxk/roc_cpt_priv.h               |    1 +
 drivers/common/cnxk/roc_idev.c                   |    2 +
 drivers/common/cnxk/roc_idev_priv.h              |    3 +
 drivers/common/cnxk/roc_io.h                     |    9 +
 drivers/common/cnxk/roc_io_generic.h             |    3 +-
 drivers/common/cnxk/roc_irq.c                    |    7 +-
 drivers/common/cnxk/roc_nix.c                    |    2 +-
 drivers/common/cnxk/roc_nix.h                    |    7 +
 drivers/common/cnxk/roc_nix_debug.c              |  168 +++-
 drivers/common/cnxk/roc_nix_fc.c                 |   23 +-
 drivers/common/cnxk/roc_nix_inl.c                |  778 +++++++++++++++++
 drivers/common/cnxk/roc_nix_inl.h                |  170 ++++
 drivers/common/cnxk/roc_nix_inl_dev.c            |  639 ++++++++++++++
 drivers/common/cnxk/roc_nix_inl_dev_irq.c        |  359 ++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h           |   68 ++
 drivers/common/cnxk/roc_nix_priv.h               |   31 +
 drivers/common/cnxk/roc_nix_queue.c              |  137 +--
 drivers/common/cnxk/roc_npc.c                    |   27 +-
 drivers/common/cnxk/roc_npc_mcam.c               |   28 +-
 drivers/common/cnxk/roc_platform.h               |   11 +-
 drivers/common/cnxk/roc_priv.h                   |    3 +
 drivers/common/cnxk/roc_sso.c                    |   52 +-
 drivers/common/cnxk/roc_sso_priv.h               |    9 +
 drivers/common/cnxk/version.map                  |   34 +
 drivers/event/cnxk/cn10k_eventdev.c              |   93 +-
 drivers/event/cnxk/cn10k_worker.h                |  147 +++-
 drivers/event/cnxk/cn10k_worker_deq.c            |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_burst.c      |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_ca.c         |    2 +-
 drivers/event/cnxk/cn10k_worker_deq_tmo.c        |    2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq.c         |    2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq_seg.c     |    2 +-
 drivers/event/cnxk/cn9k_eventdev.c               |  182 ++--
 drivers/event/cnxk/cn9k_worker.h                 |  170 +++-
 drivers/event/cnxk/cn9k_worker_deq.c             |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_burst.c       |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_ca.c          |    2 +-
 drivers/event/cnxk/cn9k_worker_deq_tmo.c         |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq.c        |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_burst.c  |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c     |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c    |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |    2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |    2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq.c          |    2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |    2 +-
 drivers/event/cnxk/cnxk_eventdev_adptr.c         |   36 +-
 drivers/net/cnxk/cn10k_ethdev.c                  |   41 +-
 drivers/net/cnxk/cn10k_ethdev.h                  |   48 ++
 drivers/net/cnxk/cn10k_ethdev_sec.c              |  492 +++++++++++
 drivers/net/cnxk/cn10k_rx.c                      |   31 +-
 drivers/net/cnxk/cn10k_rx.h                      |  649 +++++++++++---
 drivers/net/cnxk/cn10k_rx_mseg.c                 |    2 +-
 drivers/net/cnxk/cn10k_rx_vec.c                  |    4 +-
 drivers/net/cnxk/cn10k_rx_vec_mseg.c             |    4 +-
 drivers/net/cnxk/cn10k_tx.c                      |   31 +-
 drivers/net/cnxk/cn10k_tx.h                      | 1006 +++++++++++++++++++---
 drivers/net/cnxk/cn10k_tx_mseg.c                 |    2 +-
 drivers/net/cnxk/cn10k_tx_vec.c                  |    2 +-
 drivers/net/cnxk/cn10k_tx_vec_mseg.c             |    2 +-
 drivers/net/cnxk/cn9k_ethdev.c                   |   23 +
 drivers/net/cnxk/cn9k_ethdev.h                   |   64 ++
 drivers/net/cnxk/cn9k_ethdev_sec.c               |  382 ++++++++
 drivers/net/cnxk/cn9k_rx.c                       |   31 +-
 drivers/net/cnxk/cn9k_rx.h                       |  493 +++++++++--
 drivers/net/cnxk/cn9k_rx_mseg.c                  |    2 +-
 drivers/net/cnxk/cn9k_rx_vec.c                   |    2 +-
 drivers/net/cnxk/cn9k_rx_vec_mseg.c              |    2 +-
 drivers/net/cnxk/cn9k_tx.c                       |   29 +-
 drivers/net/cnxk/cn9k_tx.h                       |  393 ++++++---
 drivers/net/cnxk/cn9k_tx_mseg.c                  |    2 +-
 drivers/net/cnxk/cn9k_tx_vec.c                   |    2 +-
 drivers/net/cnxk/cn9k_tx_vec_mseg.c              |    2 +-
 drivers/net/cnxk/cnxk_ethdev.c                   |  243 +++++-
 drivers/net/cnxk/cnxk_ethdev.h                   |  125 ++-
 drivers/net/cnxk/cnxk_ethdev_devargs.c           |   88 +-
 drivers/net/cnxk/cnxk_ethdev_sec.c               |  315 +++++++
 drivers/net/cnxk/cnxk_lookup.c                   |   50 +-
 drivers/net/cnxk/meson.build                     |    3 +
 drivers/net/cnxk/version.map                     |    5 +
 usertools/dpdk-devbind.py                        |    8 +-
 96 files changed, 7686 insertions(+), 918 deletions(-)
 create mode 100644 drivers/common/cnxk/cnxk_security_ar.h
 create mode 100644 drivers/common/cnxk/roc_constants.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h
 create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c

-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 01/28] common/cnxk: support cn9k fast path security session
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
@ 2021-10-01 13:39   ` Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 02/28] common/cnxk: support CPT parse header dump Nithin Dabilpuram
                     ` (27 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:39 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Add security support to initialize cn9k fast path SA data
for AES-GCM and AES-CBC + HMAC-SHA1.

Signed-off-by: Srujana Challa <schalla@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/cnxk_security.c | 211 ++++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/cnxk_security.h |  12 ++
 drivers/common/cnxk/version.map     |   4 +
 3 files changed, 227 insertions(+)
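
To show how the new helpers are meant to be driven (editor's sketch, not
part of the patch; key values are dummies and the wrapper name is made up),
an egress ESP tunnel SA for AES-CBC + HMAC-SHA1 could be filled like this
from code that includes cnxk_security.h:

	static int
	fill_outb_sa_example(struct roc_onf_ipsec_outb_sa *sa)
	{
		static uint8_t cipher_key[16];	/* AES-128, dummy */
		static uint8_t auth_key[20];	/* HMAC-SHA1, dummy */
		struct rte_crypto_sym_xform auth = {
			.type = RTE_CRYPTO_SYM_XFORM_AUTH,
			.auth = {
				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
				.key = { .data = auth_key,
					 .length = sizeof(auth_key) },
				.digest_length = 12,
			},
		};
		struct rte_crypto_sym_xform cipher = {
			.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
			/* Egress order: cipher first, auth chained next */
			.next = &auth,
			.cipher = {
				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
				.key = { .data = cipher_key,
					 .length = sizeof(cipher_key) },
			},
		};
		struct rte_security_ipsec_xform ipsec = {
			.spi = 0x100,
			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
			.tunnel = { .type = RTE_SECURITY_IPSEC_TUNNEL_IPV4 },
		};

		/* On success, sa->ctl.valid is set and the SA is usable */
		return cnxk_onf_ipsec_outb_sa_fill(sa, &ipsec, &cipher);
	}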

diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index cc5daf3..c117fa7 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -513,6 +513,217 @@ cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa)
 	return !!sa->w2.s.valid;
 }
 
+static inline int
+ipsec_xfrm_verify(struct rte_security_ipsec_xform *ipsec_xfrm,
+		  struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	if (crypto_xfrm->next == NULL)
+		return -EINVAL;
+
+	if (ipsec_xfrm->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+		if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
+		    crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return -EINVAL;
+	} else {
+		if (crypto_xfrm->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+		    crypto_xfrm->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+onf_ipsec_sa_common_param_fill(struct roc_ie_onf_sa_ctl *ctl, uint8_t *salt,
+			       uint8_t *cipher_key, uint8_t *hmac_opad_ipad,
+			       struct rte_security_ipsec_xform *ipsec_xfrm,
+			       struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct rte_crypto_sym_xform *auth_xfrm, *cipher_xfrm;
+	int rc, length, auth_key_len;
+	const uint8_t *key = NULL;
+
+	/* Set direction */
+	switch (ipsec_xfrm->direction) {
+	case RTE_SECURITY_IPSEC_SA_DIR_INGRESS:
+		ctl->direction = ROC_IE_SA_DIR_INBOUND;
+		auth_xfrm = crypto_xfrm;
+		cipher_xfrm = crypto_xfrm->next;
+		break;
+	case RTE_SECURITY_IPSEC_SA_DIR_EGRESS:
+		ctl->direction = ROC_IE_SA_DIR_OUTBOUND;
+		cipher_xfrm = crypto_xfrm;
+		auth_xfrm = crypto_xfrm->next;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set protocol - ESP vs AH */
+	switch (ipsec_xfrm->proto) {
+	case RTE_SECURITY_IPSEC_SA_PROTO_ESP:
+		ctl->ipsec_proto = ROC_IE_SA_PROTOCOL_ESP;
+		break;
+	case RTE_SECURITY_IPSEC_SA_PROTO_AH:
+		return -ENOTSUP;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set mode - transport vs tunnel */
+	switch (ipsec_xfrm->mode) {
+	case RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT:
+		ctl->ipsec_mode = ROC_IE_SA_MODE_TRANSPORT;
+		break;
+	case RTE_SECURITY_IPSEC_SA_MODE_TUNNEL:
+		ctl->ipsec_mode = ROC_IE_SA_MODE_TUNNEL;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Set encryption algorithm */
+	if (crypto_xfrm->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		length = crypto_xfrm->aead.key.length;
+
+		switch (crypto_xfrm->aead.algo) {
+		case RTE_CRYPTO_AEAD_AES_GCM:
+			ctl->enc_type = ROC_IE_ON_SA_ENC_AES_GCM;
+			ctl->auth_type = ROC_IE_ON_SA_AUTH_NULL;
+			memcpy(salt, &ipsec_xfrm->salt, 4);
+			key = crypto_xfrm->aead.key.data;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+
+	} else {
+		rc = ipsec_xfrm_verify(ipsec_xfrm, crypto_xfrm);
+		if (rc)
+			return rc;
+
+		switch (cipher_xfrm->cipher.algo) {
+		case RTE_CRYPTO_CIPHER_AES_CBC:
+			ctl->enc_type = ROC_IE_ON_SA_ENC_AES_CBC;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+
+		switch (auth_xfrm->auth.algo) {
+		case RTE_CRYPTO_AUTH_SHA1_HMAC:
+			ctl->auth_type = ROC_IE_ON_SA_AUTH_SHA1;
+			break;
+		default:
+			return -ENOTSUP;
+		}
+		auth_key_len = auth_xfrm->auth.key.length;
+		if (auth_key_len < 20 || auth_key_len > 64)
+			return -ENOTSUP;
+
+		key = cipher_xfrm->cipher.key.data;
+		length = cipher_xfrm->cipher.key.length;
+
+		ipsec_hmac_opad_ipad_gen(auth_xfrm, hmac_opad_ipad);
+	}
+
+	switch (length) {
+	case ROC_CPT_AES128_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_128;
+		break;
+	case ROC_CPT_AES192_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_192;
+		break;
+	case ROC_CPT_AES256_KEY_LEN:
+		ctl->aes_key_len = ROC_IE_SA_AES_KEY_LEN_256;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	memcpy(cipher_key, key, length);
+
+	if (ipsec_xfrm->options.esn)
+		ctl->esn_en = 1;
+
+	ctl->spi = rte_cpu_to_be_32(ipsec_xfrm->spi);
+	return 0;
+}
+
+int
+cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
+			   struct rte_security_ipsec_xform *ipsec_xfrm,
+			   struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
+	int rc;
+
+	rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
+					    sa->hmac_key, ipsec_xfrm,
+					    crypto_xfrm);
+	if (rc)
+		return rc;
+
+	rte_wmb();
+
+	/* Enable SA */
+	ctl->valid = 1;
+	return 0;
+}
+
+int
+cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
+			    struct rte_security_ipsec_xform *ipsec_xfrm,
+			    struct rte_crypto_sym_xform *crypto_xfrm)
+{
+	struct rte_security_ipsec_tunnel_param *tunnel = &ipsec_xfrm->tunnel;
+	struct roc_ie_onf_sa_ctl *ctl = &sa->ctl;
+	int rc;
+
+	/* Fill common params */
+	rc = onf_ipsec_sa_common_param_fill(ctl, sa->nonce, sa->cipher_key,
+					    sa->hmac_key, ipsec_xfrm,
+					    crypto_xfrm);
+	if (rc)
+		return rc;
+
+	if (ipsec_xfrm->mode != RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
+		goto skip_tunnel_info;
+
+	/* Tunnel header info */
+	switch (tunnel->type) {
+	case RTE_SECURITY_IPSEC_TUNNEL_IPV4:
+		memcpy(&sa->ip_src, &tunnel->ipv4.src_ip,
+		       sizeof(struct in_addr));
+		memcpy(&sa->ip_dst, &tunnel->ipv4.dst_ip,
+		       sizeof(struct in_addr));
+		break;
+	case RTE_SECURITY_IPSEC_TUNNEL_IPV6:
+		return -ENOTSUP;
+	default:
+		return -EINVAL;
+	}
+
+skip_tunnel_info:
+	rte_wmb();
+
+	/* Enable SA */
+	ctl->valid = 1;
+	return 0;
+}
+
+bool
+cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa)
+{
+	return !!sa->ctl.valid;
+}
+
+bool
+cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa)
+{
+	return !!sa->ctl.valid;
+}
+
 uint8_t
 cnxk_ipsec_ivlen_get(enum rte_crypto_cipher_algorithm c_algo,
 		     enum rte_crypto_auth_algorithm a_algo,
diff --git a/drivers/common/cnxk/cnxk_security.h b/drivers/common/cnxk/cnxk_security.h
index 602f583..db97887 100644
--- a/drivers/common/cnxk/cnxk_security.h
+++ b/drivers/common/cnxk/cnxk_security.h
@@ -46,4 +46,16 @@ cnxk_ot_ipsec_outb_sa_fill(struct roc_ot_ipsec_outb_sa *sa,
 bool __roc_api cnxk_ot_ipsec_inb_sa_valid(struct roc_ot_ipsec_inb_sa *sa);
 bool __roc_api cnxk_ot_ipsec_outb_sa_valid(struct roc_ot_ipsec_outb_sa *sa);
 
+/* [CN9K, CN10K) */
+int __roc_api
+cnxk_onf_ipsec_inb_sa_fill(struct roc_onf_ipsec_inb_sa *sa,
+			   struct rte_security_ipsec_xform *ipsec_xfrm,
+			   struct rte_crypto_sym_xform *crypto_xfrm);
+int __roc_api
+cnxk_onf_ipsec_outb_sa_fill(struct roc_onf_ipsec_outb_sa *sa,
+			    struct rte_security_ipsec_xform *ipsec_xfrm,
+			    struct rte_crypto_sym_xform *crypto_xfrm);
+bool __roc_api cnxk_onf_ipsec_inb_sa_valid(struct roc_onf_ipsec_inb_sa *sa);
+bool __roc_api cnxk_onf_ipsec_outb_sa_valid(struct roc_onf_ipsec_outb_sa *sa);
+
 #endif /* _CNXK_SECURITY_H__ */
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index fff7902..f7b6ef6 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -14,6 +14,10 @@ INTERNAL {
 	cnxk_logtype_sso;
 	cnxk_logtype_tim;
 	cnxk_logtype_tm;
+	cnxk_onf_ipsec_inb_sa_fill;
+	cnxk_onf_ipsec_outb_sa_fill;
+	cnxk_onf_ipsec_inb_sa_valid;
+	cnxk_onf_ipsec_outb_sa_valid;
 	cnxk_ot_ipsec_inb_sa_fill;
 	cnxk_ot_ipsec_outb_sa_fill;
 	cnxk_ot_ipsec_inb_sa_valid;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 02/28] common/cnxk: support CPT parse header dump
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
@ 2021-10-01 13:39   ` Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 03/28] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
                     ` (26 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:39 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev

Add helper API to dump CPT parse header.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_cpt.h       |  2 ++
 drivers/common/cnxk/roc_cpt_debug.c | 31 +++++++++++++++++++++++++++++++
 drivers/common/cnxk/version.map     |  1 +
 3 files changed, 34 insertions(+)
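
For instance (editor's sketch, not part of the patch), a driver-side debug
hook could dump the header only when the microcode completion code flags an
error; `cpth` is assumed to already point at a valid parse header:

	static void
	cpth_dump_on_err(const struct cpt_parse_hdr_s *cpth)
	{
		/* Non-zero uc_ccode indicates a microcode error */
		if (cpth->w3.uc_ccode != 0)
			roc_cpt_parse_hdr_dump(cpth);
	}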

diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index 9e63073..c80a8e0 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -155,4 +155,6 @@ void __roc_api roc_cpt_iq_enable(struct roc_cpt_lf *lf);
 int __roc_api roc_cpt_lmtline_init(struct roc_cpt *roc_cpt,
 				   struct roc_cpt_lmtline *lmtline, int lf_id);
 
+void __roc_api roc_cpt_parse_hdr_dump(const struct cpt_parse_hdr_s *cpth);
+
 #endif /* _ROC_CPT_H_ */
diff --git a/drivers/common/cnxk/roc_cpt_debug.c b/drivers/common/cnxk/roc_cpt_debug.c
index 9a9dcba..a6c9004 100644
--- a/drivers/common/cnxk/roc_cpt_debug.c
+++ b/drivers/common/cnxk/roc_cpt_debug.c
@@ -5,6 +5,37 @@
 #include "roc_api.h"
 #include "roc_priv.h"
 
+void
+roc_cpt_parse_hdr_dump(const struct cpt_parse_hdr_s *cpth)
+{
+	plt_print("CPT_PARSE \t0x%p:", cpth);
+
+	/* W0 */
+	plt_print("W0: cookie \t0x%x\t\tmatch_id \t0x%04x\t\terr_sum \t%u \t",
+		  cpth->w0.cookie, cpth->w0.match_id, cpth->w0.err_sum);
+	plt_print("W0: reas_sts \t0x%x\t\tet_owr \t%u\t\tpkt_fmt \t%u \t",
+		  cpth->w0.reas_sts, cpth->w0.et_owr, cpth->w0.pkt_fmt);
+	plt_print("W0: pad_len \t%u\t\tnum_frags \t%u\t\tpkt_out \t%u \t",
+		  cpth->w0.pad_len, cpth->w0.num_frags, cpth->w0.pkt_out);
+
+	/* W1 */
+	plt_print("W1: wqe_ptr \t0x%016lx\t", cpth->wqe_ptr);
+
+	/* W2 */
+	plt_print("W2: frag_age \t0x%x\t\torig_pf_func \t0x%04x",
+		  cpth->w2.frag_age, cpth->w2.orig_pf_func);
+	plt_print("W2: il3_off \t0x%x\t\tfi_pad \t0x%x\t\tfi_offset \t0x%x \t",
+		  cpth->w2.il3_off, cpth->w2.fi_pad, cpth->w2.fi_offset);
+
+	/* W3 */
+	plt_print("W3: hw_ccode \t0x%x\t\tuc_ccode \t0x%x\t\tspi \t0x%08x",
+		  cpth->w3.hw_ccode, cpth->w3.uc_ccode, cpth->w3.spi);
+
+	/* W4 */
+	plt_print("W4: esn \t%" PRIx64 " \t OR frag1_wqe_ptr \t0x%" PRIx64,
+		  cpth->esn, cpth->frag1_wqe_ptr);
+}
+
 static int
 cpt_af_reg_read(struct roc_cpt *roc_cpt, uint64_t reg, uint64_t *val)
 {
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index f7b6ef6..008098e 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -65,6 +65,7 @@ INTERNAL {
 	roc_cpt_lf_fini;
 	roc_cpt_lfs_print;
 	roc_cpt_lmtline_init;
+	roc_cpt_parse_hdr_dump;
 	roc_cpt_rxc_time_cfg;
 	roc_error_msg_get;
 	roc_hash_sha1_gen;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 03/28] common/cnxk: allow reuse of SSO API for inline dev
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 02/28] common/cnxk: support CPT parse header dump Nithin Dabilpuram
@ 2021-10-01 13:39   ` Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 04/28] common/cnxk: change NIX debug API and queue API interface Nithin Dabilpuram
                     ` (25 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:39 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Rework the interface of internal SSO functions so that they can also
be used for the NIX inline device's SSO LFs.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_sso.c      | 52 ++++++++++++++++++++++++--------------
 drivers/common/cnxk/roc_sso_priv.h |  9 +++++++
 2 files changed, 42 insertions(+), 19 deletions(-)
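
With the reworked interface (editor's sketch, illustrative only), an
internal non-SSO caller such as the upcoming inline device only needs a
mailbox-capable struct dev to manage SSO LFs:

	/* Allocate one HWGRP LF on behalf of `dev` and release it.
	 * `dev` is any struct dev with an attached mailbox; the wrapper
	 * name is made up.
	 */
	static int
	inl_dev_sso_probe(struct dev *dev)
	{
		void *rsp;
		int rc;

		rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWGRP, 1, &rsp);
		if (rc)
			return rc;
		return sso_lf_free(dev, SSO_LF_TYPE_HWGRP, 1);
	}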

diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 1ccf262..bdf973f 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -6,11 +6,10 @@
 #include "roc_priv.h"
 
 /* Private functions. */
-static int
-sso_lf_alloc(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf,
+int
+sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
 	     void **rsp)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	int rc = -ENOSPC;
 
 	switch (lf_type) {
@@ -41,10 +40,9 @@ sso_lf_alloc(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf,
 	return 0;
 }
 
-static int
-sso_lf_free(struct roc_sso *roc_sso, enum sso_lf_type lf_type, uint16_t nb_lf)
+int
+sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	int rc = -ENOSPC;
 
 	switch (lf_type) {
@@ -152,7 +150,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 	return 0;
 }
 
-static void
+void
 sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
 		    uint16_t hwgrp[], uint16_t n, uint16_t enable)
 {
@@ -172,8 +170,10 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
 		k = k ? k : 4;
 		for (j = 0; j < k; j++) {
 			mask[j] = hwgrp[i + j] | enable << 14;
-			enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
-				 plt_bitmap_clear(bmp, hwgrp[i + j]);
+			if (bmp) {
+				enable ? plt_bitmap_set(bmp, hwgrp[i + j]) :
+					 plt_bitmap_clear(bmp, hwgrp[i + j]);
+			}
 			plt_sso_dbg("HWS %d Linked to HWGRP %d", hws,
 				    hwgrp[i + j]);
 		}
@@ -388,10 +388,8 @@ roc_sso_hwgrp_qos_config(struct roc_sso *roc_sso, struct roc_sso_hwgrp_qos *qos,
 }
 
 int
-roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
-			uint16_t hwgrps)
+sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps)
 {
-	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
 	struct sso_hw_setconfig *req;
 	int rc = -ENOSPC;
 
@@ -406,9 +404,17 @@ roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
 }
 
 int
-roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
+roc_sso_hwgrp_alloc_xaq(struct roc_sso *roc_sso, uint32_t npa_aura_id,
+			uint16_t hwgrps)
 {
 	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+
+	return sso_hwgrp_alloc_xaq(dev, npa_aura_id, hwgrps);
+}
+
+int
+sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps)
+{
 	struct sso_hw_xaq_release *req;
 
 	req = mbox_alloc_msg_sso_hw_release_xaq_aura(dev->mbox);
@@ -420,6 +426,14 @@ roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
 }
 
 int
+roc_sso_hwgrp_release_xaq(struct roc_sso *roc_sso, uint16_t hwgrps)
+{
+	struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev;
+
+	return sso_hwgrp_release_xaq(dev, hwgrps);
+}
+
+int
 roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp,
 			   uint8_t weight, uint8_t affinity, uint8_t priority)
 {
@@ -468,13 +482,13 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 		goto hwgrp_atch_fail;
 	}
 
-	rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWS, nb_hws, NULL);
+	rc = sso_lf_alloc(&sso->dev, SSO_LF_TYPE_HWS, nb_hws, NULL);
 	if (rc < 0) {
 		plt_err("Unable to alloc SSO HWS LFs");
 		goto hws_alloc_fail;
 	}
 
-	rc = sso_lf_alloc(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp,
+	rc = sso_lf_alloc(&sso->dev, SSO_LF_TYPE_HWGRP, nb_hwgrp,
 			  (void **)&rsp_hwgrp);
 	if (rc < 0) {
 		plt_err("Unable to alloc SSO HWGRP Lfs");
@@ -503,9 +517,9 @@ roc_sso_rsrc_init(struct roc_sso *roc_sso, uint8_t nb_hws, uint16_t nb_hwgrp)
 
 	return 0;
 sso_msix_fail:
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, nb_hwgrp);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, nb_hwgrp);
 hwgrp_alloc_fail:
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, nb_hws);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, nb_hws);
 hws_alloc_fail:
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
 hwgrp_atch_fail:
@@ -523,8 +537,8 @@ roc_sso_rsrc_fini(struct roc_sso *roc_sso)
 
 	sso_unregister_irqs_priv(roc_sso, &sso->pci_dev->intr_handle,
 				 roc_sso->nb_hws, roc_sso->nb_hwgrp);
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
-	sso_lf_free(roc_sso, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWS, roc_sso->nb_hws);
+	sso_lf_free(&sso->dev, SSO_LF_TYPE_HWGRP, roc_sso->nb_hwgrp);
 
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWS);
 	sso_rsrc_detach(roc_sso, SSO_LF_TYPE_HWGRP);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 5361d4f..8dffa3f 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -39,6 +39,15 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
 	return (struct sso *)&roc_sso->reserved[0];
 }
 
+/* SSO LF ops */
+int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf,
+		 void **rsp);
+int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
+			 uint16_t hwgrp[], uint16_t n, uint16_t enable);
+int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
+int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
+
 /* SSO IRQ */
 int sso_register_irqs_priv(struct roc_sso *roc_sso,
 			   struct plt_intr_handle *handle, uint16_t nb_hws,
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 04/28] common/cnxk: change NIX debug API and queue API interface
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (2 preceding siblings ...)
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 03/28] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
@ 2021-10-01 13:39   ` Nithin Dabilpuram
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 05/28] common/cnxk: support NIX inline device IRQ Nithin Dabilpuram
                     ` (24 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:39 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Change the NIX debug API and queue API interfaces so that they can be
used during internal NIX inline device initialization.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix.c       |   2 +-
 drivers/common/cnxk/roc_nix_debug.c | 118 +++++++++++++++++++++++++++---------
 drivers/common/cnxk/roc_nix_priv.h  |  16 +++++
 drivers/common/cnxk/roc_nix_queue.c |  89 +++++++++++++++------------
 4 files changed, 159 insertions(+), 66 deletions(-)
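
As an illustration (editor's sketch, not part of the patch), the reworked
queue helpers let an internal caller that has no struct roc_nix, such as
the inline device, operate on an RQ directly:

	/* Enable an RQ owned by `dev`; both parameter types are per this
	 * patch, the wrapper name is made up.
	 */
	static int
	inl_dev_rq_enable(struct dev *dev, struct roc_nix_rq *rq)
	{
		return nix_rq_ena_dis(dev, rq, true);
	}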

diff --git a/drivers/common/cnxk/roc_nix.c b/drivers/common/cnxk/roc_nix.c
index ee9e81d..b7ef843 100644
--- a/drivers/common/cnxk/roc_nix.c
+++ b/drivers/common/cnxk/roc_nix.c
@@ -306,7 +306,7 @@ sdp_lbk_id_update(struct plt_pci_device *pci_dev, struct nix *nix)
 	}
 }
 
-static inline uint64_t
+uint64_t
 nix_get_blkaddr(struct dev *dev)
 {
 	uint64_t reg;
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 6e56513..9539bb9 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -110,17 +110,12 @@ roc_nix_lf_get_reg_count(struct roc_nix *roc_nix)
 }
 
 int
-roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
+nix_lf_gen_reg_dump(uintptr_t nix_lf_base, uint64_t *data)
 {
-	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
-	uintptr_t nix_lf_base = nix->base;
 	bool dump_stdout;
 	uint64_t reg;
 	uint32_t i;
 
-	if (roc_nix == NULL)
-		return NIX_ERR_PARAM;
-
 	dump_stdout = data ? 0 : 1;
 
 	for (i = 0; i < PLT_DIM(nix_lf_reg); i++) {
@@ -131,8 +126,21 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 			*data++ = reg;
 	}
 
+	return i;
+}
+
+int
+nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint8_t lf_tx_stats,
+		     uint8_t lf_rx_stats)
+{
+	uint32_t i, count = 0;
+	bool dump_stdout;
+	uint64_t reg;
+
+	dump_stdout = data ? 0 : 1;
+
 	/* NIX_LF_TX_STATX */
-	for (i = 0; i < nix->lf_tx_stats; i++) {
+	for (i = 0; i < lf_tx_stats; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_TX_STATX(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_TX_STATX", i,
@@ -140,9 +148,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_RX_STATX */
-	for (i = 0; i < nix->lf_rx_stats; i++) {
+	for (i = 0; i < lf_rx_stats; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_RX_STATX(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_RX_STATX", i,
@@ -151,8 +160,21 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 			*data++ = reg;
 	}
 
+	return count + i;
+}
+
+int
+nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints,
+		    uint16_t cints)
+{
+	uint32_t i, count = 0;
+	bool dump_stdout;
+	uint64_t reg;
+
+	dump_stdout = data ? 0 : 1;
+
 	/* NIX_LF_QINTX_CNT*/
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_CNT", i,
@@ -160,9 +182,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_INT */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_INT", i,
@@ -170,9 +193,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_ENA_W1S */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1S",
@@ -180,9 +204,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_QINTX_ENA_W1C */
-	for (i = 0; i < nix->qints; i++) {
+	for (i = 0; i < qints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_QINTX_ENA_W1C",
@@ -190,9 +215,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_CNT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_CNT", i,
@@ -200,9 +226,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_WAIT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_WAIT", i,
@@ -210,9 +237,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_INT */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT", i,
@@ -220,9 +248,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_INT_W1S */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_INT_W1S",
@@ -230,9 +259,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_ENA_W1S */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1S",
@@ -240,9 +270,10 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+	count += i;
 
 	/* NIX_LF_CINTX_ENA_W1C */
-	for (i = 0; i < nix->cints; i++) {
+	for (i = 0; i < cints; i++) {
 		reg = plt_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
 		if (dump_stdout && reg)
 			nix_dump("%32s_%d = 0x%" PRIx64, "NIX_LF_CINTX_ENA_W1C",
@@ -250,12 +281,41 @@ roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
 		if (data)
 			*data++ = reg;
 	}
+
+	return count + i;
+}
+
+int
+roc_nix_lf_reg_dump(struct roc_nix *roc_nix, uint64_t *data)
+{
+	struct nix *nix;
+	bool dump_stdout = data ? 0 : 1;
+	uintptr_t nix_base;
+	uint32_t i;
+
+	if (roc_nix == NULL)
+		return NIX_ERR_PARAM;
+
+	nix = roc_nix_to_nix_priv(roc_nix);
+	nix_base = nix->base;
+	/* General registers */
+	i = nix_lf_gen_reg_dump(nix_base, data);
+
+	/* Rx, Tx stat registers */
+	i += nix_lf_stat_reg_dump(nix_base, dump_stdout ? NULL : &data[i],
+				  nix->lf_tx_stats, nix->lf_rx_stats);
+
+	/* Intr registers */
+	i += nix_lf_int_reg_dump(nix_base, dump_stdout ? NULL : &data[i],
+				 nix->qints, nix->cints);
+
 	return 0;
 }
 
-static int
-nix_q_ctx_get(struct mbox *mbox, uint8_t ctype, uint16_t qid, __io void **ctx_p)
+int
+nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid, __io void **ctx_p)
 {
+	struct mbox *mbox = dev->mbox;
 	int rc;
 
 	if (roc_model_is_cn9k()) {
@@ -485,7 +544,7 @@ nix_cn9k_lf_rq_dump(__io struct nix_rq_ctx_s *ctx)
 	nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
 }
 
-static inline void
+void
 nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx)
 {
 	nix_dump("W0: wqe_aura \t\t\t%d\nW0: len_ol3_dis \t\t\t%d",
@@ -595,12 +654,12 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	int rc = -1, q, rq = nix->nb_rx_queues;
-	struct mbox *mbox = (&nix->dev)->mbox;
 	struct npa_aq_enq_rsp *npa_rsp;
 	struct npa_aq_enq_req *npa_aq;
-	volatile void *ctx;
+	struct dev *dev = &nix->dev;
 	int sq = nix->nb_tx_queues;
 	struct npa_lf *npa_lf;
+	volatile void *ctx;
 	uint32_t sqb_aura;
 
 	npa_lf = idev_npa_obj_get();
@@ -608,7 +667,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 		return NPA_ERR_DEVICE_NOT_BOUNDED;
 
 	for (q = 0; q < rq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_CQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_CQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get cq context");
 			goto fail;
@@ -619,7 +678,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 	}
 
 	for (q = 0; q < rq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_RQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get rq context");
 			goto fail;
@@ -633,7 +692,7 @@ roc_nix_queues_ctx_dump(struct roc_nix *roc_nix)
 	}
 
 	for (q = 0; q < sq; q++) {
-		rc = nix_q_ctx_get(mbox, NIX_AQ_CTYPE_SQ, q, &ctx);
+		rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_SQ, q, &ctx);
 		if (rc) {
 			plt_err("Failed to get sq context");
 			goto fail;
@@ -686,11 +745,13 @@ roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
+	const uint64_t *sgs = (const uint64_t *)(rx + 1);
+	int i;
 
 	nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
 		 cq->tag, cq->q, cq->node, cq->cqe_type);
 
-	nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d", rx->chan,
+	nix_dump("W0: chan \t0x%x\t\tdesc_sizem1 \t%d", rx->chan,
 		 rx->desc_sizem1);
 	nix_dump("W0: imm_copy \t%d\t\texpress \t%d", rx->imm_copy,
 		 rx->express);
@@ -731,6 +792,9 @@ roc_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
 
 	nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
 		 rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
+
+	for (i = 0; i < (rx->desc_sizem1 + 1) << 1; i++)
+		nix_dump("sg[%u] = %p", i, (void *)sgs[i]);
 }
 
 void
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index b573879..b140dad 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -352,6 +352,12 @@ int nix_tm_update_parent_info(struct nix *nix, enum roc_nix_tm_tree tree);
 int nix_tm_sq_sched_conf(struct nix *nix, struct nix_tm_node *node,
 			 bool rr_quantum_only);
 
+int nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
+		    bool cfg, bool ena);
+int nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
+	       bool ena);
+int nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable);
+
 /*
  * TM priv utils.
  */
@@ -397,4 +403,14 @@ void nix_tm_node_free(struct nix_tm_node *node);
 struct nix_tm_shaper_profile *nix_tm_shaper_profile_alloc(void);
 void nix_tm_shaper_profile_free(struct nix_tm_shaper_profile *profile);
 
+uint64_t nix_get_blkaddr(struct dev *dev);
+void nix_lf_rq_dump(__io struct nix_cn10k_rq_ctx_s *ctx);
+int nix_lf_gen_reg_dump(uintptr_t nix_lf_base, uint64_t *data);
+int nix_lf_stat_reg_dump(uintptr_t nix_lf_base, uint64_t *data,
+			 uint8_t lf_tx_stats, uint8_t lf_rx_stats);
+int nix_lf_int_reg_dump(uintptr_t nix_lf_base, uint64_t *data, uint16_t qints,
+			uint16_t cints);
+int nix_q_ctx_get(struct dev *dev, uint8_t ctype, uint16_t qid,
+		  __io void **ctx_p);
+
 #endif /* _ROC_NIX_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index d7c4844..cff0ec3 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -29,46 +29,54 @@ nix_qsize_clampup(uint32_t val)
 }
 
 int
+nix_rq_ena_dis(struct dev *dev, struct roc_nix_rq *rq, bool enable)
+{
+	struct mbox *mbox = dev->mbox;
+
+	/* Pkts will be dropped silently if RQ is disabled */
+	if (roc_model_is_cn9k()) {
+		struct nix_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_aq_enq(mbox);
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		aq->rq.ena = enable;
+		aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	} else {
+		struct nix_cn10k_aq_enq_req *aq;
+
+		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
+		aq->qidx = rq->qid;
+		aq->ctype = NIX_AQ_CTYPE_RQ;
+		aq->op = NIX_AQ_INSTOP_WRITE;
+
+		aq->rq.ena = enable;
+		aq->rq_mask.ena = ~(aq->rq_mask.ena);
+	}
+
+	return mbox_process(mbox);
+}
+
+int
 roc_nix_rq_ena_dis(struct roc_nix_rq *rq, bool enable)
 {
 	struct nix *nix = roc_nix_to_nix_priv(rq->roc_nix);
-	struct mbox *mbox = (&nix->dev)->mbox;
 	int rc;
 
-	/* Pkts will be dropped silently if RQ is disabled */
-	if (roc_model_is_cn9k()) {
-		struct nix_aq_enq_req *aq;
-
-		aq = mbox_alloc_msg_nix_aq_enq(mbox);
-		aq->qidx = rq->qid;
-		aq->ctype = NIX_AQ_CTYPE_RQ;
-		aq->op = NIX_AQ_INSTOP_WRITE;
-
-		aq->rq.ena = enable;
-		aq->rq_mask.ena = ~(aq->rq_mask.ena);
-	} else {
-		struct nix_cn10k_aq_enq_req *aq;
-
-		aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
-		aq->qidx = rq->qid;
-		aq->ctype = NIX_AQ_CTYPE_RQ;
-		aq->op = NIX_AQ_INSTOP_WRITE;
-
-		aq->rq.ena = enable;
-		aq->rq_mask.ena = ~(aq->rq_mask.ena);
-	}
-
-	rc = mbox_process(mbox);
+	rc = nix_rq_ena_dis(&nix->dev, rq, enable);
 
 	if (roc_model_is_cn10k())
 		plt_write64(rq->qid, nix->base + NIX_LF_OP_VWQE_FLUSH);
 	return rc;
 }
 
-static int
-rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+int
+nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
+		bool cfg, bool ena)
 {
-	struct mbox *mbox = (&nix->dev)->mbox;
+	struct mbox *mbox = dev->mbox;
 	struct nix_aq_enq_req *aq;
 
 	aq = mbox_alloc_msg_nix_aq_enq(mbox);
@@ -118,7 +126,7 @@ rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
 	aq->rq.rq_int_ena = 0;
 	/* Many to one reduction */
-	aq->rq.qint_idx = rq->qid % nix->qints;
+	aq->rq.qint_idx = rq->qid % qints;
 	aq->rq.xqe_drop_ena = 1;
 
 	/* If RED enabled, then fill enable for all cases */
@@ -179,11 +187,12 @@ rq_cn9k_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	return 0;
 }
 
-static int
-rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
+int
+nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
+	   bool ena)
 {
-	struct mbox *mbox = (&nix->dev)->mbox;
 	struct nix_cn10k_aq_enq_req *aq;
+	struct mbox *mbox = dev->mbox;
 
 	aq = mbox_alloc_msg_nix_cn10k_aq_enq(mbox);
 	aq->qidx = rq->qid;
@@ -220,8 +229,10 @@ rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 		aq->rq.cq = rq->qid;
 	}
 
-	if (rq->ipsech_ena)
+	if (rq->ipsech_ena) {
 		aq->rq.ipsech_ena = 1;
+		aq->rq.ipsecd_drop_en = 1;
+	}
 
 	aq->rq.lpb_aura = roc_npa_aura_handle_to_aura(rq->aura_handle);
 
@@ -260,7 +271,7 @@ rq_cfg(struct nix *nix, struct roc_nix_rq *rq, bool cfg, bool ena)
 	aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
 	aq->rq.rq_int_ena = 0;
 	/* Many to one reduction */
-	aq->rq.qint_idx = rq->qid % nix->qints;
+	aq->rq.qint_idx = rq->qid % qints;
 	aq->rq.xqe_drop_ena = 1;
 
 	/* If RED enabled, then fill enable for all cases */
@@ -359,6 +370,7 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
 	bool is_cn9k = roc_model_is_cn9k();
+	struct dev *dev = &nix->dev;
 	int rc;
 
 	if (roc_nix == NULL || rq == NULL)
@@ -370,9 +382,9 @@ roc_nix_rq_init(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	rq->roc_nix = roc_nix;
 
 	if (is_cn9k)
-		rc = rq_cn9k_cfg(nix, rq, false, ena);
+		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, false, ena);
 	else
-		rc = rq_cfg(nix, rq, false, ena);
+		rc = nix_rq_cfg(dev, rq, nix->qints, false, ena);
 
 	if (rc)
 		return rc;
@@ -386,6 +398,7 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct mbox *mbox = (&nix->dev)->mbox;
 	bool is_cn9k = roc_model_is_cn9k();
+	struct dev *dev = &nix->dev;
 	int rc;
 
 	if (roc_nix == NULL || rq == NULL)
@@ -397,9 +410,9 @@ roc_nix_rq_modify(struct roc_nix *roc_nix, struct roc_nix_rq *rq, bool ena)
 	rq->roc_nix = roc_nix;
 
 	if (is_cn9k)
-		rc = rq_cn9k_cfg(nix, rq, true, ena);
+		rc = nix_rq_cn9k_cfg(dev, rq, nix->qints, true, ena);
 	else
-		rc = rq_cfg(nix, rq, true, ena);
+		rc = nix_rq_cfg(dev, rq, nix->qints, true, ena);
 
 	if (rc)
 		return rc;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 05/28] common/cnxk: support NIX inline device IRQ
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (3 preceding siblings ...)
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 04/28] common/cnxk: change NIX debug API and queue API interface Nithin Dabilpuram
@ 2021-10-01 13:39   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 06/28] common/cnxk: support NIX inline device init and fini Nithin Dabilpuram
                     ` (23 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:39 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Add API to set up NIX inline device IRQs. This registers
IRQs for errors in case of NIX, CPT LF and SSOW, and for
the get-work interrupt in case of SSO.
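
For readers new to the hardware convention used throughout the handlers
below: interrupt enables live in paired W1C/W1S register aliases, so
writing 1s to *_ENA_W1C masks those causes and writing 1s to *_ENA_W1S
unmasks them, which lets setup mask everything, attach the handler, and
only then unmask. A minimal self-contained C sketch of that discipline
(fake_lf and its helpers are illustrative stand-ins, not ROC APIs):

#include <stdint.h>
#include <stdio.h>

struct fake_lf {
	uint64_t int_pend; /* pending interrupt causes */
	uint64_t int_ena;  /* current enable mask */
};

/* W1C alias: writing 1s clears (masks) those enable bits */
static void ena_w1c(struct fake_lf *lf, uint64_t bits) { lf->int_ena &= ~bits; }
/* W1S alias: writing 1s sets (unmasks) those enable bits */
static void ena_w1s(struct fake_lf *lf, uint64_t bits) { lf->int_ena |= bits; }

static void handler(struct fake_lf *lf)
{
	uint64_t intr = lf->int_pend & lf->int_ena;

	if (!intr)
		return;
	printf("irq cause mask: 0x%llx\n", (unsigned long long)intr);
	lf->int_pend &= ~intr; /* ack by clearing the pending bits */
}

int main(void)
{
	struct fake_lf lf = {0, 0};

	ena_w1c(&lf, ~0ull);	 /* 1. mask all before registering */
	/* 2. handler registration with the IRQ framework goes here */
	ena_w1s(&lf, ~0ull);	 /* 3. unmask once the handler is live */

	lf.int_pend = 1ull << 1; /* simulate a "work executable" cause */
	handler(&lf);
	return 0;
}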

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/meson.build           |   1 +
 drivers/common/cnxk/roc_api.h             |   3 +
 drivers/common/cnxk/roc_irq.c             |   7 +-
 drivers/common/cnxk/roc_nix_inl.h         |  10 +
 drivers/common/cnxk/roc_nix_inl_dev_irq.c | 359 ++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h    |  57 +++++
 drivers/common/cnxk/roc_platform.h        |   9 +-
 drivers/common/cnxk/roc_priv.h            |   3 +
 8 files changed, 442 insertions(+), 7 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_nix_inl.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
 create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 258429d..3e836ce 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
         'roc_nix_mcast.c',
         'roc_nix_npc.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 7dec845..c1af95e 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -129,4 +129,7 @@
 /* HASH computation */
 #include "roc_hash.h"
 
+/* NIX Inline dev */
+#include "roc_nix_inl.h"
+
 #endif /* _ROC_API_H_ */
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index 4c2b4c3..28fe691 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -138,9 +138,10 @@ dev_irq_register(struct plt_intr_handle *intr_handle, plt_intr_callback_fn cb,
 		irq_init(intr_handle);
 	}
 
-	if (vec > intr_handle->max_intr) {
-		plt_err("Vector=%d greater than max_intr=%d", vec,
-			intr_handle->max_intr);
+	if (vec > intr_handle->max_intr || vec >= PLT_DIM(intr_handle->efds)) {
+		plt_err("Vector=%d greater than max_intr=%d or "
+			"max_efd=%" PRIu64,
+			vec, intr_handle->max_intr, PLT_DIM(intr_handle->efds));
 		return -EINVAL;
 	}
 
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
new file mode 100644
index 0000000..1ec3dda
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_NIX_INL_H_
+#define _ROC_NIX_INL_H_
+
+/* Inline device SSO Work callback */
+typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
+
+#endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_inl_dev_irq.c b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
new file mode 100644
index 0000000..25ed42f
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_dev_irq.c
@@ -0,0 +1,359 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+static void
+nix_inl_sso_work_cb(struct nix_inl_dev *inl_dev)
+{
+	uintptr_t getwrk_op = inl_dev->ssow_base + SSOW_LF_GWS_OP_GET_WORK0;
+	uintptr_t tag_wqe_op = inl_dev->ssow_base + SSOW_LF_GWS_WQE0;
+	uint32_t wdata = BIT(16) | 1;
+	union {
+		__uint128_t get_work;
+		uint64_t u64[2];
+	} gw;
+	uint64_t work;
+
+again:
+	/* Try to do get work */
+	gw.get_work = wdata;
+	plt_write64(gw.u64[0], getwrk_op);
+	do {
+		roc_load_pair(gw.u64[0], gw.u64[1], tag_wqe_op);
+	} while (gw.u64[0] & BIT_ULL(63));
+
+	work = gw.u64[1];
+	/* Do we have any work? */
+	if (work) {
+		if (inl_dev->work_cb)
+			inl_dev->work_cb(gw.u64, inl_dev->cb_args);
+		else
+			plt_warn("Undelivered inl dev work gw0: %p gw1: %p",
+				 (void *)gw.u64[0], (void *)gw.u64[1]);
+		goto again;
+	}
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+}
+
+static int
+nix_inl_nix_reg_dump(struct nix_inl_dev *inl_dev)
+{
+	uintptr_t nix_base = inl_dev->nix_base;
+
+	/* General registers */
+	nix_lf_gen_reg_dump(nix_base, NULL);
+
+	/* Rx, Tx stat registers */
+	nix_lf_stat_reg_dump(nix_base, NULL, inl_dev->lf_tx_stats,
+			     inl_dev->lf_rx_stats);
+
+	/* Intr registers */
+	nix_lf_int_reg_dump(nix_base, NULL, inl_dev->qints, inl_dev->cints);
+
+	return 0;
+}
+
+static void
+nix_inl_sso_hwgrp_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint64_t intr;
+
+	intr = plt_read64(sso_base + SSO_LF_GGRP_INT);
+	if (intr == 0)
+		return;
+
+	/* Check for work executable interrupt */
+	if (intr & BIT(1))
+		nix_inl_sso_work_cb(inl_dev);
+
+	if (!(intr & BIT(1)))
+		plt_err("GGRP 0 GGRP_INT=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, sso_base + SSO_LF_GGRP_INT);
+}
+
+static void
+nix_inl_sso_hws_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uint64_t intr;
+
+	intr = plt_read64(ssow_base + SSOW_LF_GWS_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("GWS 0 GWS_INT=0x%" PRIx64 "", intr);
+
+	/* Clear interrupt */
+	plt_write64(intr, ssow_base + SSOW_LF_GWS_INT);
+}
+
+int
+nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint16_t sso_msixoff, ssow_msixoff;
+	int rc;
+
+	ssow_msixoff = inl_dev->ssow_msixoff;
+	sso_msixoff = inl_dev->sso_msixoff;
+	if (sso_msixoff == MSIX_VECTOR_INVALID ||
+	    ssow_msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid SSO/SSOW MSIX offsets (0x%x, 0x%x)",
+			sso_msixoff, ssow_msixoff);
+		return -EINVAL;
+	}
+
+	/*
+	 * Setup SSOW interrupt
+	 */
+
+	/* Clear SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1C);
+	/* Register interrupt with vfio */
+	rc = dev_irq_register(handle, nix_inl_sso_hws_irq, inl_dev,
+			      ssow_msixoff + SSOW_LF_INT_VEC_IOP);
+	/* Set SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1S);
+
+	/*
+	 * Setup SSO/HWGRP interrupt
+	 */
+
+	/* Clear SSO interrupt enable */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1C);
+	/* Register IRQ */
+	rc |= dev_irq_register(handle, nix_inl_sso_hwgrp_irq, (void *)inl_dev,
+			       sso_msixoff + SSO_LF_INT_VEC_GRP);
+	/* Enable hw interrupt */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1S);
+
+	/* Setup threshold for work exec interrupt to 1 wqe in IAQ */
+	plt_write64(0x1ull, sso_base + SSO_LF_GGRP_INT_THR);
+
+	return rc;
+}
+
+void
+nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t ssow_base = inl_dev->ssow_base;
+	uintptr_t sso_base = inl_dev->sso_base;
+	uint16_t sso_msixoff, ssow_msixoff;
+
+	ssow_msixoff = inl_dev->ssow_msixoff;
+	sso_msixoff = inl_dev->sso_msixoff;
+
+	/* Clear SSOW interrupt enable */
+	plt_write64(~0ull, ssow_base + SSOW_LF_GWS_INT_ENA_W1C);
+	/* Clear SSO/HWGRP interrupt enable */
+	plt_write64(~0ull, sso_base + SSO_LF_GGRP_INT_ENA_W1C);
+	/* Clear SSO threshold */
+	plt_write64(0, sso_base + SSO_LF_GGRP_INT_THR);
+
+	/* Unregister IRQ */
+	dev_irq_unregister(handle, nix_inl_sso_hws_irq, (void *)inl_dev,
+			   ssow_msixoff + SSOW_LF_INT_VEC_IOP);
+	dev_irq_unregister(handle, nix_inl_sso_hwgrp_irq, (void *)inl_dev,
+			   sso_msixoff + SSO_LF_INT_VEC_GRP);
+}
+
+static void
+nix_inl_nix_q_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t reg, intr;
+	uint8_t irq;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_QINTX_INT(0));
+	if (intr == 0)
+		return;
+
+	plt_err("Queue_intr=0x%" PRIx64 " qintx 0 pf=%d, vf=%d", intr, dev->pf,
+		dev->vf);
+
+	/* Get and clear RQ0 interrupt */
+	reg = roc_atomic64_add_nosync(0,
+				      (int64_t *)(nix_base + NIX_LF_RQ_OP_INT));
+	if (reg & BIT_ULL(42) /* OP_ERR */) {
+		plt_err("Failed to get rq_int");
+		return;
+	}
+	irq = reg & 0xff;
+	plt_write64(0 | irq, nix_base + NIX_LF_RQ_OP_INT);
+
+	if (irq & BIT_ULL(NIX_RQINT_DROP))
+		plt_err("RQ=0 NIX_RQINT_DROP");
+
+	if (irq & BIT_ULL(NIX_RQINT_RED))
+		plt_err("RQ=0 NIX_RQINT_RED");
+
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_QINTX_INT(0));
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+static void
+nix_inl_nix_ras_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t intr;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_RAS);
+	if (intr == 0)
+		return;
+
+	plt_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_RAS);
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+static void
+nix_inl_nix_err_irq(void *param)
+{
+	struct nix_inl_dev *inl_dev = (struct nix_inl_dev *)param;
+	uintptr_t nix_base = inl_dev->nix_base;
+	struct dev *dev = &inl_dev->dev;
+	volatile void *ctx;
+	uint64_t intr;
+	int rc;
+
+	intr = plt_read64(nix_base + NIX_LF_ERR_INT);
+	if (intr == 0)
+		return;
+
+	plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+	/* Clear interrupt */
+	plt_write64(intr, nix_base + NIX_LF_ERR_INT);
+
+	/* Dump registers to std out */
+	nix_inl_nix_reg_dump(inl_dev);
+
+	/* Dump RQ 0 */
+	rc = nix_q_ctx_get(dev, NIX_AQ_CTYPE_RQ, 0, &ctx);
+	if (rc) {
+		plt_err("Failed to get rq context");
+		return;
+	}
+	nix_lf_rq_dump(ctx);
+}
+
+int
+nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t nix_base = inl_dev->nix_base;
+	uint16_t msixoff;
+	int rc;
+
+	msixoff = inl_dev->nix_msixoff;
+	if (msixoff == MSIX_VECTOR_INVALID) {
+		plt_err("Invalid NIXLF MSIX vector offset: 0x%x", msixoff);
+		return -EINVAL;
+	}
+
+	/* Disable err interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_ERR_INT_ENA_W1C);
+	/* Disable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1C);
+
+	/* Register err irq */
+	rc = dev_irq_register(handle, nix_inl_nix_err_irq, inl_dev,
+			      msixoff + NIX_LF_INT_VEC_ERR_INT);
+	rc |= dev_irq_register(handle, nix_inl_nix_ras_irq, inl_dev,
+			       msixoff + NIX_LF_INT_VEC_POISON);
+
+	/* Enable all nix lf error irqs except RQ_DISABLED and CQ_DISABLED */
+	plt_write64(~(BIT_ULL(11) | BIT_ULL(24)),
+		    nix_base + NIX_LF_ERR_INT_ENA_W1S);
+	/* Enable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1S);
+
+	/* Setup queue irq for RQ 0 */
+
+	/* Clear QINT CNT, interrupt */
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+
+	/* Register queue irq vector */
+	rc |= dev_irq_register(handle, nix_inl_nix_q_irq, inl_dev,
+			       msixoff + NIX_LF_INT_VEC_QINT_START);
+
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
+	/* Enable QINT interrupt */
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1S(0));
+
+	return rc;
+}
+
+void
+nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev)
+{
+	struct plt_intr_handle *handle = &inl_dev->pci_dev->intr_handle;
+	uintptr_t nix_base = inl_dev->nix_base;
+	uint16_t msixoff;
+
+	msixoff = inl_dev->nix_msixoff;
+	/* Disable err interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_ERR_INT_ENA_W1C);
+	/* Disable RAS interrupts */
+	plt_write64(~0ull, nix_base + NIX_LF_RAS_ENA_W1C);
+
+	dev_irq_unregister(handle, nix_inl_nix_err_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_ERR_INT);
+	dev_irq_unregister(handle, nix_inl_nix_ras_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_POISON);
+
+	/* Clear QINT CNT */
+	plt_write64(0, nix_base + NIX_LF_QINTX_CNT(0));
+	plt_write64(0, nix_base + NIX_LF_QINTX_INT(0));
+
+	/* Disable QINT interrupt */
+	plt_write64(~0ull, nix_base + NIX_LF_QINTX_ENA_W1C(0));
+
+	/* Unregister queue irq vector */
+	dev_irq_unregister(handle, nix_inl_nix_q_irq, inl_dev,
+			   msixoff + NIX_LF_INT_VEC_QINT_START);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
new file mode 100644
index 0000000..f424009
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_NIX_INL_PRIV_H_
+#define _ROC_NIX_INL_PRIV_H_
+
+struct nix_inl_dev {
+	/* Base device object */
+	struct dev dev;
+
+	/* PCI device */
+	struct plt_pci_device *pci_dev;
+
+	/* LF specific BAR2 regions */
+	uintptr_t nix_base;
+	uintptr_t ssow_base;
+	uintptr_t sso_base;
+
+	/* MSIX vector offsets */
+	uint16_t nix_msixoff;
+	uint16_t ssow_msixoff;
+	uint16_t sso_msixoff;
+
+	/* SSO data */
+	uint32_t xaq_buf_size;
+	uint32_t xae_waes;
+	uint32_t iue;
+	uint64_t xaq_aura;
+	void *xaq_mem;
+	roc_nix_inl_sso_work_cb_t work_cb;
+	void *cb_args;
+
+	/* NIX data */
+	uint8_t lf_tx_stats;
+	uint8_t lf_rx_stats;
+	uint16_t cints;
+	uint16_t qints;
+	struct roc_nix_rq rq;
+	uint16_t rq_refs;
+	bool is_nix1;
+
+	/* NIX/CPT data */
+	void *inb_sa_base;
+	uint16_t inb_sa_sz;
+
+	/* Device arguments */
+	uint8_t selftest;
+	uint16_t ipsec_in_max_spi;
+};
+
+int nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev);
+void nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev);
+
+int nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev);
+void nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev);
+
+#endif /* _ROC_NIX_INL_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 285b24b..177db3d 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -113,10 +113,11 @@
 #define plt_write64(val, addr)                                                 \
 	rte_write64_relaxed((val), (volatile void *)(addr))
 
-#define plt_wmb() rte_wmb()
-#define plt_rmb() rte_rmb()
-#define plt_io_wmb() rte_io_wmb()
-#define plt_io_rmb() rte_io_rmb()
+#define plt_wmb()		rte_wmb()
+#define plt_rmb()		rte_rmb()
+#define plt_io_wmb()		rte_io_wmb()
+#define plt_io_rmb()		rte_io_rmb()
+#define plt_atomic_thread_fence rte_atomic_thread_fence
 
 #define plt_mmap       mmap
 #define PLT_PROT_READ  PROT_READ
diff --git a/drivers/common/cnxk/roc_priv.h b/drivers/common/cnxk/roc_priv.h
index 7494b8d..f72bbd5 100644
--- a/drivers/common/cnxk/roc_priv.h
+++ b/drivers/common/cnxk/roc_priv.h
@@ -38,4 +38,7 @@
 /* CPT */
 #include "roc_cpt_priv.h"
 
+/* NIX Inline dev */
+#include "roc_nix_inl_priv.h"
+
 #endif /* _ROC_PRIV_H_ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 06/28] common/cnxk: support NIX inline device init and fini
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (4 preceding siblings ...)
  2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 05/28] common/cnxk: support NIX inline device IRQ Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 07/28] common/cnxk: support NIX inline inbound and outbound setup Nithin Dabilpuram
                     ` (22 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev

Add support to init and fini the inline device with NIX LF,
SSO LF and SSOW LF for inline inbound IPsec in CN10K.
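
The init sequence attaches LFs and sets up NIX, SSO and CPT in order,
unwinding with chained goto labels so a failure releases exactly the
stages that completed. A compilable toy sketch of that unwind pattern
(stage() and undo() are placeholders, not ROC functions):

#include <stdio.h>

/* Pretend setup step; returns -1 when asked to simulate a failure */
static int stage(const char *name, int fail)
{
	printf("setup %s\n", name);
	return fail ? -1 : 0;
}

static void undo(const char *name)
{
	printf("release %s\n", name);
}

static int dev_init(void)
{
	int rc;

	rc = stage("lf_attach", 0);
	if (rc)
		goto error;
	rc = stage("nix", 0);
	if (rc)
		goto lf_detach;
	rc = stage("sso", 1); /* simulate a failure at the SSO stage */
	if (rc)
		goto nix_release;
	return 0;

nix_release:
	undo("nix");	/* runs first: release in reverse setup order */
lf_detach:
	undo("lf_attach");
error:
	return rc;
}

int main(void)
{
	return dev_init() ? 1 : 0;
}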

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/meson.build        |   1 +
 drivers/common/cnxk/roc_api.h          |   2 +
 drivers/common/cnxk/roc_cpt.c          |   7 +-
 drivers/common/cnxk/roc_idev.c         |   2 +
 drivers/common/cnxk/roc_idev_priv.h    |   3 +
 drivers/common/cnxk/roc_nix_debug.c    |  35 ++
 drivers/common/cnxk/roc_nix_inl.h      |  56 +++
 drivers/common/cnxk/roc_nix_inl_dev.c  | 636 +++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl_priv.h |   8 +
 drivers/common/cnxk/roc_platform.h     |   2 +
 drivers/common/cnxk/version.map        |   3 +
 11 files changed, 752 insertions(+), 3 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c

diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 3e836ce..43af6a0 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl_dev.c',
         'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
         'roc_nix_mcast.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index c1af95e..53f4e4b 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -53,6 +53,8 @@
 #define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
 #define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
 #define PCI_DEVID_CNXK_BPHY	      0xA089
+#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
+#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
 
 #define PCI_DEVID_CN9K_CGX  0xA059
 #define PCI_DEVID_CN10K_RPM 0xA060
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 33524ef..48a378b 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -381,11 +381,12 @@ cpt_lfs_alloc(struct dev *dev, uint8_t eng_grpmsk, uint8_t blkaddr,
 	if (blkaddr != RVU_BLOCK_ADDR_CPT0 && blkaddr != RVU_BLOCK_ADDR_CPT1)
 		return -EINVAL;
 
-	PLT_SET_USED(inl_dev_sso);
-
 	req = mbox_alloc_msg_cpt_lf_alloc(mbox);
 	req->nix_pf_func = 0;
-	req->sso_pf_func = idev_sso_pffunc_get();
+	if (inl_dev_sso && nix_inl_dev_pffunc_get())
+		req->sso_pf_func = nix_inl_dev_pffunc_get();
+	else
+		req->sso_pf_func = idev_sso_pffunc_get();
 	req->eng_grpmsk = eng_grpmsk;
 	req->blkaddr = blkaddr;
 
diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c
index 1494187..648f37b 100644
--- a/drivers/common/cnxk/roc_idev.c
+++ b/drivers/common/cnxk/roc_idev.c
@@ -38,6 +38,8 @@ idev_set_defaults(struct idev_cfg *idev)
 	idev->num_lmtlines = 0;
 	idev->bphy = NULL;
 	idev->cpt = NULL;
+	idev->nix_inl_dev = NULL;
+	plt_spinlock_init(&idev->nix_inl_dev_lock);
 	__atomic_store_n(&idev->npa_refcnt, 0, __ATOMIC_RELEASE);
 }
 
diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h
index 84e6f1e..2c8309b 100644
--- a/drivers/common/cnxk/roc_idev_priv.h
+++ b/drivers/common/cnxk/roc_idev_priv.h
@@ -9,6 +9,7 @@
 struct npa_lf;
 struct roc_bphy;
 struct roc_cpt;
+struct nix_inl_dev;
 struct idev_cfg {
 	uint16_t sso_pf_func;
 	uint16_t npa_pf_func;
@@ -20,6 +21,8 @@ struct idev_cfg {
 	uint64_t lmt_base_addr;
 	struct roc_bphy *bphy;
 	struct roc_cpt *cpt;
+	struct nix_inl_dev *nix_inl_dev;
+	plt_spinlock_t nix_inl_dev_lock;
 };
 
 /* Generic */
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 9539bb9..582f5a3 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -1213,3 +1213,38 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  \trss_alg_idx = %d", nix->rss_alg_idx);
 	nix_dump("  \ttx_pause = %d", nix->tx_pause);
 }
+
+void
+roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct nix_inl_dev *inl_dev =
+		(struct nix_inl_dev *)&roc_inl_dev->reserved;
+	struct dev *dev = &inl_dev->dev;
+
+	nix_dump("nix_inl_dev@%p", inl_dev);
+	nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
+	nix_dump("  vf = %d", dev_get_vf(dev->pf_func));
+	nix_dump("  bar2 = 0x%" PRIx64, dev->bar2);
+	nix_dump("  bar4 = 0x%" PRIx64, dev->bar4);
+
+	nix_dump("  \tpci_dev = %p", inl_dev->pci_dev);
+	nix_dump("  \tnix_base = 0x%" PRIxPTR "", inl_dev->nix_base);
+	nix_dump("  \tsso_base = 0x%" PRIxPTR "", inl_dev->sso_base);
+	nix_dump("  \tssow_base = 0x%" PRIxPTR "", inl_dev->ssow_base);
+	nix_dump("  \tnix_msixoff = %d", inl_dev->nix_msixoff);
+	nix_dump("  \tsso_msixoff = %d", inl_dev->sso_msixoff);
+	nix_dump("  \tssow_msixoff = %d", inl_dev->ssow_msixoff);
+	nix_dump("  \tnix_cints = %d", inl_dev->cints);
+	nix_dump("  \tnix_qints = %d", inl_dev->qints);
+	nix_dump("  \trq_refs = %d", inl_dev->rq_refs);
+	nix_dump("  \tinb_sa_base = 0x%p", inl_dev->inb_sa_base);
+	nix_dump("  \tinb_sa_sz = %d", inl_dev->inb_sa_sz);
+	nix_dump("  \txaq_buf_size = %u", inl_dev->xaq_buf_size);
+	nix_dump("  \txae_waes = %u", inl_dev->xae_waes);
+	nix_dump("  \tiue = %u", inl_dev->iue);
+	nix_dump("  \txaq_aura = 0x%" PRIx64, inl_dev->xaq_aura);
+	nix_dump("  \txaq_mem = 0x%p", inl_dev->xaq_mem);
+
+	nix_dump("  \tinl_dev_rq:");
+	roc_nix_rq_dump(&inl_dev->rq);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 1ec3dda..1b3aab0 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -4,7 +4,63 @@
 #ifndef _ROC_NIX_INL_H_
 #define _ROC_NIX_INL_H_
 
+/* ONF INB HW area */
+#define ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ                                        \
+	PLT_ALIGN(sizeof(struct roc_onf_ipsec_inb_sa), ROC_ALIGN)
+/* ONF INB SW reserved area */
+#define ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD 384
+#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ                                        \
+	(ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD)
+#define ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2 9
+
+/* ONF OUTB HW area */
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ                                       \
+	PLT_ALIGN(sizeof(struct roc_onf_ipsec_outb_sa), ROC_ALIGN)
+/* ONF OUTB SW reserved area */
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD 128
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ                                       \
+	(ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD)
+#define ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2 8
+
+/* OT INB HW area */
+#define ROC_NIX_INL_OT_IPSEC_INB_HW_SZ                                         \
+	PLT_ALIGN(sizeof(struct roc_ot_ipsec_inb_sa), ROC_ALIGN)
+/* OT INB SW reserved area */
+#define ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD 128
+#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ                                         \
+	(ROC_NIX_INL_OT_IPSEC_INB_HW_SZ + ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD)
+#define ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2 10
+
+/* OT OUTB HW area */
+#define ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ                                        \
+	PLT_ALIGN(sizeof(struct roc_ot_ipsec_outb_sa), ROC_ALIGN)
+/* OT OUTB SW reserved area */
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD 128
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ                                        \
+	(ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ + ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD)
+#define ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2 9
+
+/* Alignment of SA Base */
+#define ROC_NIX_INL_SA_BASE_ALIGN BIT_ULL(16)
+
 /* Inline device SSO Work callback */
 typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
 
+struct roc_nix_inl_dev {
+	/* Input parameters */
+	struct plt_pci_device *pci_dev;
+	uint16_t ipsec_in_max_spi;
+	bool selftest;
+	bool attach_cptlf;
+	/* End of input parameters */
+
+#define ROC_NIX_INL_MEM_SZ (1280)
+	uint8_t reserved[ROC_NIX_INL_MEM_SZ] __plt_cache_aligned;
+} __plt_cache_aligned;
+
+/* NIX Inline Device API */
+int __roc_api roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev);
+int __roc_api roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev);
+void __roc_api roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev);
+
 #endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
new file mode 100644
index 0000000..0789f99
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -0,0 +1,636 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+#define XAQ_CACHE_CNT 0x7
+
+/* Default Rx Config for Inline NIX LF */
+#define NIX_INL_LF_RX_CFG                                                      \
+	(ROC_NIX_LF_RX_CFG_DROP_RE | ROC_NIX_LF_RX_CFG_L2_LEN_ERR |            \
+	 ROC_NIX_LF_RX_CFG_IP6_UDP_OPT | ROC_NIX_LF_RX_CFG_DIS_APAD |          \
+	 ROC_NIX_LF_RX_CFG_CSUM_IL4 | ROC_NIX_LF_RX_CFG_CSUM_OL4 |             \
+	 ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |               \
+	 ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3)
+
+uint16_t
+nix_inl_dev_pffunc_get(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev != NULL) {
+		inl_dev = idev->nix_inl_dev;
+		if (inl_dev)
+			return inl_dev->dev.pf_func;
+	}
+	return 0;
+}
+
+static void
+nix_inl_selftest_work_cb(uint64_t *gw, void *args)
+{
+	uintptr_t work = gw[1];
+
+	*((uintptr_t *)args + (gw[0] & 0x1)) = work;
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+}
+
+static int
+nix_inl_selftest(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	roc_nix_inl_sso_work_cb_t save_cb;
+	static uintptr_t work_arr[2];
+	struct nix_inl_dev *inl_dev;
+	void *save_cb_args;
+	uint64_t add_work0;
+	int rc = 0;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inl_dev == NULL)
+		return -ENOTSUP;
+
+	plt_info("Performing nix inl self test");
+
+	/* Save and update cb to test cb */
+	save_cb = inl_dev->work_cb;
+	save_cb_args = inl_dev->cb_args;
+	inl_dev->work_cb = nix_inl_selftest_work_cb;
+	inl_dev->cb_args = work_arr;
+
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+#define WORK_MAGIC1 0x335577ff0
+#define WORK_MAGIC2 0xdeadbeef0
+
+	/* Add work */
+	add_work0 = ((uint64_t)(SSO_TT_ORDERED) << 32) | 0x0;
+	roc_store_pair(add_work0, WORK_MAGIC1, inl_dev->sso_base);
+	add_work0 = ((uint64_t)(SSO_TT_ORDERED) << 32) | 0x1;
+	roc_store_pair(add_work0, WORK_MAGIC2, inl_dev->sso_base);
+
+	plt_delay_ms(10000);
+
+	/* Check if we got expected work */
+	if (work_arr[0] != WORK_MAGIC1 || work_arr[1] != WORK_MAGIC2) {
+		plt_err("Failed to get expected work, [0]=%p [1]=%p",
+			(void *)work_arr[0], (void *)work_arr[1]);
+		rc = -EFAULT;
+		goto exit;
+	}
+
+	plt_info("Work, [0]=%p [1]=%p", (void *)work_arr[0],
+		 (void *)work_arr[1]);
+
+exit:
+	/* Restore state */
+	inl_dev->work_cb = save_cb;
+	inl_dev->cb_args = save_cb_args;
+	return rc;
+}
+
+static int
+nix_inl_nix_ipsec_cfg(struct nix_inl_dev *inl_dev, bool ena)
+{
+	struct nix_inline_ipsec_lf_cfg *lf_cfg;
+	struct mbox *mbox = (&inl_dev->dev)->mbox;
+	uint32_t sa_w;
+
+	lf_cfg = mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
+	if (lf_cfg == NULL)
+		return -ENOSPC;
+
+	if (ena) {
+		sa_w = plt_align32pow2(inl_dev->ipsec_in_max_spi + 1);
+		sa_w = plt_log2_u32(sa_w);
+
+		lf_cfg->enable = 1;
+		lf_cfg->sa_base_addr = (uintptr_t)inl_dev->inb_sa_base;
+		lf_cfg->ipsec_cfg1.sa_idx_w = sa_w;
+		/* CN9K SA size is different */
+		if (roc_model_is_cn9k())
+			lf_cfg->ipsec_cfg0.lenm1_max = NIX_CN9K_MAX_HW_FRS - 1;
+		else
+			lf_cfg->ipsec_cfg0.lenm1_max = NIX_RPM_MAX_HW_FRS - 1;
+		lf_cfg->ipsec_cfg1.sa_idx_max = inl_dev->ipsec_in_max_spi;
+		lf_cfg->ipsec_cfg0.sa_pow2_size =
+			plt_log2_u32(inl_dev->inb_sa_sz);
+
+		lf_cfg->ipsec_cfg0.tag_const = 0;
+		lf_cfg->ipsec_cfg0.tt = SSO_TT_ORDERED;
+	} else {
+		lf_cfg->enable = 0;
+	}
+
+	return mbox_process(mbox);
+}
+
+static int
+nix_inl_cpt_setup(struct nix_inl_dev *inl_dev)
+{
+	struct roc_cpt_lf *lf = &inl_dev->cpt_lf;
+	struct dev *dev = &inl_dev->dev;
+	uint8_t eng_grpmask;
+	int rc;
+
+	if (!inl_dev->attach_cptlf)
+		return 0;
+
+	/* Alloc CPT LF */
+	eng_grpmask = (1ULL << ROC_CPT_DFLT_ENG_GRP_SE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
+	rc = cpt_lfs_alloc(dev, eng_grpmask, RVU_BLOCK_ADDR_CPT0, false);
+	if (rc) {
+		plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
+		return rc;
+	}
+
+	/* Setup CPT LF for submitting control opcode */
+	lf = &inl_dev->cpt_lf;
+	lf->lf_id = 0;
+	lf->nb_desc = 0; /* Set to default */
+	lf->dev = &inl_dev->dev;
+	lf->msixoff = inl_dev->cpt_msixoff;
+	lf->pci_dev = inl_dev->pci_dev;
+
+	rc = cpt_lf_init(lf);
+	if (rc) {
+		plt_err("Failed to initialize CPT LF, rc=%d", rc);
+		goto lf_free;
+	}
+
+	roc_cpt_iq_enable(lf);
+	return 0;
+lf_free:
+	rc |= cpt_lfs_free(dev);
+	return rc;
+}
+
+static int
+nix_inl_cpt_release(struct nix_inl_dev *inl_dev)
+{
+	struct roc_cpt_lf *lf = &inl_dev->cpt_lf;
+	struct dev *dev = &inl_dev->dev;
+	int rc, ret = 0;
+
+	if (!inl_dev->attach_cptlf)
+		return 0;
+
+	/* Cleanup CPT LF queue */
+	cpt_lf_fini(lf);
+
+	/* Free LF resources */
+	rc = cpt_lfs_free(dev);
+	if (rc)
+		plt_err("Failed to free CPT LF resources, rc=%d", rc);
+	ret |= rc;
+
+	/* Detach LF */
+	rc = cpt_lfs_detach(dev);
+	if (rc)
+		plt_err("Failed to detach CPT LF, rc=%d", rc);
+	ret |= rc;
+
+	return ret;
+}
+
+static int
+nix_inl_sso_setup(struct nix_inl_dev *inl_dev)
+{
+	struct sso_lf_alloc_rsp *sso_rsp;
+	struct dev *dev = &inl_dev->dev;
+	uint32_t xaq_cnt, count, aura;
+	uint16_t hwgrp[1] = {0};
+	struct npa_pool_s pool;
+	uintptr_t iova;
+	int rc;
+
+	/* Alloc SSOW LF */
+	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWS, 1, NULL);
+	if (rc) {
+		plt_err("Failed to alloc SSO HWS, rc=%d", rc);
+		return rc;
+	}
+
+	/* Alloc HWGRP LF */
+	rc = sso_lf_alloc(dev, SSO_LF_TYPE_HWGRP, 1, (void **)&sso_rsp);
+	if (rc) {
+		plt_err("Failed to alloc SSO HWGRP, rc=%d", rc);
+		goto free_ssow;
+	}
+
+	inl_dev->xaq_buf_size = sso_rsp->xaq_buf_size;
+	inl_dev->xae_waes = sso_rsp->xaq_wq_entries;
+	inl_dev->iue = sso_rsp->in_unit_entries;
+
+	/* Create XAQ pool */
+	xaq_cnt = XAQ_CACHE_CNT;
+	xaq_cnt += inl_dev->iue / inl_dev->xae_waes;
+	plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+
+	inl_dev->xaq_mem = plt_zmalloc(inl_dev->xaq_buf_size * xaq_cnt,
+				       inl_dev->xaq_buf_size);
+	if (!inl_dev->xaq_mem) {
+		rc = NIX_ERR_NO_MEM;
+		plt_err("Failed to alloc xaq buf mem");
+		goto free_sso;
+	}
+
+	memset(&pool, 0, sizeof(struct npa_pool_s));
+	pool.nat_align = 1;
+	rc = roc_npa_pool_create(&inl_dev->xaq_aura, inl_dev->xaq_buf_size,
+				 xaq_cnt, NULL, &pool);
+	if (rc) {
+		plt_err("Failed to alloc aura for XAQ, rc=%d", rc);
+		goto free_mem;
+	}
+
+	/* Fill the XAQ buffers */
+	iova = (uint64_t)inl_dev->xaq_mem;
+	for (count = 0; count < xaq_cnt; count++) {
+		roc_npa_aura_op_free(inl_dev->xaq_aura, 0, iova);
+		iova += inl_dev->xaq_buf_size;
+	}
+	roc_npa_aura_op_range_set(inl_dev->xaq_aura, (uint64_t)inl_dev->xaq_mem,
+				  iova);
+
+	aura = roc_npa_aura_handle_to_aura(inl_dev->xaq_aura);
+
+	/* Setup xaq for hwgrps */
+	rc = sso_hwgrp_alloc_xaq(dev, aura, 1);
+	if (rc) {
+		plt_err("Failed to setup hwgrp xaq aura, rc=%d", rc);
+		goto destroy_pool;
+	}
+
+	/* Register SSO, SSOW error and work irq's */
+	rc = nix_inl_sso_register_irqs(inl_dev);
+	if (rc) {
+		plt_err("Failed to register sso irq's, rc=%d", rc);
+		goto release_xaq;
+	}
+
+	/* Setup hwgrp->hws link */
+	sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true);
+
+	/* Enable HWGRP */
+	plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
+
+	return 0;
+
+release_xaq:
+	sso_hwgrp_release_xaq(&inl_dev->dev, 1);
+destroy_pool:
+	roc_npa_pool_destroy(inl_dev->xaq_aura);
+	inl_dev->xaq_aura = 0;
+free_mem:
+	plt_free(inl_dev->xaq_mem);
+	inl_dev->xaq_mem = NULL;
+free_sso:
+	sso_lf_free(dev, SSO_LF_TYPE_HWGRP, 1);
+free_ssow:
+	sso_lf_free(dev, SSO_LF_TYPE_HWS, 1);
+	return rc;
+}
+
+static int
+nix_inl_sso_release(struct nix_inl_dev *inl_dev)
+{
+	uint16_t hwgrp[1] = {0};
+
+	/* Disable HWGRP */
+	plt_write64(0, inl_dev->sso_base + SSO_LF_GGRP_QCTL);
+
+	/* Unregister SSO/SSOW IRQ's */
+	nix_inl_sso_unregister_irqs(inl_dev);
+
+	/* Unlink hws */
+	sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false);
+
+	/* Release XAQ aura */
+	sso_hwgrp_release_xaq(&inl_dev->dev, 1);
+
+	/* Free SSO, SSOW LF's */
+	sso_lf_free(&inl_dev->dev, SSO_LF_TYPE_HWS, 1);
+	sso_lf_free(&inl_dev->dev, SSO_LF_TYPE_HWGRP, 1);
+
+	return 0;
+}
+
+static int
+nix_inl_nix_setup(struct nix_inl_dev *inl_dev)
+{
+	uint16_t ipsec_in_max_spi = inl_dev->ipsec_in_max_spi;
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct nix_lf_alloc_rsp *rsp;
+	struct nix_lf_alloc_req *req;
+	size_t inb_sa_sz;
+	int rc = -ENOSPC;
+
+	/* Alloc NIX LF needed for single RQ */
+	req = mbox_alloc_msg_nix_lf_alloc(mbox);
+	if (req == NULL)
+		return rc;
+	req->rq_cnt = 1;
+	req->sq_cnt = 1;
+	req->cq_cnt = 1;
+	/* XQESZ is W16 */
+	req->xqe_sz = NIX_XQESZ_W16;
+	/* RSS size does not matter as this RQ is only for UCAST_IPSEC action */
+	req->rss_sz = ROC_NIX_RSS_RETA_SZ_64;
+	req->rss_grps = ROC_NIX_RSS_GRPS;
+	req->npa_func = idev_npa_pffunc_get();
+	req->sso_func = dev->pf_func;
+	req->rx_cfg = NIX_INL_LF_RX_CFG;
+	req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
+
+	if (roc_model_is_cn10ka_a0() || roc_model_is_cnf10ka_a0() ||
+	    roc_model_is_cnf10kb_a0())
+		req->rx_cfg &= ~ROC_NIX_LF_RX_CFG_DROP_RE;
+
+	rc = mbox_process_msg(mbox, (void *)&rsp);
+	if (rc) {
+		plt_err("Failed to alloc lf, rc=%d", rc);
+		return rc;
+	}
+
+	inl_dev->lf_tx_stats = rsp->lf_tx_stats;
+	inl_dev->lf_rx_stats = rsp->lf_rx_stats;
+	inl_dev->qints = rsp->qints;
+	inl_dev->cints = rsp->cints;
+
+	/* Register nix interrupts */
+	rc = nix_inl_nix_register_irqs(inl_dev);
+	if (rc) {
+		plt_err("Failed to register nix irq's, rc=%d", rc);
+		goto lf_free;
+	}
+
+	/* CN9K SA is different */
+	if (roc_model_is_cn9k())
+		inb_sa_sz = ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ;
+	else
+		inb_sa_sz = ROC_NIX_INL_OT_IPSEC_INB_SA_SZ;
+
+	/* Alloc contiguous memory for Inbound SA's */
+	inl_dev->inb_sa_sz = inb_sa_sz;
+	inl_dev->inb_sa_base = plt_zmalloc(inb_sa_sz * ipsec_in_max_spi,
+					   ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!inl_dev->inb_sa_base) {
+		plt_err("Failed to allocate memory for Inbound SA");
+		rc = -ENOMEM;
+		goto unregister_irqs;
+	}
+
+	/* Setup device specific inb SA table */
+	rc = nix_inl_nix_ipsec_cfg(inl_dev, true);
+	if (rc) {
+		plt_err("Failed to setup NIX Inbound SA conf, rc=%d", rc);
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	plt_free(inl_dev->inb_sa_base);
+	inl_dev->inb_sa_base = NULL;
+unregister_irqs:
+	nix_inl_nix_unregister_irqs(inl_dev);
+lf_free:
+	mbox_alloc_msg_nix_lf_free(mbox);
+	rc |= mbox_process(mbox);
+	return rc;
+}
+
+static int
+nix_inl_nix_release(struct nix_inl_dev *inl_dev)
+{
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct nix_lf_free_req *req;
+	struct ndc_sync_op *ndc_req;
+	int rc = -ENOSPC;
+
+	/* Disable Inbound processing */
+	rc = nix_inl_nix_ipsec_cfg(inl_dev, false);
+	if (rc)
+		plt_err("Failed to disable Inbound IPSec, rc=%d", rc);
+
+	/* Sync NDC-NIX for LF */
+	ndc_req = mbox_alloc_msg_ndc_sync_op(mbox);
+	if (ndc_req == NULL)
+		return rc;
+	ndc_req->nix_lf_rx_sync = 1;
+	rc = mbox_process(mbox);
+	if (rc)
+		plt_err("Error on NDC-NIX-RX LF sync, rc %d", rc);
+
+	/* Unregister IRQs */
+	nix_inl_nix_unregister_irqs(inl_dev);
+
+	/* By default all associated mcam rules are deleted */
+	req = mbox_alloc_msg_nix_lf_free(mbox);
+	if (req == NULL)
+		return -ENOSPC;
+
+	return mbox_process(mbox);
+}
+
+static int
+nix_inl_lf_attach(struct nix_inl_dev *inl_dev)
+{
+	struct msix_offset_rsp *msix_rsp;
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct rsrc_attach_req *req;
+	uint64_t nix_blkaddr;
+	int rc = -ENOSPC;
+
+	req = mbox_alloc_msg_attach_resources(mbox);
+	if (req == NULL)
+		return rc;
+	req->modify = true;
+	/* Attach 1 NIXLF, SSO HWS and SSO HWGRP */
+	req->nixlf = true;
+	req->ssow = 1;
+	req->sso = 1;
+	if (inl_dev->attach_cptlf) {
+		req->cptlfs = 1;
+		req->cpt_blkaddr = RVU_BLOCK_ADDR_CPT0;
+	}
+
+	rc = mbox_process(dev->mbox);
+	if (rc)
+		return rc;
+
+	/* Get MSIX vector offsets */
+	mbox_alloc_msg_msix_offset(mbox);
+	rc = mbox_process_msg(dev->mbox, (void **)&msix_rsp);
+	if (rc)
+		return rc;
+
+	inl_dev->nix_msixoff = msix_rsp->nix_msixoff;
+	inl_dev->ssow_msixoff = msix_rsp->ssow_msixoff[0];
+	inl_dev->sso_msixoff = msix_rsp->sso_msixoff[0];
+	inl_dev->cpt_msixoff = msix_rsp->cptlf_msixoff[0];
+
+	nix_blkaddr = nix_get_blkaddr(dev);
+	inl_dev->is_nix1 = (nix_blkaddr == RVU_BLOCK_ADDR_NIX1);
+
+	/* Update base addresses for LF's */
+	inl_dev->nix_base = dev->bar2 + (nix_blkaddr << 20);
+	inl_dev->ssow_base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20);
+	inl_dev->sso_base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20);
+	inl_dev->cpt_base = dev->bar2 + (RVU_BLOCK_ADDR_CPT0 << 20);
+
+	return 0;
+}
+
+static int
+nix_inl_lf_detach(struct nix_inl_dev *inl_dev)
+{
+	struct dev *dev = &inl_dev->dev;
+	struct mbox *mbox = dev->mbox;
+	struct rsrc_detach_req *req;
+	int rc = -ENOSPC;
+
+	req = mbox_alloc_msg_detach_resources(mbox);
+	if (req == NULL)
+		return rc;
+	req->partial = true;
+	req->nixlf = true;
+	req->ssow = true;
+	req->sso = true;
+	req->cptlfs = !!inl_dev->attach_cptlf;
+
+	return mbox_process(dev->mbox);
+}
+
+int
+roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct plt_pci_device *pci_dev;
+	struct nix_inl_dev *inl_dev;
+	struct idev_cfg *idev;
+	int rc;
+
+	pci_dev = roc_inl_dev->pci_dev;
+
+	/* Skip probe if already done */
+	idev = idev_get_cfg();
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	if (idev->nix_inl_dev) {
+		plt_info("Skipping device %s, inline device already probed",
+			 pci_dev->name);
+		return -EEXIST;
+	}
+
+	PLT_STATIC_ASSERT(sizeof(struct nix_inl_dev) <= ROC_NIX_INL_MEM_SZ);
+
+	inl_dev = (struct nix_inl_dev *)roc_inl_dev->reserved;
+	memset(inl_dev, 0, sizeof(*inl_dev));
+
+	inl_dev->pci_dev = pci_dev;
+	inl_dev->ipsec_in_max_spi = roc_inl_dev->ipsec_in_max_spi;
+	inl_dev->selftest = roc_inl_dev->selftest;
+	inl_dev->attach_cptlf = roc_inl_dev->attach_cptlf;
+
+	/* Initialize base device */
+	rc = dev_init(&inl_dev->dev, pci_dev);
+	if (rc) {
+		plt_err("Failed to init roc device");
+		goto error;
+	}
+
+	/* Attach LF resources */
+	rc = nix_inl_lf_attach(inl_dev);
+	if (rc) {
+		plt_err("Failed to attach LF resources, rc=%d", rc);
+		goto dev_cleanup;
+	}
+
+	/* Setup NIX LF */
+	rc = nix_inl_nix_setup(inl_dev);
+	if (rc)
+		goto lf_detach;
+
+	/* Setup SSO LF */
+	rc = nix_inl_sso_setup(inl_dev);
+	if (rc)
+		goto nix_release;
+
+	/* Setup CPT LF */
+	rc = nix_inl_cpt_setup(inl_dev);
+	if (rc)
+		goto sso_release;
+
+	/* Perform selftest if asked for */
+	if (inl_dev->selftest) {
+		rc = nix_inl_selftest();
+		if (rc)
+			goto cpt_release;
+	}
+
+	idev->nix_inl_dev = inl_dev;
+
+	return 0;
+cpt_release:
+	rc |= nix_inl_cpt_release(inl_dev);
+sso_release:
+	rc |= nix_inl_sso_release(inl_dev);
+nix_release:
+	rc |= nix_inl_nix_release(inl_dev);
+lf_detach:
+	rc |= nix_inl_lf_detach(inl_dev);
+dev_cleanup:
+	rc |= dev_fini(&inl_dev->dev, pci_dev);
+error:
+	return rc;
+}
+
+int
+roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev)
+{
+	struct plt_pci_device *pci_dev;
+	struct nix_inl_dev *inl_dev;
+	struct idev_cfg *idev;
+	int rc;
+
+	idev = idev_get_cfg();
+	if (idev == NULL)
+		return 0;
+
+	if (!idev->nix_inl_dev ||
+	    PLT_PTR_DIFF(roc_inl_dev->reserved, idev->nix_inl_dev))
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	pci_dev = inl_dev->pci_dev;
+
+	/* Release SSO */
+	rc = nix_inl_sso_release(inl_dev);
+
+	/* Release NIX */
+	rc |= nix_inl_nix_release(inl_dev);
+
+	/* Detach LF's */
+	rc |= nix_inl_lf_detach(inl_dev);
+
+	/* Cleanup mbox */
+	rc |= dev_fini(&inl_dev->dev, pci_dev);
+	if (rc)
+		return rc;
+
+	idev->nix_inl_dev = NULL;
+	return 0;
+}
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index f424009..4729a38 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -15,11 +15,13 @@ struct nix_inl_dev {
 	uintptr_t nix_base;
 	uintptr_t ssow_base;
 	uintptr_t sso_base;
+	uintptr_t cpt_base;
 
 	/* MSIX vector offsets */
 	uint16_t nix_msixoff;
 	uint16_t ssow_msixoff;
 	uint16_t sso_msixoff;
+	uint16_t cpt_msixoff;
 
 	/* SSO data */
 	uint32_t xaq_buf_size;
@@ -43,9 +45,13 @@ struct nix_inl_dev {
 	void *inb_sa_base;
 	uint16_t inb_sa_sz;
 
+	/* CPT data */
+	struct roc_cpt_lf cpt_lf;
+
 	/* Device arguments */
 	uint8_t selftest;
 	uint16_t ipsec_in_max_spi;
+	bool attach_cptlf;
 };
 
 int nix_inl_sso_register_irqs(struct nix_inl_dev *inl_dev);
@@ -54,4 +60,6 @@ void nix_inl_sso_unregister_irqs(struct nix_inl_dev *inl_dev);
 int nix_inl_nix_register_irqs(struct nix_inl_dev *inl_dev);
 void nix_inl_nix_unregister_irqs(struct nix_inl_dev *inl_dev);
 
+uint16_t nix_inl_dev_pffunc_get(void);
+
 #endif /* _ROC_NIX_INL_PRIV_H_ */
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 177db3d..241655b 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -37,6 +37,7 @@
 #define PLT_MEMZONE_NAMESIZE	 RTE_MEMZONE_NAMESIZE
 #define PLT_STD_C11		 RTE_STD_C11
 #define PLT_PTR_ADD		 RTE_PTR_ADD
+#define PLT_PTR_DIFF		 RTE_PTR_DIFF
 #define PLT_MAX_RXTX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
 #define PLT_INTR_VEC_RXTX_OFFSET RTE_INTR_VEC_RXTX_OFFSET
 #define PLT_MIN			 RTE_MIN
@@ -77,6 +78,7 @@
 #define plt_cpu_to_be_64 rte_cpu_to_be_64
 #define plt_be_to_cpu_64 rte_be_to_cpu_64
 
+#define plt_align32pow2	    rte_align32pow2
 #define plt_align32prevpow2 rte_align32prevpow2
 
 #define plt_bitmap			rte_bitmap
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 008098e..1f76664 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -99,6 +99,9 @@ INTERNAL {
 	roc_nix_get_pf_func;
 	roc_nix_get_vf;
 	roc_nix_get_vwqe_interval;
+	roc_nix_inl_dev_dump;
+	roc_nix_inl_dev_fini;
+	roc_nix_inl_dev_init;
 	roc_nix_is_lbk;
 	roc_nix_is_pf;
 	roc_nix_is_sdp;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 07/28] common/cnxk: support NIX inline inbound and outbound setup
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (5 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 06/28] common/cnxk: support NIX inline device init and fini Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 08/28] common/cnxk: disable CQ drop when inline inbound is enabled Nithin Dabilpuram
                     ` (21 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella
  Cc: dev

Add API to support setting up NIX inline inbound and
NIX inline outbound. For inbound, the SA base is set up
on the NIX PFFUNC; for outbound, the required number of
CPT LFs are attached to the NIX PFFUNC.
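
The inbound SA table configured here is indexed directly by SPI: the
index width programmed into hardware is log2 of the SPI range rounded
up to a power of two, and each SA slot is itself a power-of-two size so
the lookup is base + (spi << log2(sa_sz)). A standalone sketch of the
sizing math (example values only; align32pow2()/log2_u32() mimic
plt_align32pow2()/plt_log2_u32() and assume a GCC/Clang builtin):

#include <stdint.h>
#include <stdio.h>

static uint32_t align32pow2(uint32_t x) /* next power of two >= x */
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

static uint32_t log2_u32(uint32_t x)
{
	return 31 - __builtin_clz(x);
}

int main(void)
{
	uint32_t ipsec_in_max_spi = 4096; /* hypothetical devarg value */
	uint32_t sa_sz = 1024;		  /* OT inbound SA slot size */
	uint32_t sa_idx_w = log2_u32(align32pow2(ipsec_in_max_spi + 1));
	uint64_t tbl_sz = (uint64_t)sa_sz * ipsec_in_max_spi;

	printf("sa_idx_w=%u, table=%llu bytes, SA(spi)=base+(spi<<%u)\n",
	       sa_idx_w, (unsigned long long)tbl_sz, log2_u32(sa_sz));
	return 0;
}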

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/hw/cpt.h         |   8 +
 drivers/common/cnxk/meson.build      |   1 +
 drivers/common/cnxk/roc_api.h        |  48 +--
 drivers/common/cnxk/roc_constants.h  |  58 +++
 drivers/common/cnxk/roc_io.h         |   9 +
 drivers/common/cnxk/roc_io_generic.h |   3 +-
 drivers/common/cnxk/roc_nix.h        |   5 +
 drivers/common/cnxk/roc_nix_debug.c  |  15 +
 drivers/common/cnxk/roc_nix_inl.c    | 778 +++++++++++++++++++++++++++++++++++
 drivers/common/cnxk/roc_nix_inl.h    | 101 +++++
 drivers/common/cnxk/roc_nix_priv.h   |  15 +
 drivers/common/cnxk/roc_nix_queue.c  |  28 +-
 drivers/common/cnxk/roc_npc.c        |  27 +-
 drivers/common/cnxk/version.map      |  26 ++
 14 files changed, 1047 insertions(+), 75 deletions(-)
 create mode 100644 drivers/common/cnxk/roc_constants.h
 create mode 100644 drivers/common/cnxk/roc_nix_inl.c

diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 84ebf2d..975139f 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -40,6 +40,7 @@
 #define CPT_LF_CTX_ENC_PKT_CNT	(0x540ull)
 #define CPT_LF_CTX_DEC_BYTE_CNT (0x550ull)
 #define CPT_LF_CTX_DEC_PKT_CNT	(0x560ull)
+#define CPT_LF_CTX_RELOAD	(0x570ull)
 
 #define CPT_AF_LFX_CTL(a)  (0x27000ull | (uint64_t)(a) << 3)
 #define CPT_AF_LFX_CTL2(a) (0x29000ull | (uint64_t)(a) << 3)
@@ -68,6 +69,13 @@ union cpt_lf_ctx_flush {
 	} s;
 };
 
+union cpt_lf_ctx_reload {
+	uint64_t u;
+	struct {
+		uint64_t cptr : 46;
+	} s;
+};
+
 union cpt_lf_inprog {
 	uint64_t u;
 	struct cpt_lf_inprog_s {
diff --git a/drivers/common/cnxk/meson.build b/drivers/common/cnxk/meson.build
index 43af6a0..97db5f0 100644
--- a/drivers/common/cnxk/meson.build
+++ b/drivers/common/cnxk/meson.build
@@ -28,6 +28,7 @@ sources = files(
         'roc_nix_debug.c',
         'roc_nix_fc.c',
         'roc_nix_irq.c',
+        'roc_nix_inl.c',
         'roc_nix_inl_dev.c',
         'roc_nix_inl_dev_irq.c',
         'roc_nix_mac.c',
diff --git a/drivers/common/cnxk/roc_api.h b/drivers/common/cnxk/roc_api.h
index 53f4e4b..b8f3667 100644
--- a/drivers/common/cnxk/roc_api.h
+++ b/drivers/common/cnxk/roc_api.h
@@ -9,28 +9,21 @@
 #include <stdint.h>
 #include <string.h>
 
-/* Alignment */
-#define ROC_ALIGN 128
-
 /* Bits manipulation */
 #include "roc_bits.h"
 
 /* Bitfields manipulation */
 #include "roc_bitfield.h"
 
+/* ROC Constants */
+#include "roc_constants.h"
+
 /* Constants */
 #define PLT_ETHER_ADDR_LEN 6
 
 /* Platform definition */
 #include "roc_platform.h"
 
-#define ROC_LMT_LINE_SZ		    128
-#define ROC_NUM_LMT_LINES	    2048
-#define ROC_LMT_LINES_PER_CORE_LOG2 5
-#define ROC_LMT_LINE_SIZE_LOG2	    7
-#define ROC_LMT_BASE_PER_CORE_LOG2                                             \
-	(ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
-
 /* IO */
 #if defined(__aarch64__)
 #include "roc_io.h"
@@ -38,41 +31,6 @@
 #include "roc_io_generic.h"
 #endif
 
-/* PCI IDs */
-#define PCI_VENDOR_ID_CAVIUM	      0x177D
-#define PCI_DEVID_CNXK_RVU_PF	      0xA063
-#define PCI_DEVID_CNXK_RVU_VF	      0xA064
-#define PCI_DEVID_CNXK_RVU_AF	      0xA065
-#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9
-#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA
-#define PCI_DEVID_CNXK_RVU_NPA_PF     0xA0FB
-#define PCI_DEVID_CNXK_RVU_NPA_VF     0xA0FC
-#define PCI_DEVID_CNXK_RVU_AF_VF      0xA0f8
-#define PCI_DEVID_CNXK_DPI_VF	      0xA081
-#define PCI_DEVID_CNXK_EP_VF	      0xB203
-#define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
-#define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
-#define PCI_DEVID_CNXK_BPHY	      0xA089
-#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
-#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
-
-#define PCI_DEVID_CN9K_CGX  0xA059
-#define PCI_DEVID_CN10K_RPM 0xA060
-
-#define PCI_DEVID_CN9K_RVU_CPT_PF  0xA0FD
-#define PCI_DEVID_CN9K_RVU_CPT_VF  0xA0FE
-#define PCI_DEVID_CN10K_RVU_CPT_PF 0xA0F2
-#define PCI_DEVID_CN10K_RVU_CPT_VF 0xA0F3
-
-#define PCI_SUBSYSTEM_DEVID_CN10KA  0xB900
-#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900
-
-#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000
-#define PCI_SUBSYSTEM_DEVID_CN9KB 0xb400
-#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200
-#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200
-#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100
-
 /* HW structure definition */
 #include "hw/cpt.h"
 #include "hw/nix.h"
diff --git a/drivers/common/cnxk/roc_constants.h b/drivers/common/cnxk/roc_constants.h
new file mode 100644
index 0000000..1e6427c
--- /dev/null
+++ b/drivers/common/cnxk/roc_constants.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _ROC_CONSTANTS_H_
+#define _ROC_CONSTANTS_H_
+
+/* Alignment */
+#define ROC_ALIGN 128
+
+/* LMTST constants */
+/* [CN10K, .) */
+#define ROC_LMT_LINE_SZ		    128
+#define ROC_NUM_LMT_LINES	    2048
+#define ROC_LMT_LINES_PER_CORE_LOG2 5
+#define ROC_LMT_LINE_SIZE_LOG2	    7
+#define ROC_LMT_BASE_PER_CORE_LOG2                                             \
+	(ROC_LMT_LINES_PER_CORE_LOG2 + ROC_LMT_LINE_SIZE_LOG2)
+#define ROC_LMT_MAX_THREADS		42UL
+#define ROC_LMT_CPT_LINES_PER_CORE_LOG2 4
+#define ROC_LMT_CPT_BASE_ID_OFF                                                \
+	(ROC_LMT_MAX_THREADS << ROC_LMT_LINES_PER_CORE_LOG2)
+
+/* PCI IDs */
+#define PCI_VENDOR_ID_CAVIUM	      0x177D
+#define PCI_DEVID_CNXK_RVU_PF	      0xA063
+#define PCI_DEVID_CNXK_RVU_VF	      0xA064
+#define PCI_DEVID_CNXK_RVU_AF	      0xA065
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_PF 0xA0F9
+#define PCI_DEVID_CNXK_RVU_SSO_TIM_VF 0xA0FA
+#define PCI_DEVID_CNXK_RVU_NPA_PF     0xA0FB
+#define PCI_DEVID_CNXK_RVU_NPA_VF     0xA0FC
+#define PCI_DEVID_CNXK_RVU_AF_VF      0xA0f8
+#define PCI_DEVID_CNXK_DPI_VF	      0xA081
+#define PCI_DEVID_CNXK_EP_VF	      0xB203
+#define PCI_DEVID_CNXK_RVU_SDP_PF     0xA0f6
+#define PCI_DEVID_CNXK_RVU_SDP_VF     0xA0f7
+#define PCI_DEVID_CNXK_BPHY	      0xA089
+#define PCI_DEVID_CNXK_RVU_NIX_INL_PF 0xA0F0
+#define PCI_DEVID_CNXK_RVU_NIX_INL_VF 0xA0F1
+
+#define PCI_DEVID_CN9K_CGX  0xA059
+#define PCI_DEVID_CN10K_RPM 0xA060
+
+#define PCI_DEVID_CN9K_RVU_CPT_PF  0xA0FD
+#define PCI_DEVID_CN9K_RVU_CPT_VF  0xA0FE
+#define PCI_DEVID_CN10K_RVU_CPT_PF 0xA0F2
+#define PCI_DEVID_CN10K_RVU_CPT_VF 0xA0F3
+
+#define PCI_SUBSYSTEM_DEVID_CN10KA  0xB900
+#define PCI_SUBSYSTEM_DEVID_CN10KAS 0xB900
+
+#define PCI_SUBSYSTEM_DEVID_CN9KA 0x0000
+#define PCI_SUBSYSTEM_DEVID_CN9KB 0xb400
+#define PCI_SUBSYSTEM_DEVID_CN9KC 0x0200
+#define PCI_SUBSYSTEM_DEVID_CN9KD 0xB200
+#define PCI_SUBSYSTEM_DEVID_CN9KE 0xB100
+
+#endif /* _ROC_CONSTANTS_H_ */
diff --git a/drivers/common/cnxk/roc_io.h b/drivers/common/cnxk/roc_io.h
index aee8c7f..fe5f7f4 100644
--- a/drivers/common/cnxk/roc_io.h
+++ b/drivers/common/cnxk/roc_io.h
@@ -13,6 +13,15 @@
 		(lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2);    \
 	} while (0)
 
+#define ROC_LMT_CPT_BASE_ID_GET(lmt_addr, lmt_id)                              \
+	do {                                                                   \
+		/* 16 Lines per core */                                        \
+		lmt_id = ROC_LMT_CPT_BASE_ID_OFF;                              \
+		lmt_id += (plt_lcore_id() << ROC_LMT_CPT_LINES_PER_CORE_LOG2); \
+		/* Each line is of 128B */                                     \
+		(lmt_addr) += ((uint64_t)lmt_id << ROC_LMT_LINE_SIZE_LOG2);    \
+	} while (0)
+
 #define roc_load_pair(val0, val1, addr)                                        \
 	({                                                                     \
 		asm volatile("ldp %x[x0], %x[x1], [%x[p1]]"                    \
diff --git a/drivers/common/cnxk/roc_io_generic.h b/drivers/common/cnxk/roc_io_generic.h
index 28cb096..ceaa3a3 100644
--- a/drivers/common/cnxk/roc_io_generic.h
+++ b/drivers/common/cnxk/roc_io_generic.h
@@ -5,7 +5,8 @@
 #ifndef _ROC_IO_GENERIC_H_
 #define _ROC_IO_GENERIC_H_
 
-#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0)
+#define ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id)	  (lmt_id = 0)
+#define ROC_LMT_CPT_BASE_ID_GET(lmt_addr, lmt_id) (lmt_id = 0)
 
 #define roc_load_pair(val0, val1, addr)                                        \
 	do {                                                                   \
diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index d9a4613..4fcce49 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -171,6 +171,7 @@ struct roc_nix_rq {
 	uint8_t spb_red_pass;
 	/* End of Input parameters */
 	struct roc_nix *roc_nix;
+	bool inl_dev_ref;
 };
 
 struct roc_nix_cq {
@@ -258,6 +259,10 @@ struct roc_nix {
 	bool enable_loop;
 	bool hw_vlan_ins;
 	uint8_t lock_rx_ctx;
+	uint32_t outb_nb_desc;
+	uint16_t outb_nb_crypto_qs;
+	uint16_t ipsec_in_max_spi;
+	uint16_t ipsec_out_max_sa;
 	/* End of input parameters */
 	/* LMT line base for "Per Core Tx LMT line" mode*/
 	uintptr_t lmt_base;
diff --git a/drivers/common/cnxk/roc_nix_debug.c b/drivers/common/cnxk/roc_nix_debug.c
index 582f5a3..266935a 100644
--- a/drivers/common/cnxk/roc_nix_debug.c
+++ b/drivers/common/cnxk/roc_nix_debug.c
@@ -818,6 +818,7 @@ roc_nix_rq_dump(struct roc_nix_rq *rq)
 	nix_dump("  vwqe_wait_tmo = %ld", rq->vwqe_wait_tmo);
 	nix_dump("  vwqe_aura_handle = %ld", rq->vwqe_aura_handle);
 	nix_dump("  roc_nix = %p", rq->roc_nix);
+	nix_dump("  inl_dev_ref = %d", rq->inl_dev_ref);
 }
 
 void
@@ -1160,6 +1161,7 @@ roc_nix_dump(struct roc_nix *roc_nix)
 {
 	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
 	struct dev *dev = &nix->dev;
+	int i;
 
 	nix_dump("nix@%p", nix);
 	nix_dump("  pf = %d", dev_get_pf(dev->pf_func));
@@ -1169,6 +1171,7 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  port_id = %d", roc_nix->port_id);
 	nix_dump("  rss_tag_as_xor = %d", roc_nix->rss_tag_as_xor);
 	nix_dump("  rss_tag_as_xor = %d", roc_nix->max_sqb_count);
+	nix_dump("  outb_nb_desc = %u", roc_nix->outb_nb_desc);
 
 	nix_dump("  \tpci_dev = %p", nix->pci_dev);
 	nix_dump("  \tbase = 0x%" PRIxPTR "", nix->base);
@@ -1206,12 +1209,24 @@ roc_nix_dump(struct roc_nix *roc_nix)
 	nix_dump("  \ttx_link = %d", nix->tx_link);
 	nix_dump("  \tsqb_size = %d", nix->sqb_size);
 	nix_dump("  \tmsixoff = %d", nix->msixoff);
+	for (i = 0; i < nix->nb_cpt_lf; i++)
+		nix_dump("  \tcpt_msixoff[%d] = %d", i, nix->cpt_msixoff[i]);
 	nix_dump("  \tcints = %d", nix->cints);
 	nix_dump("  \tqints = %d", nix->qints);
 	nix_dump("  \tsdp_link = %d", nix->sdp_link);
 	nix_dump("  \tptp_en = %d", nix->ptp_en);
 	nix_dump("  \trss_alg_idx = %d", nix->rss_alg_idx);
 	nix_dump("  \ttx_pause = %d", nix->tx_pause);
+	nix_dump("  \tinl_inb_ena = %d", nix->inl_inb_ena);
+	nix_dump("  \tinl_outb_ena = %d", nix->inl_outb_ena);
+	nix_dump("  \tinb_sa_base = 0x%p", nix->inb_sa_base);
+	nix_dump("  \tinb_sa_sz = %" PRIu64, nix->inb_sa_sz);
+	nix_dump("  \toutb_sa_base = 0x%p", nix->outb_sa_base);
+	nix_dump("  \toutb_sa_sz = %" PRIu64, nix->outb_sa_sz);
+	nix_dump("  \toutb_err_sso_pffunc = 0x%x", nix->outb_err_sso_pffunc);
+	nix_dump("  \tcpt_lf_base = 0x%p", nix->cpt_lf_base);
+	nix_dump("  \tnb_cpt_lf = %d", nix->nb_cpt_lf);
+	nix_dump("  \tinb_inl_dev = %d", nix->inb_inl_dev);
 }
 
 void
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
new file mode 100644
index 0000000..1d962e3
--- /dev/null
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -0,0 +1,779 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "roc_api.h"
+#include "roc_priv.h"
+
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ ==
+		  1UL << ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ == 512);
+PLT_STATIC_ASSERT(ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ ==
+		  1UL << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ ==
+		  1UL << ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_INB_SA_SZ == 1024);
+PLT_STATIC_ASSERT(ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ ==
+		  1UL << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2);
+
+static int
+nix_inl_inb_sa_tbl_setup(struct roc_nix *roc_nix)
+{
+	uint16_t ipsec_in_max_spi = roc_nix->ipsec_in_max_spi;
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_nix_ipsec_cfg cfg;
+	size_t inb_sa_sz;
+	int rc;
+
+	/* CN9K SA size is different */
+	if (roc_model_is_cn9k())
+		inb_sa_sz = ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ;
+	else
+		inb_sa_sz = ROC_NIX_INL_OT_IPSEC_INB_SA_SZ;
+
+	/* Alloc contiguous memory for Inbound SA's */
+	nix->inb_sa_sz = inb_sa_sz;
+	nix->inb_sa_base = plt_zmalloc(inb_sa_sz * ipsec_in_max_spi,
+				       ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!nix->inb_sa_base) {
+		plt_err("Failed to allocate memory for Inbound SA");
+		return -ENOMEM;
+	}
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.sa_size = inb_sa_sz;
+	cfg.iova = (uintptr_t)nix->inb_sa_base;
+	cfg.max_sa = ipsec_in_max_spi + 1;
+	cfg.tt = SSO_TT_ORDERED;
+
+	/* Setup device specific inb SA table */
+	rc = roc_nix_lf_inl_ipsec_cfg(roc_nix, &cfg, true);
+	if (rc) {
+		plt_err("Failed to setup NIX Inbound SA conf, rc=%d", rc);
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	plt_free(nix->inb_sa_base);
+	nix->inb_sa_base = NULL;
+	return rc;
+}
+
+static int
+nix_inl_sa_tbl_release(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	int rc;
+
+	rc = roc_nix_lf_inl_ipsec_cfg(roc_nix, NULL, false);
+	if (rc) {
+		plt_err("Failed to disable Inbound inline ipsec, rc=%d", rc);
+		return rc;
+	}
+
+	plt_free(nix->inb_sa_base);
+	nix->inb_sa_base = NULL;
+	return 0;
+}
+
+struct roc_cpt_lf *
+roc_nix_inl_outb_lf_base_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* NIX Inline config needs to be done */
+	if (!nix->inl_outb_ena || !nix->cpt_lf_base)
+		return NULL;
+
+	return (struct roc_cpt_lf *)nix->cpt_lf_base;
+}
+
+uintptr_t
+roc_nix_inl_outb_sa_base_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return (uintptr_t)nix->outb_sa_base;
+}
+
+uintptr_t
+roc_nix_inl_inb_sa_base_get(struct roc_nix *roc_nix, bool inb_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inb_inl_dev) {
+		/* Return inline dev sa base */
+		if (inl_dev)
+			return (uintptr_t)inl_dev->inb_sa_base;
+		return 0;
+	}
+
+	return (uintptr_t)nix->inb_sa_base;
+}
+
+uint32_t
+roc_nix_inl_inb_sa_max_spi(struct roc_nix *roc_nix, bool inb_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inb_inl_dev) {
+		if (inl_dev)
+			return inl_dev->ipsec_in_max_spi;
+		return 0;
+	}
+
+	return roc_nix->ipsec_in_max_spi;
+}
+
+uint32_t
+roc_nix_inl_inb_sa_sz(struct roc_nix *roc_nix, bool inl_dev_sa)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!inl_dev_sa)
+		return nix->inb_sa_sz;
+
+	inl_dev = idev->nix_inl_dev;
+	if (inl_dev_sa && inl_dev)
+		return inl_dev->inb_sa_sz;
+
+	/* On error */
+	return 0;
+}
+
+uintptr_t
+roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix, bool inb_inl_dev, uint32_t spi)
+{
+	uintptr_t sa_base;
+	uint32_t max_spi;
+	uint64_t sz;
+
+	sa_base = roc_nix_inl_inb_sa_base_get(roc_nix, inb_inl_dev);
+	/* Check if SA base exists */
+	if (!sa_base)
+		return 0;
+
+	/* Check if SPI is in range */
+	max_spi = roc_nix_inl_inb_sa_max_spi(roc_nix, inb_inl_dev);
+	if (spi > max_spi) {
+		plt_err("Inbound SA SPI %u exceeds max %u", spi, max_spi);
+		return 0;
+	}
+
+	/* Get SA size */
+	sz = roc_nix_inl_inb_sa_sz(roc_nix, inb_inl_dev);
+	if (!sz)
+		return 0;
+
+	/* Basic logic of SPI->SA for now */
+	return (sa_base + (spi * sz));
+}
+
+int
+roc_nix_inl_inb_init(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct roc_cpt *roc_cpt;
+	uint16_t param1;
+	int rc;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	/* Unless we have another mechanism to trigger
+	 * onetime Inline config in CPTPF, we cannot
+	 * support inline inbound without CPT being probed.
+	 */
+	roc_cpt = idev->cpt;
+	if (!roc_cpt) {
+		plt_err("Cannot support inline inbound, cryptodev not probed");
+		return -ENOTSUP;
+	}
+
+	if (roc_model_is_cn9k()) {
+		param1 = ROC_ONF_IPSEC_INB_MAX_L2_SZ;
+	} else {
+		union roc_ot_ipsec_inb_param1 u;
+
+		u.u16 = 0;
+		u.s.esp_trailer_disable = 1;
+		param1 = u.u16;
+	}
+
+	/* Do onetime Inbound Inline config in CPTPF */
+	rc = roc_cpt_inline_ipsec_inb_cfg(roc_cpt, param1, 0);
+	if (rc && rc != -EEXIST) {
+		plt_err("Failed to setup inbound lf, rc=%d", rc);
+		return rc;
+	}
+
+	/* Setup Inbound SA table */
+	rc = nix_inl_inb_sa_tbl_setup(roc_nix);
+	if (rc)
+		return rc;
+
+	nix->inl_inb_ena = true;
+	return 0;
+}
+
+int
+roc_nix_inl_inb_fini(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	nix->inl_inb_ena = false;
+
+	/* Disable Inbound SA */
+	return nix_inl_sa_tbl_release(roc_nix);
+}
+
+int
+roc_nix_inl_outb_init(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct idev_cfg *idev = idev_get_cfg();
+	struct roc_cpt_lf *lf_base, *lf;
+	struct dev *dev = &nix->dev;
+	struct msix_offset_rsp *rsp;
+	struct nix_inl_dev *inl_dev;
+	uint16_t sso_pffunc;
+	uint8_t eng_grpmask;
+	uint64_t blkaddr;
+	uint16_t nb_lf;
+	void *sa_base;
+	size_t sa_sz;
+	int i, j, rc;
+
+	if (idev == NULL)
+		return -ENOTSUP;
+
+	nb_lf = roc_nix->outb_nb_crypto_qs;
+	blkaddr = nix->is_nix1 ? RVU_BLOCK_ADDR_CPT1 : RVU_BLOCK_ADDR_CPT0;
+
+	/* Retrieve inline device if present */
+	inl_dev = idev->nix_inl_dev;
+	sso_pffunc = inl_dev ? inl_dev->dev.pf_func : idev_sso_pffunc_get();
+	if (!sso_pffunc) {
+		plt_err("Failed to setup inline outb, need either "
+			"inline device or sso device");
+		return -ENOTSUP;
+	}
+
+	/* Attach CPT LF for outbound */
+	rc = cpt_lfs_attach(dev, blkaddr, true, nb_lf);
+	if (rc) {
+		plt_err("Failed to attach CPT LF for inline outb, rc=%d", rc);
+		return rc;
+	}
+
+	/* Alloc CPT LF */
+	eng_grpmask = (1ULL << ROC_CPT_DFLT_ENG_GRP_SE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_SE_IE |
+		       1ULL << ROC_CPT_DFLT_ENG_GRP_AE);
+	rc = cpt_lfs_alloc(dev, eng_grpmask, blkaddr, true);
+	if (rc) {
+		plt_err("Failed to alloc CPT LF resources, rc=%d", rc);
+		goto lf_detach;
+	}
+
+	/* Get msix offsets */
+	rc = cpt_get_msix_offset(dev, &rsp);
+	if (rc) {
+		plt_err("Failed to get CPT LF msix offset, rc=%d", rc);
+		goto lf_free;
+	}
+
+	mbox_memcpy(nix->cpt_msixoff,
+		    nix->is_nix1 ? rsp->cpt1_lf_msixoff : rsp->cptlf_msixoff,
+		    sizeof(nix->cpt_msixoff));
+
+	/* Alloc required num of cpt lfs */
+	lf_base = plt_zmalloc(nb_lf * sizeof(struct roc_cpt_lf), 0);
+	if (!lf_base) {
+		plt_err("Failed to alloc cpt lf memory");
+		rc = -ENOMEM;
+		goto lf_free;
+	}
+
+	/* Initialize CPT LF's */
+	for (i = 0; i < nb_lf; i++) {
+		lf = &lf_base[i];
+
+		lf->lf_id = i;
+		lf->nb_desc = roc_nix->outb_nb_desc;
+		lf->dev = &nix->dev;
+		lf->msixoff = nix->cpt_msixoff[i];
+		lf->pci_dev = nix->pci_dev;
+
+		/* Setup CPT LF instruction queue */
+		rc = cpt_lf_init(lf);
+		if (rc) {
+			plt_err("Failed to initialize CPT LF, rc=%d", rc);
+			goto lf_fini;
+		}
+
+		/* Associate this CPT LF with NIX PFFUNC */
+		rc = cpt_lf_outb_cfg(dev, sso_pffunc, nix->dev.pf_func, i,
+				     true);
+		if (rc) {
+			plt_err("Failed to setup CPT LF->(NIX,SSO) link, rc=%d",
+				rc);
+			goto lf_fini;
+		}
+
+		/* Enable IQ */
+		roc_cpt_iq_enable(lf);
+	}
+
+	if (!roc_nix->ipsec_out_max_sa)
+		goto skip_sa_alloc;
+
+	/* CN9K SA size is different */
+	if (roc_model_is_cn9k())
+		sa_sz = ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ;
+	else
+		sa_sz = ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ;
+	/* Alloc contiguous memory of outbound SA */
+	sa_base = plt_zmalloc(sa_sz * roc_nix->ipsec_out_max_sa,
+			      ROC_NIX_INL_SA_BASE_ALIGN);
+	if (!sa_base) {
+		plt_err("Outbound SA base alloc failed");
+		rc = -ENOMEM;
+		goto lf_fini;
+	}
+	nix->outb_sa_base = sa_base;
+	nix->outb_sa_sz = sa_sz;
+
+skip_sa_alloc:
+
+	nix->cpt_lf_base = lf_base;
+	nix->nb_cpt_lf = nb_lf;
+	nix->outb_err_sso_pffunc = sso_pffunc;
+	nix->inl_outb_ena = true;
+	return 0;
+
+lf_fini:
+	for (j = i - 1; j >= 0; j--)
+		cpt_lf_fini(&lf_base[j]);
+	plt_free(lf_base);
+lf_free:
+	rc |= cpt_lfs_free(dev);
+lf_detach:
+	rc |= cpt_lfs_detach(dev);
+	return rc;
+}
+
+int
+roc_nix_inl_outb_fini(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_cpt_lf *lf_base = nix->cpt_lf_base;
+	struct dev *dev = &nix->dev;
+	int i, rc, ret = 0;
+
+	if (!nix->inl_outb_ena)
+		return 0;
+
+	nix->inl_outb_ena = false;
+
+	/* Cleanup CPT LF instruction queue */
+	for (i = 0; i < nix->nb_cpt_lf; i++)
+		cpt_lf_fini(&lf_base[i]);
+
+	/* Free LF resources */
+	rc = cpt_lfs_free(dev);
+	if (rc)
+		plt_err("Failed to free CPT LF resources, rc=%d", rc);
+	ret |= rc;
+
+	/* Detach LF */
+	rc = cpt_lfs_detach(dev);
+	if (rc)
+		plt_err("Failed to detach CPT LF, rc=%d", rc);
+
+	/* Free LF memory */
+	plt_free(lf_base);
+	nix->cpt_lf_base = NULL;
+	nix->nb_cpt_lf = 0;
+
+	/* Free outbound SA base */
+	plt_free(nix->outb_sa_base);
+	nix->outb_sa_base = NULL;
+
+	ret |= rc;
+	return ret;
+}
+
+bool
+roc_nix_inl_dev_is_probed(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev == NULL)
+		return 0;
+
+	return !!idev->nix_inl_dev;
+}
+
+bool
+roc_nix_inl_inb_is_enabled(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inl_inb_ena;
+}
+
+bool
+roc_nix_inl_outb_is_enabled(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inl_outb_ena;
+}
+
+int
+roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+	struct dev *dev;
+	int rc;
+
+	if (idev == NULL)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	/* Nothing to do if no inline device */
+	if (!inl_dev)
+		return 0;
+
+	/* Just take reference if already inited */
+	if (inl_dev->rq_refs) {
+		inl_dev->rq_refs++;
+		rq->inl_dev_ref = true;
+		return 0;
+	}
+
+	dev = &inl_dev->dev;
+	inl_rq = &inl_dev->rq;
+	memset(inl_rq, 0, sizeof(struct roc_nix_rq));
+
+	/* Take RQ pool attributes from the first ethdev RQ */
+	inl_rq->qid = 0;
+	inl_rq->aura_handle = rq->aura_handle;
+	inl_rq->first_skip = rq->first_skip;
+	inl_rq->later_skip = rq->later_skip;
+	inl_rq->lpb_size = rq->lpb_size;
+
+	if (!roc_model_is_cn9k()) {
+		uint64_t aura_limit =
+			roc_npa_aura_op_limit_get(inl_rq->aura_handle);
+		uint64_t aura_shift = plt_log2_u32(aura_limit);
+
+		if (aura_shift < 8)
+			aura_shift = 0;
+		else
+			aura_shift = aura_shift - 8;
+
+		/* Set first pass RQ to drop when half of the buffers are in
+		 * use to avoid metabuf alloc failure. This is needed as long
+		 * as we cannot use a different aura for the inline device RQ.
+		 */
+		inl_rq->red_pass = (aura_limit / 2) >> aura_shift;
+		inl_rq->red_drop = ((aura_limit / 2) - 1) >> aura_shift;
+	}
+
+	/* Enable IPSec */
+	inl_rq->ipsech_ena = true;
+
+	inl_rq->flow_tag_width = 20;
+	/* Special tag mask */
+	inl_rq->tag_mask = 0xFFF00000;
+	inl_rq->tt = SSO_TT_ORDERED;
+	inl_rq->hwgrp = 0;
+	inl_rq->wqe_skip = 1;
+	inl_rq->sso_ena = true;
+
+	/* Prepare and send RQ init mbox */
+	if (roc_model_is_cn9k())
+		rc = nix_rq_cn9k_cfg(dev, inl_rq, inl_dev->qints, false, true);
+	else
+		rc = nix_rq_cfg(dev, inl_rq, inl_dev->qints, false, true);
+	if (rc) {
+		plt_err("Failed to prepare aq_enq msg, rc=%d", rc);
+		return rc;
+	}
+
+	rc = mbox_process(dev->mbox);
+	if (rc) {
+		plt_err("Failed to send aq_enq msg, rc=%d", rc);
+		return rc;
+	}
+
+	inl_dev->rq_refs++;
+	rq->inl_dev_ref = true;
+	return 0;
+}
+
+int
+roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+	struct dev *dev;
+	int rc;
+
+	if (idev == NULL)
+		return 0;
+
+	if (!rq->inl_dev_ref)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	/* Inline device should be there if we have ref */
+	if (!inl_dev) {
+		plt_err("Failed to find inline device with refs");
+		return -EFAULT;
+	}
+
+	rq->inl_dev_ref = false;
+	inl_dev->rq_refs--;
+	if (inl_dev->rq_refs)
+		return 0;
+
+	dev = &inl_dev->dev;
+	inl_rq = &inl_dev->rq;
+	/* There are no more references, disable RQ */
+	rc = nix_rq_ena_dis(dev, inl_rq, false);
+	if (rc)
+		plt_err("Failed to disable inline device rq, rc=%d", rc);
+
+	/* Flush NIX LF for CN10K */
+	if (roc_model_is_cn10k())
+		plt_write64(0, inl_dev->nix_base + NIX_LF_OP_VWQE_FLUSH);
+
+	return rc;
+}
+
+uint64_t
+roc_nix_inl_dev_rq_limit_get(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+	struct roc_nix_rq *inl_rq;
+
+	if (!idev || !idev->nix_inl_dev)
+		return 0;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev->rq_refs)
+		return 0;
+
+	inl_rq = &inl_dev->rq;
+
+	return roc_npa_aura_op_limit_get(inl_rq->aura_handle);
+}
+
+void
+roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	/* Info used by NPC flow rule add */
+	nix->inb_inl_dev = use_inl_dev;
+}
+
+bool
+roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->inb_inl_dev;
+}
+
+struct roc_nix_rq *
+roc_nix_inl_dev_rq(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev != NULL) {
+		inl_dev = idev->nix_inl_dev;
+		if (inl_dev != NULL && inl_dev->rq_refs)
+			return &inl_dev->rq;
+	}
+
+	return NULL;
+}
+
+uint16_t __roc_api
+roc_nix_inl_outb_sso_pffunc_get(struct roc_nix *roc_nix)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+
+	return nix->outb_err_sso_pffunc;
+}
+
+int
+roc_nix_inl_cb_register(roc_nix_inl_sso_work_cb_t cb, void *args)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return -EIO;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev)
+		return -EIO;
+
+	/* Be silent if registration called with same cb and args */
+	if (inl_dev->work_cb == cb && inl_dev->cb_args == args)
+		return 0;
+
+	/* Don't allow registration again if registered with different cb */
+	if (inl_dev->work_cb)
+		return -EBUSY;
+
+	inl_dev->work_cb = cb;
+	inl_dev->cb_args = args;
+	return 0;
+}
+
+int
+roc_nix_inl_cb_unregister(roc_nix_inl_sso_work_cb_t cb, void *args)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+	struct nix_inl_dev *inl_dev;
+
+	if (idev == NULL)
+		return -ENOENT;
+
+	inl_dev = idev->nix_inl_dev;
+	if (!inl_dev)
+		return -ENOENT;
+
+	if (inl_dev->work_cb != cb || inl_dev->cb_args != args)
+		return -EINVAL;
+
+	inl_dev->work_cb = NULL;
+	inl_dev->cb_args = NULL;
+	return 0;
+}
+
+int
+roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix, uint32_t tag_const,
+			   uint8_t tt)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_nix_ipsec_cfg cfg;
+
+	/* Be silent if inline inbound not enabled */
+	if (!nix->inl_inb_ena)
+		return 0;
+
+	memset(&cfg, 0, sizeof(cfg));
+	cfg.sa_size = nix->inb_sa_sz;
+	cfg.iova = (uintptr_t)nix->inb_sa_base;
+	cfg.max_sa = roc_nix->ipsec_in_max_spi + 1;
+	cfg.tt = tt;
+	cfg.tag_const = tag_const;
+
+	return roc_nix_lf_inl_ipsec_cfg(roc_nix, &cfg, true);
+}
+
+int
+roc_nix_inl_sa_sync(struct roc_nix *roc_nix, void *sa, bool inb,
+		    enum roc_nix_inl_sa_sync_op op)
+{
+	struct nix *nix = roc_nix_to_nix_priv(roc_nix);
+	struct roc_cpt_lf *outb_lf = nix->cpt_lf_base;
+	union cpt_lf_ctx_reload reload;
+	union cpt_lf_ctx_flush flush;
+	uintptr_t rbase;
+
+	/* Nothing much to do on cn9k */
+	if (roc_model_is_cn9k()) {
+		plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+		return 0;
+	}
+
+	if (!inb && !outb_lf)
+		return -EINVAL;
+
+	/* Performing op via outbound lf is enough
+	 * when inline dev is not in use.
+	 */
+	if (outb_lf && !nix->inb_inl_dev) {
+		rbase = outb_lf->rbase;
+
+		flush.u = 0;
+		reload.u = 0;
+		switch (op) {
+		case ROC_NIX_INL_SA_OP_FLUSH_INVAL:
+			flush.s.inval = 1;
+			/* fall through */
+		case ROC_NIX_INL_SA_OP_FLUSH:
+			flush.s.cptr = ((uintptr_t)sa) >> 7;
+			plt_write64(flush.u, rbase + CPT_LF_CTX_FLUSH);
+			break;
+		case ROC_NIX_INL_SA_OP_RELOAD:
+			reload.s.cptr = ((uintptr_t)sa) >> 7;
+			plt_write64(reload.u, rbase + CPT_LF_CTX_RELOAD);
+			break;
+		default:
+			return -EINVAL;
+		}
+		return 0;
+	}
+
+	return -ENOTSUP;
+}
+
+void
+roc_nix_inl_dev_lock(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev != NULL)
+		plt_spinlock_lock(&idev->nix_inl_dev_lock);
+}
+
+void
+roc_nix_inl_dev_unlock(void)
+{
+	struct idev_cfg *idev = idev_get_cfg();
+
+	if (idev != NULL)
+		plt_spinlock_unlock(&idev->nix_inl_dev_lock);
+}
diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 1b3aab0..6b8c268 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -43,6 +43,62 @@
 /* Alignment of SA Base */
 #define ROC_NIX_INL_SA_BASE_ALIGN BIT_ULL(16)
 
+static inline struct roc_onf_ipsec_inb_sa *
+roc_nix_inl_onf_ipsec_inb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_INB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline struct roc_onf_ipsec_outb_sa *
+roc_nix_inl_onf_ipsec_outb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_ONF_IPSEC_OUTB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline void *
+roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_INB_HW_SZ);
+}
+
+static inline void *
+roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_ONF_IPSEC_OUTB_HW_SZ);
+}
+
+static inline struct roc_ot_ipsec_inb_sa *
+roc_nix_inl_ot_ipsec_inb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_OT_IPSEC_INB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline struct roc_ot_ipsec_outb_sa *
+roc_nix_inl_ot_ipsec_outb_sa(uintptr_t base, uint64_t idx)
+{
+	uint64_t off = idx << ROC_NIX_INL_OT_IPSEC_OUTB_SA_SZ_LOG2;
+
+	return PLT_PTR_ADD(base, off);
+}
+
+static inline void *
+roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_OT_IPSEC_INB_HW_SZ);
+}
+
+static inline void *
+roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(void *sa)
+{
+	return PLT_PTR_ADD(sa, ROC_NIX_INL_OT_IPSEC_OUTB_HW_SZ);
+}
+
 /* Inline device SSO Work callback */
 typedef void (*roc_nix_inl_sso_work_cb_t)(uint64_t *gw, void *args);
 
@@ -62,5 +118,50 @@ struct roc_nix_inl_dev {
 int __roc_api roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev);
 int __roc_api roc_nix_inl_dev_fini(struct roc_nix_inl_dev *roc_inl_dev);
 void __roc_api roc_nix_inl_dev_dump(struct roc_nix_inl_dev *roc_inl_dev);
+bool __roc_api roc_nix_inl_dev_is_probed(void);
+void __roc_api roc_nix_inl_dev_lock(void);
+void __roc_api roc_nix_inl_dev_unlock(void);
+
+/* NIX Inline Inbound API */
+int __roc_api roc_nix_inl_inb_init(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_inb_fini(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_inl_inb_is_enabled(struct roc_nix *roc_nix);
+uintptr_t __roc_api roc_nix_inl_inb_sa_base_get(struct roc_nix *roc_nix,
+						bool inl_dev_sa);
+uint32_t __roc_api roc_nix_inl_inb_sa_max_spi(struct roc_nix *roc_nix,
+					      bool inl_dev_sa);
+uint32_t __roc_api roc_nix_inl_inb_sa_sz(struct roc_nix *roc_nix,
+					 bool inl_dev_sa);
+uintptr_t __roc_api roc_nix_inl_inb_sa_get(struct roc_nix *roc_nix,
+					   bool inl_dev_sa, uint32_t spi);
+void __roc_api roc_nix_inb_mode_set(struct roc_nix *roc_nix, bool use_inl_dev);
+int __roc_api roc_nix_inl_dev_rq_get(struct roc_nix_rq *rq);
+int __roc_api roc_nix_inl_dev_rq_put(struct roc_nix_rq *rq);
+bool __roc_api roc_nix_inb_is_with_inl_dev(struct roc_nix *roc_nix);
+struct roc_nix_rq *__roc_api roc_nix_inl_dev_rq(void);
+int __roc_api roc_nix_inl_inb_tag_update(struct roc_nix *roc_nix,
+					 uint32_t tag_const, uint8_t tt);
+uint64_t __roc_api roc_nix_inl_dev_rq_limit_get(void);
+
+/* NIX Inline Outbound API */
+int __roc_api roc_nix_inl_outb_init(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_outb_fini(struct roc_nix *roc_nix);
+bool __roc_api roc_nix_inl_outb_is_enabled(struct roc_nix *roc_nix);
+uintptr_t __roc_api roc_nix_inl_outb_sa_base_get(struct roc_nix *roc_nix);
+struct roc_cpt_lf *__roc_api
+roc_nix_inl_outb_lf_base_get(struct roc_nix *roc_nix);
+uint16_t __roc_api roc_nix_inl_outb_sso_pffunc_get(struct roc_nix *roc_nix);
+int __roc_api roc_nix_inl_cb_register(roc_nix_inl_sso_work_cb_t cb, void *args);
+int __roc_api roc_nix_inl_cb_unregister(roc_nix_inl_sso_work_cb_t cb,
+					void *args);
+/* NIX Inline/Outbound API */
+enum roc_nix_inl_sa_sync_op {
+	ROC_NIX_INL_SA_OP_FLUSH,
+	ROC_NIX_INL_SA_OP_FLUSH_INVAL,
+	ROC_NIX_INL_SA_OP_RELOAD,
+};
+
+int __roc_api roc_nix_inl_sa_sync(struct roc_nix *roc_nix, void *sa, bool inb,
+				  enum roc_nix_inl_sa_sync_op op);
 
 #endif /* _ROC_NIX_INL_H_ */
diff --git a/drivers/common/cnxk/roc_nix_priv.h b/drivers/common/cnxk/roc_nix_priv.h
index b140dad..7653c5a 100644
--- a/drivers/common/cnxk/roc_nix_priv.h
+++ b/drivers/common/cnxk/roc_nix_priv.h
@@ -164,6 +164,21 @@ struct nix {
 	uint16_t tm_link_cfg_lvl;
 	uint16_t contig_rsvd[NIX_TXSCH_LVL_CNT];
 	uint16_t discontig_rsvd[NIX_TXSCH_LVL_CNT];
+
+	/* Ipsec info */
+	uint16_t cpt_msixoff[MAX_RVU_BLKLF_CNT];
+	bool inl_inb_ena;
+	bool inl_outb_ena;
+	void *inb_sa_base;
+	size_t inb_sa_sz;
+	void *outb_sa_base;
+	size_t outb_sa_sz;
+	uint16_t outb_err_sso_pffunc;
+	struct roc_cpt_lf *cpt_lf_base;
+	uint16_t nb_cpt_lf;
+	/* Mode provided by driver */
+	bool inb_inl_dev;
+
 } __plt_cache_aligned;
 
 enum nix_err_status {
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index cff0ec3..41e8f2c 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -131,11 +131,11 @@ nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
 
 	/* If RED enabled, then fill enable for all cases */
 	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
-		aq->rq.spb_aura_pass = rq->spb_red_pass;
-		aq->rq.lpb_aura_pass = rq->red_pass;
+		aq->rq.spb_pool_pass = rq->spb_red_pass;
+		aq->rq.lpb_pool_pass = rq->red_pass;
 
-		aq->rq.spb_aura_drop = rq->spb_red_drop;
-		aq->rq.lpb_aura_drop = rq->red_drop;
+		aq->rq.spb_pool_drop = rq->spb_red_drop;
+		aq->rq.lpb_pool_drop = rq->red_drop;
 	}
 
 	if (cfg) {
@@ -176,11 +176,11 @@ nix_rq_cn9k_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints,
 		aq->rq_mask.xqe_drop_ena = ~aq->rq_mask.xqe_drop_ena;
 
 		if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
-			aq->rq_mask.spb_aura_pass = ~aq->rq_mask.spb_aura_pass;
-			aq->rq_mask.lpb_aura_pass = ~aq->rq_mask.lpb_aura_pass;
+			aq->rq_mask.spb_pool_pass = ~aq->rq_mask.spb_pool_pass;
+			aq->rq_mask.lpb_pool_pass = ~aq->rq_mask.lpb_pool_pass;
 
-			aq->rq_mask.spb_aura_drop = ~aq->rq_mask.spb_aura_drop;
-			aq->rq_mask.lpb_aura_drop = ~aq->rq_mask.lpb_aura_drop;
+			aq->rq_mask.spb_pool_drop = ~aq->rq_mask.spb_pool_drop;
+			aq->rq_mask.lpb_pool_drop = ~aq->rq_mask.lpb_pool_drop;
 		}
 	}
 
@@ -276,17 +276,13 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 
 	/* If RED enabled, then fill enable for all cases */
 	if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
-		aq->rq.spb_pool_pass = rq->red_pass;
-		aq->rq.spb_aura_pass = rq->red_pass;
+		aq->rq.spb_pool_pass = rq->spb_red_pass;
 		aq->rq.lpb_pool_pass = rq->red_pass;
-		aq->rq.lpb_aura_pass = rq->red_pass;
 		aq->rq.wqe_pool_pass = rq->red_pass;
 		aq->rq.xqe_pass = rq->red_pass;
 
-		aq->rq.spb_pool_drop = rq->red_drop;
-		aq->rq.spb_aura_drop = rq->red_drop;
+		aq->rq.spb_pool_drop = rq->spb_red_drop;
 		aq->rq.lpb_pool_drop = rq->red_drop;
-		aq->rq.lpb_aura_drop = rq->red_drop;
 		aq->rq.wqe_pool_drop = rq->red_drop;
 		aq->rq.xqe_drop = rq->red_drop;
 	}
@@ -346,16 +342,12 @@ nix_rq_cfg(struct dev *dev, struct roc_nix_rq *rq, uint16_t qints, bool cfg,
 
 		if (rq->red_pass && (rq->red_pass >= rq->red_drop)) {
 			aq->rq_mask.spb_pool_pass = ~aq->rq_mask.spb_pool_pass;
-			aq->rq_mask.spb_aura_pass = ~aq->rq_mask.spb_aura_pass;
 			aq->rq_mask.lpb_pool_pass = ~aq->rq_mask.lpb_pool_pass;
-			aq->rq_mask.lpb_aura_pass = ~aq->rq_mask.lpb_aura_pass;
 			aq->rq_mask.wqe_pool_pass = ~aq->rq_mask.wqe_pool_pass;
 			aq->rq_mask.xqe_pass = ~aq->rq_mask.xqe_pass;
 
 			aq->rq_mask.spb_pool_drop = ~aq->rq_mask.spb_pool_drop;
-			aq->rq_mask.spb_aura_drop = ~aq->rq_mask.spb_aura_drop;
 			aq->rq_mask.lpb_pool_drop = ~aq->rq_mask.lpb_pool_drop;
-			aq->rq_mask.lpb_aura_drop = ~aq->rq_mask.lpb_aura_drop;
 			aq->rq_mask.wqe_pool_drop = ~aq->rq_mask.wqe_pool_drop;
 			aq->rq_mask.xqe_drop = ~aq->rq_mask.xqe_drop;
 		}
diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c
index 1c1e043..b724ff9 100644
--- a/drivers/common/cnxk/roc_npc.c
+++ b/drivers/common/cnxk/roc_npc.c
@@ -342,10 +342,11 @@ roc_npc_fini(struct roc_npc *roc_npc)
 }
 
 static int
-npc_parse_actions(struct npc *npc, const struct roc_npc_attr *attr,
+npc_parse_actions(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 		  const struct roc_npc_action actions[],
 		  struct roc_npc_flow *flow)
 {
+	struct npc *npc = roc_npc_to_npc_priv(roc_npc);
 	const struct roc_npc_action_mark *act_mark;
 	const struct roc_npc_action_queue *act_q;
 	const struct roc_npc_action_vf *vf_act;
@@ -427,15 +428,16 @@ npc_parse_actions(struct npc *npc, const struct roc_npc_attr *attr,
 			 *    NPC_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
 			 *  session_protocol ==
 			 *    NPC_SECURITY_PROTOCOL_IPSEC
-			 *
-			 * RSS is not supported with inline ipsec. Get the
-			 * rq from associated conf, or make
-			 * ROC_NPC_ACTION_TYPE_QUEUE compulsory with this
-			 * action.
-			 * Currently, rq = 0 is assumed.
 			 */
 			req_act |= ROC_NPC_ACTION_TYPE_SEC;
 			rq = 0;
+
+			/* Special processing when with inline device */
+			if (roc_nix_inb_is_with_inl_dev(roc_npc->roc_nix) &&
+			    roc_nix_inl_dev_is_probed()) {
+				rq = 0;
+				pf_func = nix_inl_dev_pffunc_get();
+			}
 			break;
 		case ROC_NPC_ACTION_TYPE_VLAN_STRIP:
 			req_act |= ROC_NPC_ACTION_TYPE_VLAN_STRIP;
@@ -679,11 +681,12 @@ npc_parse_attr(struct npc *npc, const struct roc_npc_attr *attr,
 }
 
 static int
-npc_parse_rule(struct npc *npc, const struct roc_npc_attr *attr,
+npc_parse_rule(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	       const struct roc_npc_item_info pattern[],
 	       const struct roc_npc_action actions[], struct roc_npc_flow *flow,
 	       struct npc_parse_state *pst)
 {
+	struct npc *npc = roc_npc_to_npc_priv(roc_npc);
 	int err;
 
 	/* Check attr */
@@ -697,7 +700,7 @@ npc_parse_rule(struct npc *npc, const struct roc_npc_attr *attr,
 		return err;
 
 	/* Check action */
-	err = npc_parse_actions(npc, attr, actions, flow);
+	err = npc_parse_actions(roc_npc, attr, actions, flow);
 	if (err)
 		return err;
 	return 0;
@@ -713,7 +716,8 @@ roc_npc_flow_parse(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	struct npc_parse_state parse_state = {0};
 	int rc;
 
-	rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state);
+	rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow,
+			    &parse_state);
 	if (rc)
 		return rc;
 
@@ -1193,7 +1197,8 @@ roc_npc_flow_create(struct roc_npc *roc_npc, const struct roc_npc_attr *attr,
 	}
 	memset(flow, 0, sizeof(*flow));
 
-	rc = npc_parse_rule(npc, attr, pattern, actions, flow, &parse_state);
+	rc = npc_parse_rule(roc_npc, attr, pattern, actions, flow,
+			    &parse_state);
 	if (rc != 0) {
 		*errcode = rc;
 		goto err_exit;
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 1f76664..926d5c2 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -99,9 +99,35 @@ INTERNAL {
 	roc_nix_get_pf_func;
 	roc_nix_get_vf;
 	roc_nix_get_vwqe_interval;
+	roc_nix_inl_cb_register;
+	roc_nix_inl_cb_unregister;
 	roc_nix_inl_dev_dump;
 	roc_nix_inl_dev_fini;
 	roc_nix_inl_dev_init;
+	roc_nix_inl_dev_is_probed;
+	roc_nix_inl_dev_lock;
+	roc_nix_inl_dev_unlock;
+	roc_nix_inl_dev_rq;
+	roc_nix_inl_dev_rq_get;
+	roc_nix_inl_dev_rq_put;
+	roc_nix_inl_dev_rq_limit_get;
+	roc_nix_inl_inb_is_enabled;
+	roc_nix_inl_inb_init;
+	roc_nix_inl_inb_sa_base_get;
+	roc_nix_inl_inb_sa_get;
+	roc_nix_inl_inb_sa_max_spi;
+	roc_nix_inl_inb_sa_sz;
+	roc_nix_inl_inb_tag_update;
+	roc_nix_inl_inb_fini;
+	roc_nix_inb_is_with_inl_dev;
+	roc_nix_inb_mode_set;
+	roc_nix_inl_outb_fini;
+	roc_nix_inl_outb_init;
+	roc_nix_inl_outb_lf_base_get;
+	roc_nix_inl_outb_sa_base_get;
+	roc_nix_inl_outb_sso_pffunc_get;
+	roc_nix_inl_outb_is_enabled;
+	roc_nix_inl_sa_sync;
 	roc_nix_is_lbk;
 	roc_nix_is_pf;
 	roc_nix_is_sdp;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 08/28] common/cnxk: disable CQ drop when inline inbound is enabled
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (6 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 07/28] common/cnxk: support NIX inline inbound and outbound setup Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 09/28] common/cnxk: dump CPT LF registers on error intr Nithin Dabilpuram
                     ` (20 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Disable CQ drop when inline inbound is enabled. CQ drop
is not supported for second pass IPsec decrypted packets.
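
In essence, the CQ context programming becomes conditional. A condensed
sketch of the hunk below, using the same names as the driver:

	/* Program CQ drop thresholds only when inline inbound IPsec is
	 * off; second-pass packets coming back from CPT must not go
	 * through CQ drop processing.
	 */
	if (!roc_nix_inl_inb_is_enabled(roc_nix)) {
		cq_ctx->drop = cq->drop_thresh;
		cq_ctx->drop_ena = 1;
	}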

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_queue.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 41e8f2c..41a1422 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -492,15 +492,20 @@ roc_nix_cq_init(struct roc_nix *roc_nix, struct roc_nix_cq *cq)
 		cq->drop_thresh = min_rx_drop;
 	} else {
 		cq->drop_thresh = NIX_CQ_THRESH_LEVEL;
-		cq_ctx->drop = cq->drop_thresh;
-		cq_ctx->drop_ena = 1;
+		/* Drop processing or red drop cannot be enabled due to
+		 * packets coming for second pass from CPT.
+		 */
+		if (!roc_nix_inl_inb_is_enabled(roc_nix)) {
+			cq_ctx->drop = cq->drop_thresh;
+			cq_ctx->drop_ena = 1;
+		}
 	}
 
 	/* TX pause frames enable flow ctrl on RX side */
 	if (nix->tx_pause) {
 		/* Single BPID is allocated for all rx channels for now */
 		cq_ctx->bpid = nix->bpid[0];
-		cq_ctx->bp = cq_ctx->drop;
+		cq_ctx->bp = cq->drop_thresh;
 		cq_ctx->bp_ena = 1;
 	}
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 09/28] common/cnxk: dump CPT LF registers on error intr
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (7 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 08/28] common/cnxk: disable CQ drop when inline inbound is enabled Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 10/28] common/cnxk: align CPT LF enable/disable sequence Nithin Dabilpuram
                     ` (19 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Dump CPT LF registers on error interrupt for debugging
purposes.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_cpt.c       |  5 ++++-
 drivers/common/cnxk/roc_cpt_debug.c | 32 ++++++++++++++++++++++++++++++--
 drivers/common/cnxk/roc_cpt_priv.h  |  1 +
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 48a378b..6ddbaa2 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -51,6 +51,9 @@ cpt_lf_misc_irq(void *param)
 
 	plt_err("Err_irq=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
 
+	/* Dump lf registers */
+	cpt_lf_print(lf);
+
 	/* Clear interrupt */
 	plt_write64(intr, lf->rbase + CPT_LF_MISC_INT);
 }
@@ -203,7 +206,7 @@ cpt_lf_dump(struct roc_cpt_lf *lf)
 	plt_cpt_dbg("CPT LF REG:");
 	plt_cpt_dbg("LF_CTL[0x%016llx]: 0x%016" PRIx64, CPT_LF_CTL,
 		    plt_read64(lf->rbase + CPT_LF_CTL));
-	plt_cpt_dbg("Q_SIZE[0x%016llx]: 0x%016" PRIx64, CPT_LF_INPROG,
+	plt_cpt_dbg("LF_INPROG[0x%016llx]: 0x%016" PRIx64, CPT_LF_INPROG,
 		    plt_read64(lf->rbase + CPT_LF_INPROG));
 
 	plt_cpt_dbg("Q_BASE[0x%016llx]: 0x%016" PRIx64, CPT_LF_Q_BASE,
diff --git a/drivers/common/cnxk/roc_cpt_debug.c b/drivers/common/cnxk/roc_cpt_debug.c
index a6c9004..847d969 100644
--- a/drivers/common/cnxk/roc_cpt_debug.c
+++ b/drivers/common/cnxk/roc_cpt_debug.c
@@ -157,11 +157,40 @@ roc_cpt_afs_print(struct roc_cpt *roc_cpt)
 	return 0;
 }
 
-static void
+void
 cpt_lf_print(struct roc_cpt_lf *lf)
 {
 	uint64_t reg_val;
 
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_BASE);
+	plt_print("    CPT_LF_Q_BASE:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_SIZE);
+	plt_print("    CPT_LF_Q_SIZE:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_INST_PTR);
+	plt_print("    CPT_LF_Q_INST_PTR:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_Q_GRP_PTR);
+	plt_print("    CPT_LF_Q_GRP_PTR:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_CTL);
+	plt_print("    CPT_LF_CTL:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_MISC_INT_ENA_W1S);
+	plt_print("    CPT_LF_MISC_INT_ENA_W1S:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_MISC_INT);
+	plt_print("    CPT_LF_MISC_INT:\t%016lx", reg_val);
+
+	reg_val = plt_read64(lf->rbase + CPT_LF_INPROG);
+	plt_print("    CPT_LF_INPROG:\t%016lx", reg_val);
+
+	if (roc_model_is_cn9k())
+		return;
+
+	plt_print("Count registers for CPT LF%d:", lf->lf_id);
+
 	reg_val = plt_read64(lf->rbase + CPT_LF_CTX_ENC_BYTE_CNT);
 	plt_print("    Encrypted byte count:\t%" PRIu64, reg_val);
 
@@ -190,7 +219,6 @@ roc_cpt_lfs_print(struct roc_cpt *roc_cpt)
 		if (lf == NULL)
 			continue;
 
-		plt_print("Count registers for CPT LF%d:", lf_id);
 		cpt_lf_print(lf);
 	}
 
diff --git a/drivers/common/cnxk/roc_cpt_priv.h b/drivers/common/cnxk/roc_cpt_priv.h
index 21911e5..61dec9a 100644
--- a/drivers/common/cnxk/roc_cpt_priv.h
+++ b/drivers/common/cnxk/roc_cpt_priv.h
@@ -31,5 +31,6 @@ int cpt_lf_outb_cfg(struct dev *dev, uint16_t sso_pf_func, uint16_t nix_pf_func,
 		    uint8_t lf_id, bool ena);
 int cpt_get_msix_offset(struct dev *dev, struct msix_offset_rsp **msix_rsp);
 uint64_t cpt_get_blkaddr(struct dev *dev);
+void cpt_lf_print(struct roc_cpt_lf *lf);
 
 #endif /* _ROC_CPT_PRIV_H_ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 10/28] common/cnxk: align CPT LF enable/disable sequence
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (8 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 09/28] common/cnxk: dump CPT LF registers on error intr Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 11/28] common/cnxk: restore NIX sqb pool limit before destroy Nithin Dabilpuram
                     ` (18 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

For CPT LF IQ enable, set CPT_LF_CTL[ENA] before setting
CPT_LF_INPROG[EENA] to true.

For CPT LF IQ disable, align sequence to that of HRM.

This patch also aligns the space for instructions in the CPT LF
to ROC_ALIGN so that the complete memory is cache aligned, and
includes other minor fixes and additions.
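
A condensed sketch of the resulting enable order, simplified from
roc_cpt_iq_enable() below (the FC threshold programming is omitted
here; this is not a replacement for the real routine):

	union cpt_lf_ctl lf_ctl;
	union cpt_lf_inprog lf_inprog;

	/* Step 1: enable instruction enqueuing via CPT_LF_CTL[ENA] */
	lf_ctl.u = plt_read64(lf->rbase + CPT_LF_CTL);
	lf_ctl.s.ena = 1;
	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);

	/* Step 2: only then enable execution via CPT_LF_INPROG[EENA] */
	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
	lf_inprog.s.eena = 1;
	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);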

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/hw/cpt.h  | 11 +++++++++++
 drivers/common/cnxk/roc_cpt.c | 42 ++++++++++++++++++++++++++++++++++--------
 drivers/common/cnxk/roc_cpt.h |  8 ++++++++
 3 files changed, 53 insertions(+), 8 deletions(-)

diff --git a/drivers/common/cnxk/hw/cpt.h b/drivers/common/cnxk/hw/cpt.h
index 975139f..4d9df59 100644
--- a/drivers/common/cnxk/hw/cpt.h
+++ b/drivers/common/cnxk/hw/cpt.h
@@ -124,6 +124,17 @@ union cpt_lf_misc_int {
 	} s;
 };
 
+union cpt_lf_q_grp_ptr {
+	uint64_t u;
+	struct {
+		uint64_t dq_ptr : 15;
+		uint64_t reserved_31_15 : 17;
+		uint64_t nq_ptr : 15;
+		uint64_t reserved_47_62 : 16;
+		uint64_t xq_xor : 1;
+	} s;
+};
+
 union cpt_inst_w4 {
 	uint64_t u64;
 	struct {
diff --git a/drivers/common/cnxk/roc_cpt.c b/drivers/common/cnxk/roc_cpt.c
index 6ddbaa2..68fdb27 100644
--- a/drivers/common/cnxk/roc_cpt.c
+++ b/drivers/common/cnxk/roc_cpt.c
@@ -437,8 +437,10 @@ cpt_lf_iq_mem_calc(uint32_t nb_desc)
 	len += CPT_IQ_FC_LEN;
 
 	/* For instruction queues */
-	len += CPT_IQ_NB_DESC_SIZE_DIV40(nb_desc) * CPT_IQ_NB_DESC_MULTIPLIER *
-	       sizeof(struct cpt_inst_s);
+	len += PLT_ALIGN(CPT_IQ_NB_DESC_SIZE_DIV40(nb_desc) *
+				 CPT_IQ_NB_DESC_MULTIPLIER *
+				 sizeof(struct cpt_inst_s),
+			 ROC_ALIGN);
 
 	return len;
 }
@@ -550,6 +552,7 @@ cpt_lf_init(struct roc_cpt_lf *lf)
 	iq_mem = plt_zmalloc(cpt_lf_iq_mem_calc(lf->nb_desc), ROC_ALIGN);
 	if (iq_mem == NULL)
 		return -ENOMEM;
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
 
 	blkaddr = cpt_get_blkaddr(dev);
 	lf->rbase = dev->bar2 + ((blkaddr << 20) | (lf->lf_id << 12));
@@ -634,7 +637,7 @@ roc_cpt_dev_init(struct roc_cpt *roc_cpt)
 	}
 
 	/* Reserve 1 CPT LF for inline inbound */
-	nb_lf_avail = PLT_MIN(nb_lf_avail, ROC_CPT_MAX_LFS - 1);
+	nb_lf_avail = PLT_MIN(nb_lf_avail, (uint16_t)(ROC_CPT_MAX_LFS - 1));
 
 	roc_cpt->nb_lf_avail = nb_lf_avail;
 
@@ -770,8 +773,10 @@ void
 roc_cpt_iq_disable(struct roc_cpt_lf *lf)
 {
 	union cpt_lf_ctl lf_ctl = {.u = 0x0};
+	union cpt_lf_q_grp_ptr grp_ptr;
 	union cpt_lf_inprog lf_inprog;
 	int timeout = 20;
+	int cnt;
 
 	/* Disable instructions enqueuing */
 	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);
@@ -795,6 +800,27 @@ roc_cpt_iq_disable(struct roc_cpt_lf *lf)
 	 */
 	lf_inprog.s.eena = 0x0;
 	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
+
+	/* Wait for instruction queue to become empty */
+	cnt = 0;
+	do {
+		lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+		if (lf_inprog.s.grb_partial)
+			cnt = 0;
+		else
+			cnt++;
+		grp_ptr.u = plt_read64(lf->rbase + CPT_LF_Q_GRP_PTR);
+	} while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr));
+
+	cnt = 0;
+	do {
+		lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+		if ((lf_inprog.s.inflight == 0) && (lf_inprog.s.gwb_cnt < 40) &&
+		    ((lf_inprog.s.grb_cnt == 0) || (lf_inprog.s.grb_cnt == 40)))
+			cnt++;
+		else
+			cnt = 0;
+	} while (cnt < 10);
 }
 
 void
@@ -806,11 +832,6 @@ roc_cpt_iq_enable(struct roc_cpt_lf *lf)
 	/* Disable command queue */
 	roc_cpt_iq_disable(lf);
 
-	/* Enable command queue execution */
-	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
-	lf_inprog.s.eena = 1;
-	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
-
 	/* Enable instruction queue enqueuing */
 	lf_ctl.u = plt_read64(lf->rbase + CPT_LF_CTL);
 	lf_ctl.s.ena = 1;
@@ -819,6 +840,11 @@ roc_cpt_iq_enable(struct roc_cpt_lf *lf)
 	lf_ctl.s.fc_hyst_bits = lf->fc_hyst_bits;
 	plt_write64(lf_ctl.u, lf->rbase + CPT_LF_CTL);
 
+	/* Enable command queue execution */
+	lf_inprog.u = plt_read64(lf->rbase + CPT_LF_INPROG);
+	lf_inprog.s.eena = 1;
+	plt_write64(lf_inprog.u, lf->rbase + CPT_LF_INPROG);
+
 	cpt_lf_dump(lf);
 }
 
diff --git a/drivers/common/cnxk/roc_cpt.h b/drivers/common/cnxk/roc_cpt.h
index c80a8e0..06277d1 100644
--- a/drivers/common/cnxk/roc_cpt.h
+++ b/drivers/common/cnxk/roc_cpt.h
@@ -76,6 +76,14 @@
 #define ROC_CPT_TUNNEL_IPV4_HDR_LEN 20
 #define ROC_CPT_TUNNEL_IPV6_HDR_LEN 40
 
+#define ROC_CPT_CCM_AAD_DATA 1
+#define ROC_CPT_CCM_MSG_LEN  4
+#define ROC_CPT_CCM_ICV_LEN  16
+#define ROC_CPT_CCM_FLAGS                                                      \
+	((ROC_CPT_CCM_AAD_DATA << 6) |                                         \
+	 (((ROC_CPT_CCM_ICV_LEN - 2) / 2) << 3) | (ROC_CPT_CCM_MSG_LEN - 1))
+#define ROC_CPT_CCM_SALT_LEN 3
+
 struct roc_cpt_lmtline {
 	uint64_t io_addr;
 	uint64_t *fc_addr;
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 11/28] common/cnxk: restore NIX sqb pool limit before destroy
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (9 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 10/28] common/cnxk: align CPT LF enable/disable sequence Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 12/28] common/cnxk: add CQ enable support in NIX Tx path Nithin Dabilpuram
                     ` (17 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Restore the SQB AURA/POOL limit before destroying the SQB so that
all the buffers can be drained from the aura.
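
The fix in brief, with the reasoning as a comment (a sketch of the
hunk below; the runtime-limit detail is an assumption drawn from the
commit message):

	/* The aura limit is kept below the created pool size at
	 * runtime, so the pool destroy alone cannot reclaim every SQB;
	 * restore the limit first so the drain sees all buffers.
	 */
	roc_npa_aura_limit_modify(sq->aura_handle, NIX_MAX_SQB);
	rc |= roc_npa_pool_destroy(sq->aura_handle);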

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_queue.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index 41a1422..a8a713a 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -934,6 +934,11 @@ roc_nix_sq_fini(struct roc_nix_sq *sq)
 		rc |= NIX_ERR_NDC_SYNC;
 
 	rc |= nix_tm_sq_flush_post(sq);
+
+	/* Restore limit to max SQB count that the pool was created
+	 * for aura drain to succeed.
+	 */
+	roc_npa_aura_limit_modify(sq->aura_handle, NIX_MAX_SQB);
 	rc |= roc_npa_pool_destroy(sq->aura_handle);
 	plt_free(sq->fc);
 	plt_free(sq->sqe_mem);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 12/28] common/cnxk: add CQ enable support in NIX Tx path
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (10 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 11/28] common/cnxk: restore NIX sqb pool limit before destroy Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 13/28] common/cnxk: setup aura BP conf based on nix Nithin Dabilpuram
                     ` (16 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Kommula Shiva Shankar

From: Kommula Shiva Shankar <kshankar@marvell.com>

This patch allows applications to enable CQ support in the
Tx path, providing packet completion events on the CQ for
requested packets.
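
A hypothetical usage sketch; roc_nix is an already-initialized device
handle, the CQ is assumed to be created beforehand via roc_nix_cq_init(),
and the max SQE size enum value name is taken from the existing roc_nix.h:

	struct roc_nix_sq sq;
	int rc;

	memset(&sq, 0, sizeof(sq));
	sq.qid = 0;				/* SQ index */
	sq.nb_desc = 1024;
	sq.max_sqe_sz = roc_nix_maxsqesz_w16;
	sq.cq_ena = true;			/* new: request completions */
	sq.cqid = 0;				/* new: CQ receiving them */
	rc = roc_nix_sq_init(roc_nix, &sq);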

Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
---
 drivers/common/cnxk/roc_nix.h       | 2 ++
 drivers/common/cnxk/roc_nix_queue.c | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h
index 4fcce49..b06895a 100644
--- a/drivers/common/cnxk/roc_nix.h
+++ b/drivers/common/cnxk/roc_nix.h
@@ -194,7 +194,9 @@ struct roc_nix_sq {
 	enum roc_nix_sq_max_sqe_sz max_sqe_sz;
 	uint32_t nb_desc;
 	uint16_t qid;
+	uint16_t cqid;
 	bool sso_ena;
+	bool cq_ena;
 	/* End of Input parameters */
 	uint16_t sqes_per_sqb_log2;
 	struct roc_nix *roc_nix;
diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c
index a8a713a..cba1294 100644
--- a/drivers/common/cnxk/roc_nix_queue.c
+++ b/drivers/common/cnxk/roc_nix_queue.c
@@ -661,6 +661,8 @@ sq_cn9k_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
 	aq->sq.sqe_stype = NIX_STYPE_STF;
 	aq->sq.ena = 1;
 	aq->sq.sso_ena = !!sq->sso_ena;
+	aq->sq.cq_ena = !!sq->cq_ena;
+	aq->sq.cq = sq->cqid;
 	if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
 		aq->sq.sqe_stype = NIX_STYPE_STP;
 	aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
@@ -759,6 +761,8 @@ sq_init(struct nix *nix, struct roc_nix_sq *sq, uint32_t rr_quantum,
 	aq->sq.sqe_stype = NIX_STYPE_STF;
 	aq->sq.ena = 1;
 	aq->sq.sso_ena = !!sq->sso_ena;
+	aq->sq.cq_ena = !!sq->cq_ena;
+	aq->sq.cq = sq->cqid;
 	if (aq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
 		aq->sq.sqe_stype = NIX_STYPE_STP;
 	aq->sq.sqb_aura = roc_npa_aura_handle_to_aura(sq->aura_handle);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 13/28] common/cnxk: setup aura BP conf based on nix
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (11 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 12/28] common/cnxk: add CQ enable support in NIX Tx path Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 14/28] common/cnxk: support anti-replay check in SW for cn9k Nithin Dabilpuram
                     ` (15 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao; +Cc: dev

Currently, only the NIX0 conf is set up in the AURA for backpressure.
This patch adds support for NIX1 as well.
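
The encoding, as inferred from the hunk below: aura.bp_ena acts as a
per-NIX enable bitmask, bit 0 for NIX0 and bit 1 for NIX1:

	/* Enable side: shift the enable bit by is_nix1 (0x1 or 0x2) */
	req->aura.bp_ena = (!!ena << nix->is_nix1);

	/* Read-back side: recover which NIX enabled BP and its BPID */
	bool nix1 = !!(rsp->aura.bp_ena & 0x2);
	uint16_t bpid = nix1 ? rsp->aura.nix1_bpid : rsp->aura.nix0_bpid;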

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/common/cnxk/roc_nix_fc.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_fc.c b/drivers/common/cnxk/roc_nix_fc.c
index f17eba4..7eac7d0 100644
--- a/drivers/common/cnxk/roc_nix_fc.c
+++ b/drivers/common/cnxk/roc_nix_fc.c
@@ -284,8 +284,18 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 	limit = rsp->aura.limit;
 	/* BP is already enabled. */
 	if (rsp->aura.bp_ena) {
+		uint16_t bpid;
+		bool nix1;
+
+		nix1 = !!(rsp->aura.bp_ena & 0x2);
+		if (nix1)
+			bpid = rsp->aura.nix1_bpid;
+		else
+			bpid = rsp->aura.nix0_bpid;
+
 		/* If BP ids don't match disable BP. */
-		if ((rsp->aura.nix0_bpid != nix->bpid[0]) && !force) {
+		if (((nix1 != nix->is_nix1) || (bpid != nix->bpid[0])) &&
+		    !force) {
 			req = mbox_alloc_msg_npa_aq_enq(mbox);
 			if (req == NULL)
 				return;
@@ -315,14 +325,19 @@ rox_nix_fc_npa_bp_cfg(struct roc_nix *roc_nix, uint64_t pool_id, uint8_t ena,
 	req->op = NPA_AQ_INSTOP_WRITE;
 
 	if (ena) {
-		req->aura.nix0_bpid = nix->bpid[0];
-		req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
+		if (nix->is_nix1) {
+			req->aura.nix1_bpid = nix->bpid[0];
+			req->aura_mask.nix1_bpid = ~(req->aura_mask.nix1_bpid);
+		} else {
+			req->aura.nix0_bpid = nix->bpid[0];
+			req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
+		}
 		req->aura.bp = NIX_RQ_AURA_THRESH(
 			limit > 128 ? 256 : limit); /* 95% of size*/
 		req->aura_mask.bp = ~(req->aura_mask.bp);
 	}
 
-	req->aura.bp_ena = !!ena;
+	req->aura.bp_ena = (!!ena << nix->is_nix1);
 	req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
 
 	mbox_process(mbox);
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 14/28] common/cnxk: support anti-replay check in SW for cn9k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (12 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 13/28] common/cnxk: setup aura BP conf based on nix Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 15/28] common/cnxk: support inline IPsec rte flow action Nithin Dabilpuram
                     ` (14 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Add an anti-replay SW implementation for the cn9k platform.
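
A hypothetical caller sketch: cnxk_on_anti_replay_check() takes no lock
itself, so the per-SA spinlock in struct cnxk_on_ipsec_ar is assumed to
be taken by the caller (and initialized with rte_spinlock_init()); the
window size here is illustrative:

	#include <rte_spinlock.h>

	#include "cnxk_security_ar.h"

	static int
	inb_replay_check(struct cnxk_on_ipsec_ar *ar, uint64_t seq)
	{
		uint32_t winsz = 64; /* per SA; <= CNXK_ON_AR_WIN_SIZE_MAX */
		int rc;

		rte_spinlock_lock(&ar->lock);
		rc = cnxk_on_anti_replay_check(seq, ar, winsz);
		rte_spinlock_unlock(&ar->lock);

		/* rc == IPSEC_ANTI_REPLAY_FAILED (-1) means drop */
		return rc;
	}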

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/common/cnxk/cnxk_security_ar.h | 184 +++++++++++++++++++++++++++++++++
 1 file changed, 184 insertions(+)
 create mode 100644 drivers/common/cnxk/cnxk_security_ar.h

diff --git a/drivers/common/cnxk/cnxk_security_ar.h b/drivers/common/cnxk/cnxk_security_ar.h
new file mode 100644
index 0000000..6bc517c
--- /dev/null
+++ b/drivers/common/cnxk/cnxk_security_ar.h
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef __CNXK_SECURITY_AR_H__
+#define __CNXK_SECURITY_AR_H__
+
+#include <rte_mbuf.h>
+
+#include "cnxk_security.h"
+
+#define CNXK_ON_AR_WIN_SIZE_MAX 1024
+
+/* u64 array size to fit anti replay window bits */
+#define AR_WIN_ARR_SZ                                                          \
+	(PLT_ALIGN_CEIL(CNXK_ON_AR_WIN_SIZE_MAX, BITS_PER_LONG_LONG) /        \
+	 BITS_PER_LONG_LONG)
+
+#define WORD_SHIFT 6
+#define WORD_SIZE  (1 << WORD_SHIFT)
+#define WORD_MASK  (WORD_SIZE - 1)
+
+#define IPSEC_ANTI_REPLAY_FAILED (-1)
+
+struct cnxk_on_ipsec_ar {
+	rte_spinlock_t lock;
+	uint32_t winb;
+	uint32_t wint;
+	uint64_t base;			/**< base of the anti-replay window */
+	uint64_t window[AR_WIN_ARR_SZ]; /**< anti-replay window */
+};
+
+static inline int
+cnxk_on_anti_replay_check(uint64_t seq, struct cnxk_on_ipsec_ar *ar,
+			  uint32_t winsz)
+{
+	uint64_t ex_winsz = winsz + WORD_SIZE;
+	uint64_t *window = &ar->window[0];
+	uint64_t seqword, shiftwords;
+	uint64_t base = ar->base;
+	uint32_t winb = ar->winb;
+	uint32_t wint = ar->wint;
+	uint64_t winwords;
+	uint64_t bit_pos;
+	uint64_t shift;
+	uint64_t *wptr;
+	uint64_t tmp;
+
+	winwords = ex_winsz >> WORD_SHIFT;
+	if (winsz > 64)
+		goto slow_shift;
+	/* Check if the seq is the biggest one yet */
+	if (likely(seq > base)) {
+		shift = seq - base;
+		if (shift < winsz) { /* In window */
+			/*
+			 * If more than 64-bit anti-replay window,
+			 * use slow shift routine
+			 */
+			wptr = window + (shift >> WORD_SHIFT);
+			*wptr <<= shift;
+			*wptr |= 1ull;
+		} else {
+			/* No special handling of window size > 64 */
+			wptr = window + ((winsz - 1) >> WORD_SHIFT);
+			/*
+			 * Zero out the whole window (especially for
+			 * bigger than 64b window) till the last 64b word
+			 * as the incoming sequence number minus
+			 * base sequence is more than the window size.
+			 */
+			while (window != wptr)
+				*window++ = 0ull;
+			/*
+			 * Set the last bit (of the window) to 1
+			 * as that corresponds to the base sequence number.
+			 * Now any incoming sequence number which is
+			 * (base - window size - 1) will pass anti-replay check
+			 */
+			*wptr = 1ull;
+		}
+		/*
+		 * Set the base to incoming sequence number as
+		 * that is the biggest sequence number seen yet
+		 */
+		ar->base = seq;
+		return 0;
+	}
+
+	bit_pos = base - seq;
+
+	/* If seq falls behind the window, return failure */
+	if (bit_pos >= winsz)
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* seq is within anti-replay window */
+	wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT);
+	bit_pos &= WORD_MASK;
+
+	/* Check if this is a replayed packet */
+	if (*wptr & ((1ull) << bit_pos))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* mark as seen */
+	*wptr |= ((1ull) << bit_pos);
+	return 0;
+
+slow_shift:
+	if (likely(seq > base)) {
+		uint32_t i;
+
+		shift = seq - base;
+		if (unlikely(shift >= winsz)) {
+			/*
+			 * shift is bigger than the window,
+			 * so just zero out everything
+			 */
+			for (i = 0; i < winwords; i++)
+				window[i] = 0;
+winupdate:
+			/* Find out the word */
+			seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
+
+			/* Find out the bit in the word */
+			bit_pos = (seq - 1) & WORD_MASK;
+
+			/*
+			 * Set the bit corresponding to sequence number
+			 * in window to mark it as received
+			 */
+			window[seqword] |= (1ull << (63 - bit_pos));
+
+			/* wint and winb range from 1 to ex_winsz */
+			ar->wint = ((wint + shift - 1) % ex_winsz) + 1;
+			ar->winb = ((winb + shift - 1) % ex_winsz) + 1;
+
+			ar->base = seq;
+			return 0;
+		}
+
+		/*
+		 * New sequence number is bigger than the base but
+		 * it's not bigger than base + window size
+		 */
+
+		shiftwords = ((wint + shift - 1) >> WORD_SHIFT) -
+			     ((wint - 1) >> WORD_SHIFT);
+		if (unlikely(shiftwords)) {
+			tmp = (wint + WORD_SIZE - 1) / WORD_SIZE;
+			for (i = 0; i < shiftwords; i++) {
+				tmp %= winwords;
+				window[tmp++] = 0;
+			}
+		}
+
+		goto winupdate;
+	}
+
+	/* Sequence number is before the window */
+	if (unlikely((seq + winsz) <= base))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/* Sequence number is within the window */
+
+	/* Find out the word */
+	seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
+
+	/* Find out the bit in the word */
+	bit_pos = (seq - 1) & WORD_MASK;
+
+	/* Check if this is a replayed packet */
+	if (window[seqword] & (1ull << (63 - bit_pos)))
+		return IPSEC_ANTI_REPLAY_FAILED;
+
+	/*
+	 * Set the bit corresponding to sequence number
+	 * in window to mark it as received
+	 */
+	window[seqword] |= (1ull << (63 - bit_pos));
+
+	return 0;
+}
+
+#endif /* __CNXK_SECURITY_AR_H__ */
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 15/28] common/cnxk: support inline IPsec rte flow action
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (13 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 14/28] common/cnxk: support anti-replay check in SW for cn9k Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 16/28] net/cnxk: support inline security setup for cn9k Nithin Dabilpuram
                     ` (13 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Satheesh Paul

From: Satheesh Paul <psatheesh@marvell.com>

Add support to configure flow rules with inline IPsec action.
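
A minimal sketch of the resulting application-side usage (assuming
`sess` is an inline-protocol rte_security session already created on
`port_id`; the ESP pattern and error handling are illustrative only):

	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		/* Steer matching ESP traffic to the inline IPsec path */
		{ .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = sess },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;
	struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
						actions, &err);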

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
---
 drivers/common/cnxk/roc_nix_inl.h      |  3 +++
 drivers/common/cnxk/roc_nix_inl_dev.c  |  3 +++
 drivers/common/cnxk/roc_nix_inl_priv.h |  3 +++
 drivers/common/cnxk/roc_npc_mcam.c     | 28 ++++++++++++++++++++++++++--
 4 files changed, 35 insertions(+), 2 deletions(-)

diff --git a/drivers/common/cnxk/roc_nix_inl.h b/drivers/common/cnxk/roc_nix_inl.h
index 6b8c268..ae5e022 100644
--- a/drivers/common/cnxk/roc_nix_inl.h
+++ b/drivers/common/cnxk/roc_nix_inl.h
@@ -107,6 +107,9 @@ struct roc_nix_inl_dev {
 	struct plt_pci_device *pci_dev;
 	uint16_t ipsec_in_max_spi;
 	bool selftest;
+	bool is_multi_channel;
+	uint16_t channel;
+	uint16_t chan_mask;
 	bool attach_cptlf;
 	/* End of input parameters */
 
diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c
index 0789f99..495dd19 100644
--- a/drivers/common/cnxk/roc_nix_inl_dev.c
+++ b/drivers/common/cnxk/roc_nix_inl_dev.c
@@ -543,6 +543,9 @@ roc_nix_inl_dev_init(struct roc_nix_inl_dev *roc_inl_dev)
 	inl_dev->pci_dev = pci_dev;
 	inl_dev->ipsec_in_max_spi = roc_inl_dev->ipsec_in_max_spi;
 	inl_dev->selftest = roc_inl_dev->selftest;
+	inl_dev->is_multi_channel = roc_inl_dev->is_multi_channel;
+	inl_dev->channel = roc_inl_dev->channel;
+	inl_dev->chan_mask = roc_inl_dev->chan_mask;
 	inl_dev->attach_cptlf = roc_inl_dev->attach_cptlf;
 
 	/* Initialize base device */
diff --git a/drivers/common/cnxk/roc_nix_inl_priv.h b/drivers/common/cnxk/roc_nix_inl_priv.h
index 4729a38..3dc526f 100644
--- a/drivers/common/cnxk/roc_nix_inl_priv.h
+++ b/drivers/common/cnxk/roc_nix_inl_priv.h
@@ -50,6 +50,9 @@ struct nix_inl_dev {
 
 	/* Device arguments */
 	uint8_t selftest;
+	uint16_t channel;
+	uint16_t chan_mask;
+	bool is_multi_channel;
 	uint16_t ipsec_in_max_spi;
 	bool attach_cptlf;
 };
diff --git a/drivers/common/cnxk/roc_npc_mcam.c b/drivers/common/cnxk/roc_npc_mcam.c
index 8ccaaad..4985d22 100644
--- a/drivers/common/cnxk/roc_npc_mcam.c
+++ b/drivers/common/cnxk/roc_npc_mcam.c
@@ -503,8 +503,11 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
 {
 	int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
 	struct npc_mcam_write_entry_req *req;
+	struct nix_inl_dev *inl_dev = NULL;
 	struct mbox *mbox = npc->mbox;
 	struct mbox_msghdr *rsp;
+	struct idev_cfg *idev;
+	uint16_t pf_func = 0;
 	uint16_t ctr = ~(0);
 	int rc, idx;
 	int entry;
@@ -553,9 +556,30 @@ npc_mcam_alloc_and_write(struct npc *npc, struct roc_npc_flow *flow,
 		req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
 	}
 
+	idev = idev_get_cfg();
+	if (idev)
+		inl_dev = idev->nix_inl_dev;
+
 	if (flow->nix_intf == NIX_INTF_RX) {
-		req->entry_data.kw[0] |= (uint64_t)npc->channel;
-		req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+		if (inl_dev && inl_dev->is_multi_channel &&
+		    (flow->npc_action & NIX_RX_ACTIONOP_UCAST_IPSEC)) {
+			req->entry_data.kw[0] |= (uint64_t)inl_dev->channel;
+			req->entry_data.kw_mask[0] |=
+				(uint64_t)inl_dev->chan_mask;
+			pf_func = nix_inl_dev_pffunc_get();
+			req->entry_data.action &= ~(GENMASK(19, 4));
+			req->entry_data.action |= (uint64_t)pf_func << 4;
+
+			flow->npc_action &= ~(GENMASK(19, 4));
+			flow->npc_action |= (uint64_t)pf_func << 4;
+			flow->mcam_data[0] |= (uint64_t)inl_dev->channel;
+			flow->mcam_mask[0] |= (uint64_t)inl_dev->chan_mask;
+		} else {
+			req->entry_data.kw[0] |= (uint64_t)npc->channel;
+			req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
+			flow->mcam_data[0] |= (uint64_t)npc->channel;
+			flow->mcam_mask[0] |= (BIT_ULL(12) - 1);
+		}
 	} else {
 		uint16_t pf_func = (flow->npc_action >> 4) & 0xffff;
 
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 16/28] net/cnxk: support inline security setup for cn9k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (14 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 15/28] common/cnxk: support inline IPsec rte flow action Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-06 16:21     ` Ferruh Yigit
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 17/28] net/cnxk: support inline security setup for cn10k Nithin Dabilpuram
                     ` (12 subsequent siblings)
  28 siblings, 1 reply; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella, Anatoly Burakov
  Cc: dev

Add support for inline inbound and outbound IPSec for SA create,
destroy and other NIX / CPT LF configurations.
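
From the application side these ops are reached through the standard
rte_security path. A hedged sketch (the SPI, the mempools
`sess_mp`/`priv_mp` and the AEAD xform `aead_xform` are placeholders)
of creating an inline protocol ingress SA on a port:

	struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
	struct rte_security_session_conf conf = {
		.action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
		.ipsec = {
			.spi = 100,
			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
		},
		.crypto_xform = &aead_xform,
	};
	struct rte_security_session *sess =
		rte_security_session_create(ctx, &conf, sess_mp, priv_mp);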

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/cnxk/cn9k_ethdev.c         |  23 +++
 drivers/net/cnxk/cn9k_ethdev.h         |  61 +++++++
 drivers/net/cnxk/cn9k_ethdev_sec.c     | 313 +++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_rx.h             |   1 +
 drivers/net/cnxk/cn9k_tx.h             |   1 +
 drivers/net/cnxk/cnxk_ethdev.c         | 230 +++++++++++++++++++++++-
 drivers/net/cnxk/cnxk_ethdev.h         | 121 ++++++++++++-
 drivers/net/cnxk/cnxk_ethdev_devargs.c |  88 ++++++++-
 drivers/net/cnxk/cnxk_ethdev_sec.c     | 278 +++++++++++++++++++++++++++++
 drivers/net/cnxk/cnxk_lookup.c         |  50 +++++-
 drivers/net/cnxk/meson.build           |   2 +
 drivers/net/cnxk/version.map           |   5 +
 12 files changed, 1162 insertions(+), 11 deletions(-)
 create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
 create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c

diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678..08c86f9 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		flags |= NIX_RX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+		flags |= NIX_TX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -179,8 +185,10 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			const struct rte_eth_txconf *tx_conf)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_cpt_lf *inl_lf;
 	struct cn9k_eth_txq *txq;
 	struct roc_nix_sq *sq;
+	uint16_t crypto_qid;
 	int rc;
 
 	RTE_SET_USED(socket);
@@ -200,6 +208,19 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
 
+	/* Fetch CPT LF info for outbound if present */
+	if (dev->outb.lf_base) {
+		crypto_qid = qid % dev->outb.nb_crypto_qs;
+		inl_lf = dev->outb.lf_base + crypto_qid;
+
+		txq->cpt_io_addr = inl_lf->io_addr;
+		txq->cpt_fc = inl_lf->fc_addr;
+		txq->cpt_desc = inl_lf->nb_desc * 0.7;
+		txq->sa_base = (uint64_t)dev->outb.sa_base;
+		txq->sa_base |= eth_dev->data->port_id;
+		PLT_STATIC_ASSERT(BIT_ULL(16) == ROC_NIX_INL_SA_BASE_ALIGN);
+	}
+
 	nix_form_default_desc(dev, txq, qid);
 	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	return 0;
@@ -508,6 +529,8 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	nix_eth_dev_ops_override();
 	npc_flow_ops_override();
 
+	cn9k_eth_sec_ops_override();
+
 	/* Common probe */
 	rc = cnxk_nix_probe(pci_drv, pci_dev);
 	if (rc)
diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index 3d4a206..f8818b8 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -5,6 +5,7 @@
 #define __CN9K_ETHDEV_H__
 
 #include <cnxk_ethdev.h>
+#include <cnxk_security.h>
 
 struct cn9k_eth_txq {
 	uint64_t cmd[8];
@@ -15,6 +16,10 @@ struct cn9k_eth_txq {
 	uint64_t lso_tun_fmt;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	rte_iova_t cpt_io_addr;
+	uint64_t sa_base;
+	uint64_t *cpt_fc;
+	uint16_t cpt_desc;
 } __plt_cache_aligned;
 
 struct cn9k_eth_rxq {
@@ -32,8 +37,64 @@ struct cn9k_eth_rxq {
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
+/* Private data in sw rsvd area of struct roc_onf_ipsec_inb_sa */
+struct cn9k_inb_priv_data {
+	void *userdata;
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+/* Private data in sw rsvd area of struct roc_onf_ipsec_outb_sa */
+struct cn9k_outb_priv_data {
+	union {
+		uint64_t esn;
+		struct {
+			uint32_t seq;
+			uint32_t esn_hi;
+		};
+	};
+
+	/* Rlen computation data */
+	struct cnxk_ipsec_outb_rlens rlens;
+
+	/* IP identifier */
+	uint16_t ip_id;
+
+	/* SA index */
+	uint32_t sa_idx;
+
+	/* Flags */
+	uint16_t copy_salt : 1;
+
+	/* Salt */
+	uint32_t nonce;
+
+	/* User data pointer */
+	void *userdata;
+
+	/* Back pointer to eth sec session */
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+struct cn9k_sec_sess_priv {
+	union {
+		struct {
+			uint32_t sa_idx;
+			uint8_t inb_sa : 1;
+			uint8_t rsvd1 : 2;
+			uint8_t roundup_byte : 5;
+			uint8_t roundup_len;
+			uint16_t partial_len;
+		};
+
+		uint64_t u64;
+	};
+} __rte_packed;
+
 /* Rx and Tx routines */
 void cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
 void cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
+/* Security context setup */
+void cn9k_eth_sec_ops_override(void);
+
 #endif /* __CN9K_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
new file mode 100644
index 0000000..3ec7497
--- /dev/null
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -0,0 +1,313 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include <cn9k_ethdev.h>
+#include <cnxk_security.h>
+
+static struct rte_cryptodev_capabilities cn9k_eth_sec_crypto_caps[] = {
+	{	/* AES GCM */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+			{.aead = {
+				.algo = RTE_CRYPTO_AEAD_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability cn9k_eth_sec_capabilities[] = {
+	{	/* IPsec Inline Protocol ESP Tunnel Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn9k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Tunnel Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn9k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static int
+cn9k_eth_sec_session_create(void *device,
+			    struct rte_security_session_conf *conf,
+			    struct rte_security_session *sess,
+			    struct rte_mempool *mempool)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ipsec_xform *ipsec;
+	struct cn9k_sec_sess_priv sess_priv;
+	struct rte_crypto_sym_xform *crypto;
+	struct cnxk_eth_sec_sess *eth_sec;
+	bool inbound;
+	int rc = 0;
+
+	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+		return -ENOTSUP;
+
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
+	if (rte_security_dynfield_register() < 0)
+		return -ENOTSUP;
+
+	ipsec = &conf->ipsec;
+	crypto = conf->crypto_xform;
+	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
+
+	/* Search if a session already exists */
+	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
+		plt_err("%s SA with SPI %u already in use",
+			inbound ? "Inbound" : "Outbound", ipsec->spi);
+		return -EEXIST;
+	}
+
+	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
+	sess_priv.u64 = 0;
+
+	if (inbound) {
+		struct cn9k_inb_priv_data *inb_priv;
+		struct roc_onf_ipsec_inb_sa *inb_sa;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn9k_inb_priv_data) <
+				  ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD);
+
+		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE. CN9K is assumed
+		 * to never use an inline device.
+		 */
+		inb_sa = (struct roc_onf_ipsec_inb_sa *)
+			roc_nix_inl_inb_sa_get(&dev->nix, false, ipsec->spi);
+		if (!inb_sa) {
+			plt_err("Failed to create ingress sa");
+			rc = -EFAULT;
+			goto mempool_put;
+		}
+
+		/* Check if SA is already in use */
+		if (inb_sa->ctl.valid) {
+			plt_err("Inbound SA with SPI %u already in use",
+				ipsec->spi);
+			rc = -EBUSY;
+			goto mempool_put;
+		}
+
+		memset(inb_sa, 0, sizeof(struct roc_onf_ipsec_inb_sa));
+
+		/* Fill inbound sa params */
+		rc = cnxk_onf_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init inbound sa, rc=%d", rc);
+			goto mempool_put;
+		}
+
+		inb_priv = roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(inb_sa);
+		/* Back pointer to get eth_sec */
+		inb_priv->eth_sec = eth_sec;
+
+		/* Save userdata in inb private area */
+		inb_priv->userdata = conf->userdata;
+
+		sess_priv.inb_sa = 1;
+		sess_priv.sa_idx = ipsec->spi;
+
+		/* Pointer from eth_sec -> inb_sa */
+		eth_sec->sa = inb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = ipsec->spi;
+		eth_sec->spi = ipsec->spi;
+		eth_sec->inb = true;
+
+		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess++;
+	} else {
+		struct cn9k_outb_priv_data *outb_priv;
+		struct roc_onf_ipsec_outb_sa *outb_sa;
+		uintptr_t sa_base = dev->outb.sa_base;
+		struct cnxk_ipsec_outb_rlens *rlens;
+		uint32_t sa_idx;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn9k_outb_priv_data) <
+				  ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD);
+
+		/* Alloc an sa index */
+		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
+		if (rc)
+			goto mempool_put;
+
+		outb_sa = roc_nix_inl_onf_ipsec_outb_sa(sa_base, sa_idx);
+		outb_priv = roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(outb_sa);
+		rlens = &outb_priv->rlens;
+
+		memset(outb_sa, 0, sizeof(struct roc_onf_ipsec_outb_sa));
+
+		/* Fill outbound sa params */
+		rc = cnxk_onf_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init outbound sa, rc=%d", rc);
+			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
+			goto mempool_put;
+		}
+
+		/* Save userdata */
+		outb_priv->userdata = conf->userdata;
+		outb_priv->sa_idx = sa_idx;
+		outb_priv->eth_sec = eth_sec;
+		/* Start sequence number with 1 */
+		outb_priv->seq = 1;
+
+		memcpy(&outb_priv->nonce, outb_sa->nonce, 4);
+		if (outb_sa->ctl.enc_type == ROC_IE_ON_SA_ENC_AES_GCM)
+			outb_priv->copy_salt = 1;
+
+		/* Save rlen info */
+		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+
+		sess_priv.sa_idx = outb_priv->sa_idx;
+		sess_priv.roundup_byte = rlens->roundup_byte;
+		sess_priv.roundup_len = rlens->roundup_len;
+		sess_priv.partial_len = rlens->partial_len;
+
+		/* Pointer from eth_sec -> outb_sa */
+		eth_sec->sa = outb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = sa_idx;
+		eth_sec->spi = ipsec->spi;
+
+		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess++;
+	}
+
+	/* Sync SA content */
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u",
+		    inbound ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx);
+	/*
+	 * Update fast path info in priv area.
+	 */
+	set_sec_session_private_data(sess, (void *)sess_priv.u64);
+
+	return 0;
+mempool_put:
+	rte_mempool_put(mempool, eth_sec);
+	return rc;
+}
+
+static int
+cn9k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_onf_ipsec_outb_sa *outb_sa;
+	struct roc_onf_ipsec_inb_sa *inb_sa;
+	struct cnxk_eth_sec_sess *eth_sec;
+	struct rte_mempool *mp;
+
+	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+	if (!eth_sec)
+		return -ENOENT;
+
+	if (eth_sec->inb) {
+		inb_sa = eth_sec->sa;
+		/* Disable SA */
+		inb_sa->ctl.valid = 0;
+
+		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess--;
+	} else {
+		outb_sa = eth_sec->sa;
+		/* Disable SA */
+		outb_sa->ctl.valid = 0;
+
+		/* Release Outbound SA index */
+		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
+		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess--;
+	}
+
+	/* Sync SA content */
+	plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u",
+		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx);
+
+	/* Put eth_sec object back to pool */
+	mp = rte_mempool_from_obj(eth_sec);
+	set_sec_session_private_data(sess, NULL);
+	rte_mempool_put(mp, eth_sec);
+	return 0;
+}
+
+static const struct rte_security_capability *
+cn9k_eth_sec_capabilities_get(void *device __rte_unused)
+{
+	return cn9k_eth_sec_capabilities;
+}
+
+void
+cn9k_eth_sec_ops_override(void)
+{
+	static int init_once;
+
+	if (init_once)
+		return;
+	init_once = 1;
+
+	/* Update platform specific ops */
+	cnxk_eth_sec_ops.session_create = cn9k_eth_sec_session_create;
+	cnxk_eth_sec_ops.session_destroy = cn9k_eth_sec_session_destroy;
+	cnxk_eth_sec_ops.capabilities_get = cn9k_eth_sec_capabilities_get;
+}
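
Note that `sess_priv.u64` above is stored directly as the session private
data pointer, so the fast path can recover the whole packed descriptor
with a single load. A sketch of the consuming side (`handle_inb_pkt` and
`handle_outb_pkt` are hypothetical placeholders, not code from this
series):

	struct cn9k_sec_sess_priv priv;

	/* The "pointer" is really the packed 64-bit descriptor */
	priv.u64 = (uint64_t)get_sec_session_private_data(sess);
	if (priv.inb_sa)
		handle_inb_pkt(priv.sa_idx);
	else
		handle_outb_pkt(priv.sa_idx, priv.roundup_byte,
				priv.roundup_len, priv.partial_len);
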
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index a3bf4e0..59545af 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -17,6 +17,7 @@
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
 #define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
 #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index ed65cd3..a27ff76 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -13,6 +13,7 @@
 #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)
 #define NIX_TX_OFFLOAD_TSO_F	      BIT(4)
 #define NIX_TX_OFFLOAD_TSTAMP_F	      BIT(5)
+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)
 
 /* Flags to control xmit_prepare function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 8629193..a2e134c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -38,6 +38,162 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	return speed_capa;
 }
 
+int
+cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)
+{
+	struct roc_nix *nix = &dev->nix;
+
+	if (dev->inb.inl_dev == use_inl_dev)
+		return 0;
+
+	plt_nix_dbg("Security sessions(%u) still active, inl=%u!!!",
+		    dev->inb.nb_sess, !!dev->inb.inl_dev);
+
+	/* Change the mode */
+	dev->inb.inl_dev = use_inl_dev;
+
+	/* Update RoC for NPC rule insertion */
+	roc_nix_inb_mode_set(nix, use_inl_dev);
+
+	/* Setup lookup mem */
+	return cnxk_nix_lookup_mem_sa_base_set(dev);
+}
+
+static int
+nix_security_setup(struct cnxk_eth_dev *dev)
+{
+	struct roc_nix *nix = &dev->nix;
+	int i, rc = 0;
+
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Setup Inline Inbound */
+		rc = roc_nix_inl_inb_init(nix);
+		if (rc) {
+			plt_err("Failed to initialize nix inline inb, rc=%d",
+				rc);
+			return rc;
+		}
+
+		/* By default pick using inline device for poll mode.
+		 * Will be overridden when event mode RQs are set up.
+		 */
+		cnxk_nix_inb_mode_set(dev, true);
+	}
+
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		struct plt_bitmap *bmap;
+		size_t bmap_sz;
+		void *mem;
+
+		/* Setup enough descriptors for all tx queues */
+		nix->outb_nb_desc = dev->outb.nb_desc;
+		nix->outb_nb_crypto_qs = dev->outb.nb_crypto_qs;
+
+		/* Setup Inline Outbound */
+		rc = roc_nix_inl_outb_init(nix);
+		if (rc) {
+			plt_err("Failed to initialize nix inline outb, rc=%d",
+				rc);
+			goto cleanup;
+		}
+
+		dev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);
+
+		/* Skip the rest if DEV_TX_OFFLOAD_SECURITY is not enabled */
+		if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY))
+			goto done;
+
+		rc = -ENOMEM;
+		/* Allocate a bitmap to alloc and free sa indexes */
+		bmap_sz = plt_bitmap_get_memory_footprint(dev->outb.max_sa);
+		mem = plt_zmalloc(bmap_sz, PLT_CACHE_LINE_SIZE);
+		if (mem == NULL) {
+			plt_err("Outbound SA bmap alloc failed");
+
+			rc |= roc_nix_inl_outb_fini(nix);
+			goto cleanup;
+		}
+
+		rc = -EIO;
+		bmap = plt_bitmap_init(dev->outb.max_sa, mem, bmap_sz);
+		if (!bmap) {
+			plt_err("Outbound SA bmap init failed");
+
+			rc |= roc_nix_inl_outb_fini(nix);
+			plt_free(mem);
+			goto cleanup;
+		}
+
+		for (i = 0; i < dev->outb.max_sa; i++)
+			plt_bitmap_set(bmap, i);
+
+		dev->outb.sa_base = roc_nix_inl_outb_sa_base_get(nix);
+		dev->outb.sa_bmap_mem = mem;
+		dev->outb.sa_bmap = bmap;
+	}
+
+done:
+	return 0;
+cleanup:
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		rc |= roc_nix_inl_inb_fini(nix);
+	return rc;
+}
+
+static int
+nix_security_release(struct cnxk_eth_dev *dev)
+{
+	struct rte_eth_dev *eth_dev = dev->eth_dev;
+	struct cnxk_eth_sec_sess *eth_sec, *tvar;
+	struct roc_nix *nix = &dev->nix;
+	int rc, ret = 0;
+
+	/* Cleanup Inline inbound */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Destroy inbound sessions */
+		tvar = NULL;
+		TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
+			cnxk_eth_sec_ops.session_destroy(eth_dev,
+							 eth_sec->sess);
+
+		/* Clear lookup mem */
+		cnxk_nix_lookup_mem_sa_base_clear(dev);
+
+		rc = roc_nix_inl_inb_fini(nix);
+		if (rc)
+			plt_err("Failed to cleanup nix inline inb, rc=%d", rc);
+		ret |= rc;
+	}
+
+	/* Cleanup Inline outbound */
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
+	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Destroy outbound sessions */
+		tvar = NULL;
+		TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
+			cnxk_eth_sec_ops.session_destroy(eth_dev,
+							 eth_sec->sess);
+
+		rc = roc_nix_inl_outb_fini(nix);
+		if (rc)
+			plt_err("Failed to cleanup nix inline outb, rc=%d", rc);
+		ret |= rc;
+
+		plt_bitmap_free(dev->outb.sa_bmap);
+		plt_free(dev->outb.sa_bmap_mem);
+		dev->outb.sa_bmap = NULL;
+		dev->outb.sa_bmap_mem = NULL;
+	}
+
+	dev->inb.inl_dev = false;
+	roc_nix_inb_mode_set(nix, false);
+	dev->nb_rxq_sso = 0;
+	dev->inb.nb_sess = 0;
+	dev->outb.nb_sess = 0;
+	return ret;
+}
+
 static void
 nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 {
@@ -194,6 +350,12 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		eth_dev->data->tx_queues[qid] = NULL;
 	}
 
+	/* When Tx Security offload is enabled, increase tx desc count by
+	 * max possible outbound desc count.
+	 */
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)
+		nb_desc += dev->outb.nb_desc;
+
 	/* Setup ROC SQ */
 	sq = &dev->sqs[qid];
 	sq->qid = qid;
@@ -266,6 +428,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			struct rte_mempool *mp)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
 	struct cnxk_eth_rxq_sp *rxq_sp;
 	struct rte_mempool_ops *ops;
 	const char *platform_ops;
@@ -303,6 +466,19 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 		eth_dev->data->rx_queues[qid] = NULL;
 	}
 
+	/* Clamp CQ limit to size of packet pool aura for LBK
+	 * to avoid meta packet drop as LBK does not currently support
+	 * backpressure.
+	 */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY && roc_nix_is_lbk(nix)) {
+		uint64_t pkt_pool_limit = roc_nix_inl_dev_rq_limit_get();
+
+		/* Use current RQ's aura limit if inl rq is not available */
+		if (!pkt_pool_limit)
+			pkt_pool_limit = roc_npa_aura_op_limit_get(mp->pool_id);
+		nb_desc = RTE_MAX(nb_desc, pkt_pool_limit);
+	}
+
 	/* Setup ROC CQ */
 	cq = &dev->cqs[qid];
 	cq->qid = qid;
@@ -328,6 +504,10 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rq->later_skip = sizeof(struct rte_mbuf);
 	rq->lpb_size = mp->elt_size;
 
+	/* Enable Inline IPSec on RQ, will not be used for Poll mode */
+	if (roc_nix_inl_inb_is_enabled(nix))
+		rq->ipsech_ena = true;
+
 	rc = roc_nix_rq_init(&dev->nix, rq, !!eth_dev->data->dev_started);
 	if (rc) {
 		plt_err("Failed to init roc rq for rq=%d, rc=%d", qid, rc);
@@ -350,6 +530,13 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		/* Setup rq reference for inline dev if present */
+		rc = roc_nix_inl_dev_rq_get(rq);
+		if (rc)
+			goto free_mem;
+	}
+
 	plt_nix_dbg("rq=%d pool=%s nb_desc=%d->%d", qid, mp->name, nb_desc,
 		    cq->nb_desc);
 
@@ -370,6 +557,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	}
 
 	return 0;
+free_mem:
+	plt_free(rxq_sp);
 rq_fini:
 	rc |= roc_nix_rq_fini(rq);
 cq_fini:
@@ -394,11 +583,15 @@ cnxk_nix_rx_queue_release(void *rxq)
 	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
 	dev = rxq_sp->dev;
 	qid = rxq_sp->qid;
+	rq = &dev->rqs[qid];
 
 	plt_nix_dbg("Releasing rxq %u", qid);
 
+	/* Release rq reference for inline dev if present */
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		roc_nix_inl_dev_rq_put(rq);
+
 	/* Cleanup ROC RQ */
-	rq = &dev->rqs[qid];
 	rc = roc_nix_rq_fini(rq);
 	if (rc)
 		plt_err("Failed to cleanup rq, rc=%d", rc);
@@ -804,6 +997,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		rc = nix_store_queue_cfg_and_then_release(eth_dev);
 		if (rc)
 			goto fail_configure;
+
+		/* Cleanup security support */
+		rc = nix_security_release(dev);
+		if (rc)
+			goto fail_configure;
+
 		roc_nix_tm_fini(nix);
 		roc_nix_lf_free(nix);
 	}
@@ -958,6 +1157,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		plt_err("Failed to initialize flow control rc=%d", rc);
 		goto cq_fini;
 	}
+
+	/* Setup Inline security support */
+	rc = nix_security_setup(dev);
+	if (rc)
+		goto cq_fini;
+
 	/*
 	 * Restore queue config when reconfigure followed by
 	 * reconfigure and no queue configure invoked from application case.
@@ -965,7 +1170,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	if (dev->configured == 1) {
 		rc = nix_restore_queue_cfg(eth_dev);
 		if (rc)
-			goto cq_fini;
+			goto sec_release;
 	}
 
 	/* Update the mac address */
@@ -987,6 +1192,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	dev->nb_txq = data->nb_tx_queues;
 	return 0;
 
+sec_release:
+	rc |= nix_security_release(dev);
 cq_fini:
 	roc_nix_unregister_cq_irqs(nix);
 q_irq_fini:
@@ -1284,12 +1491,25 @@ static int
 cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ctx *sec_ctx;
 	struct roc_nix *nix = &dev->nix;
 	struct rte_pci_device *pci_dev;
 	int rc, max_entries;
 
 	eth_dev->dev_ops = &cnxk_eth_dev_ops;
 
+	/* Alloc security context */
+	sec_ctx = plt_zmalloc(sizeof(struct rte_security_ctx), 0);
+	if (!sec_ctx)
+		return -ENOMEM;
+	sec_ctx->device = eth_dev;
+	sec_ctx->ops = &cnxk_eth_sec_ops;
+	sec_ctx->flags =
+		(RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);
+	eth_dev->security_ctx = sec_ctx;
+	TAILQ_INIT(&dev->inb.list);
+	TAILQ_INIT(&dev->outb.list);
+
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -1406,6 +1626,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 	struct roc_nix *nix = &dev->nix;
 	int rc, i;
 
+	plt_free(eth_dev->security_ctx);
+	eth_dev->security_ctx = NULL;
+
 	/* Nothing to be done for secondary processes */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -1440,6 +1663,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)
 	}
 	eth_dev->data->nb_rx_queues = 0;
 
+	/* Free security resources */
+	nix_security_release(dev);
+
 	/* Free tm resources */
 	roc_nix_tm_fini(nix);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 10e05e6..b2368c8 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -13,6 +13,9 @@
 #include <rte_mbuf.h>
 #include <rte_mbuf_pool_ops.h>
 #include <rte_mempool.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+#include <rte_tailq.h>
 #include <rte_time.h>
 
 #include "roc_api.h"
@@ -70,14 +73,14 @@
 	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
 	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
 	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM)
+	 DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
 	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
 	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
 	 DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |         \
 	 DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |                  \
-	 DEV_RX_OFFLOAD_VLAN_STRIP)
+	 DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_SECURITY)
 
 #define RSS_IPV4_ENABLE                                                        \
 	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
@@ -112,6 +115,11 @@
 #define PTYPE_TUNNEL_ARRAY_SZ	  BIT(PTYPE_TUNNEL_WIDTH)
 #define PTYPE_ARRAY_SZ                                                         \
 	((PTYPE_NON_TUNNEL_ARRAY_SZ + PTYPE_TUNNEL_ARRAY_SZ) * sizeof(uint16_t))
+
+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
+#define ERRCODE_ERRLEN_WIDTH 12
+#define ERR_ARRAY_SZ	     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))
+
 /* Fastpath lookup */
 #define CNXK_NIX_FASTPATH_LOOKUP_MEM "cnxk_nix_fastpath_lookup_mem"
 
@@ -119,6 +127,9 @@
 	((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \
 	 (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))
 
+/* Subtype from inline outbound error event */
+#define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
+
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
 	uint8_t rx_pause;
@@ -144,6 +155,82 @@ struct cnxk_timesync_info {
 	uint64_t *tx_tstamp;
 } __plt_cache_aligned;
 
+/* Security session private data */
+struct cnxk_eth_sec_sess {
+	/* List entry */
+	TAILQ_ENTRY(cnxk_eth_sec_sess) entry;
+
+	/* Inbound SA is from NIX_RX_IPSEC_SA_BASE or
+	 * Outbound SA from roc_nix_inl_outb_sa_base_get()
+	 */
+	void *sa;
+
+	/* SA index */
+	uint32_t sa_idx;
+
+	/* SPI */
+	uint32_t spi;
+
+	/* Back pointer to session */
+	struct rte_security_session *sess;
+
+	/* Inbound */
+	bool inb;
+
+	/* Inbound session on inl dev */
+	bool inl_dev;
+};
+
+TAILQ_HEAD(cnxk_eth_sec_sess_list, cnxk_eth_sec_sess);
+
+/* Inbound security data */
+struct cnxk_eth_dev_sec_inb {
+	/* IPSec inbound max SPI */
+	uint16_t max_spi;
+
+	/* Using inbound with inline device */
+	bool inl_dev;
+
+	/* Device argument to force inline device for inb */
+	bool force_inl_dev;
+
+	/* Active sessions */
+	uint16_t nb_sess;
+
+	/* List of sessions */
+	struct cnxk_eth_sec_sess_list list;
+};
+
+/* Outbound security data */
+struct cnxk_eth_dev_sec_outb {
+	/* IPSec outbound max SA */
+	uint16_t max_sa;
+
+	/* Per CPT LF descriptor count */
+	uint32_t nb_desc;
+
+	/* SA Bitmap */
+	struct plt_bitmap *sa_bmap;
+
+	/* SA bitmap memory */
+	void *sa_bmap_mem;
+
+	/* SA base */
+	uint64_t sa_base;
+
+	/* CPT LF base */
+	struct roc_cpt_lf *lf_base;
+
+	/* Crypto queues => CPT lf count */
+	uint16_t nb_crypto_qs;
+
+	/* Active sessions */
+	uint16_t nb_sess;
+
+	/* List of sessions */
+	struct cnxk_eth_sec_sess_list list;
+};
+
 struct cnxk_eth_dev {
 	/* ROC NIX */
 	struct roc_nix nix;
@@ -159,6 +246,7 @@ struct cnxk_eth_dev {
 	/* Configured queue count */
 	uint16_t nb_rxq;
 	uint16_t nb_txq;
+	uint16_t nb_rxq_sso;
 	uint8_t configured;
 
 	/* Max macfilter entries */
@@ -223,6 +311,10 @@ struct cnxk_eth_dev {
 	/* Per queue statistics counters */
 	uint32_t txq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
 	uint32_t rxq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+
+	/* Security data */
+	struct cnxk_eth_dev_sec_inb inb;
+	struct cnxk_eth_dev_sec_outb outb;
 };
 
 struct cnxk_eth_rxq_sp {
@@ -261,6 +353,9 @@ extern struct eth_dev_ops cnxk_eth_dev_ops;
 /* Common flow ops */
 extern struct rte_flow_ops cnxk_flow_ops;
 
+/* Common security ops */
+extern struct rte_security_ops cnxk_eth_sec_ops;
+
 /* Ops */
 int cnxk_nix_probe(struct rte_pci_driver *pci_drv,
 		   struct rte_pci_device *pci_dev);
@@ -388,6 +483,18 @@ int cnxk_ethdev_parse_devargs(struct rte_devargs *devargs,
 /* Debug */
 int cnxk_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
 			 struct rte_dev_reg_info *regs);
+/* Security */
+int cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p);
+int cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx);
+int cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev);
+int cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev);
+__rte_internal
+int cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev);
+struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,
+						       uint32_t spi, bool inb);
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+			      struct rte_security_session *sess);
 
 /* Other private functions */
 int nix_recalc_mtu(struct rte_eth_dev *eth_dev);
@@ -498,4 +605,14 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
 	}
 }
 
+static __rte_always_inline uintptr_t
+cnxk_nix_sa_base_get(uint16_t port, const void *lookup_mem)
+{
+	uintptr_t sa_base_tbl;
+
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	return *((const uintptr_t *)sa_base_tbl + port);
+}
+
 #endif /* __CNXK_ETHDEV_H__ */
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 37720fb..c0b949e 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -8,6 +8,61 @@
 #include "cnxk_ethdev.h"
 
 static int
+parse_outb_nb_desc(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_outb_nb_crypto_qs(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	if (val < 1 || val > 64)
+		return -EINVAL;
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_ipsec_out_max_sa(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
 parse_flow_max_priority(const char *key, const char *value, void *extra_args)
 {
 	RTE_SET_USED(key);
@@ -117,15 +172,25 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)
 #define CNXK_SWITCH_HEADER_TYPE "switch_header"
 #define CNXK_RSS_TAG_AS_XOR	"tag_as_xor"
 #define CNXK_LOCK_RX_CTX	"lock_rx_ctx"
+#define CNXK_IPSEC_IN_MAX_SPI	"ipsec_in_max_spi"
+#define CNXK_IPSEC_OUT_MAX_SA	"ipsec_out_max_sa"
+#define CNXK_OUTB_NB_DESC	"outb_nb_desc"
+#define CNXK_FORCE_INB_INL_DEV	"force_inb_inl_dev"
+#define CNXK_OUTB_NB_CRYPTO_QS	"outb_nb_crypto_qs"
 
 int
 cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 {
 	uint16_t reta_sz = ROC_NIX_RSS_RETA_SZ_64;
 	uint16_t sqb_count = CNXK_NIX_TX_MAX_SQB;
+	uint16_t ipsec_in_max_spi = BIT(8) - 1;
+	uint16_t ipsec_out_max_sa = BIT(12);
 	uint16_t flow_prealloc_size = 1;
 	uint16_t switch_header_type = 0;
 	uint16_t flow_max_priority = 3;
+	uint16_t force_inb_inl_dev = 0;
+	uint16_t outb_nb_crypto_qs = 1;
+	uint16_t outb_nb_desc = 8200;
 	uint16_t rss_tag_as_xor = 0;
 	uint16_t scalar_enable = 0;
 	uint8_t lock_rx_ctx = 0;
@@ -153,10 +218,27 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
 	rte_kvargs_process(kvlist, CNXK_RSS_TAG_AS_XOR, &parse_flag,
 			   &rss_tag_as_xor);
 	rte_kvargs_process(kvlist, CNXK_LOCK_RX_CTX, &parse_flag, &lock_rx_ctx);
+	rte_kvargs_process(kvlist, CNXK_IPSEC_IN_MAX_SPI,
+			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_process(kvlist, CNXK_IPSEC_OUT_MAX_SA,
+			   &parse_ipsec_out_max_sa, &ipsec_out_max_sa);
+	rte_kvargs_process(kvlist, CNXK_OUTB_NB_DESC, &parse_outb_nb_desc,
+			   &outb_nb_desc);
+	rte_kvargs_process(kvlist, CNXK_OUTB_NB_CRYPTO_QS,
+			   &parse_outb_nb_crypto_qs, &outb_nb_crypto_qs);
+	rte_kvargs_process(kvlist, CNXK_FORCE_INB_INL_DEV, &parse_flag,
+			   &force_inb_inl_dev);
 	rte_kvargs_free(kvlist);
 
 null_devargs:
 	dev->scalar_ena = !!scalar_enable;
+	dev->inb.force_inl_dev = !!force_inb_inl_dev;
+	dev->inb.max_spi = ipsec_in_max_spi;
+	dev->outb.max_sa = ipsec_out_max_sa;
+	dev->outb.nb_desc = outb_nb_desc;
+	dev->outb.nb_crypto_qs = outb_nb_crypto_qs;
+	dev->nix.ipsec_in_max_spi = ipsec_in_max_spi;
+	dev->nix.ipsec_out_max_sa = ipsec_out_max_sa;
 	dev->nix.rss_tag_as_xor = !!rss_tag_as_xor;
 	dev->nix.max_sqb_count = sqb_count;
 	dev->nix.reta_sz = reta_sz;
@@ -177,4 +259,8 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,
 			      CNXK_FLOW_PREALLOC_SIZE "=<1-32>"
 			      CNXK_FLOW_MAX_PRIORITY "=<1-32>"
 			      CNXK_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b>"
-			      CNXK_RSS_TAG_AS_XOR "=1");
+			      CNXK_RSS_TAG_AS_XOR "=1"
+			      CNXK_IPSEC_IN_MAX_SPI "=<1-65535>"
+			      CNXK_OUTB_NB_DESC "=<1-65535>"
+			      CNXK_OUTB_NB_CRYPTO_QS "=<1-64>"
+			      CNXK_FORCE_INB_INL_DEV "=1");
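
Taken together, the new devargs can be combined in a single allow-list
entry. An illustrative invocation (the PCI address and values are
examples only):

    -a 0002:02:00.0,ipsec_in_max_spi=1023,ipsec_out_max_sa=2048,outb_nb_desc=16384,outb_nb_crypto_qs=2,force_inb_inl_dev=1
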
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
new file mode 100644
index 0000000..c76e230
--- /dev/null
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <cnxk_ethdev.h>
+
+#define CNXK_NIX_INL_SELFTEST	      "selftest"
+#define CNXK_NIX_INL_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
+
+#define CNXK_NIX_INL_DEV_NAME RTE_STR(cnxk_nix_inl_dev_)
+#define CNXK_NIX_INL_DEV_NAME_LEN                                              \
+	(sizeof(CNXK_NIX_INL_DEV_NAME) + PCI_PRI_STR_SIZE)
+
+static inline int
+bitmap_ctzll(uint64_t slab)
+{
+	if (slab == 0)
+		return 0;
+
+	return __builtin_ctzll(slab);
+}
+
+int
+cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p)
+{
+	uint32_t pos, idx;
+	uint64_t slab;
+	int rc;
+
+	if (!dev->outb.sa_bmap)
+		return -ENOTSUP;
+
+	pos = 0;
+	slab = 0;
+	/* Scan from the beginning */
+	plt_bitmap_scan_init(dev->outb.sa_bmap);
+	/* Scan bitmap to get the free sa index */
+	rc = plt_bitmap_scan(dev->outb.sa_bmap, &pos, &slab);
+	/* Empty bitmap */
+	if (rc == 0) {
+		plt_err("Outbound SAs exhausted, use 'ipsec_out_max_sa' "
+			"devargs to increase");
+		return -ERANGE;
+	}
+
+	/* Get free SA index */
+	idx = pos + bitmap_ctzll(slab);
+	plt_bitmap_clear(dev->outb.sa_bmap, idx);
+	*idx_p = idx;
+	return 0;
+}
+
+int
+cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx)
+{
+	if (idx >= dev->outb.max_sa)
+		return -EINVAL;
+
+	/* Check if it is already free */
+	if (plt_bitmap_get(dev->outb.sa_bmap, idx))
+		return -EINVAL;
+
+	/* Mark index as free */
+	plt_bitmap_set(dev->outb.sa_bmap, idx);
+	return 0;
+}
+
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev, uint32_t spi, bool inb)
+{
+	struct cnxk_eth_sec_sess_list *list;
+	struct cnxk_eth_sec_sess *eth_sec;
+
+	list = inb ? &dev->inb.list : &dev->outb.list;
+	TAILQ_FOREACH(eth_sec, list, entry) {
+		if (eth_sec->spi == spi)
+			return eth_sec;
+	}
+
+	return NULL;
+}
+
+struct cnxk_eth_sec_sess *
+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,
+			      struct rte_security_session *sess)
+{
+	struct cnxk_eth_sec_sess *eth_sec = NULL;
+
+	/* Search in inbound list */
+	TAILQ_FOREACH(eth_sec, &dev->inb.list, entry) {
+		if (eth_sec->sess == sess)
+			return eth_sec;
+	}
+
+	/* Search in outbound list */
+	TAILQ_FOREACH(eth_sec, &dev->outb.list, entry) {
+		if (eth_sec->sess == sess)
+			return eth_sec;
+	}
+
+	return NULL;
+}
+
+static unsigned int
+cnxk_eth_sec_session_get_size(void *device __rte_unused)
+{
+	return sizeof(struct cnxk_eth_sec_sess);
+}
+
+struct rte_security_ops cnxk_eth_sec_ops = {
+	.session_get_size = cnxk_eth_sec_session_get_size
+};
+
+static int
+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_selftest(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	*(uint8_t *)extra_args = !!(val == 1);
+	return 0;
+}
+
+static int
+nix_inl_parse_devargs(struct rte_devargs *devargs,
+		      struct roc_nix_inl_dev *inl_dev)
+{
+	uint32_t ipsec_in_max_spi = BIT(8) - 1;
+	struct rte_kvargs *kvlist;
+	uint8_t selftest = 0;
+
+	if (devargs == NULL)
+		goto null_devargs;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, CNXK_NIX_INL_SELFTEST, &parse_selftest,
+			   &selftest);
+	rte_kvargs_process(kvlist, CNXK_NIX_INL_IPSEC_IN_MAX_SPI,
+			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_free(kvlist);
+
+null_devargs:
+	inl_dev->ipsec_in_max_spi = ipsec_in_max_spi;
+	inl_dev->selftest = selftest;
+	return 0;
+exit:
+	return -EINVAL;
+}
+
+static inline char *
+nix_inl_dev_to_name(struct rte_pci_device *pci_dev, char *name)
+{
+	snprintf(name, CNXK_NIX_INL_DEV_NAME_LEN,
+		 CNXK_NIX_INL_DEV_NAME PCI_PRI_FMT, pci_dev->addr.domain,
+		 pci_dev->addr.bus, pci_dev->addr.devid,
+		 pci_dev->addr.function);
+
+	return name;
+}
+
+static int
+cnxk_nix_inl_dev_remove(struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NIX_INL_DEV_NAME_LEN];
+	const struct rte_memzone *mz;
+	struct roc_nix_inl_dev *dev;
+	int rc;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	mz = rte_memzone_lookup(nix_inl_dev_to_name(pci_dev, name));
+	if (!mz)
+		return 0;
+
+	dev = mz->addr;
+
+	/* Cleanup inline dev */
+	rc = roc_nix_inl_dev_fini(dev);
+	if (rc) {
+		plt_err("Failed to cleanup inl dev, rc=%d(%s)", rc,
+			roc_error_msg_get(rc));
+		return rc;
+	}
+
+	rte_memzone_free(mz);
+	return 0;
+}
+
+static int
+cnxk_nix_inl_dev_probe(struct rte_pci_driver *pci_drv,
+		       struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NIX_INL_DEV_NAME_LEN];
+	struct roc_nix_inl_dev *inl_dev;
+	const struct rte_memzone *mz;
+	int rc = -ENOMEM;
+
+	RTE_SET_USED(pci_drv);
+
+	rc = roc_plt_init();
+	if (rc) {
+		plt_err("Failed to initialize platform model, rc=%d", rc);
+		return rc;
+	}
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	mz = rte_memzone_reserve_aligned(nix_inl_dev_to_name(pci_dev, name),
+					 sizeof(*inl_dev), SOCKET_ID_ANY, 0,
+					 RTE_CACHE_LINE_SIZE);
+	if (mz == NULL)
+		return rc;
+
+	inl_dev = mz->addr;
+	inl_dev->pci_dev = pci_dev;
+
+	/* Parse devargs string */
+	rc = nix_inl_parse_devargs(pci_dev->device.devargs, inl_dev);
+	if (rc) {
+		plt_err("Failed to parse devargs rc=%d", rc);
+		goto free_mem;
+	}
+
+	rc = roc_nix_inl_dev_init(inl_dev);
+	if (rc) {
+		plt_err("Failed to init nix inl device, rc=%d(%s)", rc,
+			roc_error_msg_get(rc));
+		goto free_mem;
+	}
+
+	return 0;
+free_mem:
+	rte_memzone_free(mz);
+	return rc;
+}
+
+static const struct rte_pci_id cnxk_nix_inl_pci_map[] = {
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_PF)},
+	{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_VF)},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_pci_driver cnxk_nix_inl_pci = {
+	.id_table = cnxk_nix_inl_pci_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+	.probe = cnxk_nix_inl_dev_probe,
+	.remove = cnxk_nix_inl_dev_remove,
+};
+
+RTE_PMD_REGISTER_PCI(cnxk_nix_inl, cnxk_nix_inl_pci);
+RTE_PMD_REGISTER_PCI_TABLE(cnxk_nix_inl, cnxk_nix_inl_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(cnxk_nix_inl, "vfio-pci");
+
+RTE_PMD_REGISTER_PARAM_STRING(cnxk_nix_inl,
+			      CNXK_NIX_INL_SELFTEST "=1"
+			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>");
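
An illustrative EAL invocation enabling the inline device with its
registered devargs (the PCI address is an example):

    -a 0002:1d:00.0,ipsec_in_max_spi=128,selftest=1
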
diff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c
index 0152ad9..f6ec768 100644
--- a/drivers/net/cnxk/cnxk_lookup.c
+++ b/drivers/net/cnxk/cnxk_lookup.c
@@ -7,12 +7,8 @@
 
 #include "cnxk_ethdev.h"
 
-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
-#define ERRCODE_ERRLEN_WIDTH 12
-#define ERR_ARRAY_SZ	     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))
-
-#define SA_TBL_SZ	(RTE_MAX_ETHPORTS * sizeof(uint64_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_TBL_SZ)
+#define SA_BASE_TBL_SZ	(RTE_MAX_ETHPORTS * sizeof(uintptr_t))
+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ)
 const uint32_t *
 cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 {
@@ -324,3 +320,45 @@ cnxk_nix_fastpath_lookup_mem_get(void)
 	}
 	return NULL;
 }
+
+int
+cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev)
+{
+	void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+	uint16_t port = dev->eth_dev->data->port_id;
+	uintptr_t sa_base_tbl;
+	uintptr_t sa_base;
+	uint8_t sa_w;
+
+	if (!lookup_mem)
+		return -EIO;
+
+	sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix, dev->inb.inl_dev);
+	if (!sa_base)
+		return -ENOTSUP;
+
+	sa_w = plt_log2_u32(dev->nix.ipsec_in_max_spi + 1);
+
+	/* Set SA Base in lookup mem */
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	*((uintptr_t *)sa_base_tbl + port) = sa_base | sa_w;
+	return 0;
+}
+
+int
+cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev)
+{
+	void *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
+	uint16_t port = dev->eth_dev->data->port_id;
+	uintptr_t sa_base_tbl;
+
+	if (!lookup_mem)
+		return -EIO;
+
+	/* Clear SA base in lookup mem */
+	sa_base_tbl = (uintptr_t)lookup_mem;
+	sa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;
+	*((uintptr_t *)sa_base_tbl + port) = 0;
+	return 0;
+}
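
The value stored per port packs the SA base address together with the
SPI width `sa_w`, relying on the base's 64KB alignment (see the
PLT_STATIC_ASSERT on ROC_NIX_INL_SA_BASE_ALIGN earlier in the series).
A hedged sketch of splitting them on the reader side (the low-bit mask
for `sa_w` is an assumption derived from that alignment, not code from
this patch):

	uintptr_t v = cnxk_nix_sa_base_get(port, lookup_mem);
	uintptr_t sa_base = v & ~(uintptr_t)(ROC_NIX_INL_SA_BASE_ALIGN - 1);
	uint8_t sa_w = v & 0xFF; /* log2(ipsec_in_max_spi + 1) */
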
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index 1e86144..c00da62 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -12,6 +12,7 @@ sources = files(
         'cnxk_ethdev.c',
         'cnxk_ethdev_devargs.c',
         'cnxk_ethdev_ops.c',
+        'cnxk_ethdev_sec.c',
         'cnxk_link.c',
         'cnxk_lookup.c',
         'cnxk_ptp.c',
@@ -23,6 +24,7 @@ sources = files(
 # CN9K
 sources += files(
         'cn9k_ethdev.c',
+        'cn9k_ethdev_sec.c',
         'cn9k_rte_flow.c',
         'cn9k_rx.c',
         'cn9k_rx_mseg.c',
diff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map
index c2e0723..b9da6b1 100644
--- a/drivers/net/cnxk/version.map
+++ b/drivers/net/cnxk/version.map
@@ -1,3 +1,8 @@
 DPDK_22 {
 	local: *;
 };
+
+INTERNAL {
+	global:
+	cnxk_nix_inb_mode_set;
+};
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 17/28] net/cnxk: support inline security setup for cn10k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (15 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 16/28] net/cnxk: support inline security setup for cn9k Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 18/28] net/cnxk: support Rx security offload on cn9k Nithin Dabilpuram
                     ` (11 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Pavan Nikhilesh, Shijith Thotton, Anatoly Burakov
  Cc: dev

Add support for inline inbound and outbound IPSec for SA create,
destroy and other NIX / CPT LF configurations.

This patch also changes dpdk-devbind.py to list the new inline
device as a misc device.
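
Since the inline device enumerates as a regular PCI function, it is
bound like any other DPDK device before use. An illustrative bind step
(the PCI address is an example):

    usertools/dpdk-devbind.py -b vfio-pci 0002:1d:00.0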

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 doc/guides/nics/cnxk.rst                 | 102 ++++++++
 doc/guides/nics/features/cnxk.ini        |   1 +
 doc/guides/nics/features/cnxk_vec.ini    |   1 +
 doc/guides/nics/features/cnxk_vf.ini     |   1 +
 doc/guides/rel_notes/release_21_11.rst   |   2 +
 drivers/event/cnxk/cnxk_eventdev_adptr.c |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.c          |  36 ++-
 drivers/net/cnxk/cn10k_ethdev.h          |  43 ++++
 drivers/net/cnxk/cn10k_ethdev_sec.c      | 426 +++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn10k_rx.h              |   1 +
 drivers/net/cnxk/cn10k_tx.h              |   1 +
 drivers/net/cnxk/meson.build             |   1 +
 usertools/dpdk-devbind.py                |   8 +-
 13 files changed, 654 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 90d27db..b542437 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -34,6 +34,7 @@ Features of the CNXK Ethdev PMD are:
 - Vector Poll mode driver
 - Debug utilities - Context dump and error interrupt support
 - Support Rx interrupt
+- Inline IPsec processing support
 
 Prerequisites
 -------------
@@ -185,6 +186,74 @@ Runtime Config Options
 
       -a 0002:02:00.0,tag_as_xor=1
 
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+   Max SPI supported for inbound inline IPsec processing can be specified by
+   ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 inbound SAs (SPI 0-127).
+
+- ``Max SAs for outbound inline IPsec`` (default ``4096``)
+
+   Max number of SAs supported for outbound inline IPsec processing can be
+   specified by ``ipsec_out_max_sa`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,ipsec_out_max_sa=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 outbound SAs.
+
+- ``Outbound CPT LF queue size`` (default ``8200``)
+
+   Size of Outbound CPT LF queue in number of descriptors can be specified by
+   ``outb_nb_desc`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_desc=16384
+
+   With the above configuration, the Outbound CPT LF will be created to
+   accommodate at most 16384 descriptors at any given time.
+
+- ``Outbound CPT LF count`` (default ``1``)
+
+   Number of CPT LFs to attach for Outbound processing can be specified by
+   ``outb_nb_crypto_qs`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,outb_nb_crypto_qs=2
+
+   With the above configuration, two CPT LFs are set up and distributed among
+   all the Tx queues for outbound processing.
+
+- ``Force using inline ipsec device for inbound`` (default ``0``)
+
+   In CN10K event mode, the driver can work in one of two modes:
+
+   1. Inbound encrypted traffic is received by the probed IPsec inline device,
+      while plain traffic post decryption is received by the ethdev.
+
+   2. Both inbound encrypted traffic and plain traffic post decryption are
+      received by the ethdev.
+
+   By default, event mode works without using the inline device, i.e. mode
+   ``2``. This behaviour can be changed to pick mode ``1`` by using the
+   ``force_inb_inl_dev`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:02:00.0,force_inb_inl_dev=1 -a 0002:03:00.0,force_inb_inl_dev=1
+
+   With the above configuration, inbound encrypted traffic from both ports
+   is received by the IPsec inline device.
 
 .. note::
 
@@ -250,6 +319,39 @@ Example usage in testpmd::
    testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
           spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
 
+Inline device support for CN10K
+-------------------------------
+
+CN10K HW provides a misc device, the Inline device, that supports ethernet
+devices by providing the following features.
+
+  - Aggregate all the inline IPsec inbound traffic from all the CN10K ethernet
+    devices to be processed by a single inline IPsec device. This allows a
+    single rte security session to accept traffic from multiple ports.
+
+  - Support for event generation on outbound inline IPsec processing errors.
+
+  - Support CN106xx poll mode of operation for inline IPSec inbound processing.
+
+The inline IPsec device is identified by PCI PF vendor:device id ``177D:A0F0``
+or VF ``177D:A0F1``.
+
+Runtime Config Options for inline device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Max SPI for inbound inline IPsec`` (default ``255``)
+
+   Max SPI supported for inbound inline IPsec processing can be specified by
+   ``ipsec_in_max_spi`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,ipsec_in_max_spi=128
+
+   With the above configuration, application can enable inline IPsec processing
+   for 128 inbound SAs (SPI 0-127) for traffic aggregated on the inline device.
+
+
 Debugging Options
 -----------------
 
diff --git a/doc/guides/nics/features/cnxk.ini b/doc/guides/nics/features/cnxk.ini
index 5d45625..1ced3ee 100644
--- a/doc/guides/nics/features/cnxk.ini
+++ b/doc/guides/nics/features/cnxk.ini
@@ -27,6 +27,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Flow control         = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
diff --git a/doc/guides/nics/features/cnxk_vec.ini b/doc/guides/nics/features/cnxk_vec.ini
index abf2b8d..12ca0a5 100644
--- a/doc/guides/nics/features/cnxk_vec.ini
+++ b/doc/guides/nics/features/cnxk_vec.ini
@@ -26,6 +26,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Flow control         = Y
 Jumbo frame          = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/nics/features/cnxk_vf.ini b/doc/guides/nics/features/cnxk_vf.ini
index 7b4299f..139d9b9 100644
--- a/doc/guides/nics/features/cnxk_vf.ini
+++ b/doc/guides/nics/features/cnxk_vf.ini
@@ -22,6 +22,7 @@ RSS hash             = Y
 RSS key update       = Y
 RSS reta update      = Y
 Inner RSS            = Y
+Inline protocol      = Y
 Jumbo frame          = Y
 Scattered Rx         = Y
 L3 checksum offload  = Y
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index f099b1c..354d063 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -88,6 +88,8 @@ New Features
 
   * Added rte_flow support for dual VLAN insert and strip actions.
   * Added rte_tm support.
+  * Added support for Inline IPsec for CN9K event mode and CN10K
+    poll mode and event mode.
 
 * **Updated Marvell cnxk crypto PMD.**
 
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index baf2f2a..a34efbb 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -123,7 +123,9 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		    uint16_t port_id, const struct rte_event *ev,
 		    uint8_t custom_flowid)
 {
+	struct roc_nix *nix = &cnxk_eth_dev->nix;
 	struct roc_nix_rq *rq;
+	int rc;
 
 	rq = &cnxk_eth_dev->rqs[rq_id];
 	rq->sso_ena = 1;
@@ -140,7 +142,24 @@ cnxk_sso_rxq_enable(struct cnxk_eth_dev *cnxk_eth_dev, uint16_t rq_id,
 		rq->tag_mask |= ev->flow_id;
 	}
 
-	return roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	rc = roc_nix_rq_modify(&cnxk_eth_dev->nix, rq, 0);
+	if (rc)
+		return rc;
+
+	if (rq_id == 0 && roc_nix_inl_inb_is_enabled(nix)) {
+		uint32_t sec_tag_const;
+
+	/* The IPsec tag constant applies only to bits 32:8 of the tag,
+	 * so it is derived by right shifting tag_mask by 8 bits.
+	 */
+		sec_tag_const = rq->tag_mask >> 8;
+		rc = roc_nix_inl_inb_tag_update(nix, sec_tag_const,
+						ev->sched_type);
+		if (rc)
+			plt_err("Failed to set tag conf for ipsec, rc=%d", rc);
+	}
+
+	return rc;
 }
 
 static int
@@ -186,6 +205,7 @@ cnxk_sso_rx_adapter_queue_add(
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, true,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso++;
 	}
 
 	if (rc < 0) {
@@ -196,6 +216,14 @@ cnxk_sso_rx_adapter_queue_add(
 
 	dev->rx_offloads |= cnxk_eth_dev->rx_offload_flags;
 
+	/* Switch to using the PF/VF's NIX LF instead of the inline device for
+	 * inbound when all the RQs are switched to event dev mode. This is
+	 * done only when use of the inline device is not forced via devargs.
+	 */
+	if (!cnxk_eth_dev->inb.force_inl_dev &&
+	    cnxk_eth_dev->nb_rxq_sso == cnxk_eth_dev->nb_rxq)
+		cnxk_nix_inb_mode_set(cnxk_eth_dev, false);
+
 	return 0;
 }
 
@@ -220,12 +248,18 @@ cnxk_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 		rox_nix_fc_npa_bp_cfg(&cnxk_eth_dev->nix,
 				      rxq_sp->qconf.mp->pool_id, false,
 				      dev->force_ena_bp);
+		cnxk_eth_dev->nb_rxq_sso--;
 	}
 
 	if (rc < 0)
 		plt_err("Failed to clear Rx adapter config port=%d, q=%d",
 			eth_dev->data->port_id, rx_queue_id);
 
+	/* Removing an RQ from the Rx adapter implies that the inline
+	 * device needs to be used for CQ/poll mode.
+	 */
+	cnxk_nix_inb_mode_set(cnxk_eth_dev, true);
+
 	return rc;
 }
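
For reference, a minimal sketch of the tag-const derivation done in
cnxk_sso_rxq_enable() above (the tag_mask value is made up for illustration):

    /* The tag const covers only bits 32:8 of the tag, so drop the
     * low 8 bits of the RQ tag mask.
     */
    uint32_t tag_mask = 0x00abcd00;          /* hypothetical RQ tag mask */
    uint32_t sec_tag_const = tag_mask >> 8;  /* 0x0000abcd */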
 
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6c..fa2343c 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	if (!dev->ptype_disable)
 		flags |= NIX_RX_OFFLOAD_PTYPE_F;
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+		flags |= NIX_RX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
+	if (conf & DEV_TX_OFFLOAD_SECURITY)
+		flags |= NIX_TX_OFFLOAD_SECURITY_F;
+
 	return flags;
 }
 
@@ -181,8 +187,11 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 const struct rte_eth_txconf *tx_conf)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_nix *nix = &dev->nix;
+	struct roc_cpt_lf *inl_lf;
 	struct cn10k_eth_txq *txq;
 	struct roc_nix_sq *sq;
+	uint16_t crypto_qid;
 	int rc;
 
 	RTE_SET_USED(socket);
@@ -198,11 +207,24 @@ cn10k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq = eth_dev->data->tx_queues[qid];
 	txq->fc_mem = sq->fc;
 	/* Store lmt base in tx queue for easy access */
-	txq->lmt_base = dev->nix.lmt_base;
+	txq->lmt_base = nix->lmt_base;
 	txq->io_addr = sq->io_addr;
 	txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
 	txq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;
 
+	/* Fetch CPT LF info for outbound if present */
+	if (dev->outb.lf_base) {
+		crypto_qid = qid % dev->outb.nb_crypto_qs;
+		inl_lf = dev->outb.lf_base + crypto_qid;
+
+		txq->cpt_io_addr = inl_lf->io_addr;
+		txq->cpt_fc = inl_lf->fc_addr;
+		txq->cpt_desc = inl_lf->nb_desc * 0.7;
+		txq->sa_base = (uint64_t)dev->outb.sa_base;
+		txq->sa_base |= eth_dev->data->port_id;
+		PLT_STATIC_ASSERT(ROC_NIX_INL_SA_BASE_ALIGN == BIT_ULL(16));
+	}
+
 	nix_form_default_desc(dev, txq, qid);
 	txq->lso_tun_fmt = dev->lso_tun_fmt;
 	return 0;
@@ -215,6 +237,7 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 			 struct rte_mempool *mp)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct cnxk_eth_rxq_sp *rxq_sp;
 	struct cn10k_eth_rxq *rxq;
 	struct roc_nix_rq *rq;
 	struct roc_nix_cq *cq;
@@ -250,6 +273,15 @@ cn10k_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq->data_off = rq->first_skip;
 	rxq->mbuf_initializer = cnxk_nix_rxq_mbuf_setup(dev);
 
+	/* Setup security related info */
+	if (dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		rxq->lmt_base = dev->nix.lmt_base;
+		rxq->sa_base = roc_nix_inl_inb_sa_base_get(&dev->nix,
+							   dev->inb.inl_dev);
+	}
+	rxq_sp = cnxk_eth_rxq_to_sp(rxq);
+	rxq->aura_handle = rxq_sp->qconf.mp->pool_id;
+
 	/* Lookup mem */
 	rxq->lookup_mem = cnxk_nix_fastpath_lookup_mem_get();
 	return 0;
@@ -500,6 +532,8 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	nix_eth_dev_ops_override();
 	npc_flow_ops_override();
 
+	cn10k_eth_sec_ops_override();
+
 	/* Common probe */
 	rc = cnxk_nix_probe(pci_drv, pci_dev);
 	if (rc)
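
A short sketch of the outbound SA base encoding set up in
cn10k_nix_tx_queue_setup() above: the static assert guarantees the SA base is
64 KB aligned, so the low 16 bits are free to carry the port id. The decode
shown is an assumption that follows from that alignment, not code from this
patch:

    /* Encode: SA base is 64 KB aligned, low 16 bits carry port id */
    uint64_t w = sa_base | (uint64_t)port_id;

    /* Decode, e.g. in the fast path */
    uint16_t port = (uint16_t)(w & 0xFFFF);
    uint64_t base = w & ~(uint64_t)0xFFFF;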
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 8b6e0f2..a888364 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -5,6 +5,7 @@
 #define __CN10K_ETHDEV_H__
 
 #include <cnxk_ethdev.h>
+#include <cnxk_security.h>
 
 struct cn10k_eth_txq {
 	uint64_t send_hdr_w0;
@@ -15,6 +16,10 @@ struct cn10k_eth_txq {
 	rte_iova_t io_addr;
 	uint16_t sqes_per_sqb_log2;
 	int16_t nb_sqb_bufs_adj;
+	rte_iova_t cpt_io_addr;
+	uint64_t sa_base;
+	uint64_t *cpt_fc;
+	uint16_t cpt_desc;
 	uint64_t cmd[4];
 	uint64_t lso_tun_fmt;
 } __plt_cache_aligned;
@@ -30,12 +35,50 @@ struct cn10k_eth_rxq {
 	uint32_t qmask;
 	uint32_t available;
 	uint16_t data_off;
+	uint64_t sa_base;
+	uint64_t lmt_base;
+	uint64_t aura_handle;
 	uint16_t rq;
 	struct cnxk_timesync_info *tstamp;
 } __plt_cache_aligned;
 
+/* Private data in sw rsvd area of struct roc_ot_ipsec_inb_sa */
+struct cn10k_inb_priv_data {
+	void *userdata;
+	struct cnxk_eth_sec_sess *eth_sec;
+};
+
+/* Private data in sw rsvd area of struct roc_ot_ipsec_outb_sa */
+struct cn10k_outb_priv_data {
+	void *userdata;
+	/* Rlen computation data */
+	struct cnxk_ipsec_outb_rlens rlens;
+	/* Back pointer to eth sec session */
+	struct cnxk_eth_sec_sess *eth_sec;
+	/* SA index */
+	uint32_t sa_idx;
+};
+
+struct cn10k_sec_sess_priv {
+	union {
+		struct {
+			uint32_t sa_idx;
+			uint8_t inb_sa : 1;
+			uint8_t rsvd1 : 2;
+			uint8_t roundup_byte : 5;
+			uint8_t roundup_len;
+			uint16_t partial_len;
+		};
+
+		uint64_t u64;
+	};
+} __rte_packed;
+
 /* Rx and Tx routines */
 void cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev);
 void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 
+/* Security context setup */
+void cn10k_eth_sec_ops_override(void);
+
 #endif /* __CN10K_ETHDEV_H__ */
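
Since struct cn10k_sec_sess_priv is a packed union over a single 64-bit word,
the whole fast path session state fits in the session private data pointer; a
minimal round-trip sketch with illustrative field values:

    struct cn10k_sec_sess_priv p = { .u64 = 0 };

    p.sa_idx = 5;
    p.inb_sa = 1;
    /* stored via set_sec_session_private_data(sess, (void *)p.u64) */
    uint64_t w = p.u64;
    /* the fast path recovers the fields from the same word */
    struct cn10k_sec_sess_priv q = { .u64 = w };
    /* q.sa_idx == 5 && q.inb_sa == 1 */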
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
new file mode 100644
index 0000000..3ffd824
--- /dev/null
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -0,0 +1,426 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_eventdev.h>
+#include <rte_security.h>
+#include <rte_security_driver.h>
+
+#include <cn10k_ethdev.h>
+#include <cnxk_security.h>
+
+static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
+	{	/* AES GCM */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
+			{.aead = {
+				.algo = RTE_CRYPTO_AEAD_AES_GCM,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.aad_size = {
+					.min = 8,
+					.max = 12,
+					.increment = 4
+				},
+				.iv_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
+	{	/* IPsec Inline Protocol ESP Tunnel Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Tunnel Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static void
+cn10k_eth_sec_sso_work_cb(uint64_t *gw, void *args)
+{
+	struct rte_eth_event_ipsec_desc desc;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct cn10k_outb_priv_data *priv;
+	struct roc_ot_ipsec_outb_sa *sa;
+	struct cpt_cn10k_res_s *res;
+	struct rte_eth_dev *eth_dev;
+	struct cnxk_eth_dev *dev;
+	uint16_t dlen_adj, rlen;
+	struct rte_mbuf *mbuf;
+	uintptr_t sa_base;
+	uintptr_t nixtx;
+	uint8_t port;
+
+	RTE_SET_USED(args);
+
+	switch ((gw[0] >> 28) & 0xF) {
+	case RTE_EVENT_TYPE_ETHDEV:
+		/* Event from inbound inline dev due to IPsec packet with bad L4 */
+		mbuf = (struct rte_mbuf *)(gw[1] - sizeof(struct rte_mbuf));
+		plt_nix_dbg("Received mbuf %p from inline dev inbound", mbuf);
+		rte_pktmbuf_free(mbuf);
+		return;
+	case RTE_EVENT_TYPE_CPU:
+		/* Check for subtype */
+		if (((gw[0] >> 20) & 0xFF) == CNXK_ETHDEV_SEC_OUTB_EV_SUB) {
+			/* Event from outbound inline error */
+			mbuf = (struct rte_mbuf *)gw[1];
+			break;
+		}
+		/* Fall through */
+	default:
+		plt_err("Unknown event gw[0] = 0x%016lx, gw[1] = 0x%016lx",
+			gw[0], gw[1]);
+		return;
+	}
+
+	/* Get ethdev port from tag */
+	port = gw[0] & 0xFF;
+	eth_dev = &rte_eth_devices[port];
+	dev = cnxk_eth_pmd_priv(eth_dev);
+
+	sess_priv.u64 = *rte_security_dynfield(mbuf);
+	/* Calculate dlen adj */
+	dlen_adj = mbuf->pkt_len - mbuf->l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Find the res area residing on next cacheline after end of data */
+	nixtx = rte_pktmbuf_mtod(mbuf, uintptr_t) + mbuf->pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+	res = (struct cpt_cn10k_res_s *)nixtx;
+
+	plt_nix_dbg("Outbound error, mbuf %p, sa_index %u, compcode %x uc %x",
+		    mbuf, sess_priv.sa_idx, res->compcode, res->uc_compcode);
+
+	sa_base = dev->outb.sa_base;
+	sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(sa);
+
+	memset(&desc, 0, sizeof(desc));
+
+	switch (res->uc_compcode) {
+	case ROC_IE_OT_UCC_ERR_SA_OVERFLOW:
+		desc.subtype = RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW;
+		break;
+	default:
+		plt_warn("Outbound error, mbuf %p, sa_index %u, "
+			 "compcode %x uc %x", mbuf, sess_priv.sa_idx,
+			 res->compcode, res->uc_compcode);
+		desc.subtype = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+		break;
+	}
+
+	desc.metadata = (uint64_t)priv->userdata;
+	rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_IPSEC, &desc);
+	rte_pktmbuf_free(mbuf);
+}
+
+static int
+cn10k_eth_sec_session_create(void *device,
+			     struct rte_security_session_conf *conf,
+			     struct rte_security_session *sess,
+			     struct rte_mempool *mempool)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct rte_security_ipsec_xform *ipsec;
+	struct cn10k_sec_sess_priv sess_priv;
+	struct rte_crypto_sym_xform *crypto;
+	struct cnxk_eth_sec_sess *eth_sec;
+	bool inbound, inl_dev;
+	int rc = 0;
+
+	if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
+		return -ENOTSUP;
+
+	if (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)
+		return -ENOTSUP;
+
+	if (rte_security_dynfield_register() < 0)
+		return -ENOTSUP;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		roc_nix_inl_cb_register(cn10k_eth_sec_sso_work_cb, NULL);
+
+	ipsec = &conf->ipsec;
+	crypto = conf->crypto_xform;
+	inbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);
+	inl_dev = !!dev->inb.inl_dev;
+
+	/* Search if a session already exists */
+	if (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {
+		plt_err("%s SA with SPI %u already in use",
+			inbound ? "Inbound" : "Outbound", ipsec->spi);
+		return -EEXIST;
+	}
+
+	if (rte_mempool_get(mempool, (void **)&eth_sec)) {
+		plt_err("Could not allocate security session private data");
+		return -ENOMEM;
+	}
+
+	memset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));
+	sess_priv.u64 = 0;
+
+	/* Acquire lock on inline dev for inbound */
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (inbound) {
+		struct cn10k_inb_priv_data *inb_priv;
+		struct roc_ot_ipsec_inb_sa *inb_sa;
+		uintptr_t sa;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_inb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_INB_SW_RSVD);
+
+		/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE */
+		sa = roc_nix_inl_inb_sa_get(&dev->nix, inl_dev, ipsec->spi);
+		if (!sa && dev->inb.inl_dev) {
+			plt_err("Failed to create ingress sa, inline dev "
+				"not found or spi not in range");
+			rc = -ENOTSUP;
+			goto mempool_put;
+		} else if (!sa) {
+			plt_err("Failed to create ingress sa");
+			rc = -EFAULT;
+			goto mempool_put;
+		}
+
+		inb_sa = (struct roc_ot_ipsec_inb_sa *)sa;
+
+		/* Check if SA is already in use */
+		if (inb_sa->w2.s.valid) {
+			plt_err("Inbound SA with SPI %u already in use",
+				ipsec->spi);
+			rc = -EBUSY;
+			goto mempool_put;
+		}
+
+		memset(inb_sa, 0, sizeof(struct roc_ot_ipsec_inb_sa));
+
+		/* Fill inbound sa params */
+		rc = cnxk_ot_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init inbound sa, rc=%d", rc);
+			goto mempool_put;
+		}
+
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+		/* Back pointer to get eth_sec */
+		inb_priv->eth_sec = eth_sec;
+		/* Save userdata in inb private area */
+		inb_priv->userdata = conf->userdata;
+
+		/* Save SA index/SPI in cookie for now */
+		inb_sa->w1.s.cookie = rte_cpu_to_be_32(ipsec->spi);
+
+		/* Prepare session priv */
+		sess_priv.inb_sa = 1;
+		sess_priv.sa_idx = ipsec->spi;
+
+		/* Pointer from eth_sec -> inb_sa */
+		eth_sec->sa = inb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = ipsec->spi;
+		eth_sec->spi = ipsec->spi;
+		eth_sec->inl_dev = !!dev->inb.inl_dev;
+		eth_sec->inb = true;
+
+		TAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess++;
+	} else {
+		struct cn10k_outb_priv_data *outb_priv;
+		struct roc_ot_ipsec_outb_sa *outb_sa;
+		struct cnxk_ipsec_outb_rlens *rlens;
+		uint64_t sa_base = dev->outb.sa_base;
+		uint32_t sa_idx;
+
+		PLT_STATIC_ASSERT(sizeof(struct cn10k_outb_priv_data) <
+				  ROC_NIX_INL_OT_IPSEC_OUTB_SW_RSVD);
+
+		/* Alloc an sa index */
+		rc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);
+		if (rc)
+			goto mempool_put;
+
+		outb_sa = roc_nix_inl_ot_ipsec_outb_sa(sa_base, sa_idx);
+		outb_priv = roc_nix_inl_ot_ipsec_outb_sa_sw_rsvd(outb_sa);
+		rlens = &outb_priv->rlens;
+
+		memset(outb_sa, 0, sizeof(struct roc_ot_ipsec_outb_sa));
+
+		/* Fill outbound sa params */
+		rc = cnxk_ot_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);
+		if (rc) {
+			plt_err("Failed to init outbound sa, rc=%d", rc);
+			rc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);
+			goto mempool_put;
+		}
+
+		/* Save userdata */
+		outb_priv->userdata = conf->userdata;
+		outb_priv->sa_idx = sa_idx;
+		outb_priv->eth_sec = eth_sec;
+
+		/* Save rlen info */
+		cnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);
+
+		/* Prepare session priv */
+		sess_priv.sa_idx = outb_priv->sa_idx;
+		sess_priv.roundup_byte = rlens->roundup_byte;
+		sess_priv.roundup_len = rlens->roundup_len;
+		sess_priv.partial_len = rlens->partial_len;
+
+		/* Pointer from eth_sec -> outb_sa */
+		eth_sec->sa = outb_sa;
+		eth_sec->sess = sess;
+		eth_sec->sa_idx = sa_idx;
+		eth_sec->spi = ipsec->spi;
+
+		TAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess++;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Created %s session with spi=%u, sa_idx=%u inl_dev=%u",
+		    inbound ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+	/*
+	 * Update fast path info in priv area.
+	 */
+	set_sec_session_private_data(sess, (void *)sess_priv.u64);
+
+	return 0;
+mempool_put:
+	if (inbound && inl_dev)
+		roc_nix_inl_dev_unlock();
+	rte_mempool_put(mempool, eth_sec);
+	return rc;
+}
+
+static int
+cn10k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;
+	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
+	struct roc_ot_ipsec_inb_sa *inb_sa;
+	struct roc_ot_ipsec_outb_sa *outb_sa;
+	struct cnxk_eth_sec_sess *eth_sec;
+	struct rte_mempool *mp;
+
+	eth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);
+	if (!eth_sec)
+		return -ENOENT;
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_lock();
+
+	if (eth_sec->inb) {
+		inb_sa = eth_sec->sa;
+		/* Disable SA */
+		inb_sa->w2.s.valid = 0;
+
+		TAILQ_REMOVE(&dev->inb.list, eth_sec, entry);
+		dev->inb.nb_sess--;
+	} else {
+		outb_sa = eth_sec->sa;
+		/* Disable SA */
+		outb_sa->w2.s.valid = 0;
+
+		/* Release Outbound SA index */
+		cnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);
+		TAILQ_REMOVE(&dev->outb.list, eth_sec, entry);
+		dev->outb.nb_sess--;
+	}
+
+	/* Sync session in context cache */
+	roc_nix_inl_sa_sync(&dev->nix, eth_sec->sa, eth_sec->inb,
+			    ROC_NIX_INL_SA_OP_RELOAD);
+
+	if (eth_sec->inl_dev)
+		roc_nix_inl_dev_unlock();
+
+	plt_nix_dbg("Destroyed %s session with spi=%u, sa_idx=%u, inl_dev=%u",
+		    eth_sec->inb ? "inbound" : "outbound", eth_sec->spi,
+		    eth_sec->sa_idx, eth_sec->inl_dev);
+
+	/* Put eth_sec object back to pool */
+	mp = rte_mempool_from_obj(eth_sec);
+	set_sec_session_private_data(sess, NULL);
+	rte_mempool_put(mp, eth_sec);
+	return 0;
+}
+
+static const struct rte_security_capability *
+cn10k_eth_sec_capabilities_get(void *device __rte_unused)
+{
+	return cn10k_eth_sec_capabilities;
+}
+
+void
+cn10k_eth_sec_ops_override(void)
+{
+	static int init_once;
+
+	if (init_once)
+		return;
+	init_once = 1;
+
+	/* Update platform specific ops */
+	cnxk_eth_sec_ops.session_create = cn10k_eth_sec_session_create;
+	cnxk_eth_sec_ops.session_destroy = cn10k_eth_sec_session_destroy;
+	cnxk_eth_sec_ops.capabilities_get = cn10k_eth_sec_capabilities_get;
+}
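
The dlen/rlen arithmetic in cn10k_eth_sec_sso_work_cb() above derives the
post-IPsec packet length from the per-SA rlen parameters; a worked sketch with
made-up values (roundup_byte would typically be the cipher block size):

    uint16_t dlen = 100;     /* payload length past L2, illustrative */
    uint16_t roundup_len = 2, roundup_byte = 16, partial_len = 40;
    uint16_t rlen, dlen_adj;

    rlen = (dlen + roundup_len + (roundup_byte - 1)) &
           ~(uint16_t)(roundup_byte - 1);              /* 112 */
    rlen += partial_len;                               /* 152 */
    dlen_adj = rlen - dlen;                            /* 52 bytes added */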
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 68219b8..d27a231 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -16,6 +16,7 @@
 #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)
 #define NIX_RX_OFFLOAD_TSTAMP_F	     BIT(4)
 #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)
+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)
 
 /* Flags to control cqe_to_mbuf conversion function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index f75cae0..8577a7b 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -13,6 +13,7 @@
 #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)
 #define NIX_TX_OFFLOAD_TSO_F	      BIT(4)
 #define NIX_TX_OFFLOAD_TSTAMP_F	      BIT(5)
+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)
 
 /* Flags to control xmit_prepare function.
  * Defining it from backwards to denote its been
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index c00da62..d86188f 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -38,6 +38,7 @@ sources += files(
 # CN10K
 sources += files(
         'cn10k_ethdev.c',
+        'cn10k_ethdev_sec.c',
         'cn10k_rte_flow.c',
         'cn10k_rx.c',
         'cn10k_rx_mseg.c',
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 74d16e4..5f0e817 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -49,6 +49,8 @@
              'SVendor': None, 'SDevice': None}
 cnxk_bphy_cgx = {'Class': '08', 'Vendor': '177d', 'Device': 'a059,a060',
                  'SVendor': None, 'SDevice': None}
+cnxk_inl_dev = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f0,a0f1',
+                'SVendor': None, 'SDevice': None}
 
 intel_dlb = {'Class': '0b', 'Vendor': '8086', 'Device': '270b,2710,2714',
              'SVendor': None, 'SDevice': None}
@@ -73,9 +75,9 @@
 mempool_devices = [cavium_fpa, octeontx2_npa]
 compress_devices = [cavium_zip]
 regex_devices = [octeontx2_ree]
-misc_devices = [cnxk_bphy, cnxk_bphy_cgx, intel_ioat_bdw, intel_ioat_skx, intel_ioat_icx, intel_idxd_spr,
-                intel_ntb_skx, intel_ntb_icx,
-                octeontx2_dma]
+misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev, intel_ioat_bdw,
+                intel_ioat_skx, intel_ioat_icx, intel_idxd_spr, intel_ntb_skx,
+                intel_ntb_icx, octeontx2_dma]
 
 # global dict ethernet devices present. Dictionary indexed by PCI address.
 # Each device within this is itself a dictionary of device properties
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 18/28] net/cnxk: support Rx security offload on cn9k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (16 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 17/28] net/cnxk: support inline security setup for cn10k Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 19/28] net/cnxk: support Tx " Nithin Dabilpuram
                     ` (10 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to receive CPT-processed packets on Rx for CN9K.

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn9k_eventdev.c              | 153 ++++----
 drivers/event/cnxk/cn9k_worker.h                |   7 +-
 drivers/event/cnxk/cn9k_worker_deq.c            |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_burst.c      |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_ca.c         |   2 +-
 drivers/event/cnxk/cn9k_worker_deq_tmo.c        |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq.c       |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_burst.c |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_ca.c    |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c   |   2 +-
 drivers/net/cnxk/cn9k_rx.c                      |  31 +-
 drivers/net/cnxk/cn9k_rx.h                      | 440 +++++++++++++++++++-----
 drivers/net/cnxk/cn9k_rx_mseg.c                 |   2 +-
 drivers/net/cnxk/cn9k_rx_vec.c                  |   2 +-
 drivers/net/cnxk/cn9k_rx_vec_mseg.c             |   2 +-
 drivers/net/cnxk/cnxk_ethdev.h                  |   3 +
 16 files changed, 461 insertions(+), 195 deletions(-)

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 59a3dc2..64d9ded 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -10,7 +10,8 @@
 #define CN9K_DUAL_WS_PAIR_ID(x, id) (((x)*CN9K_DUAL_WS_NB_WS) + id)
 
 #define CN9K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops)                            \
-	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
+	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]    \
+			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]      \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]    \
@@ -330,178 +331,184 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
 	/* Single WS modes */
-	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_##name,
+	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name,
+	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_##name,
+	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
+	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
+	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
+		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
+		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	/* Dual WS modes */
-	const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
+	const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_dual_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_dual_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_##name,
+	const event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
+		sso_hws_dual_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
+	const event_dequeue_t sso_hws_dual_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
+		sso_hws_dual_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name,
+		sso_hws_dual_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_burst_##name,
+		sso_hws_dual_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_dual_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
+		sso_hws_dual_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
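
Each Rx offload flag selects one dimension of the tables above; a sketch of
the dispatch, mirroring CN9K_SET_EVDEV_DEQ_OP (the flag combination is only
an example):

    /* security + RSS enabled picks sso_hws_deq[1][0][0][0][0][0][1] */
    event_dequeue_t f =
        sso_hws_deq[!!(rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
                   [!!(rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
                   [!!(rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
                   [!!(rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
                   [!!(rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
                   [!!(rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
                   [!!(rx_offloads & NIX_RX_OFFLOAD_RSS_F)];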
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 3e8f214..f1d2e47 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -5,6 +5,9 @@
 #ifndef __CN9K_WORKER_H__
 #define __CN9K_WORKER_H__
 
+#include <rte_eventdev.h>
+#include <rte_vect.h>
+
 #include "cnxk_ethdev.h"
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
@@ -380,7 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
 uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
 					    uint16_t nb_events);
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_##name(                            \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name(                      \
@@ -415,7 +418,7 @@ uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
 NIX_RX_FASTPATH_MODES
 #undef R
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name(                 \
diff --git a/drivers/event/cnxk/cn9k_worker_deq.c b/drivers/event/cnxk/cn9k_worker_deq.c
index 51ccaf4..d65c72a 100644
--- a/drivers/event/cnxk/cn9k_worker_deq.c
+++ b/drivers/event/cnxk/cn9k_worker_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_##name(                            \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_burst.c b/drivers/event/cnxk/cn9k_worker_deq_burst.c
index 4e28014..42dc59b 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_burst.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name(                      \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_ca.c b/drivers/event/cnxk/cn9k_worker_deq_ca.c
index dbdbba1..b5d0263 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_ca.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_ca_##name(                         \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_deq_tmo.c
index 9713d1e..b41a590 100644
--- a/drivers/event/cnxk/cn9k_worker_deq_tmo.c
+++ b/drivers/event/cnxk/cn9k_worker_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_deq_tmo_##name(                        \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq.c b/drivers/event/cnxk/cn9k_worker_dual_deq.c
index 709fa2d..440b66e 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
index d50e1cf..4d913f9 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name(                 \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
index dc9191f..b66e2cf 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_ca_##name(                    \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
index a0508fd..78a4b3d 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_##name(                   \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 7d9f1bd..5c4387e 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_##name(	       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -17,12 +17,13 @@ NIX_RX_FASTPATH_MODES
 
 static inline void
 pick_rx_func(struct rte_eth_dev *eth_dev,
-	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
 	/* [R_SEC] [VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
 	eth_dev->rx_pkt_burst = rx_burst
+		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
@@ -38,33 +39,33 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_##name,
+	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_recv_pkts_vec_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
@@ -73,7 +74,7 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 	/* Copy multi seg version with no offload for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
+			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
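
In the cn9k Rx security fast path added to cn9k_rx.h below, the inbound SA
base pointer also encodes the SA table width in its low alignment bits; a
minimal decode sketch (names follow nix_rx_sec_mbuf_update(); the alignment
is ROC_NIX_INL_SA_BASE_ALIGN, i.e. 64 KB):

    /* Low bits of the aligned SA base carry log2 of the SA count */
    uint8_t   sa_w     = sa_base & (ROC_NIX_INL_SA_BASE_ALIGN - 1);
    uintptr_t base     = sa_base & ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
    uint32_t  spi_mask = (1U << sa_w) - 1;
    /* the SPI lives in the low 20 bits of the CQE tag */
    uint32_t  sa_idx   = spi & spi_mask;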
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index 59545af..bdedeab 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -166,24 +166,104 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	mbuf->next = NULL;
 }
 
+static __rte_always_inline uint64_t
+nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
+		       uintptr_t sa_base, uint64_t *rearm_val, uint16_t *len)
+{
+	uintptr_t res_sg0 = ((uintptr_t)cq + ROC_ONF_IPSEC_INB_RES_OFF - 8);
+	const union nix_rx_parse_u *rx =
+		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
+	struct cn9k_inb_priv_data *sa_priv;
+	struct roc_onf_ipsec_inb_sa *sa;
+	uint8_t lcptr = rx->lcptr;
+	struct rte_ipv4_hdr *ipv4;
+	uint16_t data_off, res;
+	uint32_t spi_mask;
+	uint32_t spi;
+	uintptr_t data;
+	__uint128_t dw;
+	uint8_t sa_w;
+
+	res = *(uint64_t *)(res_sg0 + 8);
+	data_off = *rearm_val & (BIT_ULL(16) - 1);
+	data = (uintptr_t)m->buf_addr;
+	data += data_off;
+
+	rte_prefetch0((void *)data);
+
+	if (unlikely(res != (CPT_COMP_GOOD | ROC_IE_ONF_UCC_SUCCESS << 8)))
+		return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+
+	data += lcptr;
+	/* 20 bits of tag would have the SPI */
+	spi = cq->tag & CNXK_ETHDEV_SPI_TAG_MASK;
+
+	/* Get SA */
+	sa_w = sa_base & (ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	spi_mask = (1ULL << sa_w) - 1;
+	sa = roc_nix_inl_onf_ipsec_inb_sa(sa_base, spi & spi_mask);
+
+	/* Update dynamic field with userdata */
+	sa_priv = roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(sa);
+	dw = *(__uint128_t *)sa_priv;
+	*rte_security_dynfield(m) = (uint64_t)dw;
+
+	/* Get total length from IPv4 header. We can assume only IPv4 */
+	ipv4 = (struct rte_ipv4_hdr *)(data + ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
+				       ROC_ONF_IPSEC_INB_MAX_L2_SZ);
+
+	/* Update data offset */
+	data_off += (ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
+		     ROC_ONF_IPSEC_INB_MAX_L2_SZ);
+	*rearm_val = *rearm_val & ~(BIT_ULL(16) - 1);
+	*rearm_val |= data_off;
+
+	*len = rte_be_to_cpu_16(ipv4->total_length) + lcptr;
+	return PKT_RX_SEC_OFFLOAD;
+}
+
 static __rte_always_inline void
 cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		     struct rte_mbuf *mbuf, const void *lookup_mem,
-		     const uint64_t val, const uint16_t flag)
+		     uint64_t val, const uint16_t flag)
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
-	const uint16_t len = rx->cn9k.pkt_lenm1 + 1;
+	uint16_t len = rx->cn9k.pkt_lenm1 + 1;
 	const uint64_t w1 = *(const uint64_t *)rx;
+	uint32_t packet_type;
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
 	__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
 
 	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
-		mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
+		packet_type = nix_ptype_get(lookup_mem, w1);
 	else
-		mbuf->packet_type = 0;
+		packet_type = 0;
+
+	if ((flag & NIX_RX_OFFLOAD_SECURITY_F) &&
+	    cq->cqe_type == NIX_XQE_TYPE_RX_IPSECH) {
+		uint16_t port = val >> 48;
+		uintptr_t sa_base;
+
+		/* Get SA Base from lookup mem */
+		sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+
+		ol_flags |= nix_rx_sec_mbuf_update(cq, mbuf, sa_base, &val,
+						   &len);
+
+		/* Only Tunnel inner IPv4 is supported */
+		packet_type = (packet_type &
+			       ~(RTE_PTYPE_L3_MASK | RTE_PTYPE_TUNNEL_MASK));
+		packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+		mbuf->packet_type = packet_type;
+		goto skip_parse;
+	}
+
+	if (flag & NIX_RX_OFFLOAD_PTYPE_F)
+		mbuf->packet_type = packet_type;
 
 	if (flag & NIX_RX_OFFLOAD_RSS_F) {
 		mbuf->hash.rss = tag;
@@ -193,6 +273,7 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
 		ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
 
+skip_parse:
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->cn9k.vtag0_gone) {
 			ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
@@ -208,11 +289,12 @@ cn9k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		ol_flags =
 			nix_update_match_id(rx->cn9k.match_id, ol_flags, mbuf);
 
-	mbuf->ol_flags = ol_flags;
 	mbuf->pkt_len = len;
 	mbuf->data_len = len;
 	*(uint64_t *)(&mbuf->rearm_data) = val;
 
+	mbuf->ol_flags = ol_flags;
+
 	if (flag & NIX_RX_MULTI_SEG_F)
 		nix_cqe_xtract_mseg(rx, mbuf, val, flag);
 	else
@@ -670,98 +752,268 @@ cn9k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 #define MARK_F	  NIX_RX_OFFLOAD_MARK_UPDATE_F
 #define TS_F	  NIX_RX_OFFLOAD_TSTAMP_F
 #define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define R_SEC_F   NIX_RX_OFFLOAD_SECURITY_F
 
-/* [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
+/* [R_SEC_F] [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
 #define NIX_RX_FASTPATH_MODES						       \
-R(no_offload,			0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE)	       \
-R(rss,				0, 0, 0, 0, 0, 1, RSS_F)		       \
-R(ptype,			0, 0, 0, 0, 1, 0, PTYPE_F)		       \
-R(ptype_rss,			0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F)	       \
-R(cksum,			0, 0, 0, 1, 0, 0, CKSUM_F)		       \
-R(cksum_rss,			0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F)	       \
-R(cksum_ptype,			0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F)	       \
-R(cksum_ptype_rss,		0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)   \
-R(mark,				0, 0, 1, 0, 0, 0, MARK_F)		       \
-R(mark_rss,			0, 0, 1, 0, 0, 1, MARK_F | RSS_F)	       \
-R(mark_ptype,			0, 0, 1, 0, 1, 0, MARK_F | PTYPE_F)	       \
-R(mark_ptype_rss,		0, 0, 1, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)    \
-R(mark_cksum,			0, 0, 1, 1, 0, 0, MARK_F | CKSUM_F)	       \
-R(mark_cksum_rss,		0, 0, 1, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)    \
-R(mark_cksum_ptype,		0, 0, 1, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)  \
-R(mark_cksum_ptype_rss,		0, 0, 1, 1, 1, 1,			       \
-			MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts,				0, 1, 0, 0, 0, 0, TS_F)			       \
-R(ts_rss,			0, 1, 0, 0, 0, 1, TS_F | RSS_F)		       \
-R(ts_ptype,			0, 1, 0, 0, 1, 0, TS_F | PTYPE_F)	       \
-R(ts_ptype_rss,			0, 1, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)      \
-R(ts_cksum,			0, 1, 0, 1, 0, 0, TS_F | CKSUM_F)	       \
-R(ts_cksum_rss,			0, 1, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)      \
-R(ts_cksum_ptype,		0, 1, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)    \
-R(ts_cksum_ptype_rss,		0, 1, 0, 1, 1, 1,			       \
-			TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts_mark,			0, 1, 1, 0, 0, 0, TS_F | MARK_F)	       \
-R(ts_mark_rss,			0, 1, 1, 0, 0, 1, TS_F | MARK_F | RSS_F)       \
-R(ts_mark_ptype,		0, 1, 1, 0, 1, 0, TS_F | MARK_F | PTYPE_F)     \
-R(ts_mark_ptype_rss,		0, 1, 1, 0, 1, 1,			       \
-			TS_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(ts_mark_cksum,		0, 1, 1, 1, 0, 0, TS_F | MARK_F | CKSUM_F)     \
-R(ts_mark_cksum_rss,		0, 1, 1, 1, 0, 1,			       \
-			TS_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(ts_mark_cksum_ptype,		0, 1, 1, 1, 1, 0,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan,				1, 0, 0, 0, 0, 0, RX_VLAN_F)		       \
-R(vlan_rss,			1, 0, 0, 0, 0, 1, RX_VLAN_F | RSS_F)	       \
-R(vlan_ptype,			1, 0, 0, 0, 1, 0, RX_VLAN_F | PTYPE_F)	       \
-R(vlan_ptype_rss,		1, 0, 0, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum,			1, 0, 0, 1, 0, 0, RX_VLAN_F | CKSUM_F)	       \
-R(vlan_cksum_rss,		1, 0, 0, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype,		1, 0, 0, 1, 1, 0,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F)			       \
-R(vlan_cksum_ptype_rss,		1, 0, 0, 1, 1, 1,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark,			1, 0, 1, 0, 0, 0, RX_VLAN_F | MARK_F)	       \
-R(vlan_mark_rss,		1, 0, 1, 0, 0, 1, RX_VLAN_F | MARK_F | RSS_F)  \
-R(vlan_mark_ptype,		1, 0, 1, 0, 1, 0, RX_VLAN_F | MARK_F | PTYPE_F)\
-R(vlan_mark_ptype_rss,		1, 0, 1, 0, 1, 1,			       \
-			RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark_cksum,		1, 0, 1, 1, 0, 0, RX_VLAN_F | MARK_F | CKSUM_F)\
-R(vlan_mark_cksum_rss,		1, 0, 1, 1, 0, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(vlan_mark_cksum_ptype,	1, 0, 1, 1, 1, 0,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts,			1, 1, 0, 0, 0, 0, RX_VLAN_F | TS_F)	       \
-R(vlan_ts_rss,			1, 1, 0, 0, 0, 1, RX_VLAN_F | TS_F | RSS_F)    \
-R(vlan_ts_ptype,		1, 1, 0, 0, 1, 0, RX_VLAN_F | TS_F | PTYPE_F)  \
-R(vlan_ts_ptype_rss,		1, 1, 0, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
-R(vlan_ts_cksum,		1, 1, 0, 1, 0, 0, RX_VLAN_F | TS_F | CKSUM_F)  \
-R(vlan_ts_cksum_rss,		1, 1, 0, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
-R(vlan_ts_cksum_ptype,		1, 1, 0, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_ts_cksum_ptype_rss,	1, 1, 0, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark,			1, 1, 1, 0, 0, 0, RX_VLAN_F | TS_F | MARK_F)   \
-R(vlan_ts_mark_rss,		1, 1, 1, 0, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
-R(vlan_ts_mark_ptype,		1, 1, 1, 0, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
-R(vlan_ts_mark_ptype_rss,	1, 1, 1, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark_cksum,		1, 1, 1, 1, 0, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
-R(vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
-R(vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)	       \
-R(vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
+R(no_offload,			0, 0, 0, 0, 0, 0, 0,			       \
+		NIX_RX_OFFLOAD_NONE)					       \
+R(rss,				0, 0, 0, 0, 0, 0, 1,			       \
+		RSS_F)							       \
+R(ptype,			0, 0, 0, 0, 0, 1, 0,			       \
+		PTYPE_F)						       \
+R(ptype_rss,			0, 0, 0, 0, 0, 1, 1,			       \
+		PTYPE_F | RSS_F)					       \
+R(cksum,			0, 0, 0, 0, 1, 0, 0,			       \
+		CKSUM_F)						       \
+R(cksum_rss,			0, 0, 0, 0, 1, 0, 1,			       \
+		CKSUM_F | RSS_F)					       \
+R(cksum_ptype,			0, 0, 0, 0, 1, 1, 0,			       \
+		CKSUM_F | PTYPE_F)					       \
+R(cksum_ptype_rss,		0, 0, 0, 0, 1, 1, 1,			       \
+		CKSUM_F | PTYPE_F | RSS_F)				       \
+R(mark,				0, 0, 0, 1, 0, 0, 0,			       \
+		MARK_F)							       \
+R(mark_rss,			0, 0, 0, 1, 0, 0, 1,			       \
+		MARK_F | RSS_F)						       \
+R(mark_ptype,			0, 0, 0, 1, 0, 1, 0,			       \
+		MARK_F | PTYPE_F)					       \
+R(mark_ptype_rss,		0, 0, 0, 1, 0, 1, 1,			       \
+		MARK_F | PTYPE_F | RSS_F)				       \
+R(mark_cksum,			0, 0, 0, 1, 1, 0, 0,			       \
+		MARK_F | CKSUM_F)					       \
+R(mark_cksum_rss,		0, 0, 0, 1, 1, 0, 1,			       \
+		MARK_F | CKSUM_F | RSS_F)				       \
+R(mark_cksum_ptype,		0, 0, 0, 1, 1, 1, 0,			       \
+		MARK_F | CKSUM_F | PTYPE_F)				       \
+R(mark_cksum_ptype_rss,		0, 0, 0, 1, 1, 1, 1,			       \
+		MARK_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts,				0, 0, 1, 0, 0, 0, 0,			       \
+		TS_F)							       \
+R(ts_rss,			0, 0, 1, 0, 0, 0, 1,			       \
+		TS_F | RSS_F)						       \
+R(ts_ptype,			0, 0, 1, 0, 0, 1, 0,			       \
+		TS_F | PTYPE_F)						       \
+R(ts_ptype_rss,			0, 0, 1, 0, 0, 1, 1,			       \
+		TS_F | PTYPE_F | RSS_F)					       \
+R(ts_cksum,			0, 0, 1, 0, 1, 0, 0,			       \
+		TS_F | CKSUM_F)						       \
+R(ts_cksum_rss,			0, 0, 1, 0, 1, 0, 1,			       \
+		TS_F | CKSUM_F | RSS_F)					       \
+R(ts_cksum_ptype,		0, 0, 1, 0, 1, 1, 0,			       \
+		TS_F | CKSUM_F | PTYPE_F)				       \
+R(ts_cksum_ptype_rss,		0, 0, 1, 0, 1, 1, 1,			       \
+		TS_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts_mark,			0, 0, 1, 1, 0, 0, 0,			       \
+		TS_F | MARK_F)						       \
+R(ts_mark_rss,			0, 0, 1, 1, 0, 0, 1,			       \
+		TS_F | MARK_F | RSS_F)					       \
+R(ts_mark_ptype,		0, 0, 1, 1, 0, 1, 0,			       \
+		TS_F | MARK_F | PTYPE_F)				       \
+R(ts_mark_ptype_rss,		0, 0, 1, 1, 0, 1, 1,			       \
+		TS_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(ts_mark_cksum,		0, 0, 1, 1, 1, 0, 0,			       \
+		TS_F | MARK_F | CKSUM_F)				       \
+R(ts_mark_cksum_rss,		0, 0, 1, 1, 1, 0, 1,			       \
+		TS_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(ts_mark_cksum_ptype,		0, 0, 1, 1, 1, 1, 0,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(ts_mark_cksum_ptype_rss,	0, 0, 1, 1, 1, 1, 1,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan,				0, 1, 0, 0, 0, 0, 0,			       \
+		RX_VLAN_F)						       \
+R(vlan_rss,			0, 1, 0, 0, 0, 0, 1,			       \
+		RX_VLAN_F | RSS_F)					       \
+R(vlan_ptype,			0, 1, 0, 0, 0, 1, 0,			       \
+		RX_VLAN_F | PTYPE_F)					       \
+R(vlan_ptype_rss,		0, 1, 0, 0, 0, 1, 1,			       \
+		RX_VLAN_F | PTYPE_F | RSS_F)				       \
+R(vlan_cksum,			0, 1, 0, 0, 1, 0, 0,			       \
+		RX_VLAN_F | CKSUM_F)					       \
+R(vlan_cksum_rss,		0, 1, 0, 0, 1, 0, 1,			       \
+		RX_VLAN_F | CKSUM_F | RSS_F)				       \
+R(vlan_cksum_ptype,		0, 1, 0, 0, 1, 1, 0,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F)				       \
+R(vlan_cksum_ptype_rss,		0, 1, 0, 0, 1, 1, 1,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark,			0, 1, 0, 1, 0, 0, 0,			       \
+		RX_VLAN_F | MARK_F)					       \
+R(vlan_mark_rss,		0, 1, 0, 1, 0, 0, 1,			       \
+		RX_VLAN_F | MARK_F | RSS_F)				       \
+R(vlan_mark_ptype,		0, 1, 0, 1, 0, 1, 0,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F)				       \
+R(vlan_mark_ptype_rss,		0, 1, 0, 1, 0, 1, 1,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark_cksum,		0, 1, 0, 1, 1, 0, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F)				       \
+R(vlan_mark_cksum_rss,		0, 1, 0, 1, 1, 0, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(vlan_mark_cksum_ptype,	0, 1, 0, 1, 1, 1, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_mark_cksum_ptype_rss,	0, 1, 0, 1, 1, 1, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts,			0, 1, 1, 0, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F)					       \
+R(vlan_ts_rss,			0, 1, 1, 0, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | RSS_F)				       \
+R(vlan_ts_ptype,		0, 1, 1, 0, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | PTYPE_F)				       \
+R(vlan_ts_ptype_rss,		0, 1, 1, 0, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | PTYPE_F | RSS_F)			       \
+R(vlan_ts_cksum,		0, 1, 1, 0, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F)				       \
+R(vlan_ts_cksum_rss,		0, 1, 1, 0, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | RSS_F)			       \
+R(vlan_ts_cksum_ptype,		0, 1, 1, 0, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_ts_cksum_ptype_rss,	0, 1, 1, 0, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark,			0, 1, 1, 1, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F)				       \
+R(vlan_ts_mark_rss,		0, 1, 1, 1, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | RSS_F)			       \
+R(vlan_ts_mark_ptype,		0, 1, 1, 1, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F)			       \
+R(vlan_ts_mark_ptype_rss,	0, 1, 1, 1, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark_cksum,		0, 1, 1, 1, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F)			       \
+R(vlan_ts_mark_cksum_rss,	0, 1, 1, 1, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(vlan_ts_mark_cksum_ptype,	0, 1, 1, 1, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(vlan_ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec,				1, 0, 0, 0, 0, 0, 0,			       \
+		R_SEC_F)						       \
+R(sec_rss,			1, 0, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RSS_F)					       \
+R(sec_ptype,			1, 0, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | PTYPE_F)					       \
+R(sec_ptype_rss,		1, 0, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | PTYPE_F | RSS_F)				       \
+R(sec_cksum,			1, 0, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | CKSUM_F)					       \
+R(sec_cksum_rss,		1, 0, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | CKSUM_F | RSS_F)				       \
+R(sec_cksum_ptype,		1, 0, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F)				       \
+R(sec_cksum_ptype_rss,		1, 0, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(sec_mark,			1, 0, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | MARK_F)					       \
+R(sec_mark_rss,			1, 0, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | MARK_F | RSS_F)				       \
+R(sec_mark_ptype,		1, 0, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | MARK_F | PTYPE_F)				       \
+R(sec_mark_ptype_rss,		1, 0, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(sec_mark_cksum,		1, 0, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F)				       \
+R(sec_mark_cksum_rss,		1, 0, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(sec_mark_cksum_ptype,		1, 0, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(sec_mark_cksum_ptype_rss,	1, 0, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts,			1, 0, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | TS_F)						       \
+R(sec_ts_rss,			1, 0, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | TS_F | RSS_F)					       \
+R(sec_ts_ptype,			1, 0, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | TS_F | PTYPE_F)				       \
+R(sec_ts_ptype_rss,		1, 0, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | TS_F | PTYPE_F | RSS_F)			       \
+R(sec_ts_cksum,			1, 0, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F)				       \
+R(sec_ts_cksum_rss,		1, 0, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | RSS_F)			       \
+R(sec_ts_cksum_ptype,		1, 0, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(sec_ts_cksum_ptype_rss,	1, 0, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark,			1, 0, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F)				       \
+R(sec_ts_mark_rss,		1, 0, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | RSS_F)			       \
+R(sec_ts_mark_ptype,		1, 0, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F)			       \
+R(sec_ts_mark_ptype_rss,	1, 0, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark_cksum,		1, 0, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F)			       \
+R(sec_ts_mark_cksum_rss,	1, 0, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_ts_mark_cksum_ptype,	1, 0, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(sec_ts_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan,			1, 1, 0, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F)					       \
+R(sec_vlan_rss,			1, 1, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | RSS_F)				       \
+R(sec_vlan_ptype,		1, 1, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F)				       \
+R(sec_vlan_ptype_rss,		1, 1, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F)			       \
+R(sec_vlan_cksum,		1, 1, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F)				       \
+R(sec_vlan_cksum_rss,		1, 1, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F)			       \
+R(sec_vlan_cksum_ptype,		1, 1, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_cksum_ptype_rss,	1, 1, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_mark,		1, 1, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F)				       \
+R(sec_vlan_mark_rss,		1, 1, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | RSS_F)			       \
+R(sec_vlan_mark_ptype,		1, 1, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F)			       \
+R(sec_vlan_mark_ptype_rss,	1, 1, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_mark_cksum,		1, 1, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F)			       \
+R(sec_vlan_mark_cksum_rss,	1, 1, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_mark_cksum_ptype,	1, 1, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)	       \
+R(sec_vlan_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)      \
+R(sec_vlan_ts,			1, 1, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F)				       \
+R(sec_vlan_ts_rss,		1, 1, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | RSS_F)			       \
+R(sec_vlan_ts_ptype,		1, 1, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F)			       \
+R(sec_vlan_ts_ptype_rss,	1, 1, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_ts_cksum,		1, 1, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F)			       \
+R(sec_vlan_ts_cksum_rss,	1, 1, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_ts_cksum_ptype,	1, 1, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_ts_cksum_ptype_rss,	1, 1, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark,		1, 1, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F)			       \
+R(sec_vlan_ts_mark_rss,		1, 1, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
+R(sec_vlan_ts_mark_ptype,	1, 1, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
+R(sec_vlan_ts_mark_ptype_rss,	1, 1, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum,	1, 1, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
+R(sec_vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)       \
+R(sec_vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1, 1,		       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_##name(           \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn9k_rx_mseg.c b/drivers/net/cnxk/cn9k_rx_mseg.c
index d7e19b1..06509e8 100644
--- a/drivers/net/cnxk/cn9k_rx_mseg.c
+++ b/drivers/net/cnxk/cn9k_rx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_mseg_##name(      \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx_vec.c b/drivers/net/cnxk/cn9k_rx_vec.c
index ef5f771..c96f61c 100644
--- a/drivers/net/cnxk/cn9k_rx_vec.c
+++ b/drivers/net/cnxk/cn9k_rx_vec.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_vec_##name(       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_rx_vec_mseg.c b/drivers/net/cnxk/cn9k_rx_vec_mseg.c
index e46d8a4..938b1c0 100644
--- a/drivers/net/cnxk/cn9k_rx_vec_mseg.c
+++ b/drivers/net/cnxk/cn9k_rx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_recv_pkts_vec_mseg_##name(  \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index b2368c8..88589d3 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -130,6 +130,9 @@
 /* Subtype from inline outbound error event */
 #define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL
 
+/* SPI will be in the lower 20 bits of the tag */
+#define CNXK_ETHDEV_SPI_TAG_MASK 0xFFFFFUL
+
 struct cnxk_fc_cfg {
 	enum rte_eth_fc_mode mode;
 	uint8_t rx_pause;
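
A minimal sketch (editorial, with a hypothetical helper name) of how
the mask added above is meant to be applied to recover the SPI from
an event tag word:

	static inline uint32_t
	cnxk_ethdev_tag_to_spi(uint32_t tag)
	{
		/* SPI sits in the lower 20 bits of the tag word. */
		return tag & CNXK_ETHDEV_SPI_TAG_MASK;
	}
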
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 19/28] net/cnxk: support Tx security offload on cn9k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (17 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 18/28] net/cnxk: support Rx security offload on cn9k Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 20/28] net/cnxk: support Rx security offload on cn10k Nithin Dabilpuram
                     ` (9 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to create and submit CPT instructions on the Tx path
of the CN9K SoC.

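A worked sketch of the length math used by the new
cn9k_sso_hws_xmit_sec_one() helper below (editorial illustration; the
roundup_len, roundup_byte and partial_len fields come from the
session private data set up earlier in this series):

	uint32_t rlen = pkt_len - l2_len;  /* bytes the transform covers */

	/* Round up to the cipher block size, then add the fixed ESP
	 * header/trailer/ICV overhead carried in partial_len.
	 */
	rlen = (rlen + mdata.roundup_len) + (mdata.roundup_byte - 1);
	rlen &= ~(uint64_t)(mdata.roundup_byte - 1);
	rlen += mdata.partial_len;

	/* dlen_adj is how many bytes the packet grows on the wire. */
	uint32_t dlen_adj = rlen - pkt_len + l2_len;
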
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 drivers/event/cnxk/cn9k_eventdev.c               |  29 +-
 drivers/event/cnxk/cn9k_worker.h                 | 163 +++++++++-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |   2 +-
 drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |   2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq.c          |   2 +-
 drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |   2 +-
 drivers/net/cnxk/cn9k_tx.c                       |  29 +-
 drivers/net/cnxk/cn9k_tx.h                       | 392 +++++++++++++++--------
 drivers/net/cnxk/cn9k_tx_mseg.c                  |   2 +-
 drivers/net/cnxk/cn9k_tx_vec.c                   |   2 +-
 drivers/net/cnxk/cn9k_tx_vec_mseg.c              |   2 +-
 11 files changed, 459 insertions(+), 168 deletions(-)

diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 64d9ded..806dcb0 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -19,8 +19,8 @@
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)])
 
 #define CN9K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops)                            \
-	(enq_op =                                                              \
-		 enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
+	(enq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]    \
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]          \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]    \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]    \
@@ -515,33 +515,34 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	/* Tx modes */
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
+		sso_hws_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
+		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_dual_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
+		sso_hws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
+		sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] =                                         \
+			cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index f1d2e47..6be9be0 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -478,6 +478,145 @@ cn9k_sso_hws_prepare_pkt(const struct cn9k_eth_txq *txq, struct rte_mbuf *m,
 	cn9k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline void
+cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
+			  struct rte_mbuf *m, uint64_t *cmd,
+			  uint32_t flags)
+{
+	struct cn9k_outb_priv_data *outb_priv;
+	rte_iova_t io_addr = txq->cpt_io_addr;
+	uint64_t *lmt_addr = txq->lmt_addr;
+	struct cn9k_sec_sess_priv mdata;
+	struct nix_send_hdr_s *send_hdr;
+	uint64_t sa_base = txq->sa_base;
+	uint32_t pkt_len, dlen_adj, rlen;
+	uint64x2_t cmd01, cmd23;
+	uint64_t lmt_status, sa;
+	union nix_send_sg_s *sg;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t esn, *iv;
+	uint8_t l2_len;
+
+	mdata.u64 = *rte_security_dynfield(m);
+	send_hdr = (struct nix_send_hdr_s *)cmd;
+	if (flags & NIX_TX_NEED_EXT_HDR)
+		sg = (union nix_send_sg_s *)&cmd[4];
+	else
+		sg = (union nix_send_sg_s *)&cmd[2];
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = cmd[1] & 0xFF;
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = *(uint64_t *)(sg + 1);
+	pkt_len = send_hdr->w0.total;
+
+	/* Calculate rlen */
+	rlen = pkt_len - l2_len;
+	rlen = (rlen + mdata.roundup_len) + (mdata.roundup_byte - 1);
+	rlen &= ~(uint64_t)(mdata.roundup_byte - 1);
+	rlen += mdata.partial_len;
+	dlen_adj = rlen - pkt_len + l2_len;
+
+	/* Update send descriptors. Security is single segment only */
+	send_hdr->w0.total = pkt_len + dlen_adj;
+	sg->seg1_size = pkt_len + dlen_adj;
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	roc_lmt_mov((void *)(nixtx + 16), cmd, cn9k_nix_tx_ext_subs(flags));
+
+	/* Load opcode and cptr already prepared at pkt metadata set time */
+	pkt_len -= l2_len;
+	pkt_len += sizeof(struct roc_onf_ipsec_outb_hdr) +
+		    ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ;
+	sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+
+	sa = (uintptr_t)roc_nix_inl_onf_ipsec_outb_sa(sa_base, mdata.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | sa);
+	ucode_cmd[0] = (ROC_IE_ONF_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 |
+			0x40UL << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1 */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn9k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT Word 2 and Word 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len - ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ -
+		sizeof(struct roc_onf_ipsec_outb_hdr);
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Update L2 size and zero the IV */
+	*(uint16_t *)(dptr + sizeof(struct roc_onf_ipsec_outb_hdr)) =
+		rte_cpu_to_be_16(ROC_ONF_IPSEC_OUTB_MAX_L2_INFO_SZ);
+	iv = (uint64_t *)(dptr + 8);
+	iv[0] = 0;
+	iv[1] = 0;
+
+	/* Head wait if needed */
+	if (base)
+		roc_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);
+
+	/* ESN */
+	outb_priv = roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd((void *)sa);
+	esn = outb_priv->esn;
+	outb_priv->esn = esn + 1;
+
+	ucode_cmd[0] |= (esn >> 32) << 16;
+	esn = rte_cpu_to_be_32(esn & (BIT_ULL(32) - 1));
+
+	/* Update ESN and IPID (IV already zeroed above) */
+	*(uint64_t *)dptr = esn << 32 | esn;
+
+	rte_io_wmb();
+	cn9k_sso_txq_fc_wait(txq);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(lmt_addr, cmd01);
+	vst1q_u64(lmt_addr + 2, cmd23);
+
+	roc_lmt_mov_seg(lmt_addr + 4, ucode_cmd, 2);
+
+	if (roc_lmt_submit_ldeor(io_addr) == 0) {
+		do {
+			vst1q_u64(lmt_addr, cmd01);
+			vst1q_u64(lmt_addr + 2, cmd23);
+			roc_lmt_mov_seg(lmt_addr + 4, ucode_cmd, 2);
+
+			lmt_status = roc_lmt_submit_ldeor(io_addr);
+		} while (lmt_status == 0);
+	}
+}
+#else
+
+static inline void
+cn9k_sso_hws_xmit_sec_one(const struct cn9k_eth_txq *txq, uint64_t base,
+			  struct rte_mbuf *m, uint64_t *cmd,
+			  uint32_t flags)
+{
+	RTE_SET_USED(txq);
+	RTE_SET_USED(base);
+	RTE_SET_USED(m);
+	RTE_SET_USED(cmd);
+	RTE_SET_USED(flags);
+}
+#endif
+
 static __rte_always_inline uint16_t
 cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 		      const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
@@ -494,11 +633,30 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	 * In case fast free is not set, both cn9k_nix_prepare_mseg()
 	 * and cn9k_nix_xmit_prepare() have a barrier after refcnt update.
 	 */
-	if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+	if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
+	    !(flags & NIX_TX_OFFLOAD_SECURITY_F))
 		rte_io_wmb();
 	txq = cn9k_sso_hws_xtract_meta(m, txq_data);
 	cn9k_sso_hws_prepare_pkt(txq, m, cmd, flags);
 
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		uint64_t ol_flags = m->ol_flags;
+
+		if (ol_flags & PKT_TX_SEC_OFFLOAD) {
+			uintptr_t ssow_base = base;
+
+			if (ev->sched_type)
+				ssow_base = 0;
+
+			cn9k_sso_hws_xmit_sec_one(txq, ssow_base, m, cmd,
+						  flags);
+			goto done;
+		}
+
+		if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
+			rte_io_wmb();
+	}
+
 	if (flags & NIX_TX_MULTI_SEG_F) {
 		const uint16_t segdw = cn9k_nix_prepare_mseg(m, cmd, flags);
 		if (!CNXK_TT_FROM_EVENT(ev->event)) {
@@ -526,6 +684,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 		}
 	}
 
+done:
 	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
 		if (ref_cnt > 1)
 			return 1;
@@ -537,7 +696,7 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
 	return 1;
 }
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
 		void *port, struct rte_event ev[], uint16_t nb_events);        \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
index 92e2981..db045d0 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name(              \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
index dfb574c..95d711f 100644
--- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name(          \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq.c b/drivers/event/cnxk/cn9k_worker_tx_enq.c
index 3df649c..026cef8 100644
--- a/drivers/event/cnxk/cn9k_worker_tx_enq.c
+++ b/drivers/event/cnxk/cn9k_worker_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
index 0efe291..97cd7c7 100644
--- a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn9k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 763f9a1..e5691a2 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(	       \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -23,12 +23,13 @@ NIX_TX_FASTPATH_MODES
 
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
-	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2])
+	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TS] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+	/* [SEC] [TS] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
 	eth_dev->tx_pkt_burst = tx_burst
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
@@ -42,33 +43,33 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_##name,
+	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn9k_nix_xmit_pkts_vec_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index a27ff76..44273ec 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -1819,139 +1819,269 @@ cn9k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 #define NOFF_F	     NIX_TX_OFFLOAD_MBUF_NOFF_F
 #define TSO_F	     NIX_TX_OFFLOAD_TSO_F
 #define TSP_F	     NIX_TX_OFFLOAD_TSTAMP_F
+#define T_SEC_F      NIX_TX_OFFLOAD_SECURITY_F
 
-/* [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
-#define NIX_TX_FASTPATH_MODES						       \
-T(no_offload,				0, 0, 0, 0, 0, 0,	4,	       \
-		NIX_TX_OFFLOAD_NONE)					       \
-T(l3l4csum,				0, 0, 0, 0, 0, 1,	4,	       \
-		L3L4CSUM_F)						       \
-T(ol3ol4csum,				0, 0, 0, 0, 1, 0,	4,	       \
-		OL3OL4CSUM_F)						       \
-T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 1, 1,	4,	       \
-		OL3OL4CSUM_F | L3L4CSUM_F)				       \
-T(vlan,					0, 0, 0, 1, 0, 0,	6,	       \
-		VLAN_F)							       \
-T(vlan_l3l4csum,			0, 0, 0, 1, 0, 1,	6,	       \
-		VLAN_F | L3L4CSUM_F)					       \
-T(vlan_ol3ol4csum,			0, 0, 0, 1, 1, 0,	6,	       \
-		VLAN_F | OL3OL4CSUM_F)					       \
-T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 1, 1,	6,	       \
-		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			       \
-T(noff,					0, 0, 1, 0, 0, 0,	4,	       \
-		NOFF_F)							       \
-T(noff_l3l4csum,			0, 0, 1, 0, 0, 1,	4,	       \
-		NOFF_F | L3L4CSUM_F)					       \
-T(noff_ol3ol4csum,			0, 0, 1, 0, 1, 0,	4,	       \
-		NOFF_F | OL3OL4CSUM_F)					       \
-T(noff_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1,	4,	       \
-		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			       \
-T(noff_vlan,				0, 0, 1, 1, 0, 0,	6,	       \
-		NOFF_F | VLAN_F)					       \
-T(noff_vlan_l3l4csum,			0, 0, 1, 1, 0, 1,	6,	       \
-		NOFF_F | VLAN_F | L3L4CSUM_F)				       \
-T(noff_vlan_ol3ol4csum,			0, 0, 1, 1, 1, 0,	6,	       \
-		NOFF_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1,	6,	       \
-		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		       \
-T(tso,					0, 1, 0, 0, 0, 0,	6,	       \
-		TSO_F)							       \
-T(tso_l3l4csum,				0, 1, 0, 0, 0, 1,	6,	       \
-		TSO_F | L3L4CSUM_F)					       \
-T(tso_ol3ol4csum,			0, 1, 0, 0, 1, 0,	6,	       \
-		TSO_F | OL3OL4CSUM_F)					       \
-T(tso_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1,	6,	       \
-		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			       \
-T(tso_vlan,				0, 1, 0, 1, 0, 0,	6,	       \
-		TSO_F | VLAN_F)						       \
-T(tso_vlan_l3l4csum,			0, 1, 0, 1, 0, 1,	6,	       \
-		TSO_F | VLAN_F | L3L4CSUM_F)				       \
-T(tso_vlan_ol3ol4csum,			0, 1, 0, 1, 1, 0,	6,	       \
-		TSO_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(tso_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 1, 1,	6,	       \
-		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(tso_noff,				0, 1, 1, 0, 0, 0,	6,	       \
-		TSO_F | NOFF_F)						       \
-T(tso_noff_l3l4csum,			0, 1, 1, 0, 0, 1,	6,	       \
-		TSO_F | NOFF_F | L3L4CSUM_F)				       \
-T(tso_noff_ol3ol4csum,			0, 1, 1, 0, 1, 0,	6,	       \
-		TSO_F | NOFF_F | OL3OL4CSUM_F)				       \
-T(tso_noff_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 1, 1,	6,	       \
-		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(tso_noff_vlan,			0, 1, 1, 1, 0, 0,	6,	       \
-		TSO_F | NOFF_F | VLAN_F)				       \
-T(tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 0, 1,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			       \
-T(tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 0,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1,	6,	       \
-		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	       \
-T(ts,					1, 0, 0, 0, 0, 0,	8,	       \
-		TSP_F)							       \
-T(ts_l3l4csum,				1, 0, 0, 0, 0, 1,	8,	       \
-		TSP_F | L3L4CSUM_F)					       \
-T(ts_ol3ol4csum,			1, 0, 0, 0, 1, 0,	8,	       \
-		TSP_F | OL3OL4CSUM_F)					       \
-T(ts_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1,	8,	       \
-		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			       \
-T(ts_vlan,				1, 0, 0, 1, 0, 0,	8,	       \
-		TSP_F | VLAN_F)						       \
-T(ts_vlan_l3l4csum,			1, 0, 0, 1, 0, 1,	8,	       \
-		TSP_F | VLAN_F | L3L4CSUM_F)				       \
-T(ts_vlan_ol3ol4csum,			1, 0, 0, 1, 1, 0,	8,	       \
-		TSP_F | VLAN_F | OL3OL4CSUM_F)				       \
-T(ts_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 1, 1,	8,	       \
-		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(ts_noff,				1, 0, 1, 0, 0, 0,	8,	       \
-		TSP_F | NOFF_F)						       \
-T(ts_noff_l3l4csum,			1, 0, 1, 0, 0, 1,	8,	       \
-		TSP_F | NOFF_F | L3L4CSUM_F)				       \
-T(ts_noff_ol3ol4csum,			1, 0, 1, 0, 1, 0,	8,	       \
-		TSP_F | NOFF_F | OL3OL4CSUM_F)				       \
-T(ts_noff_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 1, 1,	8,	       \
-		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		       \
-T(ts_noff_vlan,				1, 0, 1, 1, 0, 0,	8,	       \
-		TSP_F | NOFF_F | VLAN_F)				       \
-T(ts_noff_vlan_l3l4csum,		1, 0, 1, 1, 0, 1,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			       \
-T(ts_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 0,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 1, 1,	8,	       \
-		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	       \
-T(ts_tso,				1, 1, 0, 0, 0, 0,	8,	       \
-		TSP_F | TSO_F)						       \
-T(ts_tso_l3l4csum,			1, 1, 0, 0, 0, 1,	8,	       \
-		TSP_F | TSO_F | L3L4CSUM_F)				       \
-T(ts_tso_ol3ol4csum,			1, 1, 0, 0, 1, 0,	8,	       \
-		TSP_F | TSO_F | OL3OL4CSUM_F)				       \
-T(ts_tso_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 1, 1,	8,	       \
-		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		       \
-T(ts_tso_vlan,				1, 1, 0, 1, 0, 0,	8,	       \
-		TSP_F | TSO_F | VLAN_F)					       \
-T(ts_tso_vlan_l3l4csum,			1, 1, 0, 1, 0, 1,	8,	       \
-		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			       \
-T(ts_tso_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 0,	8,	       \
-		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			       \
-T(ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1,	8,	       \
-		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	       \
-T(ts_tso_noff,				1, 1, 1, 0, 0, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F)					       \
-T(ts_tso_noff_l3l4csum,			1, 1, 1, 0, 0, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			       \
-T(ts_tso_noff_ol3ol4csum,		1, 1, 1, 0, 1, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			       \
-T(ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	       \
-T(ts_tso_noff_vlan,			1, 1, 1, 1, 0, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F)			       \
-T(ts_tso_noff_vlan_l3l4csum,		1, 1, 1, 1, 0, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		       \
-T(ts_tso_noff_vlan_ol3ol4csum,		1, 1, 1, 1, 1, 0,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		       \
-T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 1, 1,	8,	       \
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+/* [T_SEC_F] [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+#define NIX_TX_FASTPATH_MODES						\
+T(no_offload,				0, 0, 0, 0, 0, 0, 0,	4,	\
+		NIX_TX_OFFLOAD_NONE)					\
+T(l3l4csum,				0, 0, 0, 0, 0, 0, 1,	4,	\
+		L3L4CSUM_F)						\
+T(ol3ol4csum,				0, 0, 0, 0, 0, 1, 0,	4,	\
+		OL3OL4CSUM_F)						\
+T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 0, 1, 1,	4,	\
+		OL3OL4CSUM_F | L3L4CSUM_F)				\
+T(vlan,					0, 0, 0, 0, 1, 0, 0,	6,	\
+		VLAN_F)							\
+T(vlan_l3l4csum,			0, 0, 0, 0, 1, 0, 1,	6,	\
+		VLAN_F | L3L4CSUM_F)					\
+T(vlan_ol3ol4csum,			0, 0, 0, 0, 1, 1, 0,	6,	\
+		VLAN_F | OL3OL4CSUM_F)					\
+T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 0, 1, 1, 1,	6,	\
+		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
+T(noff,					0, 0, 0, 1, 0, 0, 0,	4,	\
+		NOFF_F)							\
+T(noff_l3l4csum,			0, 0, 0, 1, 0, 0, 1,	4,	\
+		NOFF_F | L3L4CSUM_F)					\
+T(noff_ol3ol4csum,			0, 0, 0, 1, 0, 1, 0,	4,	\
+		NOFF_F | OL3OL4CSUM_F)					\
+T(noff_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 0, 1, 1,	4,	\
+		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
+T(noff_vlan,				0, 0, 0, 1, 1, 0, 0,	6,	\
+		NOFF_F | VLAN_F)					\
+T(noff_vlan_l3l4csum,			0, 0, 0, 1, 1, 0, 1,	6,	\
+		NOFF_F | VLAN_F | L3L4CSUM_F)				\
+T(noff_vlan_ol3ol4csum,			0, 0, 0, 1, 1, 1, 0,	6,	\
+		NOFF_F | VLAN_F | OL3OL4CSUM_F)				\
+T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 0, 1, 1, 1, 1,	6,	\
+		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(tso,					0, 0, 1, 0, 0, 0, 0,	6,	\
+		TSO_F)							\
+T(tso_l3l4csum,				0, 0, 1, 0, 0, 0, 1,	6,	\
+		TSO_F | L3L4CSUM_F)					\
+T(tso_ol3ol4csum,			0, 0, 1, 0, 0, 1, 0,	6,	\
+		TSO_F | OL3OL4CSUM_F)					\
+T(tso_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 0, 1, 1,	6,	\
+		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(tso_vlan,				0, 0, 1, 0, 1, 0, 0,	6,	\
+		TSO_F | VLAN_F)						\
+T(tso_vlan_l3l4csum,			0, 0, 1, 0, 1, 0, 1,	6,	\
+		TSO_F | VLAN_F | L3L4CSUM_F)				\
+T(tso_vlan_ol3ol4csum,			0, 0, 1, 0, 1, 1, 0,	6,	\
+		TSO_F | VLAN_F | OL3OL4CSUM_F)				\
+T(tso_vlan_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1, 1,	6,	\
+		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(tso_noff,				0, 0, 1, 1, 0, 0, 0,	6,	\
+		TSO_F | NOFF_F)						\
+T(tso_noff_l3l4csum,			0, 0, 1, 1, 0, 0, 1,	6,	\
+		TSO_F | NOFF_F | L3L4CSUM_F)				\
+T(tso_noff_ol3ol4csum,			0, 0, 1, 1, 0, 1, 0,	6,	\
+		TSO_F | NOFF_F | OL3OL4CSUM_F)				\
+T(tso_noff_ol3ol4csum_l3l4csum,		0, 0, 1, 1, 0, 1, 1,	6,	\
+		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(tso_noff_vlan,			0, 0, 1, 1, 1, 0, 0,	6,	\
+		TSO_F | NOFF_F | VLAN_F)				\
+T(tso_noff_vlan_l3l4csum,		0, 0, 1, 1, 1, 0, 1,	6,	\
+		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(tso_noff_vlan_ol3ol4csum,		0, 0, 1, 1, 1, 1, 0,	6,	\
+		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
+T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1, 1,	6,	\
+		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(ts,					0, 1, 0, 0, 0, 0, 0,	8,	\
+		TSP_F)							\
+T(ts_l3l4csum,				0, 1, 0, 0, 0, 0, 1,	8,	\
+		TSP_F | L3L4CSUM_F)					\
+T(ts_ol3ol4csum,			0, 1, 0, 0, 0, 1, 0,	8,	\
+		TSP_F | OL3OL4CSUM_F)					\
+T(ts_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 0, 1, 1,	8,	\
+		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(ts_vlan,				0, 1, 0, 0, 1, 0, 0,	8,	\
+		TSP_F | VLAN_F)						\
+T(ts_vlan_l3l4csum,			0, 1, 0, 0, 1, 0, 1,	8,	\
+		TSP_F | VLAN_F | L3L4CSUM_F)				\
+T(ts_vlan_ol3ol4csum,			0, 1, 0, 0, 1, 1, 0,	8,	\
+		TSP_F | VLAN_F | OL3OL4CSUM_F)				\
+T(ts_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1, 1,	8,	\
+		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(ts_noff,				0, 1, 0, 1, 0, 0, 0,	8,	\
+		TSP_F | NOFF_F)						\
+T(ts_noff_l3l4csum,			0, 1, 0, 1, 0, 0, 1,	8,	\
+		TSP_F | NOFF_F | L3L4CSUM_F)				\
+T(ts_noff_ol3ol4csum,			0, 1, 0, 1, 0, 1, 0,	8,	\
+		TSP_F | NOFF_F | OL3OL4CSUM_F)				\
+T(ts_noff_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 0, 1, 1,	8,	\
+		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
+T(ts_noff_vlan,				0, 1, 0, 1, 1, 0, 0,	8,	\
+		TSP_F | NOFF_F | VLAN_F)				\
+T(ts_noff_vlan_l3l4csum,		0, 1, 0, 1, 1, 0, 1,	8,	\
+		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(ts_noff_vlan_ol3ol4csum,		0, 1, 0, 1, 1, 1, 0,	8,	\
+		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
+T(ts_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 0, 1, 1, 1, 1,	8,	\
+		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(ts_tso,				0, 1, 1, 0, 0, 0, 0,	8,	\
+		TSP_F | TSO_F)						\
+T(ts_tso_l3l4csum,			0, 1, 1, 0, 0, 0, 1,	8,	\
+		TSP_F | TSO_F | L3L4CSUM_F)				\
+T(ts_tso_ol3ol4csum,			0, 1, 1, 0, 0, 1, 0,	8,	\
+		TSP_F | TSO_F | OL3OL4CSUM_F)				\
+T(ts_tso_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 0, 1, 1,	8,	\
+		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(ts_tso_vlan,				0, 1, 1, 0, 1, 0, 0,	8,	\
+		TSP_F | TSO_F | VLAN_F)					\
+T(ts_tso_vlan_l3l4csum,			0, 1, 1, 0, 1, 0, 1,	8,	\
+		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(ts_tso_vlan_ol3ol4csum,		0, 1, 1, 0, 1, 1, 0,	8,	\
+		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			\
+T(ts_tso_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 0, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(ts_tso_noff,				0, 1, 1, 1, 0, 0, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F)					\
+T(ts_tso_noff_l3l4csum,			0, 1, 1, 1, 0, 0, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(ts_tso_noff_ol3ol4csum,		0, 1, 1, 1, 0, 1, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			\
+T(ts_tso_noff_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 0, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(ts_tso_noff_vlan,			0, 1, 1, 1, 1, 0, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F)			\
+T(ts_tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 1, 0, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(ts_tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 1, 0,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec,					1, 0, 0, 0, 0, 0, 0,	4,	\
+		T_SEC_F)						\
+T(sec_l3l4csum,				1, 0, 0, 0, 0, 0, 1,	4,	\
+		T_SEC_F | L3L4CSUM_F)					\
+T(sec_ol3ol4csum,			1, 0, 0, 0, 0, 1, 0,	4,	\
+		T_SEC_F | OL3OL4CSUM_F)					\
+T(sec_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 0, 1, 1,	4,	\
+		T_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(sec_vlan,				1, 0, 0, 0, 1, 0, 0,	6,	\
+		T_SEC_F | VLAN_F)					\
+T(sec_vlan_l3l4csum,			1, 0, 0, 0, 1, 0, 1,	6,	\
+		T_SEC_F | VLAN_F | L3L4CSUM_F)				\
+T(sec_vlan_ol3ol4csum,			1, 0, 0, 0, 1, 1, 0,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F)			\
+T(sec_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1, 1,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff,				1, 0, 0, 1, 0, 0, 0,	4,	\
+		T_SEC_F | NOFF_F)					\
+T(sec_noff_l3l4csum,			1, 0, 0, 1, 0, 0, 1,	4,	\
+		T_SEC_F | NOFF_F | L3L4CSUM_F)				\
+T(sec_noff_ol3ol4csum,			1, 0, 0, 1, 0, 1, 0,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F)			\
+T(sec_noff_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 0, 1, 1,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff_vlan,			1, 0, 0, 1, 1, 0, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F)				\
+T(sec_noff_vlan_l3l4csum,		1, 0, 0, 1, 1, 0, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_noff_vlan_ol3ol4csum,		1, 0, 0, 1, 1, 1, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 0, 1, 1, 1, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso,				1, 0, 1, 0, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F)					\
+T(sec_tso_l3l4csum,			1, 0, 1, 0, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | L3L4CSUM_F)				\
+T(sec_tso_ol3ol4csum,			1, 0, 1, 0, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F)				\
+T(sec_tso_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_tso_vlan,				1, 0, 1, 0, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F)				\
+T(sec_tso_vlan_l3l4csum,		1, 0, 1, 0, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_tso_vlan_ol3ol4csum,		1, 0, 1, 0, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_tso_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 0, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff,				1, 0, 1, 1, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F)				\
+T(sec_tso_noff_l3l4csum,		1, 0, 1, 1, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_tso_noff_ol3ol4csum,		1, 0, 1, 1, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_tso_noff_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff_vlan,			1, 0, 1, 1, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F)			\
+T(sec_tso_noff_vlan_l3l4csum,		1, 0, 1, 1, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_tso_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts,				1, 1, 0, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F)					\
+T(sec_ts_l3l4csum,			1, 1, 0, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | L3L4CSUM_F)				\
+T(sec_ts_ol3ol4csum,			1, 1, 0, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F)				\
+T(sec_ts_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_ts_vlan,				1, 1, 0, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F)				\
+T(sec_ts_vlan_l3l4csum,			1, 1, 0, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_ts_vlan_ol3ol4csum,		1, 1, 0, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_ts_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff,				1, 1, 0, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F)				\
+T(sec_ts_noff_l3l4csum,			1, 1, 0, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_ts_noff_ol3ol4csum,		1, 1, 0, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_ts_noff_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff_vlan,			1, 1, 0, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F)			\
+T(sec_ts_noff_vlan_l3l4csum,		1, 1, 0, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_noff_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso,				1, 1, 1, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F)				\
+T(sec_ts_tso_l3l4csum,			1, 1, 1, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | L3L4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum,		1, 1, 1, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_tso_vlan,			1, 1, 1, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F)			\
+T(sec_ts_tso_vlan_l3l4csum,		1, 1, 1, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_tso_vlan_ol3ol4csum,		1, 1, 1, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff,			1, 1, 1, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F)			\
+T(sec_ts_tso_noff_l3l4csum,		1, 1, 1, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)		\
+T(sec_ts_tso_noff_ol3ol4csum,		1, 1, 1, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff_vlan,			1, 1, 1, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F)		\
+T(sec_ts_tso_noff_vlan_l3l4csum,	1, 1, 1, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)	\
+T(sec_ts_tso_noff_vlan_ol3ol4csum,	1, 1, 1, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(sec_ts_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 1, 1, 8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
+		L3L4CSUM_F)
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_##name(           \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn9k_tx_mseg.c b/drivers/net/cnxk/cn9k_tx_mseg.c
index f3c427c..37cba78 100644
--- a/drivers/net/cnxk/cn9k_tx_mseg.c
+++ b/drivers/net/cnxk/cn9k_tx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn9k_nix_xmit_pkts_mseg_##name(void *tx_queue,                 \
 					       struct rte_mbuf **tx_pkts,      \
diff --git a/drivers/net/cnxk/cn9k_tx_vec.c b/drivers/net/cnxk/cn9k_tx_vec.c
index 56a3e25..b424f95 100644
--- a/drivers/net/cnxk/cn9k_tx_vec.c
+++ b/drivers/net/cnxk/cn9k_tx_vec.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn9k_nix_xmit_pkts_vec_##name(void *tx_queue,                  \
 					      struct rte_mbuf **tx_pkts,       \
diff --git a/drivers/net/cnxk/cn9k_tx_vec_mseg.c b/drivers/net/cnxk/cn9k_tx_vec_mseg.c
index 0256efd..5fdf0a9 100644
--- a/drivers/net/cnxk/cn9k_tx_vec_mseg.c
+++ b/drivers/net/cnxk/cn9k_tx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn9k_ethdev.h"
 #include "cn9k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_noinline __rte_hot cn9k_nix_xmit_pkts_vec_mseg_##name(  \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 20/28] net/cnxk: support Rx security offload on cn10k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (18 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 19/28] net/cnxk: support Tx " Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 21/28] net/cnxk: support Tx " Nithin Dabilpuram
                     ` (8 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to receive CPT-processed packets on Rx via
the second pass on CN10K.

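For reference, a minimal sketch of the X-macro pattern the lookup
tables below rely on (editorial; the driver sources remain the
authoritative form). Each R(name, f6..f0, flags) entry expands into
one designated initializer, so adding the security flag as index f6
doubles every table from 2^6 to 2^7 specialized functions:

	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2][2] = {
#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
		NIX_RX_FASTPATH_MODES
#undef R
	};
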
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn10k_eventdev.c         |  80 ++--
 drivers/event/cnxk/cn10k_worker.h           |  73 +++-
 drivers/event/cnxk/cn10k_worker_deq.c       |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_burst.c |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_ca.c    |   2 +-
 drivers/event/cnxk/cn10k_worker_deq_tmo.c   |   2 +-
 drivers/net/cnxk/cn10k_ethdev.h             |   4 +
 drivers/net/cnxk/cn10k_rx.c                 |  31 +-
 drivers/net/cnxk/cn10k_rx.h                 | 648 +++++++++++++++++++++++-----
 drivers/net/cnxk/cn10k_rx_mseg.c            |   2 +-
 drivers/net/cnxk/cn10k_rx_vec.c             |   4 +-
 drivers/net/cnxk/cn10k_rx_vec_mseg.c        |   4 +-
 drivers/net/cnxk/cn10k_tx.h                 |   3 -
 13 files changed, 688 insertions(+), 169 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 8af273a..9c0d84b 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -7,7 +7,8 @@
 #include "cnxk_worker.h"
 
 #define CN10K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops)                           \
-	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
+	(deq_op = deq_ops[!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]    \
+			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]  \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]      \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] \
 			 [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]    \
@@ -288,88 +289,91 @@ static void
 cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 {
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
-	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
+	const event_dequeue_t sso_hws_deq[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
+	const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name,
+	const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_tmo_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_##name,
+	const event_dequeue_t sso_hws_deq_ca[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_ca_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_ca_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
+	const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
+	const event_dequeue_burst_t
+		sso_hws_deq_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name,
+	const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name,
+		sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 		};
 
-	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_##name,
+	const event_dequeue_t sso_hws_deq_ca_seg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_##name,
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
 	const event_dequeue_burst_t
-		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_burst_##name,
+		sso_hws_deq_ca_seg_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_ca_seg_burst_##name,
 			NIX_RX_FASTPATH_MODES
 #undef R
 	};
@@ -385,7 +389,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	const event_tx_adapter_enqueue
 		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                            \
 	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index e5ed043..b79bd90 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -106,12 +106,17 @@ cn10k_wqe_to_mbuf(uint64_t wqe, const uint64_t mbuf, uint8_t port_id,
 
 static __rte_always_inline void
 cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
-		   void *lookup_mem, void *tstamp)
+		   void *lookup_mem, void *tstamp, uintptr_t lbase)
 {
 	uint64_t mbuf_init = 0x100010000ULL | RTE_PKTMBUF_HEADROOM |
 			     (flags & NIX_RX_OFFLOAD_TSTAMP_F ? 8 : 0);
 	struct rte_event_vector *vec;
+	uint64_t aura_handle, laddr;
 	uint16_t nb_mbufs, non_vec;
+	uint16_t lmt_id, d_off;
+	struct rte_mbuf *mbuf;
+	uint8_t loff = 0;
+	uint64_t sa_base;
 	uint64_t **wqe;
 
 	mbuf_init |= ((uint64_t)port_id) << 48;
@@ -121,17 +126,41 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
 	nb_mbufs = RTE_ALIGN_FLOOR(vec->nb_elem, NIX_DESCS_PER_LOOP);
 	nb_mbufs = cn10k_nix_recv_pkts_vector(&mbuf_init, vec->mbufs, nb_mbufs,
 					      flags | NIX_RX_VWQE_F, lookup_mem,
-					      tstamp);
+					      tstamp, lbase);
 	wqe += nb_mbufs;
 	non_vec = vec->nb_elem - nb_mbufs;
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && non_vec) {
+		mbuf = (struct rte_mbuf *)((uintptr_t)wqe[0] -
+					   sizeof(struct rte_mbuf));
+		/* Pick the first mbuf's aura handle, assuming all
+		 * mbufs in the vector come from the same RQ.
+		 */
+		aura_handle = mbuf->pool->pool_id;
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+		d_off = ((uintptr_t)mbuf->buf_addr - (uintptr_t)mbuf);
+		d_off += (mbuf_init & 0xFFFF);
+		sa_base = cnxk_nix_sa_base_get(mbuf_init >> 48, lookup_mem);
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+	}
+
 	while (non_vec) {
 		struct nix_cqe_hdr_s *cqe = (struct nix_cqe_hdr_s *)wqe[0];
-		struct rte_mbuf *mbuf;
 		uint64_t tstamp_ptr;
 
 		mbuf = (struct rte_mbuf *)((char *)cqe -
 					   sizeof(struct rte_mbuf));
+
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cqe + 1);
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, sa_base, laddr,
+						       &loff, mbuf, d_off);
+		}
+
 		cn10k_nix_cqe_to_mbuf(cqe, cqe->tag, mbuf, lookup_mem,
 				      mbuf_init, flags);
 		/* Extracting tstamp, if PTP enabled*/
@@ -145,6 +174,12 @@ cn10k_process_vwqe(uintptr_t vwqe, uint16_t port_id, const uint32_t flags,
 		non_vec--;
 		wqe++;
 	}
+
+	/* Free remaining meta buffers if any */
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id, loff, aura_handle);
+		plt_io_wmb();
+	}
 }
 
 static __rte_always_inline uint16_t
@@ -188,6 +223,34 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev,
 			   RTE_EVENT_TYPE_ETHDEV) {
 			uint8_t port = CNXK_SUB_EVENT_FROM_TAG(gw.u64[0]);
 
+			if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+				struct rte_mbuf *m;
+				uintptr_t sa_base;
+				uint64_t iova = 0;
+				uint8_t loff = 0;
+				uint16_t d_off;
+				uint64_t cq_w1;
+
+				m = (struct rte_mbuf *)mbuf;
+				d_off = (uintptr_t)(m->buf_addr) - (uintptr_t)m;
+				d_off += RTE_PKTMBUF_HEADROOM;
+
+				cq_w1 = *(uint64_t *)(gw.u64[1] + 8);
+
+				sa_base = cnxk_nix_sa_base_get(port,
+							       lookup_mem);
+				sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+
+				mbuf = (uint64_t)nix_sec_meta_to_mbuf_sc(cq_w1,
+						sa_base, (uintptr_t)&iova,
+						&loff, (struct rte_mbuf *)mbuf,
+						d_off);
+				if (loff)
+					roc_npa_aura_op_free(m->pool->pool_id,
+							     0, iova);
+
+			}
+
 			gw.u64[0] = CNXK_CLR_SUB_EVENT(gw.u64[0]);
 			cn10k_wqe_to_mbuf(gw.u64[1], mbuf, port,
 					  gw.u64[0] & 0xFFFFF, flags,
@@ -212,7 +275,7 @@ cn10k_sso_hws_get_work(struct cn10k_sso_hws *ws, struct rte_event *ev,
 				   ((uint64_t)port << 32);
 			*(uint64_t *)gw.u64[1] = (uint64_t)vwqe_hdr;
 			cn10k_process_vwqe(gw.u64[1], port, flags, lookup_mem,
-					   ws->tstamp);
+					   ws->tstamp, ws->lmt_base);
 		}
 	}
 
@@ -290,7 +353,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
 uint16_t __rte_hot cn10k_sso_hws_ca_enq(void *port, struct rte_event ev[],
 					uint16_t nb_events);
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_##name(                           \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks);     \
 	uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name(                     \
diff --git a/drivers/event/cnxk/cn10k_worker_deq.c b/drivers/event/cnxk/cn10k_worker_deq.c
index 36ec454..6083f69 100644
--- a/drivers/event/cnxk/cn10k_worker_deq.c
+++ b/drivers/event/cnxk/cn10k_worker_deq.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_##name(                           \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_burst.c b/drivers/event/cnxk/cn10k_worker_deq_burst.c
index 29ecc55..8539d5d 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_burst.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_burst.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name(                     \
 		void *port, struct rte_event ev[], uint16_t nb_events,         \
 		uint64_t timeout_ticks)                                        \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_ca.c b/drivers/event/cnxk/cn10k_worker_deq_ca.c
index c90f6a9..15c698e 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_ca.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_ca.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_hot cn10k_sso_hws_deq_ca_##name(                        \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_deq_tmo.c b/drivers/event/cnxk/cn10k_worker_deq_tmo.c
index c8524a2..537ae37 100644
--- a/drivers/event/cnxk/cn10k_worker_deq_tmo.c
+++ b/drivers/event/cnxk/cn10k_worker_deq_tmo.c
@@ -6,7 +6,7 @@
 #include "cnxk_eventdev.h"
 #include "cnxk_worker.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_hot cn10k_sso_hws_deq_tmo_##name(                       \
 		void *port, struct rte_event *ev, uint64_t timeout_ticks)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index a888364..200cd93 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -81,4 +81,8 @@ void cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 /* Security context setup */
 void cn10k_eth_sec_ops_override(void);
 
+#define LMT_OFF(lmt_addr, lmt_num, offset)                                     \
+	(void *)((uintptr_t)(lmt_addr) +                                       \
+		 ((uint64_t)(lmt_num) << ROC_LMT_LINE_SIZE_LOG2) + (offset))
+
 #endif /* __CN10K_ETHDEV_H__ */
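
A note on the LMT_OFF macro moved into this header: it computes the
address of byte 'offset' within LMT line 'lmt_num' relative to the
per-core LMT base, where a line is 2^ROC_LMT_LINE_SIZE_LOG2 bytes.
A hedged usage sketch follows, with a placeholder line-size constant
standing in for the ROC definition:

#include <stdint.h>

#define LINE_SIZE_LOG2 7 /* stand-in for ROC_LMT_LINE_SIZE_LOG2 */
#define LMT_OFF(lmt_addr, lmt_num, offset)                          \
	(void *)((uintptr_t)(lmt_addr) +                            \
		 ((uint64_t)(lmt_num) << LINE_SIZE_LOG2) + (offset))

/* Slot 'slot' of LMT line 'lnum': the first meta pointer lives
 * 8 bytes past the line start, matching the Rx flush path's
 * "laddr = LMT_OFF(lbase, lnum, 8)".
 */
static inline uint64_t *
meta_slot(uintptr_t lbase, uint8_t lnum, uint8_t slot)
{
	return (uint64_t *)LMT_OFF(lbase, lnum, 8 + ((uint32_t)slot << 3));
}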
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 69e767a..d6af54b 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_##name(	       \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -17,12 +17,13 @@ NIX_RX_FASTPATH_MODES
 
 static inline void
 pick_rx_func(struct rte_eth_dev *eth_dev,
-	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2])
+	     const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
+	/* [SEC] [VLAN] [TSP] [MARK] [CKSUM] [PTYPE] [RSS] */
 	eth_dev->rx_pkt_burst = rx_burst
+		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
@@ -38,33 +39,33 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_##name,
+	const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			      \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
 	};
 
-	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_mseg_##name,
+	const eth_rx_burst_t nix_eth_rx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                            \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_recv_pkts_vec_mseg_##name,
 
 		NIX_RX_FASTPATH_MODES
 #undef R
@@ -73,7 +74,7 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 	/* Copy multi seg version with no offload for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
 		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
+			nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index d27a231..fcc451a 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -65,6 +65,130 @@ nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
 	return (struct rte_mbuf *)(buff - data_off);
 }
 
+static __rte_always_inline void
+nix_sec_flush_meta(uintptr_t laddr, uint16_t lmt_id, uint8_t loff,
+		   uintptr_t aura_handle)
+{
+	uint64_t pa;
+
+	/* laddr points to the first pointer slot */
+	laddr -= 8;
+
+	/* Trigger free either on lmtline full or different aura handle */
+	pa = roc_npa_aura_handle_to_base(aura_handle) + NPA_LF_AURA_BATCH_FREE0;
+
+	/* Update aura handle */
+	*(uint64_t *)laddr = (((uint64_t)(loff & 0x1) << 32) |
+			      roc_npa_aura_handle_to_aura(aura_handle));
+
+	pa |= ((loff >> 1) << 4);
+	roc_lmt_submit_steorl(lmt_id, pa);
+}
+
+static __rte_always_inline struct rte_mbuf *
+nix_sec_meta_to_mbuf_sc(uint64_t cq_w1, const uint64_t sa_base, uintptr_t laddr,
+			uint8_t *loff, struct rte_mbuf *mbuf, uint16_t data_off)
+{
+	const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off);
+	const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
+	struct cn10k_inb_priv_data *inb_priv;
+	struct rte_mbuf *inner;
+	uint32_t sa_idx;
+	void *inb_sa;
+	uint64_t w0;
+
+	if (cq_w1 & BIT(11)) {
+		inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
+					    sizeof(struct rte_mbuf));
+
+		/* Get SPI from CPT_PARSE_S's cookie (already swapped) */
+		w0 = hdr->w0.u64;
+		sa_idx = w0 >> 32;
+
+		inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+
+		/* Update dynamic field with userdata */
+		*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
+
+		/* Update l2 hdr length first */
+		inner->pkt_len = (hdr->w2.il3_off -
+				  sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7));
+
+		/* Store meta in LMT line for a later free,
+		 * assuming all metas come from the same aura.
+		 */
+		*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
+		*loff = *loff + 1;
+
+		return inner;
+	}
+	return mbuf;
+}
+
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline struct rte_mbuf *
+nix_sec_meta_to_mbuf(uint64_t cq_w1, uintptr_t sa_base, uintptr_t laddr,
+		     uint8_t *loff, struct rte_mbuf *mbuf, uint16_t data_off,
+		     uint8x16_t *rx_desc_field1, uint64_t *ol_flags)
+{
+	const void *__p = (void *)((uintptr_t)mbuf + (uint16_t)data_off);
+	const struct cpt_parse_hdr_s *hdr = (const struct cpt_parse_hdr_s *)__p;
+	struct cn10k_inb_priv_data *inb_priv;
+	struct rte_mbuf *inner;
+	uint64_t *sg, res_w1;
+	uint32_t sa_idx;
+	void *inb_sa;
+	uint16_t len;
+	uint64_t w0;
+
+	if (cq_w1 & BIT(11)) {
+		inner = (struct rte_mbuf *)(rte_be_to_cpu_64(hdr->wqe_ptr) -
+					    sizeof(struct rte_mbuf));
+		/* Get SPI from CPT_PARSE_S's cookie (already swapped) */
+		w0 = hdr->w0.u64;
+		sa_idx = w0 >> 32;
+
+		inb_sa = roc_nix_inl_ot_ipsec_inb_sa(sa_base, sa_idx);
+		inb_priv = roc_nix_inl_ot_ipsec_inb_sa_sw_rsvd(inb_sa);
+
+		/* Update dynamic field with userdata */
+		*rte_security_dynfield(inner) = (uint64_t)inb_priv->userdata;
+
+		/* CPT result (struct cpt_cn10k_res_s) sits right
+		 * after the first IOVA in the meta buffer.
+		 */
+		sg = (uint64_t *)(inner + 1);
+		res_w1 = sg[10];
+
+		/* Clear checksum flags and update security flag */
+		*ol_flags &= ~(PKT_RX_L4_CKSUM_MASK | PKT_RX_IP_CKSUM_MASK);
+		*ol_flags |= (((res_w1 & 0xFF) == CPT_COMP_WARN) ?
+			      PKT_RX_SEC_OFFLOAD :
+			      (PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED));
+		/* Calculate inner packet length */
+		len = ((res_w1 >> 16) & 0xFFFF) + hdr->w2.il3_off -
+			sizeof(struct cpt_parse_hdr_s) - (w0 & 0x7);
+		/* Update pkt_len and data_len */
+		*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 2);
+		*rx_desc_field1 = vsetq_lane_u16(len, *rx_desc_field1, 4);
+
+		/* Store meta in LMT line for a later free,
+		 * assuming all metas come from the same aura.
+		 */
+		*(uint64_t *)(laddr + (*loff << 3)) = (uint64_t)mbuf;
+		*loff = *loff + 1;
+
+		/* Return inner mbuf */
+		return inner;
+	}
+
+	/* Return same mbuf as it is not a decrypted pkt */
+	return mbuf;
+}
+#endif
+
 static __rte_always_inline uint32_t
 nix_ptype_get(const void *const lookup_mem, const uint64_t in)
 {
@@ -177,8 +301,8 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 {
 	const union nix_rx_parse_u *rx =
 		(const union nix_rx_parse_u *)((const uint64_t *)cq + 1);
-	const uint16_t len = rx->pkt_lenm1 + 1;
 	const uint64_t w1 = *(const uint64_t *)rx;
+	uint16_t len = rx->pkt_lenm1 + 1;
 	uint64_t ol_flags = 0;
 
 	/* Mark mempool obj as "get" as it is alloc'ed by NIX */
@@ -194,8 +318,30 @@ cn10k_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
 		ol_flags |= PKT_RX_RSS_HASH;
 	}
 
-	if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
-		ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+	/* Process Security packets */
+	if (flag & NIX_RX_OFFLOAD_SECURITY_F) {
+		if (w1 & BIT(11)) {
+			/* CPT result (struct cpt_cn10k_res_s) sits
+			 * right after the first IOVA in the meta.
+			 */
+			const uint64_t *sg = (const uint64_t *)(mbuf + 1);
+			const uint64_t res_w1 = sg[10];
+			const uint16_t uc_cc = res_w1 & 0xFF;
+
+			/* Rlen */
+			len = ((res_w1 >> 16) & 0xFFFF) + mbuf->pkt_len;
+			ol_flags |= ((uc_cc == CPT_COMP_WARN) ?
+						   PKT_RX_SEC_OFFLOAD :
+						   (PKT_RX_SEC_OFFLOAD |
+					      PKT_RX_SEC_OFFLOAD_FAILED));
+		} else {
+			if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+				ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+		}
+	} else {
+		if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
+			ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
+	}
 
 	if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 		if (rx->vtag0_gone) {
@@ -263,13 +409,28 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	const uintptr_t desc = rxq->desc;
 	const uint64_t wdata = rxq->wdata;
 	const uint32_t qmask = rxq->qmask;
+	uint64_t lbase = rxq->lmt_base;
 	uint16_t packets = 0, nb_pkts;
+	uint8_t loff = 0, lnum = 0;
 	uint32_t head = rxq->head;
 	struct nix_cqe_hdr_s *cq;
 	struct rte_mbuf *mbuf;
+	uint64_t aura_handle;
+	uint64_t sa_base;
+	uint16_t lmt_id;
+	uint64_t laddr;
 
 	nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		aura_handle = rxq->aura_handle;
+		sa_base = rxq->sa_base;
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+	}
+
 	while (packets < nb_pkts) {
 		/* Prefetch N desc ahead */
 		rte_prefetch_non_temporal(
@@ -278,6 +439,14 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 
 		mbuf = nix_get_mbuf_from_cqe(cq, data_off);
 
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cq + 1);
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, sa_base, laddr,
+						       &loff, mbuf, data_off);
+		}
+
 		cn10k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
 				      flags);
 		cnxk_nix_mbuf_to_tstamp(mbuf, rxq->tstamp,
@@ -289,6 +458,20 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 		roc_prefetch_store_keep(mbuf);
 		head++;
 		head &= qmask;
+
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Flush when we don't have space for 4 meta */
+			/* Flush when there is no space for one more meta */
+				nix_sec_flush_meta(laddr, lmt_id + lnum, loff,
+						   aura_handle);
+				lnum++;
+				lnum &= BIT_ULL(ROC_LMT_LINES_PER_CORE_LOG2) -
+					1;
+				/* First pointer starts at 8B offset */
+				laddr = (uintptr_t)LMT_OFF(lbase, lnum, 8);
+				loff = 0;
+			}
+		}
 	}
 
 	rxq->head = head;
@@ -297,6 +480,12 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	/* Free all the CQs that we've processed */
 	plt_write64((wdata | nb_pkts), rxq->cq_door);
 
+	/* Free remaining meta buffers if any */
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id + lnum, loff, aura_handle);
+		plt_io_wmb();
+	}
+
 	return nb_pkts;
 }
 
@@ -327,7 +516,8 @@ nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
 static __rte_always_inline uint16_t
 cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			   const uint16_t flags, void *lookup_mem,
-			   struct cnxk_timesync_info *tstamp)
+			   struct cnxk_timesync_info *tstamp,
+			   uintptr_t lmt_base)
 {
 	struct cn10k_eth_rxq *rxq = args;
 	const uint64_t mbuf_initializer = (flags & NIX_RX_VWQE_F) ?
@@ -346,9 +536,13 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 	uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
 	uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
 	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	uint64_t aura_handle, lbase, laddr;
+	uint8_t loff = 0, lnum = 0;
 	uint8x16_t f0, f1, f2, f3;
+	uint16_t lmt_id, d_off;
 	uint16_t packets = 0;
 	uint16_t pkts_left;
+	uintptr_t sa_base;
 	uint32_t head;
 	uintptr_t cq0;
 
@@ -366,6 +560,38 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		RTE_SET_USED(head);
 	}
 
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		if (flags & NIX_RX_VWQE_F) {
+			uint16_t port;
+
+			mbuf0 = (struct rte_mbuf *)((uintptr_t)mbufs[0] -
+						    sizeof(struct rte_mbuf));
+			/* Pick the first mbuf's aura handle, assuming all
+			 * mbufs in the vector come from the same RQ.
+			 */
+			aura_handle = mbuf0->pool->pool_id;
+			/* Calculate offset from mbuf to actual data area */
+			d_off = ((uintptr_t)mbuf0->buf_addr - (uintptr_t)mbuf0);
+			d_off += (mbuf_initializer & 0xFFFF);
+
+			/* Get SA Base from lookup tbl using port_id */
+			port = mbuf_initializer >> 48;
+			sa_base = cnxk_nix_sa_base_get(port, lookup_mem);
+
+			lbase = lmt_base;
+		} else {
+			aura_handle = rxq->aura_handle;
+			d_off = rxq->data_off;
+			sa_base = rxq->sa_base;
+			lbase = rxq->lmt_base;
+		}
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		lnum = 0;
+		laddr = lbase;
+		laddr += 8;
+	}
+
 	while (packets < pkts) {
 		if (!(flags & NIX_RX_VWQE_F)) {
 			/* Exit loop if head is about to wrap and become
@@ -428,6 +654,14 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 		f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
 		f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
 
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Prefetch probable CPT parse header area */
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf0, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf1, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf2, d_off));
+			rte_prefetch_non_temporal(RTE_PTR_ADD(mbuf3, d_off));
+		}
+
 		/* Load CQE word0 and word 1 */
 		const uint64_t cq0_w0 = *CQE_PTR_OFF(cq0, 0, 0, flags);
 		const uint64_t cq0_w1 = *CQE_PTR_OFF(cq0, 0, 8, flags);
@@ -474,6 +708,30 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
 		}
 
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Checksum ol_flags will be cleared if mbuf is meta */
+			mbuf0 = nix_sec_meta_to_mbuf(cq0_w1, sa_base, laddr,
+						     &loff, mbuf0, d_off, &f0,
+						     &ol_flags0);
+			mbuf01 = vsetq_lane_u64((uint64_t)mbuf0, mbuf01, 0);
+
+			mbuf1 = nix_sec_meta_to_mbuf(cq1_w1, sa_base, laddr,
+						     &loff, mbuf1, d_off, &f1,
+						     &ol_flags1);
+			mbuf01 = vsetq_lane_u64((uint64_t)mbuf1, mbuf01, 1);
+
+			mbuf2 = nix_sec_meta_to_mbuf(cq2_w1, sa_base, laddr,
+						     &loff, mbuf2, d_off, &f2,
+						     &ol_flags2);
+			mbuf23 = vsetq_lane_u64((uint64_t)mbuf2, mbuf23, 0);
+
+			mbuf3 = nix_sec_meta_to_mbuf(cq3_w1, sa_base, laddr,
+						     &loff, mbuf3, d_off, &f3,
+						     &ol_flags3);
+			mbuf23 = vsetq_lane_u64((uint64_t)mbuf3, mbuf23, 1);
+		}
+
 		if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
 			uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
 			uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
@@ -659,6 +917,26 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 			head += NIX_DESCS_PER_LOOP;
 			head &= qmask;
 		}
+
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			/* Flush when we don't have space for 4 meta */
+			if ((15 - loff) < 4) {
+				nix_sec_flush_meta(laddr, lmt_id + lnum, loff,
+						   aura_handle);
+				lnum++;
+				lnum &= BIT_ULL(ROC_LMT_LINES_PER_CORE_LOG2) -
+					1;
+				/* First pointer starts at 8B offset */
+				laddr = (uintptr_t)LMT_OFF(lbase, lnum, 8);
+				loff = 0;
+			}
+		}
+	}
+
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F && loff) {
+		nix_sec_flush_meta(laddr, lmt_id + lnum, loff, aura_handle);
+		if (flags & NIX_RX_VWQE_F)
+			plt_io_wmb();
 	}
 
 	if (flags & NIX_RX_VWQE_F)
@@ -681,16 +959,18 @@ cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
 #else
 
 static inline uint16_t
-cn10k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
-			   uint16_t pkts, const uint16_t flags,
-			   void *lookup_mem, void *tstamp)
+cn10k_nix_recv_pkts_vector(void *args, struct rte_mbuf **mbufs, uint16_t pkts,
+			   const uint16_t flags, void *lookup_mem,
+			   struct cnxk_timesync_info *tstamp,
+			   uintptr_t lmt_base)
 {
-	RTE_SET_USED(lookup_mem);
-	RTE_SET_USED(rx_queue);
-	RTE_SET_USED(rx_pkts);
+	RTE_SET_USED(args);
+	RTE_SET_USED(mbufs);
 	RTE_SET_USED(pkts);
 	RTE_SET_USED(flags);
+	RTE_SET_USED(lookup_mem);
 	RTE_SET_USED(tstamp);
+	RTE_SET_USED(lmt_base);
 
 	return 0;
 }
@@ -704,98 +984,268 @@ cn10k_nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
 #define MARK_F	  NIX_RX_OFFLOAD_MARK_UPDATE_F
 #define TS_F      NIX_RX_OFFLOAD_TSTAMP_F
 #define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
+#define R_SEC_F   NIX_RX_OFFLOAD_SECURITY_F
 
-/* [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
+/* [R_SEC_F] [RX_VLAN_F] [TS] [MARK] [CKSUM] [PTYPE] [RSS] */
 #define NIX_RX_FASTPATH_MODES						       \
-R(no_offload,			0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE)	       \
-R(rss,				0, 0, 0, 0, 0, 1, RSS_F)		       \
-R(ptype,			0, 0, 0, 0, 1, 0, PTYPE_F)		       \
-R(ptype_rss,			0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F)	       \
-R(cksum,			0, 0, 0, 1, 0, 0, CKSUM_F)		       \
-R(cksum_rss,			0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F)	       \
-R(cksum_ptype,			0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F)	       \
-R(cksum_ptype_rss,		0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)   \
-R(mark,				0, 0, 1, 0, 0, 0, MARK_F)		       \
-R(mark_rss,			0, 0, 1, 0, 0, 1, MARK_F | RSS_F)	       \
-R(mark_ptype,			0, 0, 1, 0, 1, 0, MARK_F | PTYPE_F)	       \
-R(mark_ptype_rss,		0, 0, 1, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)    \
-R(mark_cksum,			0, 0, 1, 1, 0, 0, MARK_F | CKSUM_F)	       \
-R(mark_cksum_rss,		0, 0, 1, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)    \
-R(mark_cksum_ptype,		0, 0, 1, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)  \
-R(mark_cksum_ptype_rss,		0, 0, 1, 1, 1, 1,			       \
-			MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts,				0, 1, 0, 0, 0, 0, TS_F)			       \
-R(ts_rss,			0, 1, 0, 0, 0, 1, TS_F | RSS_F)		       \
-R(ts_ptype,			0, 1, 0, 0, 1, 0, TS_F | PTYPE_F)	       \
-R(ts_ptype_rss,			0, 1, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)      \
-R(ts_cksum,			0, 1, 0, 1, 0, 0, TS_F | CKSUM_F)	       \
-R(ts_cksum_rss,			0, 1, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)      \
-R(ts_cksum_ptype,		0, 1, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)    \
-R(ts_cksum_ptype_rss,		0, 1, 0, 1, 1, 1,			       \
-			TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(ts_mark,			0, 1, 1, 0, 0, 0, TS_F | MARK_F)	       \
-R(ts_mark_rss,			0, 1, 1, 0, 0, 1, TS_F | MARK_F | RSS_F)       \
-R(ts_mark_ptype,		0, 1, 1, 0, 1, 0, TS_F | MARK_F | PTYPE_F)     \
-R(ts_mark_ptype_rss,		0, 1, 1, 0, 1, 1,			       \
-			TS_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(ts_mark_cksum,		0, 1, 1, 1, 0, 0, TS_F | MARK_F | CKSUM_F)     \
-R(ts_mark_cksum_rss,		0, 1, 1, 1, 0, 1,			       \
-			TS_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(ts_mark_cksum_ptype,		0, 1, 1, 1, 1, 0,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1,			       \
-			TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan,				1, 0, 0, 0, 0, 0, RX_VLAN_F)		       \
-R(vlan_rss,			1, 0, 0, 0, 0, 1, RX_VLAN_F | RSS_F)	       \
-R(vlan_ptype,			1, 0, 0, 0, 1, 0, RX_VLAN_F | PTYPE_F)	       \
-R(vlan_ptype_rss,		1, 0, 0, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum,			1, 0, 0, 1, 0, 0, RX_VLAN_F | CKSUM_F)	       \
-R(vlan_cksum_rss,		1, 0, 0, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype,		1, 0, 0, 1, 1, 0,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F)			       \
-R(vlan_cksum_ptype_rss,		1, 0, 0, 1, 1, 1,			       \
-			RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark,			1, 0, 1, 0, 0, 0, RX_VLAN_F | MARK_F)	       \
-R(vlan_mark_rss,		1, 0, 1, 0, 0, 1, RX_VLAN_F | MARK_F | RSS_F)  \
-R(vlan_mark_ptype,		1, 0, 1, 0, 1, 0, RX_VLAN_F | MARK_F | PTYPE_F)\
-R(vlan_mark_ptype_rss,		1, 0, 1, 0, 1, 1,			       \
-			RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
-R(vlan_mark_cksum,		1, 0, 1, 1, 0, 0, RX_VLAN_F | MARK_F | CKSUM_F)\
-R(vlan_mark_cksum_rss,		1, 0, 1, 1, 0, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
-R(vlan_mark_cksum_ptype,	1, 0, 1, 1, 1, 0,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1,			       \
-			RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts,			1, 1, 0, 0, 0, 0, RX_VLAN_F | TS_F)	       \
-R(vlan_ts_rss,			1, 1, 0, 0, 0, 1, RX_VLAN_F | TS_F | RSS_F)    \
-R(vlan_ts_ptype,		1, 1, 0, 0, 1, 0, RX_VLAN_F | TS_F | PTYPE_F)  \
-R(vlan_ts_ptype_rss,		1, 1, 0, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
-R(vlan_ts_cksum,		1, 1, 0, 1, 0, 0, RX_VLAN_F | TS_F | CKSUM_F)  \
-R(vlan_ts_cksum_rss,		1, 1, 0, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
-R(vlan_ts_cksum_ptype,		1, 1, 0, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
-R(vlan_ts_cksum_ptype_rss,	1, 1, 0, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark,			1, 1, 1, 0, 0, 0, RX_VLAN_F | TS_F | MARK_F)   \
-R(vlan_ts_mark_rss,		1, 1, 1, 0, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
-R(vlan_ts_mark_ptype,		1, 1, 1, 0, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
-R(vlan_ts_mark_ptype_rss,	1, 1, 1, 0, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
-R(vlan_ts_mark_cksum,		1, 1, 1, 1, 0, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
-R(vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 0, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
-R(vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 0,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)	       \
-R(vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1,			       \
-			RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
+R(no_offload,			0, 0, 0, 0, 0, 0, 0,			       \
+		NIX_RX_OFFLOAD_NONE)					       \
+R(rss,				0, 0, 0, 0, 0, 0, 1,			       \
+		RSS_F)							       \
+R(ptype,			0, 0, 0, 0, 0, 1, 0,			       \
+		PTYPE_F)						       \
+R(ptype_rss,			0, 0, 0, 0, 0, 1, 1,			       \
+		PTYPE_F | RSS_F)					       \
+R(cksum,			0, 0, 0, 0, 1, 0, 0,			       \
+		CKSUM_F)						       \
+R(cksum_rss,			0, 0, 0, 0, 1, 0, 1,			       \
+		CKSUM_F | RSS_F)					       \
+R(cksum_ptype,			0, 0, 0, 0, 1, 1, 0,			       \
+		CKSUM_F | PTYPE_F)					       \
+R(cksum_ptype_rss,		0, 0, 0, 0, 1, 1, 1,			       \
+		CKSUM_F | PTYPE_F | RSS_F)				       \
+R(mark,				0, 0, 0, 1, 0, 0, 0,			       \
+		MARK_F)							       \
+R(mark_rss,			0, 0, 0, 1, 0, 0, 1,			       \
+		MARK_F | RSS_F)						       \
+R(mark_ptype,			0, 0, 0, 1, 0, 1, 0,			       \
+		MARK_F | PTYPE_F)					       \
+R(mark_ptype_rss,		0, 0, 0, 1, 0, 1, 1,			       \
+		MARK_F | PTYPE_F | RSS_F)				       \
+R(mark_cksum,			0, 0, 0, 1, 1, 0, 0,			       \
+		MARK_F | CKSUM_F)					       \
+R(mark_cksum_rss,		0, 0, 0, 1, 1, 0, 1,			       \
+		MARK_F | CKSUM_F | RSS_F)				       \
+R(mark_cksum_ptype,		0, 0, 0, 1, 1, 1, 0,			       \
+		MARK_F | CKSUM_F | PTYPE_F)				       \
+R(mark_cksum_ptype_rss,		0, 0, 0, 1, 1, 1, 1,			       \
+		MARK_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts,				0, 0, 1, 0, 0, 0, 0,			       \
+		TS_F)							       \
+R(ts_rss,			0, 0, 1, 0, 0, 0, 1,			       \
+		TS_F | RSS_F)						       \
+R(ts_ptype,			0, 0, 1, 0, 0, 1, 0,			       \
+		TS_F | PTYPE_F)						       \
+R(ts_ptype_rss,			0, 0, 1, 0, 0, 1, 1,			       \
+		TS_F | PTYPE_F | RSS_F)					       \
+R(ts_cksum,			0, 0, 1, 0, 1, 0, 0,			       \
+		TS_F | CKSUM_F)						       \
+R(ts_cksum_rss,			0, 0, 1, 0, 1, 0, 1,			       \
+		TS_F | CKSUM_F | RSS_F)					       \
+R(ts_cksum_ptype,		0, 0, 1, 0, 1, 1, 0,			       \
+		TS_F | CKSUM_F | PTYPE_F)				       \
+R(ts_cksum_ptype_rss,		0, 0, 1, 0, 1, 1, 1,			       \
+		TS_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(ts_mark,			0, 0, 1, 1, 0, 0, 0,			       \
+		TS_F | MARK_F)						       \
+R(ts_mark_rss,			0, 0, 1, 1, 0, 0, 1,			       \
+		TS_F | MARK_F | RSS_F)					       \
+R(ts_mark_ptype,		0, 0, 1, 1, 0, 1, 0,			       \
+		TS_F | MARK_F | PTYPE_F)				       \
+R(ts_mark_ptype_rss,		0, 0, 1, 1, 0, 1, 1,			       \
+		TS_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(ts_mark_cksum,		0, 0, 1, 1, 1, 0, 0,			       \
+		TS_F | MARK_F | CKSUM_F)				       \
+R(ts_mark_cksum_rss,		0, 0, 1, 1, 1, 0, 1,			       \
+		TS_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(ts_mark_cksum_ptype,		0, 0, 1, 1, 1, 1, 0,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(ts_mark_cksum_ptype_rss,	0, 0, 1, 1, 1, 1, 1,			       \
+		TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan,				0, 1, 0, 0, 0, 0, 0,			       \
+		RX_VLAN_F)						       \
+R(vlan_rss,			0, 1, 0, 0, 0, 0, 1,			       \
+		RX_VLAN_F | RSS_F)					       \
+R(vlan_ptype,			0, 1, 0, 0, 0, 1, 0,			       \
+		RX_VLAN_F | PTYPE_F)					       \
+R(vlan_ptype_rss,		0, 1, 0, 0, 0, 1, 1,			       \
+		RX_VLAN_F | PTYPE_F | RSS_F)				       \
+R(vlan_cksum,			0, 1, 0, 0, 1, 0, 0,			       \
+		RX_VLAN_F | CKSUM_F)					       \
+R(vlan_cksum_rss,		0, 1, 0, 0, 1, 0, 1,			       \
+		RX_VLAN_F | CKSUM_F | RSS_F)				       \
+R(vlan_cksum_ptype,		0, 1, 0, 0, 1, 1, 0,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F)				       \
+R(vlan_cksum_ptype_rss,		0, 1, 0, 0, 1, 1, 1,			       \
+		RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark,			0, 1, 0, 1, 0, 0, 0,			       \
+		RX_VLAN_F | MARK_F)					       \
+R(vlan_mark_rss,		0, 1, 0, 1, 0, 0, 1,			       \
+		RX_VLAN_F | MARK_F | RSS_F)				       \
+R(vlan_mark_ptype,		0, 1, 0, 1, 0, 1, 0,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F)				       \
+R(vlan_mark_ptype_rss,		0, 1, 0, 1, 0, 1, 1,			       \
+		RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(vlan_mark_cksum,		0, 1, 0, 1, 1, 0, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F)				       \
+R(vlan_mark_cksum_rss,		0, 1, 0, 1, 1, 0, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(vlan_mark_cksum_ptype,	0, 1, 0, 1, 1, 1, 0,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_mark_cksum_ptype_rss,	0, 1, 0, 1, 1, 1, 1,			       \
+		RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts,			0, 1, 1, 0, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F)					       \
+R(vlan_ts_rss,			0, 1, 1, 0, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | RSS_F)				       \
+R(vlan_ts_ptype,		0, 1, 1, 0, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | PTYPE_F)				       \
+R(vlan_ts_ptype_rss,		0, 1, 1, 0, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | PTYPE_F | RSS_F)			       \
+R(vlan_ts_cksum,		0, 1, 1, 0, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F)				       \
+R(vlan_ts_cksum_rss,		0, 1, 1, 0, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | RSS_F)			       \
+R(vlan_ts_cksum_ptype,		0, 1, 1, 0, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(vlan_ts_cksum_ptype_rss,	0, 1, 1, 0, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark,			0, 1, 1, 1, 0, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F)				       \
+R(vlan_ts_mark_rss,		0, 1, 1, 1, 0, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | RSS_F)			       \
+R(vlan_ts_mark_ptype,		0, 1, 1, 1, 0, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F)			       \
+R(vlan_ts_mark_ptype_rss,	0, 1, 1, 1, 0, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(vlan_ts_mark_cksum,		0, 1, 1, 1, 1, 0, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F)			       \
+R(vlan_ts_mark_cksum_rss,	0, 1, 1, 1, 1, 0, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(vlan_ts_mark_cksum_ptype,	0, 1, 1, 1, 1, 1, 0,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(vlan_ts_mark_cksum_ptype_rss,	0, 1, 1, 1, 1, 1, 1,			       \
+		RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec,				1, 0, 0, 0, 0, 0, 0,			       \
+		R_SEC_F)						       \
+R(sec_rss,			1, 0, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RSS_F)					       \
+R(sec_ptype,			1, 0, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | PTYPE_F)					       \
+R(sec_ptype_rss,		1, 0, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | PTYPE_F | RSS_F)				       \
+R(sec_cksum,			1, 0, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | CKSUM_F)					       \
+R(sec_cksum_rss,		1, 0, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | CKSUM_F | RSS_F)				       \
+R(sec_cksum_ptype,		1, 0, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F)				       \
+R(sec_cksum_ptype_rss,		1, 0, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | CKSUM_F | PTYPE_F | RSS_F)			       \
+R(sec_mark,			1, 0, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | MARK_F)					       \
+R(sec_mark_rss,			1, 0, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | MARK_F | RSS_F)				       \
+R(sec_mark_ptype,		1, 0, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | MARK_F | PTYPE_F)				       \
+R(sec_mark_ptype_rss,		1, 0, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | MARK_F | PTYPE_F | RSS_F)			       \
+R(sec_mark_cksum,		1, 0, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F)				       \
+R(sec_mark_cksum_rss,		1, 0, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | RSS_F)			       \
+R(sec_mark_cksum_ptype,		1, 0, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F)			       \
+R(sec_mark_cksum_ptype_rss,	1, 0, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts,			1, 0, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | TS_F)						       \
+R(sec_ts_rss,			1, 0, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | TS_F | RSS_F)					       \
+R(sec_ts_ptype,			1, 0, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | TS_F | PTYPE_F)				       \
+R(sec_ts_ptype_rss,		1, 0, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | TS_F | PTYPE_F | RSS_F)			       \
+R(sec_ts_cksum,			1, 0, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F)				       \
+R(sec_ts_cksum_rss,		1, 0, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | RSS_F)			       \
+R(sec_ts_cksum_ptype,		1, 0, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F)			       \
+R(sec_ts_cksum_ptype_rss,	1, 0, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark,			1, 0, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F)				       \
+R(sec_ts_mark_rss,		1, 0, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | RSS_F)			       \
+R(sec_ts_mark_ptype,		1, 0, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F)			       \
+R(sec_ts_mark_ptype_rss,	1, 0, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_ts_mark_cksum,		1, 0, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F)			       \
+R(sec_ts_mark_cksum_rss,	1, 0, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_ts_mark_cksum_ptype,	1, 0, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)		       \
+R(sec_ts_mark_cksum_ptype_rss,	1, 0, 1, 1, 1, 1, 1,			       \
+		R_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan,			1, 1, 0, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F)					       \
+R(sec_vlan_rss,			1, 1, 0, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | RSS_F)				       \
+R(sec_vlan_ptype,		1, 1, 0, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F)				       \
+R(sec_vlan_ptype_rss,		1, 1, 0, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F)			       \
+R(sec_vlan_cksum,		1, 1, 0, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F)				       \
+R(sec_vlan_cksum_rss,		1, 1, 0, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F)			       \
+R(sec_vlan_cksum_ptype,		1, 1, 0, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_cksum_ptype_rss,	1, 1, 0, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_mark,		1, 1, 0, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F)				       \
+R(sec_vlan_mark_rss,		1, 1, 0, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | RSS_F)			       \
+R(sec_vlan_mark_ptype,		1, 1, 0, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F)			       \
+R(sec_vlan_mark_ptype_rss,	1, 1, 0, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_mark_cksum,		1, 1, 0, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F)			       \
+R(sec_vlan_mark_cksum_rss,	1, 1, 0, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_mark_cksum_ptype,	1, 1, 0, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F)	       \
+R(sec_vlan_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)      \
+R(sec_vlan_ts,			1, 1, 1, 0, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F)				       \
+R(sec_vlan_ts_rss,		1, 1, 1, 0, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | RSS_F)			       \
+R(sec_vlan_ts_ptype,		1, 1, 1, 0, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F)			       \
+R(sec_vlan_ts_ptype_rss,	1, 1, 1, 0, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | PTYPE_F | RSS_F)		       \
+R(sec_vlan_ts_cksum,		1, 1, 1, 0, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F)			       \
+R(sec_vlan_ts_cksum_rss,	1, 1, 1, 0, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | RSS_F)		       \
+R(sec_vlan_ts_cksum_ptype,	1, 1, 1, 0, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F)		       \
+R(sec_vlan_ts_cksum_ptype_rss,	1, 1, 1, 0, 1, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | CKSUM_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark,		1, 1, 1, 1, 0, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F)			       \
+R(sec_vlan_ts_mark_rss,		1, 1, 1, 1, 0, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | RSS_F)		       \
+R(sec_vlan_ts_mark_ptype,	1, 1, 1, 1, 0, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F)		       \
+R(sec_vlan_ts_mark_ptype_rss,	1, 1, 1, 1, 0, 1, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | PTYPE_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum,	1, 1, 1, 1, 1, 0, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F)		       \
+R(sec_vlan_ts_mark_cksum_rss,	1, 1, 1, 1, 1, 0, 1,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | RSS_F)	       \
+R(sec_vlan_ts_mark_cksum_ptype,	1, 1, 1, 1, 1, 1, 0,			       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F)       \
+R(sec_vlan_ts_mark_cksum_ptype_rss,	1, 1, 1, 1, 1, 1, 1,		       \
+		R_SEC_F | RX_VLAN_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_##name(          \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn10k_rx_mseg.c b/drivers/net/cnxk/cn10k_rx_mseg.c
index 3340771..e7c2321 100644
--- a/drivers/net/cnxk/cn10k_rx_mseg.c
+++ b/drivers/net/cnxk/cn10k_rx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_mseg_##name(     \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_rx_vec.c b/drivers/net/cnxk/cn10k_rx_vec.c
index 166735a..0ccc4df 100644
--- a/drivers/net/cnxk/cn10k_rx_vec.c
+++ b/drivers/net/cnxk/cn10k_rx_vec.c
@@ -5,14 +5,14 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)				       \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_recv_pkts_vec_##name(void *rx_queue,                 \
 					       struct rte_mbuf **rx_pkts,      \
 					       uint16_t pkts)                  \
 	{                                                                      \
 		return cn10k_nix_recv_pkts_vector(rx_queue, rx_pkts, pkts,     \
-						  (flags), NULL, NULL);        \
+						  (flags), NULL, NULL, 0);     \
 	}
 
 NIX_RX_FASTPATH_MODES
diff --git a/drivers/net/cnxk/cn10k_rx_vec_mseg.c b/drivers/net/cnxk/cn10k_rx_vec_mseg.c
index 1f44ddd..38e0ec3 100644
--- a/drivers/net/cnxk/cn10k_rx_vec_mseg.c
+++ b/drivers/net/cnxk/cn10k_rx_vec_mseg.c
@@ -5,13 +5,13 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_rx.h"
 
-#define R(name, f5, f4, f3, f2, f1, f0, flags)                                 \
+#define R(name, f6, f5, f4, f3, f2, f1, f0, flags)                             \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_recv_pkts_vec_mseg_##name( \
 		void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)      \
 	{                                                                      \
 		return cn10k_nix_recv_pkts_vector(                             \
 			rx_queue, rx_pkts, pkts, (flags) | NIX_RX_MULTI_SEG_F, \
-			NULL, NULL);                                           \
+			NULL, NULL, 0);                                        \
 	}
 
 NIX_RX_FASTPATH_MODES
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 8577a7b..c81a612 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -51,9 +51,6 @@
 
 #define NIX_NB_SEGS_TO_SEGDW(x) ((NIX_SEGDW_MAGIC >> ((x) << 2)) & 0xF)
 
-#define LMT_OFF(lmt_addr, lmt_num, offset)                                     \
-	(void *)((lmt_addr) + ((lmt_num) << ROC_LMT_LINE_SIZE_LOG2) + (offset))
-
 /* Function to determine no of tx subdesc required in case ext
  * sub desc is enabled.
  */
-- 
2.8.4
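
A closing note on the Rx flush logic above: each LMT line carries
one aura/count word followed by up to 15 meta pointers, and both
the scalar and vector loops flush to the NPA batch-free register
whenever the line cannot take the next group of pointers. A
compressed sketch of that bookkeeping, with the constants inlined
(illustrative only, not driver code):

#define META_SLOTS	15	/* pointer slots after the aura word */
#define BATCH		4	/* vector path frees 4 metas per loop */

static void
account_meta(uint64_t *line, uint8_t *loff, uint64_t meta,
	     void (*flush)(uint64_t *line, uint8_t loff))
{
	line[1 + (*loff)++] = meta;		/* slot 0 is the aura word */
	if (META_SLOTS - *loff < BATCH) {	/* no room for next group */
		flush(line, *loff);
		*loff = 0;			/* move on to the next LMT line */
	}
}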


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 21/28] net/cnxk: support Tx security offload on cn10k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (19 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 20/28] net/cnxk: support Rx security offload on cn10k Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 22/28] net/cnxk: support IPsec anti replay in cn9k Nithin Dabilpuram
                     ` (7 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram,
	Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev

Add support to create and submit CPT instructions on the Tx
path on CN10K.
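
On the application side, a packet is steered through this path by
attaching an inline outbound security session and setting the Tx
security flag before transmit. A minimal sketch using only the
public rte_security API (setup and error handling elided):

#include <rte_mbuf.h>
#include <rte_security.h>

/* 'ctx' comes from rte_eth_dev_get_sec_ctx() and 'sess' from
 * rte_security_session_create() during session setup.
 */
static inline void
mark_for_inline_tx(struct rte_security_ctx *ctx,
		   struct rte_security_session *sess,
		   struct rte_mbuf *m)
{
	/* Let the PMD associate the mbuf with its outbound SA. */
	rte_security_set_pkt_metadata(ctx, sess, m, NULL);
	m->ol_flags |= PKT_TX_SEC_OFFLOAD;
}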

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/event/cnxk/cn10k_eventdev.c          |  15 +-
 drivers/event/cnxk/cn10k_worker.h            |  74 +-
 drivers/event/cnxk/cn10k_worker_tx_enq.c     |   2 +-
 drivers/event/cnxk/cn10k_worker_tx_enq_seg.c |   2 +-
 drivers/net/cnxk/cn10k_tx.c                  |  31 +-
 drivers/net/cnxk/cn10k_tx.h                  | 981 +++++++++++++++++++++++----
 drivers/net/cnxk/cn10k_tx_mseg.c             |   2 +-
 drivers/net/cnxk/cn10k_tx_vec.c              |   2 +-
 drivers/net/cnxk/cn10k_tx_vec_mseg.c         |   2 +-
 9 files changed, 929 insertions(+), 182 deletions(-)
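
The heart of the change is visible in the worker hunks below: once
cn10k_nix_xmit_prepare() reports whether a packet needs security
processing, the prepared descriptor is steered either to the CPT
I/O address or to the usual NIX I/O address. Condensed from the
diff (names exactly as in the patch):

	if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
		pa = txq->cpt_io_addr | 3 << 4;
	else
		pa = txq->io_addr | ((segdw - 1) << 4);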

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9c0d84b..dec1653 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -17,7 +17,8 @@
 
 #define CN10K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops)                           \
 	(enq_op =                                                              \
-		 enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
+		 enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]     \
+			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]          \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]    \
 			[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]    \
@@ -380,17 +381,17 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
 
 	/* Tx modes */
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
+		sso_hws_tx_adptr_enq[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
 
 	const event_tx_adapter_enqueue
-		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                            \
-	[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
+		sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,
 			NIX_TX_FASTPATH_MODES
 #undef T
 		};
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b79bd90..1255662 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -423,7 +423,11 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,
 		    ((queue[0] ^ queue[1]) & (queue[2] ^ queue[3]))) {
 
 			for (j = 0; j < 4; j++) {
+				uint8_t lnum = 0, loff = 0, shft = 0;
 				struct rte_mbuf *m = mbufs[i + j];
+				uintptr_t laddr;
+				uint16_t segdw;
+				bool sec;
 
 				txq = (struct cn10k_eth_txq *)
 					txq_data[port[j]][queue[j]];
@@ -434,19 +438,35 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,
 				if (flags & NIX_TX_OFFLOAD_TSO_F)
 					cn10k_nix_xmit_prepare_tso(m, flags);
 
-				cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags,
-						       txq->lso_tun_fmt);
+				cn10k_nix_xmit_prepare(m, cmd, flags,
+						       txq->lso_tun_fmt, &sec);
+
+				laddr = lmt_addr;
+				/* Prepare CPT instruction and get the NIXTX
+				 * addr when it is for CPT on the same LMT line.
+				 */
+				if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+					cn10k_nix_prep_sec(m, cmd, &laddr,
+							   lmt_addr, &lnum,
+							   &loff, &shft,
+							   txq->sa_base, flags);
+
+				/* Move NIX desc to LMT/NIXTX area */
+				cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+
 				if (flags & NIX_TX_MULTI_SEG_F) {
-					const uint16_t segdw =
-						cn10k_nix_prepare_mseg(
-							m, (uint64_t *)lmt_addr,
-							flags);
-					pa = txq->io_addr | ((segdw - 1) << 4);
+					segdw = cn10k_nix_prepare_mseg(m,
+						(uint64_t *)laddr, flags);
 				} else {
-					pa = txq->io_addr |
-					     (cn10k_nix_tx_ext_subs(flags) + 1)
-						     << 4;
+					segdw = cn10k_nix_tx_ext_subs(flags) +
+						2;
 				}
+
+				if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+					pa = txq->cpt_io_addr | 3 << 4;
+				else
+					pa = txq->io_addr | ((segdw - 1) << 4);
+
 				if (!sched_type)
 					roc_sso_hws_head_wait(base +
 							      SSOW_LF_GWS_TAG);
@@ -469,15 +489,19 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 		       const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
 		       const uint32_t flags)
 {
+	uint8_t lnum = 0, loff = 0, shft = 0;
 	struct cn10k_eth_txq *txq;
+	uint16_t ref_cnt, segdw;
 	struct rte_mbuf *m;
 	uintptr_t lmt_addr;
-	uint16_t ref_cnt;
+	uintptr_t c_laddr;
 	uint16_t lmt_id;
 	uintptr_t pa;
+	bool sec;
 
 	lmt_addr = ws->lmt_base;
 	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	c_laddr = lmt_addr;
 
 	if (ev->event_type & RTE_EVENT_TYPE_VECTOR) {
 		struct rte_mbuf **mbufs = ev->vec->mbufs;
@@ -508,14 +532,28 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 	if (flags & NIX_TX_OFFLOAD_TSO_F)
 		cn10k_nix_xmit_prepare_tso(m, flags);
 
-	cn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags, txq->lso_tun_fmt);
+	cn10k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt, &sec);
+
+	/* Prepare CPT instruction and get nixtx addr if
+	 * it is for CPT on same lmtline.
+	 */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+		cn10k_nix_prep_sec(m, cmd, &lmt_addr, c_laddr, &lnum, &loff,
+				   &shft, txq->sa_base, flags);
+
+	/* Move NIX desc to LMT/NIXTX area */
+	cn10k_nix_xmit_mv_lmt_base(lmt_addr, cmd, flags);
 	if (flags & NIX_TX_MULTI_SEG_F) {
-		const uint16_t segdw =
-			cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+		segdw = cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);
+	} else {
+		segdw = cn10k_nix_tx_ext_subs(flags) + 2;
+	}
+
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+		pa = txq->cpt_io_addr | 3 << 4;
+	else
 		pa = txq->io_addr | ((segdw - 1) << 4);
-	} else {
-		pa = txq->io_addr | (cn10k_nix_tx_ext_subs(flags) + 1) << 4;
-	}
+
 	if (!ev->sched_type)
 		roc_sso_hws_head_wait(ws->tx_base + SSOW_LF_GWS_TAG);
 
@@ -531,7 +569,7 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
 	return 1;
 }
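
For reference, the single-event path above now forks the doorbell per packet. A hedged sketch of the selection (assumption: the size field at bits 6:4 encodes 16B units minus one, so one 64B CPT instruction encodes as 3, consistent with the half-line handling in cn10k_nix_sec_steorl() below; segdw is the NIX descriptor size in the same units):

	if ((flags & NIX_TX_OFFLOAD_SECURITY_F) && sec)
		pa = txq->cpt_io_addr | 3 << 4;		/* one 64B CPT inst */
	else
		pa = txq->io_addr | ((segdw - 1) << 4);	/* plain NIX desc */
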
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
 		void *port, struct rte_event ev[], uint16_t nb_events);        \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq.c b/drivers/event/cnxk/cn10k_worker_tx_enq.c
index f9968ac..f14c7fc 100644
--- a/drivers/event/cnxk/cn10k_worker_tx_enq.c
+++ b/drivers/event/cnxk/cn10k_worker_tx_enq.c
@@ -4,7 +4,7 @@
 
 #include "cn10k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
index a24fc42..2ea61e5 100644
--- a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
+++ b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c
@@ -4,7 +4,7 @@
 
 #include "cn10k_worker.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \
 		void *port, struct rte_event ev[], uint16_t nb_events)         \
 	{                                                                      \
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0e1276c..eb962ef 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(	       \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
@@ -24,12 +24,13 @@ NIX_TX_FASTPATH_MODES
 
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
-	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2])
+	     const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	/* [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+	/* [SEC] [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
 	eth_dev->tx_pkt_burst = tx_burst
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
 		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
@@ -43,33 +44,33 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 
-	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,
+	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
-	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2] = {
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
-	[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,
+	const eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2][2] = {
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
+	[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,
 
 		NIX_TX_FASTPATH_MODES
 #undef T
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index c81a612..52bb71d 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -6,6 +6,8 @@
 
 #include <rte_vect.h>
 
+#include <rte_eventdev.h>
+
 #define NIX_TX_OFFLOAD_NONE	      (0)
 #define NIX_TX_OFFLOAD_L3_L4_CSUM_F   BIT(0)
 #define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
@@ -57,12 +59,22 @@
 static __rte_always_inline int
 cn10k_nix_tx_ext_subs(const uint16_t flags)
 {
-	return (flags & NIX_TX_OFFLOAD_TSTAMP_F)
-		       ? 2
-		       : ((flags &
-			   (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F))
-				  ? 1
-				  : 0);
+	return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ?
+			     2 :
+			     ((flags &
+			 (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?
+				      1 :
+				      0);
+}
+
+static __rte_always_inline uint8_t
+cn10k_nix_tx_dwords(const uint16_t flags, const uint8_t segdw)
+{
+	if (!(flags & NIX_TX_MULTI_SEG_F))
+		return cn10k_nix_tx_ext_subs(flags) + 2;
+
+	/* Everything is already accounted for in segdw */
+	return segdw;
 }
 
 static __rte_always_inline uint8_t
@@ -144,6 +156,34 @@ cn10k_nix_tx_steor_vec_data(const uint16_t flags)
 	return data;
 }
 
+static __rte_always_inline uint64_t
+cn10k_cpt_tx_steor_data(void)
+{
+	/* We have two CPT instructions per LMTLine */
+	const uint64_t dw_m1 = ROC_CN10K_TWO_CPT_INST_DW_M1;
+	uint64_t data;
+
+	/* This will be moved to addr area */
+	data = dw_m1 << 16;
+	data |= dw_m1 << 19;
+	data |= dw_m1 << 22;
+	data |= dw_m1 << 25;
+	data |= dw_m1 << 28;
+	data |= dw_m1 << 31;
+	data |= dw_m1 << 34;
+	data |= dw_m1 << 37;
+	data |= dw_m1 << 40;
+	data |= dw_m1 << 43;
+	data |= dw_m1 << 46;
+	data |= dw_m1 << 49;
+	data |= dw_m1 << 52;
+	data |= dw_m1 << 55;
+	data |= dw_m1 << 58;
+	data |= dw_m1 << 61;
+
+	return data;
+}
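
The unrolled helper above keeps everything a compile-time constant; a loop-form sketch of the same value, one 3-bit "dwords minus one" field per LMT line for all 16 lines, starting at bit 16:

	static inline uint64_t
	cpt_tx_steor_data_sketch(void)	/* illustration, not driver code */
	{
		const uint64_t dw_m1 = ROC_CN10K_TWO_CPT_INST_DW_M1;
		uint64_t data = 0;
		unsigned int i;

		for (i = 0; i < 16; i++)
			data |= dw_m1 << (16 + i * 3);
		return data;
	}
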
+
 static __rte_always_inline void
 cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,
 		      const uint16_t flags)
@@ -165,6 +205,236 @@ cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,
 }
 
 static __rte_always_inline void
+cn10k_nix_sec_steorl(uintptr_t io_addr, uint32_t lmt_id, uint8_t lnum,
+		     uint8_t loff, uint8_t shft)
+{
+	uint64_t data;
+	uintptr_t pa;
+
+	/* Check if there is any CPT instruction to submit */
+	if (!lnum && !loff)
+		return;
+
+	data = cn10k_cpt_tx_steor_data();
+	/* Update LMT line usage for a partially filled end line */
+	if (loff) {
+		data &= ~(0x7ULL << shft);
+		/* Update it to half full, i.e., 64B */
+		data |= (0x3UL << shft);
+	}
+
+	pa = io_addr | ((data >> 16) & 0x7) << 4;
+	data &= ~(0x7ULL << 16);
+	/* Update lines - 1 that contain valid data */
+	data |= ((uint64_t)(lnum + loff - 1)) << 12;
+	data |= lmt_id;
+
+	/* STEOR */
+	roc_lmt_submit_steorl(data, pa);
+}
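
A worked trace of the helper above (sketch only; io_addr, pa and lmt_id as in the function): assume a burst queued three CPT instructions, so cn10k_nix_prep_sec() left lnum = 1, loff = 1 and advanced shft from 16 to 19.

	data = cn10k_cpt_tx_steor_data();
	data &= ~(0x7ULL << 19);	/* line 1 is only half used...	   */
	data |= (0x3ULL << 19);		/* ...one 64B instruction	   */
	pa = io_addr | ((data >> 16) & 0x7) << 4;
	data &= ~(0x7ULL << 16);
	data |= ((uint64_t)(1 + 1 - 1)) << 12;	/* lines - 1 = 1	   */
	data |= lmt_id;			/* one steorl submits all 3 insts  */
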
+
+#if defined(RTE_ARCH_ARM64)
+static __rte_always_inline void
+cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
+		       uintptr_t *nixtx_addr, uintptr_t lbase, uint8_t *lnum,
+		       uint8_t *loff, uint8_t *shft, uint64_t sa_base,
+		       const uint16_t flags)
+{
+	struct cn10k_sec_sess_priv sess_priv;
+	uint32_t pkt_len, dlen_adj, rlen;
+	uint64x2_t cmd01, cmd23;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t *laddr;
+	uint8_t l2_len;
+	uint16_t tag;
+	uint64_t sa;
+
+	sess_priv.u64 = *rte_security_dynfield(m);
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = vgetq_lane_u8(*cmd0, 8);
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = vgetq_lane_u64(*cmd1, 1);
+	pkt_len = vgetq_lane_u16(*cmd0, 0);
+
+	/* Calculate dlen adj */
+	dlen_adj = pkt_len - l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Update send descriptors. Security is single segment only */
+	*cmd0 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd0, 0);
+	*cmd1 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd1, 0);
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	/* Return nixtx addr */
+	*nixtx_addr = (nixtx + 16);
+
+	/* DLEN passed is excluding L2HDR */
+	pkt_len -= l2_len;
+	tag = sa_base & 0xFFFFUL;
+	sa_base &= ~0xFFFFUL;
+	sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
+	ucode_cmd[0] =
+		(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1 */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len;
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Move to our line */
+	laddr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(laddr, cmd01);
+	vst1q_u64((laddr + 2), cmd23);
+
+	*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;
+	*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);
+
+	/* Move to next line for every other CPT inst */
+	*loff = !(*loff);
+	*lnum = *lnum + (*loff ? 0 : 1);
+	*shft = *shft + (*loff ? 0 : 3);
+}
+
+static __rte_always_inline void
+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+		   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,
+		   uint64_t sa_base, const uint16_t flags)
+{
+	struct cn10k_sec_sess_priv sess_priv;
+	uint32_t pkt_len, dlen_adj, rlen;
+	struct nix_send_hdr_s *send_hdr;
+	uint64x2_t cmd01, cmd23;
+	union nix_send_sg_s *sg;
+	uintptr_t dptr, nixtx;
+	uint64_t ucode_cmd[4];
+	uint64_t *laddr;
+	uint8_t l2_len;
+	uint16_t tag;
+	uint64_t sa;
+
+	/* Move to our line from base */
+	sess_priv.u64 = *rte_security_dynfield(m);
+	send_hdr = (struct nix_send_hdr_s *)cmd;
+	if (flags & NIX_TX_NEED_EXT_HDR)
+		sg = (union nix_send_sg_s *)&cmd[4];
+	else
+		sg = (union nix_send_sg_s *)&cmd[2];
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		l2_len = cmd[1] & 0xFF;
+	else
+		l2_len = m->l2_len;
+
+	/* Retrieve DPTR */
+	dptr = *(uint64_t *)(sg + 1);
+	pkt_len = send_hdr->w0.total;
+
+	/* Calculate dlen adj */
+	dlen_adj = pkt_len - l2_len;
+	rlen = (dlen_adj + sess_priv.roundup_len) +
+	       (sess_priv.roundup_byte - 1);
+	rlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);
+	rlen += sess_priv.partial_len;
+	dlen_adj = rlen - dlen_adj;
+
+	/* Update send descriptors. Security is single segment only */
+	send_hdr->w0.total = pkt_len + dlen_adj;
+	sg->seg1_size = pkt_len + dlen_adj;
+
+	/* Get area where NIX descriptor needs to be stored */
+	nixtx = dptr + pkt_len + dlen_adj;
+	nixtx += BIT_ULL(7);
+	nixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);
+
+	/* Return nixtx addr */
+	*nixtx_addr = (nixtx + 16);
+
+	/* DLEN passed is excluding L2HDR */
+	pkt_len -= l2_len;
+	tag = sa_base & 0xFFFFUL;
+	sa_base &= ~0xFFFFUL;
+	sa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);
+	ucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);
+	ucode_cmd[0] =
+		(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);
+
+	/* CPT Word 0 and Word 1. Assume no multi-seg support */
+	cmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));
+	/* CPT_RES_S is 16B above NIXTX */
+	cmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);
+
+	/* CPT word 2 and 3 */
+	cmd23 = vdupq_n_u64(0);
+	cmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |
+				CNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);
+	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
+
+	dptr += l2_len;
+	ucode_cmd[1] = dptr;
+	ucode_cmd[2] = dptr;
+
+	/* Move to our line */
+	laddr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);
+
+	/* Write CPT instruction to lmt line */
+	vst1q_u64(laddr, cmd01);
+	vst1q_u64((laddr + 2), cmd23);
+
+	*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;
+	*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);
+
+	/* Move to next line for every other CPT inst */
+	*loff = !(*loff);
+	*lnum = *lnum + (*loff ? 0 : 1);
+	*shft = *shft + (*loff ? 0 : 3);
+}
+
+#else
+
+static __rte_always_inline void
+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
+		   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,
+		   uint64_t sa_base, const uint16_t flags)
+{
+	RTE_SET_USED(m);
+	RTE_SET_USED(cmd);
+	RTE_SET_USED(nixtx_addr);
+	RTE_SET_USED(lbase);
+	RTE_SET_USED(lnum);
+	RTE_SET_USED(loff);
+	RTE_SET_USED(shft);
+	RTE_SET_USED(sa_base);
+	RTE_SET_USED(flags);
+}
+#endif
+
+static __rte_always_inline void
 cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 {
 	uint64_t mask, ol_flags = m->ol_flags;
@@ -217,8 +487,8 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
 }
 
 static __rte_always_inline void
-cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
-		       const uint16_t flags, const uint64_t lso_tun_fmt)
+cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
+		       const uint64_t lso_tun_fmt, bool *sec)
 {
 	struct nix_send_ext_s *send_hdr_ext;
 	struct nix_send_hdr_s *send_hdr;
@@ -237,16 +507,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		sg = (union nix_send_sg_s *)(cmd + 2);
 	}
 
-	if (flags & NIX_TX_NEED_SEND_HDR_W1) {
+	if (flags & (NIX_TX_NEED_SEND_HDR_W1 | NIX_TX_OFFLOAD_SECURITY_F)) {
 		ol_flags = m->ol_flags;
 		w1.u = 0;
 	}
 
-	if (!(flags & NIX_TX_MULTI_SEG_F)) {
+	if (!(flags & NIX_TX_MULTI_SEG_F))
 		send_hdr->w0.total = m->data_len;
-		send_hdr->w0.aura =
-			roc_npa_aura_handle_to_aura(m->pool->pool_id);
-	}
+	else
+		send_hdr->w0.total = m->pkt_len;
+	send_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
 
 	/*
 	 * L3type:  2 => IPV4
@@ -376,7 +646,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		send_hdr->w1.u = w1.u;
 
 	if (!(flags & NIX_TX_MULTI_SEG_F)) {
-		sg->seg1_size = m->data_len;
+		sg->seg1_size = send_hdr->w0.total;
 		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
 
 		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
@@ -389,17 +659,38 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
 			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	} else {
+		sg->seg1_size = m->data_len;
+		*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);
+
+		/* NOFF is handled later for multi-seg */
 	}
 
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
+		*sec = !!(ol_flags & PKT_TX_SEC_OFFLOAD);
+}
+
+static __rte_always_inline void
+cn10k_nix_xmit_mv_lmt_base(uintptr_t lmt_addr, uint64_t *cmd,
+			   const uint16_t flags)
+{
+	struct nix_send_ext_s *send_hdr_ext;
+	union nix_send_sg_s *sg;
+
 	/* With minimal offloads, 'cmd' being local could be optimized out to
 	 * registers. In other cases, 'cmd' will be in stack. Intent is
 	 * 'cmd' stores content from txq->cmd which is copied only once.
 	 */
-	*((struct nix_send_hdr_s *)lmt_addr) = *send_hdr;
+	*((struct nix_send_hdr_s *)lmt_addr) = *(struct nix_send_hdr_s *)cmd;
 	lmt_addr += 16;
 	if (flags & NIX_TX_NEED_EXT_HDR) {
+		send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
 		*((struct nix_send_ext_s *)lmt_addr) = *send_hdr_ext;
 		lmt_addr += 16;
+
+		sg = (union nix_send_sg_s *)(cmd + 4);
+	} else {
+		sg = (union nix_send_sg_s *)(cmd + 2);
 	}
 	/* In case of multi-seg, sg template is stored here */
 	*((union nix_send_sg_s *)lmt_addr) = *sg;
@@ -414,7 +705,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,
 	if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
 		const uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);
 		struct nix_send_ext_s *send_hdr_ext =
-					(struct nix_send_ext_s *)lmt_addr + 16;
+			(struct nix_send_ext_s *)lmt_addr + 16;
 		uint64_t *lmt = (uint64_t *)lmt_addr;
 		uint16_t off = (no_segdw - 1) << 1;
 		struct nix_send_mem_s *send_mem;
@@ -457,8 +748,6 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 	uint8_t off, i;
 
 	send_hdr = (struct nix_send_hdr_s *)cmd;
-	send_hdr->w0.total = m->pkt_len;
-	send_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
 
 	if (flags & NIX_TX_NEED_EXT_HDR)
 		off = 2;
@@ -466,13 +755,27 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		off = 0;
 
 	sg = (union nix_send_sg_s *)&cmd[2 + off];
-	/* Clear sg->u header before use */
-	sg->u &= 0xFC00000000000000;
+
+	/* Start from second segment, first segment is already there */
+	i = 1;
 	sg_u = sg->u;
-	slist = &cmd[3 + off];
+	nb_segs = m->nb_segs - 1;
+	m_next = m->next;
+	slist = &cmd[3 + off + 1];
 
-	i = 0;
-	nb_segs = m->nb_segs;
+	/* Set invert df if buffer is not to be freed by H/W */
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+		sg_u |= (cnxk_nix_prefree_seg(m) << 55);
+
+		/* Mark mempool object as "put" since it is freed by NIX */
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+	if (!(sg_u & (1ULL << 55)))
+		__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	rte_io_wmb();
+#endif
+	m = m_next;
+	if (!m)
+		goto done;
 
 	/* Fill mbuf segments */
 	do {
@@ -504,6 +807,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 		m = m_next;
 	} while (nb_segs);
 
+done:
 	sg->u = sg_u;
 	sg->segs = i;
 	segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
@@ -522,10 +826,17 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 {
 	struct cn10k_eth_txq *txq = tx_queue;
 	const rte_iova_t io_addr = txq->io_addr;
-	uintptr_t pa, lmt_addr = txq->lmt_base;
+	uint8_t lnum, c_lnum, c_shft, c_loff;
+	uintptr_t pa, lbase = txq->lmt_base;
 	uint16_t lmt_id, burst, left, i;
+	uintptr_t c_lbase = lbase;
+	rte_iova_t c_io_addr;
 	uint64_t lso_tun_fmt;
+	uint16_t c_lmt_id;
+	uint64_t sa_base;
+	uintptr_t laddr;
 	uint64_t data;
+	bool sec;
 
 	if (!(flags & NIX_TX_VWQE_F)) {
 		NIX_XMIT_FC_OR_RETURN(txq, pkts);
@@ -540,10 +851,24 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		lso_tun_fmt = txq->lso_tun_fmt;
 
 	/* Get LMT base address and LMT ID as lcore id */
-	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	burst = left > 32 ? 32 : left;
+
+	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		c_lnum = 0;
+		c_loff = 0;
+		c_shft = 16;
+	}
+
 	for (i = 0; i < burst; i++) {
 		/* Perform header writes for TSO, barrier at
 		 * lmt steorl will suffice.
@@ -551,16 +876,39 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		if (flags & NIX_TX_OFFLOAD_TSO_F)
 			cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
 
-		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,
-				       lso_tun_fmt);
-		cn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],
+		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,
+				       &sec);
+
+		laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
+
+		/* Prepare CPT instruction and get nixtx addr */
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+			cn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,
+					   &c_lnum, &c_loff, &c_shft, sa_base,
+					   flags);
+
+		/* Move NIX desc to LMT/NIXTX area */
+		cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+		cn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],
 					      tx_pkts[i]->ol_flags, 4, flags);
-		lmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);
+		if (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec)
+			lnum++;
 	}
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+	tx_pkts += burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		/* Reduce pkts to be sent to CPT */
+		burst -= ((c_lnum << 1) + c_loff);
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+	}
+
 	/* Trigger LMTST */
 	if (burst > 16) {
 		data = cn10k_nix_tx_steor_data(flags);
@@ -591,16 +939,9 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,
 		roc_lmt_submit_steorl(data, pa);
 	}
 
-	left -= burst;
 	rte_io_wmb();
-	if (left) {
-		/* Start processing another burst */
-		tx_pkts += burst;
-		/* Reset lmt base addr */
-		lmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		lmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));
+	if (left)
 		goto again;
-	}
 
 	return pkts;
 }
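
On the accounting just above: every full CPT LMT line carries two instructions (one packet each) and a trailing half line one more, so (c_lnum << 1) + c_loff is exactly the number of packets diverted to CPT; those reach the wire via the CPT doorbell in cn10k_nix_sec_steorl(), not via this function's NIX LMTST. A small example under that reading:

	/* burst = 32, of which 5 mbufs had PKT_TX_SEC_OFFLOAD set:
	 * prep_sec leaves c_lnum = 2, c_loff = 1, hence
	 *   burst -= (2 << 1) + 1;   -> 27 packets for the NIX doorbell
	 * while cn10k_nix_sec_steorl() rings the CPT doorbell for the 5.
	 */
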
@@ -611,13 +952,20 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 			 const uint16_t flags)
 {
 	struct cn10k_eth_txq *txq = tx_queue;
-	uintptr_t pa0, pa1, lmt_addr = txq->lmt_base;
+	uintptr_t pa0, pa1, lbase = txq->lmt_base;
 	const rte_iova_t io_addr = txq->io_addr;
 	uint16_t segdw, lmt_id, burst, left, i;
+	uint8_t lnum, c_lnum, c_loff;
+	uintptr_t c_lbase = lbase;
 	uint64_t data0, data1;
+	rte_iova_t c_io_addr;
 	uint64_t lso_tun_fmt;
+	uint8_t shft, c_shft;
 	__uint128_t data128;
-	uint16_t shft;
+	uint16_t c_lmt_id;
+	uint64_t sa_base;
+	uintptr_t laddr;
+	bool sec;
 
 	NIX_XMIT_FC_OR_RETURN(txq, pkts);
 
@@ -630,12 +978,26 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		lso_tun_fmt = txq->lso_tun_fmt;
 
 	/* Get LMT base address and LMT ID as lcore id */
-	ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+	ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	burst = left > 32 ? 32 : left;
 	shft = 16;
 	data128 = 0;
+
+	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		c_lnum = 0;
+		c_loff = 0;
+		c_shft = 16;
+	}
+
 	for (i = 0; i < burst; i++) {
 		/* Perform header writes for TSO, barrier at
 		 * lmt steorl will suffice.
@@ -643,22 +1005,47 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (flags & NIX_TX_OFFLOAD_TSO_F)
 			cn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);
 
-		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,
-				       lso_tun_fmt);
+		cn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,
+				       &sec);
+
+		laddr = (uintptr_t)LMT_OFF(lbase, lnum, 0);
+
+		/* Prepare CPT instruction and get nixtx addr */
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)
+			cn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,
+					   &c_lnum, &c_loff, &c_shft, sa_base,
+					   flags);
+
+		/* Move NIX desc to LMT/NIXTX area */
+		cn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);
+
 		/* Store sg list directly on lmt line */
-		segdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)lmt_addr,
+		segdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)laddr,
 					       flags);
-		cn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],
+		cn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],
 					      tx_pkts[i]->ol_flags, segdw,
 					      flags);
-		lmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		data128 |= (((__uint128_t)(segdw - 1)) << shft);
-		shft += 3;
+		if (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec) {
+			lnum++;
+			data128 |= (((__uint128_t)(segdw - 1)) << shft);
+			shft += 3;
+		}
 	}
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+	tx_pkts += burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		/* Reduce pkts to be sent to CPT */
+		burst -= ((c_lnum << 1) + c_loff);
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+	}
+
 	data0 = (uint64_t)data128;
 	data1 = (uint64_t)(data128 >> 64);
 	/* Make data0 similar to data1 */
@@ -695,16 +1082,9 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
 		roc_lmt_submit_steorl(data0, pa0);
 	}
 
-	left -= burst;
 	rte_io_wmb();
-	if (left) {
-		/* Start processing another burst */
-		tx_pkts += burst;
-		/* Reset lmt base addr */
-		lmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);
-		lmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));
+	if (left)
 		goto again;
-	}
 
 	return pkts;
 }
@@ -989,6 +1369,90 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,
 	return lmt_used;
 }
 
+static __rte_always_inline void
+cn10k_nix_lmt_next(uint8_t dw, uintptr_t laddr, uint8_t *lnum, uint8_t *loff,
+		   uint8_t *shift, __uint128_t *data128, uintptr_t *next)
+{
+	/* Go to next line if we are out of space */
+	if ((*loff + (dw << 4)) > 128) {
+		*data128 = *data128 |
+			   (((__uint128_t)((*loff >> 4) - 1)) << *shift);
+		*shift = *shift + 3;
+		*loff = 0;
+		*lnum = *lnum + 1;
+	}
+
+	*next = (uintptr_t)LMT_OFF(laddr, *lnum, *loff);
+	*loff = *loff + (dw << 4);
+}
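
The helper above packs variable-size descriptors into 128B LMT lines for the vector security path; a short worked trace assuming 4-dword (64B) descriptors on a fresh line (lnum = N, loff = 0):

	/* desc 1: loff + (4 << 4) = 64  <= 128 -> line N, off 0,  loff = 64
	 * desc 2: loff + (4 << 4) = 128 <= 128 -> line N, off 64, loff = 128
	 * desc 3: 128 + 64 > 128 -> record (loff >> 4) - 1 = 7 into data128
	 *         at 'shift', then lnum = N + 1, loff = 0 and the
	 *         descriptor starts the fresh line.
	 */
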
+
+static __rte_always_inline void
+cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
+		     uint64x2_t cmd0, uint64x2_t cmd1, uint64x2_t cmd2,
+		     uint64x2_t cmd3, const uint16_t flags)
+{
+	uint8_t off;
+
+	/* Handle no fast free when security is enabled without mseg */
+	if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
+	    (flags & NIX_TX_OFFLOAD_SECURITY_F) &&
+	    !(flags & NIX_TX_MULTI_SEG_F)) {
+		union nix_send_sg_s sg;
+
+		sg.u = vgetq_lane_u64(cmd1, 0);
+		sg.u |= (cnxk_nix_prefree_seg(mbuf) << 55);
+		cmd1 = vsetq_lane_u64(sg.u, cmd1, 0);
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+		sg.u = vgetq_lane_u64(cmd1, 0);
+		if (!(sg.u & (1ULL << 55)))
+			__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1,
+						0);
+		rte_io_wmb();
+#endif
+	}
+	if (flags & NIX_TX_MULTI_SEG_F) {
+		if ((flags & NIX_TX_NEED_EXT_HDR) &&
+		    (flags & NIX_TX_OFFLOAD_TSTAMP_F)) {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+			off = segdw - 4;
+			off <<= 4;
+			vst1q_u64(LMT_OFF(laddr, 0, 48 + off), cmd3);
+		} else if (flags & NIX_TX_NEED_EXT_HDR) {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+		} else {
+			cn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 32),
+						   &cmd0, &cmd1, segdw, flags);
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);
+		}
+	} else if (flags & NIX_TX_NEED_EXT_HDR) {
+		/* Store the prepared send desc to LMT lines */
+		if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+			vst1q_u64(LMT_OFF(laddr, 0, 48), cmd3);
+		} else {
+			vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+			vst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);
+			vst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);
+		}
+	} else {
+		/* Store the prepared send desc to LMT lines */
+		vst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);
+		vst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);
+	}
+}
+
 static __rte_always_inline uint16_t
 cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			   uint16_t pkts, uint64_t *cmd, uintptr_t base,
@@ -998,10 +1462,10 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
 	uint64x2_t cmd0[NIX_DESCS_PER_LOOP], cmd1[NIX_DESCS_PER_LOOP],
 		cmd2[NIX_DESCS_PER_LOOP], cmd3[NIX_DESCS_PER_LOOP];
+	uint16_t left, scalar, burst, i, lmt_id, c_lmt_id;
 	uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3, pa;
 	uint64x2_t senddesc01_w0, senddesc23_w0;
 	uint64x2_t senddesc01_w1, senddesc23_w1;
-	uint16_t left, scalar, burst, i, lmt_id;
 	uint64x2_t sendext01_w0, sendext23_w0;
 	uint64x2_t sendext01_w1, sendext23_w1;
 	uint64x2_t sendmem01_w0, sendmem23_w0;
@@ -1010,12 +1474,16 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint64x2_t sgdesc01_w0, sgdesc23_w0;
 	uint64x2_t sgdesc01_w1, sgdesc23_w1;
 	struct cn10k_eth_txq *txq = tx_queue;
-	uintptr_t laddr = txq->lmt_base;
 	rte_iova_t io_addr = txq->io_addr;
+	uintptr_t laddr = txq->lmt_base;
+	uint8_t c_lnum, c_shft, c_loff;
 	uint64x2_t ltypes01, ltypes23;
 	uint64x2_t xtmp128, ytmp128;
 	uint64x2_t xmask01, xmask23;
-	uint8_t lnum, shift;
+	uintptr_t c_laddr = laddr;
+	uint8_t lnum, shift, loff;
+	rte_iova_t c_io_addr;
+	uint64_t sa_base;
 	union wdata {
 		__uint128_t data128;
 		uint64_t data[2];
@@ -1061,19 +1529,36 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 	/* Get LMT base address and LMT ID as lcore id */
 	ROC_LMT_BASE_ID_GET(laddr, lmt_id);
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		ROC_LMT_CPT_BASE_ID_GET(c_laddr, c_lmt_id);
+		c_io_addr = txq->cpt_io_addr;
+		sa_base = txq->sa_base;
+	}
+
 	left = pkts;
 again:
 	/* Number of packets to prepare depends on offloads enabled. */
 	burst = left > cn10k_nix_pkts_per_vec_brst(flags) ?
 			      cn10k_nix_pkts_per_vec_brst(flags) :
 			      left;
-	if (flags & NIX_TX_MULTI_SEG_F) {
+	if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)) {
 		wd.data128 = 0;
 		shift = 16;
 	}
 	lnum = 0;
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		loff = 0;
+		c_loff = 0;
+		c_lnum = 0;
+		c_shft = 16;
+	}
 
 	for (i = 0; i < burst; i += NIX_DESCS_PER_LOOP) {
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F && c_lnum + 2 > 16) {
+			burst = i;
+			break;
+		}
+
 		if (flags & NIX_TX_MULTI_SEG_F) {
 			uint8_t j;
 
@@ -1833,7 +2318,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
-		    !(flags & NIX_TX_MULTI_SEG_F)) {
+		    !(flags & NIX_TX_MULTI_SEG_F) &&
+		    !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
 			/* Set don't free bit if reference count > 1 */
 			xmask01 = vdupq_n_u64(0);
 			xmask23 = xmask01;
@@ -1873,7 +2359,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 					(void **)&mbuf3, 1, 0);
 			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
 			senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-		} else if (!(flags & NIX_TX_MULTI_SEG_F)) {
+		} else if (!(flags & NIX_TX_MULTI_SEG_F) &&
+			   !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {
 			/* Move mbufs to iova */
 			mbuf0 = (uint64_t *)tx_pkts[0];
 			mbuf1 = (uint64_t *)tx_pkts[1];
@@ -1918,7 +2405,84 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 			cmd2[3] = vzip2q_u64(sendext23_w0, sendext23_w1);
 		}
 
-		if (flags & NIX_TX_MULTI_SEG_F) {
+		if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+			const uint64x2_t olf = {PKT_TX_SEC_OFFLOAD,
+						PKT_TX_SEC_OFFLOAD};
+			uintptr_t next;
+			uint8_t dw;
+
+			/* Extract ol_flags. */
+			xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
+			ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
+
+			xtmp128 = vtstq_u64(olf, xtmp128);
+			ytmp128 = vtstq_u64(olf, ytmp128);
+
+			/* Process mbuf0 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[0]);
+			if (vgetq_lane_u64(xtmp128, 0))
+				cn10k_nix_prep_sec_vec(tx_pkts[0], &cmd0[0],
+						       &cmd1[0], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf0 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[0], segdw[0], next,
+					     cmd0[0], cmd1[0], cmd2[0], cmd3[0],
+					     flags);
+
+			/* Process mbuf1 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[1]);
+			if (vgetq_lane_u64(xtmp128, 1))
+				cn10k_nix_prep_sec_vec(tx_pkts[1], &cmd0[1],
+						       &cmd1[1], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf1 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[1], segdw[1], next,
+					     cmd0[1], cmd1[1], cmd2[1], cmd3[1],
+					     flags);
+
+			/* Process mbuf2 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[2]);
+			if (vgetq_lane_u64(ytmp128, 0))
+				cn10k_nix_prep_sec_vec(tx_pkts[2], &cmd0[2],
+						       &cmd1[2], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf2 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[2], segdw[2], next,
+					     cmd0[2], cmd1[2], cmd2[2], cmd3[2],
+					     flags);
+
+			/* Process mbuf3 */
+			dw = cn10k_nix_tx_dwords(flags, segdw[3]);
+			if (vgetq_lane_u64(ytmp128, 1))
+				cn10k_nix_prep_sec_vec(tx_pkts[3], &cmd0[3],
+						       &cmd1[3], &next, c_laddr,
+						       &c_lnum, &c_loff,
+						       &c_shft, sa_base, flags);
+			else
+				cn10k_nix_lmt_next(dw, laddr, &lnum, &loff,
+						   &shift, &wd.data128, &next);
+
+			/* Store mbuf3 to LMTLINE/CPT NIXTX area */
+			cn10k_nix_xmit_store(tx_pkts[3], segdw[3], next,
+					     cmd0[3], cmd1[3], cmd2[3], cmd3[3],
+					     flags);
+
+		} else if (flags & NIX_TX_MULTI_SEG_F) {
 			uint8_t j;
 
 			segdw[4] = 8;
@@ -1982,21 +2546,35 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
 	}
 
-	if (flags & NIX_TX_MULTI_SEG_F)
+	/* Roundup lnum to last line if it is partial */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F) {
+		lnum = lnum + !!loff;
+		wd.data128 = wd.data128 |
+			(((__uint128_t)(((loff >> 4) - 1) & 0x7) << shift));
+	}
+
+	if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 		wd.data[0] >>= 16;
 
 	if (flags & NIX_TX_VWQE_F)
 		roc_sso_hws_head_wait(base);
 
+	left -= burst;
+
+	/* Submit CPT instructions if any */
+	if (flags & NIX_TX_OFFLOAD_SECURITY_F)
+		cn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,
+				     c_shft);
+
 	/* Trigger LMTST */
 	if (lnum > 16) {
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[0] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[0] & 0x7) << 4;
 		wd.data[0] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[0] <<= 16;
 
 		wd.data[0] |= (15ULL << 12);
@@ -2005,13 +2583,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* STEOR0 */
 		roc_lmt_submit_steorl(wd.data[0], pa);
 
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[1] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[1] & 0x7) << 4;
 		wd.data[1] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[1] <<= 16;
 
 		wd.data[1] |= ((uint64_t)(lnum - 17)) << 12;
@@ -2020,13 +2598,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* STEOR1 */
 		roc_lmt_submit_steorl(wd.data[1], pa);
 	} else if (lnum) {
-		if (!(flags & NIX_TX_MULTI_SEG_F))
+		if (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))
 			wd.data[0] = cn10k_nix_tx_steor_vec_data(flags);
 
 		pa = io_addr | (wd.data[0] & 0x7) << 4;
 		wd.data[0] &= ~0x7ULL;
 
-		if (flags & NIX_TX_MULTI_SEG_F)
+		if (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))
 			wd.data[0] <<= 16;
 
 		wd.data[0] |= ((uint64_t)(lnum - 1)) << 12;
@@ -2036,7 +2614,6 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 		roc_lmt_submit_steorl(wd.data[0], pa);
 	}
 
-	left -= burst;
 	rte_io_wmb();
 	if (left)
 		goto again;
@@ -2076,139 +2653,269 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
 #define NOFF_F	     NIX_TX_OFFLOAD_MBUF_NOFF_F
 #define TSO_F	     NIX_TX_OFFLOAD_TSO_F
 #define TSP_F	     NIX_TX_OFFLOAD_TSTAMP_F
+#define T_SEC_F      NIX_TX_OFFLOAD_SECURITY_F
 
-/* [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+/* [T_SEC_F] [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
 #define NIX_TX_FASTPATH_MODES						\
-T(no_offload,				0, 0, 0, 0, 0, 0,	4,	\
+T(no_offload,				0, 0, 0, 0, 0, 0, 0,	4,	\
 		NIX_TX_OFFLOAD_NONE)					\
-T(l3l4csum,				0, 0, 0, 0, 0, 1,	4,	\
+T(l3l4csum,				0, 0, 0, 0, 0, 0, 1,	4,	\
 		L3L4CSUM_F)						\
-T(ol3ol4csum,				0, 0, 0, 0, 1, 0,	4,	\
+T(ol3ol4csum,				0, 0, 0, 0, 0, 1, 0,	4,	\
 		OL3OL4CSUM_F)						\
-T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 1, 1,	4,	\
+T(ol3ol4csum_l3l4csum,			0, 0, 0, 0, 0, 1, 1,	4,	\
 		OL3OL4CSUM_F | L3L4CSUM_F)				\
-T(vlan,					0, 0, 0, 1, 0, 0,	6,	\
+T(vlan,					0, 0, 0, 0, 1, 0, 0,	6,	\
 		VLAN_F)							\
-T(vlan_l3l4csum,			0, 0, 0, 1, 0, 1,	6,	\
+T(vlan_l3l4csum,			0, 0, 0, 0, 1, 0, 1,	6,	\
 		VLAN_F | L3L4CSUM_F)					\
-T(vlan_ol3ol4csum,			0, 0, 0, 1, 1, 0,	6,	\
+T(vlan_ol3ol4csum,			0, 0, 0, 0, 1, 1, 0,	6,	\
 		VLAN_F | OL3OL4CSUM_F)					\
-T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 1, 1,	6,	\
+T(vlan_ol3ol4csum_l3l4csum,		0, 0, 0, 0, 1, 1, 1,	6,	\
 		VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
-T(noff,					0, 0, 1, 0, 0, 0,	4,	\
+T(noff,					0, 0, 0, 1, 0, 0, 0,	4,	\
 		NOFF_F)							\
-T(noff_l3l4csum,			0, 0, 1, 0, 0, 1,	4,	\
+T(noff_l3l4csum,			0, 0, 0, 1, 0, 0, 1,	4,	\
 		NOFF_F | L3L4CSUM_F)					\
-T(noff_ol3ol4csum,			0, 0, 1, 0, 1, 0,	4,	\
+T(noff_ol3ol4csum,			0, 0, 0, 1, 0, 1, 0,	4,	\
 		NOFF_F | OL3OL4CSUM_F)					\
-T(noff_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1,	4,	\
+T(noff_ol3ol4csum_l3l4csum,		0, 0, 0, 1, 0, 1, 1,	4,	\
 		NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)			\
-T(noff_vlan,				0, 0, 1, 1, 0, 0,	6,	\
+T(noff_vlan,				0, 0, 0, 1, 1, 0, 0,	6,	\
 		NOFF_F | VLAN_F)					\
-T(noff_vlan_l3l4csum,			0, 0, 1, 1, 0, 1,	6,	\
+T(noff_vlan_l3l4csum,			0, 0, 0, 1, 1, 0, 1,	6,	\
 		NOFF_F | VLAN_F | L3L4CSUM_F)				\
-T(noff_vlan_ol3ol4csum,			0, 0, 1, 1, 1, 0,	6,	\
+T(noff_vlan_ol3ol4csum,			0, 0, 0, 1, 1, 1, 0,	6,	\
 		NOFF_F | VLAN_F | OL3OL4CSUM_F)				\
-T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1,	6,	\
+T(noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 0, 1, 1, 1, 1,	6,	\
 		NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
-T(tso,					0, 1, 0, 0, 0, 0,	6,	\
+T(tso,					0, 0, 1, 0, 0, 0, 0,	6,	\
 		TSO_F)							\
-T(tso_l3l4csum,				0, 1, 0, 0, 0, 1,	6,	\
+T(tso_l3l4csum,				0, 0, 1, 0, 0, 0, 1,	6,	\
 		TSO_F | L3L4CSUM_F)					\
-T(tso_ol3ol4csum,			0, 1, 0, 0, 1, 0,	6,	\
+T(tso_ol3ol4csum,			0, 0, 1, 0, 0, 1, 0,	6,	\
 		TSO_F | OL3OL4CSUM_F)					\
-T(tso_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1,	6,	\
+T(tso_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 0, 1, 1,	6,	\
 		TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
-T(tso_vlan,				0, 1, 0, 1, 0, 0,	6,	\
+T(tso_vlan,				0, 0, 1, 0, 1, 0, 0,	6,	\
 		TSO_F | VLAN_F)						\
-T(tso_vlan_l3l4csum,			0, 1, 0, 1, 0, 1,	6,	\
+T(tso_vlan_l3l4csum,			0, 0, 1, 0, 1, 0, 1,	6,	\
 		TSO_F | VLAN_F | L3L4CSUM_F)				\
-T(tso_vlan_ol3ol4csum,			0, 1, 0, 1, 1, 0,	6,	\
+T(tso_vlan_ol3ol4csum,			0, 0, 1, 0, 1, 1, 0,	6,	\
 		TSO_F | VLAN_F | OL3OL4CSUM_F)				\
-T(tso_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 1, 1,	6,	\
+T(tso_vlan_ol3ol4csum_l3l4csum,		0, 0, 1, 0, 1, 1, 1,	6,	\
 		TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(tso_noff,				0, 1, 1, 0, 0, 0,	6,	\
+T(tso_noff,				0, 0, 1, 1, 0, 0, 0,	6,	\
 		TSO_F | NOFF_F)						\
-T(tso_noff_l3l4csum,			0, 1, 1, 0, 0, 1,	6,	\
+T(tso_noff_l3l4csum,			0, 0, 1, 1, 0, 0, 1,	6,	\
 		TSO_F | NOFF_F | L3L4CSUM_F)				\
-T(tso_noff_ol3ol4csum,			0, 1, 1, 0, 1, 0,	6,	\
+T(tso_noff_ol3ol4csum,			0, 0, 1, 1, 0, 1, 0,	6,	\
 		TSO_F | NOFF_F | OL3OL4CSUM_F)				\
-T(tso_noff_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 1, 1,	6,	\
+T(tso_noff_ol3ol4csum_l3l4csum,		0, 0, 1, 1, 0, 1, 1,	6,	\
 		TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(tso_noff_vlan,			0, 1, 1, 1, 0, 0,	6,	\
+T(tso_noff_vlan,			0, 0, 1, 1, 1, 0, 0,	6,	\
 		TSO_F | NOFF_F | VLAN_F)				\
-T(tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 0, 1,	6,	\
+T(tso_noff_vlan_l3l4csum,		0, 0, 1, 1, 1, 0, 1,	6,	\
 		TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
-T(tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 0,	6,	\
+T(tso_noff_vlan_ol3ol4csum,		0, 0, 1, 1, 1, 1, 0,	6,	\
 		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
-T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1,	6,	\
+T(tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 0, 1, 1, 1, 1, 1,	6,	\
 		TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
-T(ts,					1, 0, 0, 0, 0, 0,	8,	\
+T(ts,					0, 1, 0, 0, 0, 0, 0,	8,	\
 		TSP_F)							\
-T(ts_l3l4csum,				1, 0, 0, 0, 0, 1,	8,	\
+T(ts_l3l4csum,				0, 1, 0, 0, 0, 0, 1,	8,	\
 		TSP_F | L3L4CSUM_F)					\
-T(ts_ol3ol4csum,			1, 0, 0, 0, 1, 0,	8,	\
+T(ts_ol3ol4csum,			0, 1, 0, 0, 0, 1, 0,	8,	\
 		TSP_F | OL3OL4CSUM_F)					\
-T(ts_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1,	8,	\
+T(ts_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 0, 1, 1,	8,	\
 		TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
-T(ts_vlan,				1, 0, 0, 1, 0, 0,	8,	\
+T(ts_vlan,				0, 1, 0, 0, 1, 0, 0,	8,	\
 		TSP_F | VLAN_F)						\
-T(ts_vlan_l3l4csum,			1, 0, 0, 1, 0, 1,	8,	\
+T(ts_vlan_l3l4csum,			0, 1, 0, 0, 1, 0, 1,	8,	\
 		TSP_F | VLAN_F | L3L4CSUM_F)				\
-T(ts_vlan_ol3ol4csum,			1, 0, 0, 1, 1, 0,	8,	\
+T(ts_vlan_ol3ol4csum,			0, 1, 0, 0, 1, 1, 0,	8,	\
 		TSP_F | VLAN_F | OL3OL4CSUM_F)				\
-T(ts_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 1, 1,	8,	\
+T(ts_vlan_ol3ol4csum_l3l4csum,		0, 1, 0, 0, 1, 1, 1,	8,	\
 		TSP_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(ts_noff,				1, 0, 1, 0, 0, 0,	8,	\
+T(ts_noff,				0, 1, 0, 1, 0, 0, 0,	8,	\
 		TSP_F | NOFF_F)						\
-T(ts_noff_l3l4csum,			1, 0, 1, 0, 0, 1,	8,	\
+T(ts_noff_l3l4csum,			0, 1, 0, 1, 0, 0, 1,	8,	\
 		TSP_F | NOFF_F | L3L4CSUM_F)				\
-T(ts_noff_ol3ol4csum,			1, 0, 1, 0, 1, 0,	8,	\
+T(ts_noff_ol3ol4csum,			0, 1, 0, 1, 0, 1, 0,	8,	\
 		TSP_F | NOFF_F | OL3OL4CSUM_F)				\
-T(ts_noff_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 1, 1,	8,	\
+T(ts_noff_ol3ol4csum_l3l4csum,		0, 1, 0, 1, 0, 1, 1,	8,	\
 		TSP_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)		\
-T(ts_noff_vlan,				1, 0, 1, 1, 0, 0,	8,	\
+T(ts_noff_vlan,				0, 1, 0, 1, 1, 0, 0,	8,	\
 		TSP_F | NOFF_F | VLAN_F)				\
-T(ts_noff_vlan_l3l4csum,		1, 0, 1, 1, 0, 1,	8,	\
+T(ts_noff_vlan_l3l4csum,		0, 1, 0, 1, 1, 0, 1,	8,	\
 		TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
-T(ts_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 0,	8,	\
+T(ts_noff_vlan_ol3ol4csum,		0, 1, 0, 1, 1, 1, 0,	8,	\
 		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)			\
-T(ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 1, 1,	8,	\
+T(ts_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 0, 1, 1, 1, 1,	8,	\
 		TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
-T(ts_tso,				1, 1, 0, 0, 0, 0,	8,	\
+T(ts_tso,				0, 1, 1, 0, 0, 0, 0,	8,	\
 		TSP_F | TSO_F)						\
-T(ts_tso_l3l4csum,			1, 1, 0, 0, 0, 1,	8,	\
+T(ts_tso_l3l4csum,			0, 1, 1, 0, 0, 0, 1,	8,	\
 		TSP_F | TSO_F | L3L4CSUM_F)				\
-T(ts_tso_ol3ol4csum,			1, 1, 0, 0, 1, 0,	8,	\
+T(ts_tso_ol3ol4csum,			0, 1, 1, 0, 0, 1, 0,	8,	\
 		TSP_F | TSO_F | OL3OL4CSUM_F)				\
-T(ts_tso_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 1, 1,	8,	\
+T(ts_tso_ol3ol4csum_l3l4csum,		0, 1, 1, 0, 0, 1, 1,	8,	\
 		TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
-T(ts_tso_vlan,				1, 1, 0, 1, 0, 0,	8,	\
+T(ts_tso_vlan,				0, 1, 1, 0, 1, 0, 0,	8,	\
 		TSP_F | TSO_F | VLAN_F)					\
-T(ts_tso_vlan_l3l4csum,			1, 1, 0, 1, 0, 1,	8,	\
+T(ts_tso_vlan_l3l4csum,			0, 1, 1, 0, 1, 0, 1,	8,	\
 		TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
-T(ts_tso_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 0,	8,	\
+T(ts_tso_vlan_ol3ol4csum,		0, 1, 1, 0, 1, 1, 0,	8,	\
 		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)			\
-T(ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1,	8,	\
+T(ts_tso_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 0, 1, 1, 1,	8,	\
 		TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
-T(ts_tso_noff,				1, 1, 1, 0, 0, 0,	8,	\
+T(ts_tso_noff,				0, 1, 1, 1, 0, 0, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F)					\
-T(ts_tso_noff_l3l4csum,			1, 1, 1, 0, 0, 1,	8,	\
+T(ts_tso_noff_l3l4csum,			0, 1, 1, 1, 0, 0, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
-T(ts_tso_noff_ol3ol4csum,		1, 1, 1, 0, 1, 0,	8,	\
+T(ts_tso_noff_ol3ol4csum,		0, 1, 1, 1, 0, 1, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)			\
-T(ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1,	8,	\
+T(ts_tso_noff_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 0, 1, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
-T(ts_tso_noff_vlan,			1, 1, 1, 1, 0, 0,	8,	\
+T(ts_tso_noff_vlan,			0, 1, 1, 1, 1, 0, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F)			\
-T(ts_tso_noff_vlan_l3l4csum,		1, 1, 1, 1, 0, 1,	8,	\
+T(ts_tso_noff_vlan_l3l4csum,		0, 1, 1, 1, 1, 0, 1,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
-T(ts_tso_noff_vlan_ol3ol4csum,		1, 1, 1, 1, 1, 0,	8,	\
+T(ts_tso_noff_vlan_ol3ol4csum,		0, 1, 1, 1, 1, 1, 0,	8,	\
 		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
-T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 1, 1,	8,	\
-		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,	0, 1, 1, 1, 1, 1, 1,	8,	\
+		TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec,					1, 0, 0, 0, 0, 0, 0,	4,	\
+		T_SEC_F)						\
+T(sec_l3l4csum,				1, 0, 0, 0, 0, 0, 1,	4,	\
+		T_SEC_F | L3L4CSUM_F)					\
+T(sec_ol3ol4csum,			1, 0, 0, 0, 0, 1, 0,	4,	\
+		T_SEC_F | OL3OL4CSUM_F)					\
+T(sec_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 0, 1, 1,	4,	\
+		T_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F)			\
+T(sec_vlan,				1, 0, 0, 0, 1, 0, 0,	6,	\
+		T_SEC_F | VLAN_F)					\
+T(sec_vlan_l3l4csum,			1, 0, 0, 0, 1, 0, 1,	6,	\
+		T_SEC_F | VLAN_F | L3L4CSUM_F)				\
+T(sec_vlan_ol3ol4csum,			1, 0, 0, 0, 1, 1, 0,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F)			\
+T(sec_vlan_ol3ol4csum_l3l4csum,		1, 0, 0, 0, 1, 1, 1,	6,	\
+		T_SEC_F | VLAN_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff,				1, 0, 0, 1, 0, 0, 0,	4,	\
+		T_SEC_F | NOFF_F)					\
+T(sec_noff_l3l4csum,			1, 0, 0, 1, 0, 0, 1,	4,	\
+		T_SEC_F | NOFF_F | L3L4CSUM_F)				\
+T(sec_noff_ol3ol4csum,			1, 0, 0, 1, 0, 1, 0,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F)			\
+T(sec_noff_ol3ol4csum_l3l4csum,		1, 0, 0, 1, 0, 1, 1,	4,	\
+		T_SEC_F | NOFF_F | OL3OL4CSUM_F |	L3L4CSUM_F)	\
+T(sec_noff_vlan,			1, 0, 0, 1, 1, 0, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F)				\
+T(sec_noff_vlan_l3l4csum,		1, 0, 0, 1, 1, 0, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_noff_vlan_ol3ol4csum,		1, 0, 0, 1, 1, 1, 0,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_noff_vlan_ol3ol4csum_l3l4csum,	1, 0, 0, 1, 1, 1, 1,	6,	\
+		T_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso,				1, 0, 1, 0, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F)					\
+T(sec_tso_l3l4csum,			1, 0, 1, 0, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | L3L4CSUM_F)				\
+T(sec_tso_ol3ol4csum,			1, 0, 1, 0, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F)				\
+T(sec_tso_ol3ol4csum_l3l4csum,		1, 0, 1, 0, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_tso_vlan,				1, 0, 1, 0, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F)				\
+T(sec_tso_vlan_l3l4csum,		1, 0, 1, 0, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_tso_vlan_ol3ol4csum,		1, 0, 1, 0, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_tso_vlan_ol3ol4csum_l3l4csum,	1, 0, 1, 0, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff,				1, 0, 1, 1, 0, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F)				\
+T(sec_tso_noff_l3l4csum,		1, 0, 1, 1, 0, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_tso_noff_ol3ol4csum,		1, 0, 1, 1, 0, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_tso_noff_ol3ol4csum_l3l4csum,	1, 0, 1, 1, 0, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_tso_noff_vlan,			1, 0, 1, 1, 1, 0, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F)			\
+T(sec_tso_noff_vlan_l3l4csum,		1, 0, 1, 1, 1, 0, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_tso_noff_vlan_ol3ol4csum,		1, 0, 1, 1, 1, 1, 0,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1,	6,	\
+		T_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts,				1, 1, 0, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F)					\
+T(sec_ts_l3l4csum,			1, 1, 0, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | L3L4CSUM_F)				\
+T(sec_ts_ol3ol4csum,			1, 1, 0, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F)				\
+T(sec_ts_ol3ol4csum_l3l4csum,		1, 1, 0, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)		\
+T(sec_ts_vlan,				1, 1, 0, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F)				\
+T(sec_ts_vlan_l3l4csum,			1, 1, 0, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F)			\
+T(sec_ts_vlan_ol3ol4csum,		1, 1, 0, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F)		\
+T(sec_ts_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff,				1, 1, 0, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F)				\
+T(sec_ts_noff_l3l4csum,			1, 1, 0, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F)			\
+T(sec_ts_noff_ol3ol4csum,		1, 1, 0, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F)		\
+T(sec_ts_noff_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_noff_vlan,			1, 1, 0, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F)			\
+T(sec_ts_noff_vlan_l3l4csum,		1, 1, 0, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_noff_vlan_ol3ol4csum,		1, 1, 0, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum,	1, 1, 0, 1, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso,				1, 1, 1, 0, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F)				\
+T(sec_ts_tso_l3l4csum,			1, 1, 1, 0, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | L3L4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum,		1, 1, 1, 0, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F)			\
+T(sec_ts_tso_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)	\
+T(sec_ts_tso_vlan,			1, 1, 1, 0, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F)			\
+T(sec_ts_tso_vlan_l3l4csum,		1, 1, 1, 0, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)		\
+T(sec_ts_tso_vlan_ol3ol4csum,		1, 1, 1, 0, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_vlan_ol3ol4csum_l3l4csum,	1, 1, 1, 0, 1, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
+T(sec_ts_tso_noff,			1, 1, 1, 1, 0, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F)			\
+T(sec_ts_tso_noff_l3l4csum,		1, 1, 1, 1, 0, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)		\
+T(sec_ts_tso_noff_ol3ol4csum,		1, 1, 1, 1, 0, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)	\
+T(sec_ts_tso_noff_ol3ol4csum_l3l4csum,	1, 1, 1, 1, 0, 1, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(sec_ts_tso_noff_vlan,			1, 1, 1, 1, 1, 0, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F)		\
+T(sec_ts_tso_noff_vlan_l3l4csum,	1, 1, 1, 1, 1, 0, 1,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)	\
+T(sec_ts_tso_noff_vlan_ol3ol4csum,	1, 1, 1, 1, 1, 1, 0,	8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(sec_ts_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 1, 1, 8,	\
+		T_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
+		L3L4CSUM_F)
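
Each row of the table expands, through the per-file T() stubs shown earlier (cn10k_tx.c, cn10k_tx_mseg.c, and the vector variants), into one specialized transmit function whose flags are compile-time constants. A hedged sketch of what one row, e.g. sec_vlan, becomes in cn10k_tx.c (the stub body is elided in the hunks above; the cmd size comes from the sz column):

	uint16_t __rte_noinline __rte_hot
	cn10k_nix_xmit_pkts_sec_vlan(void *tx_queue, struct rte_mbuf **tx_pkts,
				     uint16_t pkts)
	{
		uint64_t cmd[6];	/* sz column of the sec_vlan row */

		/* Sketch only: the real stub forwards the flags column so
		 * the compiler specializes cn10k_nix_xmit_pkts() per mode.
		 */
		return cn10k_nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, 0,
					   T_SEC_F | VLAN_F);
	}
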
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(          \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts);     \
 									       \
diff --git a/drivers/net/cnxk/cn10k_tx_mseg.c b/drivers/net/cnxk/cn10k_tx_mseg.c
index 4ea4c8a..2b83409 100644
--- a/drivers/net/cnxk/cn10k_tx_mseg.c
+++ b/drivers/net/cnxk/cn10k_tx_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_xmit_pkts_mseg_##name(void *tx_queue,                \
 						struct rte_mbuf **tx_pkts,     \
diff --git a/drivers/net/cnxk/cn10k_tx_vec.c b/drivers/net/cnxk/cn10k_tx_vec.c
index a035049..2789b13 100644
--- a/drivers/net/cnxk/cn10k_tx_vec.c
+++ b/drivers/net/cnxk/cn10k_tx_vec.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)			       \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)			       \
 	uint16_t __rte_noinline __rte_hot				       \
 		cn10k_nix_xmit_pkts_vec_##name(void *tx_queue,                 \
 					       struct rte_mbuf **tx_pkts,      \
diff --git a/drivers/net/cnxk/cn10k_tx_vec_mseg.c b/drivers/net/cnxk/cn10k_tx_vec_mseg.c
index 7f98f79..98000df 100644
--- a/drivers/net/cnxk/cn10k_tx_vec_mseg.c
+++ b/drivers/net/cnxk/cn10k_tx_vec_mseg.c
@@ -5,7 +5,7 @@
 #include "cn10k_ethdev.h"
 #include "cn10k_tx.h"
 
-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \
+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \
 	uint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_vec_mseg_##name( \
 		void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \
 	{                                                                      \
-- 
2.8.4


^ permalink raw reply	[flat|nested] 91+ messages in thread

* [dpdk-dev] [PATCH v3 22/28] net/cnxk: support IPsec anti replay in cn9k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (20 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 21/28] net/cnxk: support Tx " Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 23/28] net/cnxk: support IPsec transport mode in cn10k Nithin Dabilpuram
                     ` (6 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Add anti-replay support for the cn9k platform using a
software (SW) anti-replay check.

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn9k_ethdev.h     |  3 +++
 drivers/net/cnxk/cn9k_ethdev_sec.c | 29 ++++++++++++++++++++
 drivers/net/cnxk/cn9k_rx.h         | 54 +++++++++++++++++++++++++++++++++++++-
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h
index f8818b8..2b452fe 100644
--- a/drivers/net/cnxk/cn9k_ethdev.h
+++ b/drivers/net/cnxk/cn9k_ethdev.h
@@ -6,6 +6,7 @@
 
 #include <cnxk_ethdev.h>
 #include <cnxk_security.h>
+#include <cnxk_security_ar.h>
 
 struct cn9k_eth_txq {
 	uint64_t cmd[8];
@@ -40,6 +41,8 @@ struct cn9k_eth_rxq {
 /* Private data in sw rsvd area of struct roc_onf_ipsec_inb_sa */
 struct cn9k_inb_priv_data {
 	void *userdata;
+	uint32_t replay_win_sz;
+	struct cnxk_on_ipsec_ar ar;
 	struct cnxk_eth_sec_sess *eth_sec;
 };
 
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
index 3ec7497..deb1daf 100644
--- a/drivers/net/cnxk/cn9k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -73,6 +73,27 @@ static const struct rte_security_capability cn9k_eth_sec_capabilities[] = {
 	}
 };
 
+static inline int
+ar_window_init(struct cn9k_inb_priv_data *inb_priv)
+{
+	if (inb_priv->replay_win_sz > CNXK_ON_AR_WIN_SIZE_MAX) {
+		plt_err("Replay window size:%u is not supported",
+			inb_priv->replay_win_sz);
+		return -ENOTSUP;
+	}
+
+	rte_spinlock_init(&inb_priv->ar.lock);
+	/*
+	 * Set window bottom to 1, base and top to size of
+	 * window
+	 */
+	inb_priv->ar.winb = 1;
+	inb_priv->ar.wint = inb_priv->replay_win_sz;
+	inb_priv->ar.base = inb_priv->replay_win_sz;
+
+	return 0;
+}
+
 static int
 cn9k_eth_sec_session_create(void *device,
 			    struct rte_security_session_conf *conf,
@@ -158,6 +179,14 @@ cn9k_eth_sec_session_create(void *device,
 		/* Save userdata in inb private area */
 		inb_priv->userdata = conf->userdata;
 
+		inb_priv->replay_win_sz = ipsec->replay_win_sz;
+		if (inb_priv->replay_win_sz) {
+			rc = ar_window_init(inb_priv);
+			if (rc)
+				goto mempool_put;
+		}
+
+		/* Prepare session priv */
 		sess_priv.inb_sa = 1;
 		sess_priv.sa_idx = ipsec->spi;
 
diff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h
index bdedeab..7ab415a 100644
--- a/drivers/net/cnxk/cn9k_rx.h
+++ b/drivers/net/cnxk/cn9k_rx.h
@@ -31,6 +31,9 @@
 #define CQE_CAST(x)	     ((struct nix_cqe_hdr_s *)(x))
 #define CQE_SZ(x)	     ((x) * CNXK_NIX_CQ_ENTRY_SZ)
 
+#define IPSEC_SQ_LO_IDX 4
+#define IPSEC_SQ_HI_IDX 8
+
 union mbuf_initializer {
 	struct {
 		uint16_t data_off;
@@ -166,6 +169,48 @@ nix_cqe_xtract_mseg(const union nix_rx_parse_u *rx, struct rte_mbuf *mbuf,
 	mbuf->next = NULL;
 }
 
+static inline int
+ipsec_antireplay_check(struct roc_onf_ipsec_inb_sa *sa,
+		       struct cn9k_inb_priv_data *priv, uintptr_t data,
+		       uint32_t win_sz)
+{
+	struct cnxk_on_ipsec_ar *ar = &priv->ar;
+	uint64_t seq_in_sa;
+	uint32_t seqh = 0;
+	uint32_t seql;
+	uint64_t seq;
+	uint8_t esn;
+	int rc;
+
+	esn = sa->ctl.esn_en;
+	seql = rte_be_to_cpu_32(*((uint32_t *)(data + IPSEC_SQ_LO_IDX)));
+
+	if (!esn) {
+		seq = (uint64_t)seql;
+	} else {
+		seqh = rte_be_to_cpu_32(*((uint32_t *)(data +
+					IPSEC_SQ_HI_IDX)));
+		seq = ((uint64_t)seqh << 32) | seql;
+	}
+
+	if (unlikely(seq == 0))
+		return -1;
+
+	rte_spinlock_lock(&ar->lock);
+	rc = cnxk_on_anti_replay_check(seq, ar, win_sz);
+	if (esn && !rc) {
+		seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) |
+			    rte_be_to_cpu_32(sa->esn_low);
+		if (seq > seq_in_sa) {
+			sa->esn_low = rte_cpu_to_be_32(seql);
+			sa->esn_hi = rte_cpu_to_be_32(seqh);
+		}
+	}
+	rte_spinlock_unlock(&ar->lock);
+
+	return rc;
+}
+
 static __rte_always_inline uint64_t
 nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 		       uintptr_t sa_base, uint64_t *rearm_val, uint16_t *len)
@@ -178,8 +223,8 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	uint8_t lcptr = rx->lcptr;
 	struct rte_ipv4_hdr *ipv4;
 	uint16_t data_off, res;
+	uint32_t spi, win_sz;
 	uint32_t spi_mask;
-	uint32_t spi;
 	uintptr_t data;
 	__uint128_t dw;
 	uint8_t sa_w;
@@ -209,6 +254,13 @@ nix_rx_sec_mbuf_update(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
 	dw = *(__uint128_t *)sa_priv;
 	*rte_security_dynfield(m) = (uint64_t)dw;
 
+	/* Check if anti-replay is enabled */
+	win_sz = (uint32_t)(dw >> 64);
+	if (win_sz) {
+		if (ipsec_antireplay_check(sa, sa_priv, data, win_sz) < 0)
+			return PKT_RX_SEC_OFFLOAD | PKT_RX_SEC_OFFLOAD_FAILED;
+	}
+
 	/* Get total length from IPv4 header. We can assume only IPv4 */
 	ipv4 = (struct rte_ipv4_hdr *)(data + ROC_ONF_IPSEC_INB_SPI_SEQ_SZ +
 				       ROC_ONF_IPSEC_INB_MAX_L2_SZ);
-- 
2.8.4



* [dpdk-dev] [PATCH v3 23/28] net/cnxk: support IPsec transport mode in cn10k
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (21 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 22/28] net/cnxk: support IPsec anti replay in cn9k Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
                     ` (5 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Adds IPsec transport mode to the rte security capabilities
reported by the cn10k driver.
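
Applications can probe for the new entries at runtime; a small sketch
(sec_ctx assumed to come from rte_eth_dev_get_sec_ctx(), error
handling elided):

  #include <stdio.h>
  #include <rte_security.h>

  struct rte_security_capability_idx idx = {
          .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
          .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
          .ipsec = {
                  .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
                  .mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
                  .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
          },
  };
  const struct rte_security_capability *cap =
          rte_security_capability_get(sec_ctx, &idx);

  if (cap != NULL)
          printf("inline ESP transport egress is supported\n");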

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev_sec.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 3ffd824..dae5ea7 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -69,6 +69,30 @@ static const struct rte_security_capability cn10k_eth_sec_capabilities[] = {
 		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
 		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
 	},
+	{	/* IPsec Inline Protocol ESP Transport Egress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
+	{	/* IPsec Inline Protocol ESP Transport Ingress */
+		.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+		.protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+		.ipsec = {
+			.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+			.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,
+			.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			.options = { 0 }
+		},
+		.crypto_capabilities = cn10k_eth_sec_crypto_caps,
+		.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
+	},
 	{
 		.action = RTE_SECURITY_ACTION_TYPE_NONE
 	}
-- 
2.8.4



* [dpdk-dev] [PATCH v3 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (22 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 23/28] net/cnxk: support IPsec transport mode in cn10k Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 25/28] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
                     ` (4 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Adds support to update the ethertype for mixed IPsec tunnel
versions (outer and inner IP versions differing). Also sets
et_ovrwr for inbound IPsec.
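
Both Tx hunks below patch the two bytes just before the outer IP
header (dptr - 2), i.e. the ethertype slot of the preceding L2 header,
so a tunnel whose outer IP version differs from the inner one leaves
the wire with a consistent ethertype. A standalone rendering of the
same fix-up (hypothetical helper, not driver code):

  #include <stdbool.h>
  #include <stdint.h>
  #include <rte_byteorder.h>
  #include <rte_ether.h>

  /* 'l3' points at the start of the outer IP header; the two bytes
   * before it hold the L2 ethertype. */
  static inline void
  fixup_ethertype(uint8_t *l3, bool outer_ip_is_v4)
  {
          uint16_t et = outer_ip_is_v4 ? RTE_ETHER_TYPE_IPV4
                                       : RTE_ETHER_TYPE_IPV6;

          *(uint16_t *)(l3 - 2) = rte_cpu_to_be_16(et);
  }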

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/common/cnxk/cnxk_security.c |  1 +
 drivers/net/cnxk/cn10k_ethdev.h     |  3 ++-
 drivers/net/cnxk/cn10k_ethdev_sec.c |  2 ++
 drivers/net/cnxk/cn10k_tx.h         | 19 +++++++++++++++++++
 4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/common/cnxk/cnxk_security.c b/drivers/common/cnxk/cnxk_security.c
index c117fa7..0039a9d 100644
--- a/drivers/common/cnxk/cnxk_security.c
+++ b/drivers/common/cnxk/cnxk_security.c
@@ -344,6 +344,7 @@ cnxk_ot_ipsec_inb_sa_fill(struct roc_ot_ipsec_inb_sa *sa,
 	/* There are two words of CPT_CTX_HW_S for ucode to skip */
 	sa->w0.s.ctx_hdr_size = 1;
 	sa->w0.s.aop_valid = 1;
+	sa->w0.s.et_ovrwr = 1;
 
 	rte_wmb();
 
diff --git a/drivers/net/cnxk/cn10k_ethdev.h b/drivers/net/cnxk/cn10k_ethdev.h
index 200cd93..c2a46ad 100644
--- a/drivers/net/cnxk/cn10k_ethdev.h
+++ b/drivers/net/cnxk/cn10k_ethdev.h
@@ -64,7 +64,8 @@ struct cn10k_sec_sess_priv {
 		struct {
 			uint32_t sa_idx;
 			uint8_t inb_sa : 1;
-			uint8_t rsvd1 : 2;
+			uint8_t outer_ip_ver : 1;
+			uint8_t mode : 1;
 			uint8_t roundup_byte : 5;
 			uint8_t roundup_len;
 			uint16_t partial_len;
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index dae5ea7..c66730a 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -341,6 +341,8 @@ cn10k_eth_sec_session_create(void *device,
 		sess_priv.roundup_byte = rlens->roundup_byte;
 		sess_priv.roundup_len = rlens->roundup_len;
 		sess_priv.partial_len = rlens->partial_len;
+		sess_priv.mode = outb_sa->w2.s.ipsec_mode;
+		sess_priv.outer_ip_ver = outb_sa->w2.s.outer_ip_ver;
 
 		/* Pointer from eth_sec -> outb_sa */
 		eth_sec->sa = outb_sa;
diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 52bb71d..ad84464 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -302,6 +302,16 @@ cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,
 	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
 
 	dptr += l2_len;
+
+	if (sess_priv.mode == ROC_IE_SA_MODE_TUNNEL) {
+		if (sess_priv.outer_ip_ver == ROC_IE_SA_IP_VERSION_4)
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		else
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	}
+
 	ucode_cmd[1] = dptr;
 	ucode_cmd[2] = dptr;
 
@@ -396,6 +406,15 @@ cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,
 	cmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);
 
 	dptr += l2_len;
+
+	if (sess_priv.mode == ROC_IE_SA_MODE_TUNNEL) {
+		if (sess_priv.outer_ip_ver == ROC_IE_SA_IP_VERSION_4)
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
+		else
+			*((uint16_t *)(dptr - 2)) =
+				rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
+	}
 	ucode_cmd[1] = dptr;
 	ucode_cmd[2] = dptr;
 
-- 
2.8.4



* [dpdk-dev] [PATCH v3 25/28] net/cnxk: allow zero udp6 checksum for non inline device
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (23 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
                     ` (3 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Sets IP6_UDP_OPT in the NIX RX config to allow an optional
(zero) UDP checksum for IPv6 in case of security offload.
Also disables drop_re when inline inbound is enabled.
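
For background, IPv4 permits a zero UDP checksum (meaning "not
computed") but IPv6 normally does not, so UDP-encapsulated ESP
carrying checksum 0 would otherwise be flagged by the NIX L4 check.
A hypothetical SW rendering of the relaxed rule:

  #include <stdbool.h>
  #include <stdint.h>

  /* ip6_udp_opt mirrors ROC_NIX_LF_RX_CFG_IP6_UDP_OPT: when set, a
   * zero checksum on UDP over IPv6 is accepted rather than flagged. */
  static bool
  udp6_csum_acceptable(uint16_t csum, bool ip6_udp_opt)
  {
          if (csum == 0)
                  return ip6_udp_opt;
          return true; /* non-zero checksums are verified as usual */
  }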

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev.c | 5 +++++
 drivers/net/cnxk/cnxk_ethdev.c  | 9 +++++++++
 drivers/net/cnxk/cnxk_ethdev.h  | 1 +
 3 files changed, 15 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index fa2343c..9dfea99 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -553,6 +553,11 @@ cn10k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 
 	dev = cnxk_eth_pmd_priv(eth_dev);
 
+	/* DROP_RE is not supported with inline IPSec for CN10K A0 */
+	if (roc_model_is_cn10ka_a0() || roc_model_is_cnf10ka_a0() ||
+	    roc_model_is_cnf10kb_a0())
+		dev->ipsecd_drop_re_dis = 1;
+
 	/* Register up msg callbacks for PTP information */
 	roc_nix_ptp_info_cb_register(&dev->nix, cn10k_nix_ptp_info_update_cb);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index a2e134c..fa9a26f 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1021,6 +1021,15 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		   ROC_NIX_LF_RX_CFG_LEN_IL4 | ROC_NIX_LF_RX_CFG_LEN_IL3 |
 		   ROC_NIX_LF_RX_CFG_LEN_OL4 | ROC_NIX_LF_RX_CFG_LEN_OL3);
 
+	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		rx_cfg |= ROC_NIX_LF_RX_CFG_IP6_UDP_OPT;
+		/* Disable drop re if rx offload security is enabled and
+		 * platform does not support it.
+		 */
+		if (dev->ipsecd_drop_re_dis)
+			rx_cfg &= ~(ROC_NIX_LF_RX_CFG_DROP_RE);
+	}
+
 	nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
 	nb_txq = RTE_MAX(data->nb_tx_queues, 1);
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 88589d3..3601e4d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -269,6 +269,7 @@ struct cnxk_eth_dev {
 	union {
 		struct {
 			uint64_t cq_min_4k : 1;
+			uint64_t ipsecd_drop_re_dis : 1;
 		};
 		uint64_t hwcap;
 	};
-- 
2.8.4



* [dpdk-dev] [PATCH v3 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (24 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 25/28] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 27/28] net/cnxk: support configuring channel mask via devargs Nithin Dabilpuram
                     ` (2 subsequent siblings)
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Srujana Challa

From: Srujana Challa <schalla@marvell.com>

Adds capabilities for AES_CBC and HMAC_SHA1 to the cn9k and
cn10k security offload.
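
These entries advertise AES-CBC with 128/192/256-bit keys and
HMAC-SHA1 with a 96-bit truncated ICV (digest_size 12). A hedged
sketch of the matching application-side xform chain for an inbound SA
(key material hypothetical; chain order follows the usual
verify-then-decrypt convention for ingress):

  #include <rte_crypto_sym.h>

  static uint8_t cipher_key[16]; /* AES-128 */
  static uint8_t auth_key[20];   /* HMAC-SHA1 */

  struct rte_crypto_sym_xform cipher = {
          .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
          .cipher = {
                  .op = RTE_CRYPTO_CIPHER_OP_DECRYPT,
                  .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                  .key = { .data = cipher_key, .length = sizeof(cipher_key) },
                  .iv = { .offset = 0, .length = 16 },
          },
  };

  struct rte_crypto_sym_xform auth = {
          .type = RTE_CRYPTO_SYM_XFORM_AUTH,
          .next = &cipher,
          .auth = {
                  .op = RTE_CRYPTO_AUTH_OP_VERIFY,
                  .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
                  .key = { .data = auth_key, .length = sizeof(auth_key) },
                  .digest_length = 12, /* truncated ICV, per the caps */
          },
  };
  /* conf.crypto_xform = &auth;  then rte_security_session_create() */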

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/net/cnxk/cn10k_ethdev_sec.c | 40 +++++++++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn9k_ethdev_sec.c  | 40 +++++++++++++++++++++++++++++++++++++
 2 files changed, 80 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index c66730a..82dc636 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -41,6 +41,46 @@ static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
 			}, }
 		}, }
 	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 20,
+					.max = 64,
+					.increment = 1
+				},
+				.digest_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
diff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c
index deb1daf..b070ad5 100644
--- a/drivers/net/cnxk/cn9k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c
@@ -40,6 +40,46 @@ static struct rte_cryptodev_capabilities cn9k_eth_sec_crypto_caps[] = {
 			}, }
 		}, }
 	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 20,
+					.max = 64,
+					.increment = 1
+				},
+				.digest_size = {
+					.min = 12,
+					.max = 12,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-- 
2.8.4



* [dpdk-dev] [PATCH v3 27/28] net/cnxk: support configuring channel mask via devargs
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (25 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 28/28] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
  2021-10-02 13:49   ` [dpdk-dev] [PATCH v3 00/28] net/cnxk: support for inline ipsec Jerin Jacob
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, Satheesh Paul

From: Satheesh Paul <psatheesh@marvell.com>

This patch adds support to configure the channel mask that
rte flow will use when adding flow rules with the inline IPsec
action.
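
The match is the usual value/mask MCAM semantics: a packet's ingress
channel matches when it agrees with the configured channel on every
bit set in the mask. A small illustration (hypothetical helper):

  #include <stdbool.h>
  #include <stdint.h>

  static bool
  chan_match(uint16_t pkt_chan, uint16_t chan, uint16_t mask)
  {
          return (pkt_chan & mask) == (chan & mask);
  }

  /* With inl_cpt_channel=0x100/0xf00:
   *   chan_match(0x1ab, 0x100, 0xf00) -> true  (bits [11:8] == 0x1)
   *   chan_match(0x2ab, 0x100, 0xf00) -> false */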

Signed-off-by: Satheesh Paul <psatheesh@marvell.com>
---
 doc/guides/nics/cnxk.rst           | 20 +++++++++++++++++++
 drivers/net/cnxk/cnxk_ethdev_sec.c | 39 +++++++++++++++++++++++++++++++++++++-
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index b542437..dd955d3 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -255,6 +255,26 @@ Runtime Config Options
    With the above configuration, inbound encrypted traffic from both the ports
    is received by ipsec inline device.
 
+- ``Inline IPsec device channel and mask`` (default ``none``)
+
+   Set channel and channel mask configuration for the inline IPSec device. This
+   will be used when creating flow rules with RTE_FLOW_ACTION_TYPE_SECURITY
+   action.
+
+   By default, RTE Flow API sets the channel number of the port on which the
+   rule is created in the MCAM entry and matches it exactly. This behaviour can
+   be modified using the ``inl_cpt_channel`` ``devargs`` parameter.
+
+   For example::
+
+      -a 0002:1d:00.0,inl_cpt_channel=0x100/0xf00
+
+   With the above configuration, RTE Flow rules API will set the channel
+   and channel mask as 0x100 and 0xF00 in the MCAM entries of the flow rules
+   created with RTE_FLOW_ACTION_TYPE_SECURITY action. Since channel number is
+   set with this custom mask, inbound encrypted traffic from all ports with
+   matching channel number pattern will be directed to the inline IPSec device.
+
 .. note::
 
    Above devarg parameters are configurable per device, user needs to pass the
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index c76e230..ae3e49c 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -6,6 +6,13 @@
 
 #define CNXK_NIX_INL_SELFTEST	      "selftest"
 #define CNXK_NIX_INL_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
+#define CNXK_INL_CPT_CHANNEL	      "inl_cpt_channel"
+
+struct inl_cpt_channel {
+	bool is_multi_channel;
+	uint16_t channel;
+	uint16_t mask;
+};
 
 #define CNXK_NIX_INL_DEV_NAME RTE_STR(cnxk_nix_inl_dev_)
 #define CNXK_NIX_INL_DEV_NAME_LEN                                              \
@@ -137,13 +144,37 @@ parse_selftest(const char *key, const char *value, void *extra_args)
 }
 
 static int
+parse_inl_cpt_channel(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint16_t chan = 0, mask = 0;
+	char *next = 0;
+
+	/* next will point to the separator '/' */
+	chan = strtol(value, &next, 16);
+	mask = strtol(++next, 0, 16);
+
+	if (chan > GENMASK(12, 0) || mask > GENMASK(12, 0))
+		return -EINVAL;
+
+	((struct inl_cpt_channel *)extra_args)->channel = chan;
+	((struct inl_cpt_channel *)extra_args)->mask = mask;
+	((struct inl_cpt_channel *)extra_args)->is_multi_channel = true;
+
+	return 0;
+}
+
+static int
 nix_inl_parse_devargs(struct rte_devargs *devargs,
 		      struct roc_nix_inl_dev *inl_dev)
 {
 	uint32_t ipsec_in_max_spi = BIT(8) - 1;
+	struct inl_cpt_channel cpt_channel;
 	struct rte_kvargs *kvlist;
 	uint8_t selftest = 0;
 
+	memset(&cpt_channel, 0, sizeof(cpt_channel));
+
 	if (devargs == NULL)
 		goto null_devargs;
 
@@ -155,11 +186,16 @@ nix_inl_parse_devargs(struct rte_devargs *devargs,
 			   &selftest);
 	rte_kvargs_process(kvlist, CNXK_NIX_INL_IPSEC_IN_MAX_SPI,
 			   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
+	rte_kvargs_process(kvlist, CNXK_INL_CPT_CHANNEL, &parse_inl_cpt_channel,
+			   &cpt_channel);
 	rte_kvargs_free(kvlist);
 
 null_devargs:
 	inl_dev->ipsec_in_max_spi = ipsec_in_max_spi;
 	inl_dev->selftest = selftest;
+	inl_dev->channel = cpt_channel.channel;
+	inl_dev->chan_mask = cpt_channel.mask;
+	inl_dev->is_multi_channel = cpt_channel.is_multi_channel;
 	return 0;
 exit:
 	return -EINVAL;
@@ -275,4 +311,5 @@ RTE_PMD_REGISTER_KMOD_DEP(cnxk_nix_inl, "vfio-pci");
 
 RTE_PMD_REGISTER_PARAM_STRING(cnxk_nix_inl,
 			      CNXK_NIX_INL_SELFTEST "=1"
-			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>");
+			      CNXK_NIX_INL_IPSEC_IN_MAX_SPI "=<1-65535>"
+			      CNXK_INL_CPT_CHANNEL "=<1-4095>/<1-4095>");
-- 
2.8.4



* [dpdk-dev] [PATCH v3 28/28] net/cnxk: reflect globally enabled offloads in queue conf
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (26 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 27/28] net/cnxk: support configuring channel mask via devargs Nithin Dabilpuram
@ 2021-10-01 13:40   ` Nithin Dabilpuram
  2021-10-02 13:49   ` [dpdk-dev] [PATCH v3 00/28] net/cnxk: support for inline ipsec Jerin Jacob
  28 siblings, 0 replies; 91+ messages in thread
From: Nithin Dabilpuram @ 2021-10-01 13:40 UTC (permalink / raw)
  To: jerinj, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
  Cc: dev, stable

Reflect globally enabled Rx and Tx offloads in queue conf.
Also fix an issue with LMT data preparation for multi-segment Tx.
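
The user-visible effect of the first change is that per-queue queries
now report the device-level offloads as well; for the second, the
multi-seg LMT path was advancing a local copy instead of the caller's
shift accumulator (shift += 3 vs *shift += 3). A small sketch of the
query side (port_id assumed valid and started):

  #include <inttypes.h>
  #include <stdio.h>
  #include <rte_ethdev.h>

  struct rte_eth_rxq_info qinfo;

  if (rte_eth_rx_queue_info_get(port_id, 0, &qinfo) == 0)
          printf("rxq0 offloads: 0x%" PRIx64 "\n", qinfo.conf.offloads);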

Fixes: a24af6361e37 ("net/cnxk: add Tx queue setup and release")
Fixes: a86144cd9ded ("net/cnxk: add Rx queue setup and release")
Fixes: 305ca2c4c382 ("net/cnxk: support multi-segment vector Tx")
Cc: stable@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
 drivers/net/cnxk/cn10k_tx.h    | 2 +-
 drivers/net/cnxk/cnxk_ethdev.c | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index ad84464..c6f349b 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -1280,7 +1280,7 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,
 			vst1q_u64(lmt_addr + 14, cmd1[3]);
 
 			*data128 |= ((__uint128_t)7) << *shift;
-			shift += 3;
+			*shift += 3;
 
 			return 1;
 		}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index fa9a26f..2683bc1 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -380,6 +380,8 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	txq_sp->dev = dev;
 	txq_sp->qid = qid;
 	txq_sp->qconf.conf.tx = *tx_conf;
+	/* Queue config should reflect global offloads */
+	txq_sp->qconf.conf.tx.offloads = dev->tx_offloads;
 	txq_sp->qconf.nb_desc = nb_desc;
 
 	plt_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " lmt_addr=%p"
@@ -527,6 +529,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	rxq_sp->dev = dev;
 	rxq_sp->qid = qid;
 	rxq_sp->qconf.conf.rx = *rx_conf;
+	/* Queue config should reflect global offloads */
+	rxq_sp->qconf.conf.rx.offloads = dev->rx_offloads;
 	rxq_sp->qconf.nb_desc = nb_desc;
 	rxq_sp->qconf.mp = mp;
 
-- 
2.8.4



* Re: [dpdk-dev] [PATCH v3 00/28] net/cnxk: support for inline ipsec
  2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
                     ` (27 preceding siblings ...)
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 28/28] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
@ 2021-10-02 13:49   ` Jerin Jacob
  28 siblings, 0 replies; 91+ messages in thread
From: Jerin Jacob @ 2021-10-02 13:49 UTC (permalink / raw)
  To: Nithin Dabilpuram, Ferruh Yigit; +Cc: Jerin Jacob, dpdk-dev

On Fri, Oct 1, 2021 at 7:10 PM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> Support for inline ipsec in CN9K event mode and in Cn10K event mode and
> poll mode.
>
> Kommula Shiva Shankar (1):
>   common/cnxk: add CQ enable support in NIX Tx path
>
> Nithin Dabilpuram (18):
>   common/cnxk: support CPT parse header dump
>   common/cnxk: allow reuse of SSO API for inline dev
>   common/cnxk: change NIX debug API and queue API interface
>   common/cnxk: support NIX inline device IRQ
>   common/cnxk: support NIX inline device init and fini
>   common/cnxk: support NIX inline inbound and outbound setup
>   common/cnxk: disable CQ drop when inline inbound is enabled
>   common/cnxk: dump CPT LF registers on error intr
>   common/cnxk: align CPT LF enable/disable sequence
>   common/cnxk: restore NIX sqb pool limit before destroy
>   common/cnxk: setup aura BP conf based on nix
>   net/cnxk: support inline security setup for cn9k
>   net/cnxk: support inline security setup for cn10k
>   net/cnxk: support Rx security offload on cn9k
>   net/cnxk: support Tx security offload on cn9k
>   net/cnxk: support Rx security offload on cn10k
>   net/cnxk: support Tx security offload on cn10k
>   net/cnxk: reflect globally enabled offloads in queue conf
>
> Satheesh Paul (2):
>   common/cnxk: support inline IPsec rte flow action
>   net/cnxk: support configuring channel mask via devargs
>
> Srujana Challa (7):
>   common/cnxk: support cn9k fast path security session
>   common/cnxk: support anti-replay check in SW for cn9k
>   net/cnxk: support IPsec anti replay in cn9k
>   net/cnxk: support IPsec transport mode in cn10k
>   net/cnxk: update ethertype for mixed IPsec tunnel versions
>   net/cnxk: allow zero udp6 checksum for non inline device
>   net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1
>
> v3:
> - Rebased and fixed conflicts
>
> v2:
> - Included bug fixes for second pass packets
> - Updated .ini files.
> - Reworded commit messages with additional description
>   and abbreviation fixes


Series Acked-by: Jerin Jacob <jerinj@marvell.com>
Series applied to dpdk-next-net-mrvl/for-next-net. Thanks.

>
>  doc/guides/nics/cnxk.rst                         |  122 +++
>  doc/guides/nics/features/cnxk.ini                |    1 +
>  doc/guides/nics/features/cnxk_vec.ini            |    1 +
>  doc/guides/nics/features/cnxk_vf.ini             |    1 +
>  doc/guides/rel_notes/release_21_11.rst           |    2 +
>  drivers/common/cnxk/cnxk_security.c              |  212 +++++
>  drivers/common/cnxk/cnxk_security.h              |   12 +
>  drivers/common/cnxk/cnxk_security_ar.h           |  184 ++++
>  drivers/common/cnxk/hw/cpt.h                     |   19 +
>  drivers/common/cnxk/meson.build                  |    3 +
>  drivers/common/cnxk/roc_api.h                    |   49 +-
>  drivers/common/cnxk/roc_constants.h              |   58 ++
>  drivers/common/cnxk/roc_cpt.c                    |   54 +-
>  drivers/common/cnxk/roc_cpt.h                    |   10 +
>  drivers/common/cnxk/roc_cpt_debug.c              |   63 +-
>  drivers/common/cnxk/roc_cpt_priv.h               |    1 +
>  drivers/common/cnxk/roc_idev.c                   |    2 +
>  drivers/common/cnxk/roc_idev_priv.h              |    3 +
>  drivers/common/cnxk/roc_io.h                     |    9 +
>  drivers/common/cnxk/roc_io_generic.h             |    3 +-
>  drivers/common/cnxk/roc_irq.c                    |    7 +-
>  drivers/common/cnxk/roc_nix.c                    |    2 +-
>  drivers/common/cnxk/roc_nix.h                    |    7 +
>  drivers/common/cnxk/roc_nix_debug.c              |  168 +++-
>  drivers/common/cnxk/roc_nix_fc.c                 |   23 +-
>  drivers/common/cnxk/roc_nix_inl.c                |  778 +++++++++++++++++
>  drivers/common/cnxk/roc_nix_inl.h                |  170 ++++
>  drivers/common/cnxk/roc_nix_inl_dev.c            |  639 ++++++++++++++
>  drivers/common/cnxk/roc_nix_inl_dev_irq.c        |  359 ++++++++
>  drivers/common/cnxk/roc_nix_inl_priv.h           |   68 ++
>  drivers/common/cnxk/roc_nix_priv.h               |   31 +
>  drivers/common/cnxk/roc_nix_queue.c              |  137 +--
>  drivers/common/cnxk/roc_npc.c                    |   27 +-
>  drivers/common/cnxk/roc_npc_mcam.c               |   28 +-
>  drivers/common/cnxk/roc_platform.h               |   11 +-
>  drivers/common/cnxk/roc_priv.h                   |    3 +
>  drivers/common/cnxk/roc_sso.c                    |   52 +-
>  drivers/common/cnxk/roc_sso_priv.h               |    9 +
>  drivers/common/cnxk/version.map                  |   34 +
>  drivers/event/cnxk/cn10k_eventdev.c              |   93 +-
>  drivers/event/cnxk/cn10k_worker.h                |  147 +++-
>  drivers/event/cnxk/cn10k_worker_deq.c            |    2 +-
>  drivers/event/cnxk/cn10k_worker_deq_burst.c      |    2 +-
>  drivers/event/cnxk/cn10k_worker_deq_ca.c         |    2 +-
>  drivers/event/cnxk/cn10k_worker_deq_tmo.c        |    2 +-
>  drivers/event/cnxk/cn10k_worker_tx_enq.c         |    2 +-
>  drivers/event/cnxk/cn10k_worker_tx_enq_seg.c     |    2 +-
>  drivers/event/cnxk/cn9k_eventdev.c               |  182 ++--
>  drivers/event/cnxk/cn9k_worker.h                 |  170 +++-
>  drivers/event/cnxk/cn9k_worker_deq.c             |    2 +-
>  drivers/event/cnxk/cn9k_worker_deq_burst.c       |    2 +-
>  drivers/event/cnxk/cn9k_worker_deq_ca.c          |    2 +-
>  drivers/event/cnxk/cn9k_worker_deq_tmo.c         |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq.c        |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq_burst.c  |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq_ca.c     |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c    |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_tx_enq.c     |    2 +-
>  drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c |    2 +-
>  drivers/event/cnxk/cn9k_worker_tx_enq.c          |    2 +-
>  drivers/event/cnxk/cn9k_worker_tx_enq_seg.c      |    2 +-
>  drivers/event/cnxk/cnxk_eventdev_adptr.c         |   36 +-
>  drivers/net/cnxk/cn10k_ethdev.c                  |   41 +-
>  drivers/net/cnxk/cn10k_ethdev.h                  |   48 ++
>  drivers/net/cnxk/cn10k_ethdev_sec.c              |  492 +++++++++++
>  drivers/net/cnxk/cn10k_rx.c                      |   31 +-
>  drivers/net/cnxk/cn10k_rx.h                      |  649 +++++++++++---
>  drivers/net/cnxk/cn10k_rx_mseg.c                 |    2 +-
>  drivers/net/cnxk/cn10k_rx_vec.c                  |    4 +-
>  drivers/net/cnxk/cn10k_rx_vec_mseg.c             |    4 +-
>  drivers/net/cnxk/cn10k_tx.c                      |   31 +-
>  drivers/net/cnxk/cn10k_tx.h                      | 1006 +++++++++++++++++++---
>  drivers/net/cnxk/cn10k_tx_mseg.c                 |    2 +-
>  drivers/net/cnxk/cn10k_tx_vec.c                  |    2 +-
>  drivers/net/cnxk/cn10k_tx_vec_mseg.c             |    2 +-
>  drivers/net/cnxk/cn9k_ethdev.c                   |   23 +
>  drivers/net/cnxk/cn9k_ethdev.h                   |   64 ++
>  drivers/net/cnxk/cn9k_ethdev_sec.c               |  382 ++++++++
>  drivers/net/cnxk/cn9k_rx.c                       |   31 +-
>  drivers/net/cnxk/cn9k_rx.h                       |  493 +++++++++--
>  drivers/net/cnxk/cn9k_rx_mseg.c                  |    2 +-
>  drivers/net/cnxk/cn9k_rx_vec.c                   |    2 +-
>  drivers/net/cnxk/cn9k_rx_vec_mseg.c              |    2 +-
>  drivers/net/cnxk/cn9k_tx.c                       |   29 +-
>  drivers/net/cnxk/cn9k_tx.h                       |  393 ++++++---
>  drivers/net/cnxk/cn9k_tx_mseg.c                  |    2 +-
>  drivers/net/cnxk/cn9k_tx_vec.c                   |    2 +-
>  drivers/net/cnxk/cn9k_tx_vec_mseg.c              |    2 +-
>  drivers/net/cnxk/cnxk_ethdev.c                   |  243 +++++-
>  drivers/net/cnxk/cnxk_ethdev.h                   |  125 ++-
>  drivers/net/cnxk/cnxk_ethdev_devargs.c           |   88 +-
>  drivers/net/cnxk/cnxk_ethdev_sec.c               |  315 +++++++
>  drivers/net/cnxk/cnxk_lookup.c                   |   50 +-
>  drivers/net/cnxk/meson.build                     |    3 +
>  drivers/net/cnxk/version.map                     |    5 +
>  usertools/dpdk-devbind.py                        |    8 +-
>  96 files changed, 7686 insertions(+), 918 deletions(-)
>  create mode 100644 drivers/common/cnxk/cnxk_security_ar.h
>  create mode 100644 drivers/common/cnxk/roc_constants.h
>  create mode 100644 drivers/common/cnxk/roc_nix_inl.c
>  create mode 100644 drivers/common/cnxk/roc_nix_inl.h
>  create mode 100644 drivers/common/cnxk/roc_nix_inl_dev.c
>  create mode 100644 drivers/common/cnxk/roc_nix_inl_dev_irq.c
>  create mode 100644 drivers/common/cnxk/roc_nix_inl_priv.h
>  create mode 100644 drivers/net/cnxk/cn10k_ethdev_sec.c
>  create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c
>  create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c
>
> --
> 2.8.4
>


* Re: [dpdk-dev] [PATCH v3 16/28] net/cnxk: support inline security setup for cn9k
  2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 16/28] net/cnxk: support inline security setup for cn9k Nithin Dabilpuram
@ 2021-10-06 16:21     ` Ferruh Yigit
  2021-10-06 16:44       ` Nithin Kumar Dabilpuram
  0 siblings, 1 reply; 91+ messages in thread
From: Ferruh Yigit @ 2021-10-06 16:21 UTC (permalink / raw)
  To: Nithin Dabilpuram, jerinj, Kiran Kumar K, Sunil Kumar Kori,
	Satha Rao, Ray Kinsella, Anatoly Burakov
  Cc: dev

On 10/1/2021 2:40 PM, Nithin Dabilpuram wrote:
> +static int
> +nix_security_release(struct cnxk_eth_dev *dev)
> +{
> +	struct rte_eth_dev *eth_dev = dev->eth_dev;
> +	struct cnxk_eth_sec_sess *eth_sec, *tvar;
> +	struct roc_nix *nix = &dev->nix;
> +	int rc, ret = 0;
> +
> +	/* Cleanup Inline inbound */
> +	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
> +		/* Destroy inbound sessions */
> +		tvar = NULL;
> +		TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
> +			cnxk_eth_sec_ops.session_destroy(eth_dev,
> +							 eth_sec->sess);
> +
> +		/* Clear lookup mem */
> +		cnxk_nix_lookup_mem_sa_base_clear(dev);
> +
> +		rc = roc_nix_inl_inb_fini(nix);
> +		if (rc)
> +			plt_err("Failed to cleanup nix inline inb, rc=%d", rc);
> +		ret |= rc;
> +	}
> +
> +	/* Cleanup Inline outbound */
> +	if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
> +	    dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
> +		/* Destroy outbound sessions */
> +		tvar = NULL;
> +		TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
> +			cnxk_eth_sec_ops.session_destroy(eth_dev,
> +							 eth_sec->sess);


Replacing 'TAILQ_FOREACH_SAFE' with 'RTE_TAILQ_FOREACH_SAFE' on next-net, because of
the following commit in the main repo:

Commit f1f6ebc0eaf6 ("eal: remove sys/queue.h from public headers")
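
For reference, the renamed helper lives in rte_tailq.h and is a
drop-in replacement, e.g.:

  #include <rte_tailq.h>

  struct cnxk_eth_sec_sess *eth_sec, *tvar;

  RTE_TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
          cnxk_eth_sec_ops.session_destroy(eth_dev, eth_sec->sess);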


* Re: [dpdk-dev] [PATCH v3 16/28] net/cnxk: support inline security setup for cn9k
  2021-10-06 16:21     ` Ferruh Yigit
@ 2021-10-06 16:44       ` Nithin Kumar Dabilpuram
  0 siblings, 0 replies; 91+ messages in thread
From: Nithin Kumar Dabilpuram @ 2021-10-06 16:44 UTC (permalink / raw)
  To: Ferruh Yigit, jerinj, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
	Ray Kinsella, Anatoly Burakov
  Cc: dev



On 10/6/21 9:51 PM, Ferruh Yigit wrote:
> On 10/1/2021 2:40 PM, Nithin Dabilpuram wrote:
>> +static int
>> +nix_security_release(struct cnxk_eth_dev *dev)
>> +{
>> +    struct rte_eth_dev *eth_dev = dev->eth_dev;
>> +    struct cnxk_eth_sec_sess *eth_sec, *tvar;
>> +    struct roc_nix *nix = &dev->nix;
>> +    int rc, ret = 0;
>> +
>> +    /* Cleanup Inline inbound */
>> +    if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
>> +        /* Destroy inbound sessions */
>> +        tvar = NULL;
>> +        TAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)
>> +            cnxk_eth_sec_ops.session_destroy(eth_dev,
>> +                             eth_sec->sess);
>> +
>> +        /* Clear lookup mem */
>> +        cnxk_nix_lookup_mem_sa_base_clear(dev);
>> +
>> +        rc = roc_nix_inl_inb_fini(nix);
>> +        if (rc)
>> +            plt_err("Failed to cleanup nix inline inb, rc=%d", rc);
>> +        ret |= rc;
>> +    }
>> +
>> +    /* Cleanup Inline outbound */
>> +    if (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY ||
>> +        dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
>> +        /* Destroy outbound sessions */
>> +        tvar = NULL;
>> +        TAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)
>> +            cnxk_eth_sec_ops.session_destroy(eth_dev,
>> +                             eth_sec->sess);
> 
> 
> Replacing 'TAILQ_FOREACH_SAFE' with 'RTE_TAILQ_FOREACH_SAFE' on 
> next-net, because of
> following commit in the main repo:
> 
> Commit f1f6ebc0eaf6 ("eal: remove sys/queue.h from public headers")

Ack, Thanks.


end of thread

Thread overview: 91+ messages
2021-09-02  2:14 [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 01/27] common/cnxk: add security support for cn9k fast path Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 02/27] common/cnxk: add helper API to dump cpt parse header Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 03/27] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 04/27] common/cnxk: change nix debug API and queue API interface Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 05/27] common/cnxk: add nix inline device irq API Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 06/27] common/cnxk: add nix inline device init and fini Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 07/27] common/cnxk: add nix inline inbound and outbound support API Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 08/27] common/cnxk: dump cpt lf registers on error intr Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 09/27] common/cnxk: align cpt lf enable/disable sequence Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 10/27] common/cnxk: restore nix sqb pool limit before destroy Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 11/27] common/cnxk: add cq enable support in nix Tx path Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 12/27] common/cnxk: setup aura bp conf based on nix Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 13/27] common/cnxk: add anti-replay check implementation for cn9k Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 14/27] common/cnxk: add inline IPsec support in rte flow Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 15/27] net/cnxk: add inline security support for cn9k Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 16/27] net/cnxk: add inline security support for cn10k Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 17/27] net/cnxk: add cn9k Rx support for security offload Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 18/27] net/cnxk: add cn9k Tx " Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 19/27] net/cnxk: add cn10k Rx " Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 20/27] net/cnxk: add cn10k Tx " Nithin Dabilpuram
2021-09-02  2:14 ` [dpdk-dev] [PATCH 21/27] net/cnxk: add cn9k anti replay " Nithin Dabilpuram
2021-09-02  2:15 ` [dpdk-dev] [PATCH 22/27] net/cnxk: add cn10k IPsec transport mode support Nithin Dabilpuram
2021-09-02  2:15 ` [dpdk-dev] [PATCH 23/27] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
2021-09-02  2:15 ` [dpdk-dev] [PATCH 24/27] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
2021-09-02  2:15 ` [dpdk-dev] [PATCH 25/27] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
2021-09-02  2:15 ` [dpdk-dev] [PATCH 26/27] net/cnxk: add devargs for configuring channel mask Nithin Dabilpuram
2021-09-02  2:15 ` [dpdk-dev] [PATCH 27/27] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
2021-09-29 12:44 ` [dpdk-dev] [PATCH 00/27] net/cnxk: support for inline ipsec Jerin Jacob
2021-09-30 17:00 ` [dpdk-dev] [PATCH v2 00/28] " Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 02/28] common/cnxk: support CPT parse header dump Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 03/28] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 04/28] common/cnxk: change NIX debug API and queue API interface Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 05/28] common/cnxk: support NIX inline device IRQ Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 06/28] common/cnxk: support NIX inline device init and fini Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 07/28] common/cnxk: support NIX inline inbound and outbound setup Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 08/28] common/cnxk: disable CQ drop when inline inbound is enabled Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 09/28] common/cnxk: dump CPT LF registers on error intr Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 10/28] common/cnxk: align CPT LF enable/disable sequence Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 11/28] common/cnxk: restore NIX sqb pool limit before destroy Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 12/28] common/cnxk: add CQ enable support in NIX Tx path Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 13/28] common/cnxk: setup aura BP conf based on nix Nithin Dabilpuram
2021-09-30 17:00   ` [dpdk-dev] [PATCH v2 14/28] common/cnxk: support anti-replay check in SW for cn9k Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 15/28] common/cnxk: support inline IPsec rte flow action Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 16/28] net/cnxk: support inline security setup for cn9k Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 17/28] net/cnxk: support inline security setup for cn10k Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 18/28] net/cnxk: support Rx security offload on cn9k Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 19/28] net/cnxk: support Tx " Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 20/28] net/cnxk: support Rx security offload on cn10k Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 21/28] net/cnxk: support Tx " Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 22/28] net/cnxk: support IPsec anti replay in cn9k Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 23/28] net/cnxk: support IPsec transport mode in cn10k Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 25/28] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 27/28] net/cnxk: support configuring channel mask via devargs Nithin Dabilpuram
2021-09-30 17:01   ` [dpdk-dev] [PATCH v2 28/28] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
2021-10-01  5:37   ` [dpdk-dev] [PATCH v2 00/28] net/cnxk: support for inline ipsec Jerin Jacob
2021-10-01 13:39 ` [dpdk-dev] [PATCH v3 " Nithin Dabilpuram
2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 01/28] common/cnxk: support cn9k fast path security session Nithin Dabilpuram
2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 02/28] common/cnxk: support CPT parse header dump Nithin Dabilpuram
2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 03/28] common/cnxk: allow reuse of SSO API for inline dev Nithin Dabilpuram
2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 04/28] common/cnxk: change NIX debug API and queue API interface Nithin Dabilpuram
2021-10-01 13:39   ` [dpdk-dev] [PATCH v3 05/28] common/cnxk: support NIX inline device IRQ Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 06/28] common/cnxk: support NIX inline device init and fini Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 07/28] common/cnxk: support NIX inline inbound and outbound setup Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 08/28] common/cnxk: disable CQ drop when inline inbound is enabled Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 09/28] common/cnxk: dump CPT LF registers on error intr Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 10/28] common/cnxk: align CPT LF enable/disable sequence Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 11/28] common/cnxk: restore NIX sqb pool limit before destroy Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 12/28] common/cnxk: add CQ enable support in NIX Tx path Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 13/28] common/cnxk: setup aura BP conf based on nix Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 14/28] common/cnxk: support anti-replay check in SW for cn9k Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 15/28] common/cnxk: support inline IPsec rte flow action Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 16/28] net/cnxk: support inline security setup for cn9k Nithin Dabilpuram
2021-10-06 16:21     ` Ferruh Yigit
2021-10-06 16:44       ` Nithin Kumar Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 17/28] net/cnxk: support inline security setup for cn10k Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 18/28] net/cnxk: support Rx security offload on cn9k Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 19/28] net/cnxk: support Tx " Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 20/28] net/cnxk: support Rx security offload on cn10k Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 21/28] net/cnxk: support Tx " Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 22/28] net/cnxk: support IPsec anti replay in cn9k Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 23/28] net/cnxk: support IPsec transport mode in cn10k Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 24/28] net/cnxk: update ethertype for mixed IPsec tunnel versions Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 25/28] net/cnxk: allow zero udp6 checksum for non inline device Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 26/28] net/cnxk: add crypto capabilities for AES CBC and HMAC SHA1 Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 27/28] net/cnxk: support configuring channel mask via devargs Nithin Dabilpuram
2021-10-01 13:40   ` [dpdk-dev] [PATCH v3 28/28] net/cnxk: reflect globally enabled offloads in queue conf Nithin Dabilpuram
2021-10-02 13:49   ` [dpdk-dev] [PATCH v3 00/28] net/cnxk: support for inline ipsec Jerin Jacob
