DPDK patches and discussions
* [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
@ 2018-08-24 16:53 Konstantin Ananyev
  2018-09-03 12:41 ` Joseph, Anoob
                   ` (20 more replies)
  0 siblings, 21 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-08-24 16:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal, Declan Doherty

This RFC introduces a new library within DPDK: librte_ipsec.
The aim is to provide a DPDK-native high-performance library for IPsec
data-path processing.
The library is supposed to utilize the existing DPDK crypto-dev and
security APIs to provide applications with a transparent IPsec processing API.
The library concentrates on data-path protocol processing (ESP and AH);
IKE protocol(s) implementation is out of scope for this library,
though hook/callback mechanisms will be defined to allow integrating it
with existing IKE implementations.
Due to the quite complex nature of the IPsec protocol suite and the variety
of user requirements and usage scenarios, a few API levels will be provided:
1) Security Association (SA-level) API
    Operates at SA level, provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
       add/remove ESP/AH related headers and data, etc.).   
2) Security Association Database (SAD) API
    API to create/manage/destroy IPsec SAD.
    While DPDK IPsec library plans to have its own implementation,
    the intention is to keep it as independent from the other parts
    of IPsec library as possible.
    That is supposed to give users the ability to provide their own
    implementation of the SAD compatible with the other parts of the
    IPsec library.
3) IPsec Context (CTX) API
    This is supposed to be a high-level API, where each IPsec CTX is an
    abstraction of 'independent copy of the IPsec stack'.
    CTX owns a set of SAs, SADs, the crypto-dev queues assigned to it, etc.
    and provides:
    - de-multiplexing the stream of inbound packets to particular SAs and
      further IPsec related processing.
    - IPsec related processing for the outbound packets.
    - SA add/delete/update functionality
  
The current RFC concentrates on the SA-level API only (1);
detailed discussion of 2) and 3) will be the subject of separate RFC(s).

SA (low) level API
==================

The API described below operates at the SA level.
It provides functionality that allows the user to process
inbound and outbound IPsec packets for a given SA.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

Processed inbound/outbound packets can be grouped by a user-provided
flow id (an opaque 64-bit number associated by the user with a given SA).

The SA-level API is built on top of the crypto-dev/security APIs and relies
on them to perform the actual cipher and integrity checking.
Due to the nature of the crypto-dev API (enqueue/dequeue model) an
asynchronous API is used for IPsec packets destined to be processed
by a crypto-device:
rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
For packets destined for inline processing no such extra overhead is
required, so a simple synchronous API, rte_ipsec_inline_process(),
is introduced for that case.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

Below is a brief (and simplified) overview of the expected SA-level
API usage.

/* allocate and initialize SA */
size_t sz = rte_ipsec_sa_size();
struct rte_ipsec_sa *sa = rte_malloc(NULL, sz, RTE_CACHE_LINE_SIZE);
struct rte_ipsec_sa_prm prm;
/* fill prm */
rc = rte_ipsec_sa_init(sa, &prm);
if (rc != 0) { /* handle error */ }
.....

/* process inbound/outbound IPsec packets that belong to given SA */

/* inline IPsec processing was done for these packets */
if (use_inline_ipsec)
       n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
/* use crypto-device to process the packets */
else {
     struct rte_crypto_op *cops[nb_pkts];
     struct rte_ipsec_group grp[nb_pkts];

      ....
     /* prepare crypto ops */
     n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
     /* enqueue crypto ops to related crypto-dev */
     n = rte_cryptodev_enqueue_burst(..., cops, n);
     if (n != nb_pkts) { /* handle failed packets */ }
     /* dequeue finished crypto ops from related crypto-dev */
     n = rte_cryptodev_dequeue_burst(..., cops, nb_pkts);
     /* finish IPsec processing for associated packets */
     n = rte_ipsec_crypto_process(cops, pkts, grp, n);
     /* now we have <n> groups of packets grouped by SA flow id */
    ....
 }
...

/* uninit given SA */
rte_ipsec_sa_fini(sa);
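
Going back to the crypto-dev case above: a sketch of how the returned <n>
groups might be consumed, based on the rte_ipsec_group definition in this
patch (handle_flow() is just a placeholder for application code, it is not
part of the proposed API):

/* walk the groups returned by rte_ipsec_crypto_process() */
for (i = 0; i != n; i++) {
	/* all grp[i].cnt packets in grp[i].m[] share the same flow id */
	handle_flow(grp[i].flowid, grp[i].m, grp[i].cnt);
}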

Planned scope for 18.11:
========================

- SA-level API definition
- ESP tunnel mode support (both IPv4/IPv6)
- Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
- UT
 
Note: still WIP, so not all functionality planned for 18.11 is in place.

Post 18.11:
===========
- ESP transport mode support (both IPv4/IPv6)
- update examples/ipsec-secgw to use librte_ipsec
- SAD and high-level API definition and implementation


Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 +
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/pad.h                 |  45 ++
 lib/librte_ipsec/rte_ipsec.h           | 245 +++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  13 +
 lib/librte_ipsec/sa.c                  | 921 +++++++++++++++++++++++++++++++++
 lib/librte_net/rte_esp.h               |  10 +-
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 11 files changed, 1278 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c

diff --git a/config/common_base b/config/common_base
index 4bcbaf923..c95602c05 100644
--- a/config/common_base
+++ b/config/common_base
@@ -879,6 +879,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
 #
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
+#
 # Compile the test application
 #
 CONFIG_RTE_APP_TEST=y
diff --git a/lib/Makefile b/lib/Makefile
index afa604e20..58998dedd 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -105,6 +105,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 
 ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
 DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..15441cf41
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..79c55a8be
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..d1154eede
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,245 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ * IKEv2 protocol support right now is out of scope of this draft.
+ * Though it tries to define the related API in such a way that it could be
+ * adopted by an IKEv2 implementation.
+ */
+
+#include <rte_common.h>
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t flowid; /**< provided and interpreted by user */
+	struct rte_security_ipsec_xform ipsec_xform; /**< SA configuration */
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	struct {
+		enum rte_security_session_action_type type;
+		struct rte_security_ctx *sctx;
+		struct rte_security_session *sses;
+		uint32_t ol_flags;
+	} sec; /**< rte_security related parameters */
+
+	struct {
+		struct rte_crypto_sym_xform *xform;
+		struct rte_mempool *pool;
+		/**< pool for rte_cryptodev_sym_session */
+		const uint8_t *devid;
+		/**< array of cryptodevs that can be used by that SA */
+		uint32_t nb_dev; /**< number of elements in devid[] */
+	} crypto; /**< rte_cryptodev related parameters */
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - AUTH/CRYPT/AEAD algorithm
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG_IPV,
+	RTE_SATP_LOG_PROTO,
+	RTE_SATP_LOG_DIR,
+	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_USE = RTE_SATP_LOG_MODE + 2,
+	RTE_SATP_LOG_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
+
+#define RTE_IPSEC_SATP_USE_MASK		(1ULL << RTE_SATP_LOG_USE)
+#define RTE_IPSEC_SATP_USE_LKSD		(0ULL << RTE_SATP_LOG_USE)
+#define RTE_IPSEC_SATP_USE_INLN		(1ULL << RTE_SATP_LOG_USE)
+
+/**
+ * get type of given SA
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * cleanup SA
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+/**
+ * get SA size
+ * @return
+ *   size required for rte_ipsec_sa instance.
+ */
+size_t __rte_experimental
+rte_ipsec_sa_size(void);
+
+
+/**
+ * Used to group mbufs by flowid, sa, etc.
+ * See below for particular usages.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t flowid;
+		struct rte_ipsec_sa *sa;
+	}; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * For input mbufs and given SA prepare crypto ops that can be enqueued
+ * into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * @param sa
+ *   Pointer to SA object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets; in case of failure rte_errno is set.
+ */
+uint16_t __rte_experimental
+rte_ipsec_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num);
+
+/**
+ * Process crypto ops dequeued from the crypto-dev, apply necessary
+ * changes to related mbufs and group them by user-defined *flowid*.
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields updated.
+ * outbound - encrypted, ICV attached, IP headers updated,
+ * ESP/AH fields added, related mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly setup.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* to output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array, or if *grp* is NULL, then
+ *   number of filled elements in *mb* array.
+ * Note: input crypto_ops can represent mbufs that belong to different SAs.
+ * So the grp parameter allows returning mbufs grouped by user-defined
+ * *flowid*.
+ * If no grouping is desired, grp can be set to NULL.
+ */
+uint16_t __rte_experimental
+rte_ipsec_crypto_process(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num);
+
+/**
+ * Process packets that are subject to inline IPsec offload.
+ * It is up to the caller to figure out whether the given SA and input
+ * packets are eligible for inline IPsec.
+ * expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly setup, if necessary - IP headers updated, ESP(AH) fields added.
+ * @param sa
+ *   Pointer to SA object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets; in case of failure rte_errno is set.
+ */
+uint16_t __rte_experimental
+rte_ipsec_inline_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	uint16_t num);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..9b79b3ad0
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,13 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_crypto_prepare;
+	rte_ipsec_crypto_process;
+	rte_ipsec_inline_process;
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..0c293f40f
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,921 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+#include <rte_cryptodev.h>
+#include "pad.h"
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	(2 * sizeof(uint64_t))
+
+#define	IPSEC_MAX_CRYPTO_DEVS	(UINT8_MAX + 1)
+
+/* ??? these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+	__uint128_t raw;
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;   /* type of given SA */
+	uint64_t flowid; /* user defined */
+	uint32_t spi;
+	uint32_t salt;
+	uint64_t sqn;
+	uint64_t *iv_ptr;
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t iv_len;
+	uint8_t pad_align;
+	uint8_t proto;    /* next proto */
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+		uint8_t type;
+		uint8_t status;
+		uint8_t sess_type;
+	} ctp;
+	struct {
+		uint64_t v8;
+		uint64_t v[IPSEC_MAX_IV_SIZE / sizeof(uint64_t)];
+	} iv;
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	struct {
+		struct rte_security_session *sec;
+		uint32_t ol_flags;
+		struct rte_security_ctx *sctx;
+
+		/*
+		 * !!! should be removed if we do crypto sym session properly
+		 * bitmap of crypto devs for which that session was initialised.
+		 */
+		rte_ymm_t cdev_bmap;
+
+		/*
+		 * !!! as alternative we need a space in cryptodev_sym_session
+		 * to store ptr to SA (uint64_t udata or so).
+		 */
+		struct rte_cryptodev_sym_session crypto;
+	} session __rte_cache_min_aligned;
+
+} __rte_cache_aligned;
+
+#define	CS_TO_SA(cs)	((cs) - offsetof(struct rte_ipsec_sa, session.crypto))
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+static inline struct rte_ipsec_sa *
+cses2sa(uintptr_t p)
+{
+	p -= offsetof(struct rte_ipsec_sa, session.crypto);
+	return (struct rte_ipsec_sa *)p;
+}
+
+static int
+check_crypto_xform(struct crypto_xform *xform)
+{
+	uintptr_t p;
+
+	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
+
+	/* either aead or both auth and cipher should be not NULLs */
+	if (xform->aead) {
+		if (p)
+			return -EINVAL;
+	} else if (xform->auth == NULL || xform->cipher == NULL) {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+fill_crypto_xform(struct crypto_xform *xform,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf;
+
+	memset(xform, 0, sizeof(*xform));
+
+	for (xf = prm->crypto.xform; xf != NULL; xf = xf->next) {
+		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->auth != NULL)
+				return -EINVAL;
+			xform->auth = &xf->auth;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->cipher != NULL)
+				return -EINVAL;
+			xform->cipher = &xf->cipher;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+			if (xform->aead != NULL)
+				return -EINVAL;
+			xform->aead = &xf->aead;
+		} else
+			return -EINVAL;
+	}
+
+	return check_crypto_xform(xform);
+}
+
+/*
+ * !!! we might not need session fini - if cryptodev layer would have similar
+ * functionality.
+ */
+static void
+crypto_session_fini(struct rte_ipsec_sa *sa)
+{
+	uint64_t v;
+	size_t sz;
+	uint32_t i, j;
+
+	sz = sizeof(sa->session.cdev_bmap.u64[0]) * CHAR_BIT;
+
+	for (i = 0; i != RTE_DIM(sa->session.cdev_bmap.u64); i++) {
+
+		v = sa->session.cdev_bmap.u64[i];
+		for (j = 0; v != 0; v >>= 1, j++) {
+			if ((v & 1) != 0)
+				rte_cryptodev_sym_session_clear(i * sz + j,
+					&sa->session.crypto);
+		}
+		sa->session.cdev_bmap.u64[i] = 0;
+	}
+}
+
+static int
+crypto_session_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	size_t sz;
+	uint32_t i, k;
+	int32_t rc;
+
+	rc = 0;
+	sz = sizeof(sa->session.cdev_bmap.u64[0]) * CHAR_BIT;
+
+	for (i = 0; i != prm->crypto.nb_dev; i++) {
+
+		k = prm->crypto.devid[i];
+		rc = rte_cryptodev_sym_session_init(k, &sa->session.crypto,
+			prm->crypto.xform, prm->crypto.pool);
+		if (rc != 0)
+			break;
+		sa->session.cdev_bmap.u64[k / sz] |= 1ULL << (k % sz);
+	}
+
+	return rc;
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+size_t __rte_experimental
+rte_ipsec_sa_size(void)
+{
+	size_t sz;
+
+	sz = sizeof(struct rte_ipsec_sa) +
+		rte_cryptodev_sym_get_header_session_size();
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	size_t sz;
+
+	sz = rte_ipsec_sa_size();
+	crypto_session_fini(sa);
+	memset(sa, 0, sz);
+}
+
+static uint64_t
+fill_sa_type(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	} else {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	}
+
+	/* !!! only limited inline ipsec support right now */
+	if (prm->sec.type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO)
+		tp |= RTE_IPSEC_SATP_USE_INLN;
+	else
+		tp |= RTE_IPSEC_SATP_USE_LKSD;
+
+	return tp;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->ctp.auth.length + sa->ctp.cipher.offset;
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = sa->hdr_len;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+	sa->ctp.cipher.length = sa->iv_len;
+}
+
+static int
+esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	int32_t rc = 0;
+
+	if (cxf->aead != NULL) {
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->aad_len = cxf->aead->aad_length;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_len = cxf->aead->iv.length;
+		sa->iv_ptr = sa->iv.v;
+		sa->pad_align = 4;
+	} else {
+		sa->aad_len = 0;
+		sa->icv_len = cxf->auth->digest_length;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = 4;
+			sa->iv_len = 0;
+			sa->iv_ptr = sa->iv.v;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = sizeof(sa->iv.v);
+			sa->iv_len = sizeof(sa->iv.v);
+			sa->iv_ptr = sa->iv.v;
+			memset(sa->iv.v, 0, sizeof(sa->iv.v));
+		} else
+			return -EINVAL;
+	}
+
+	sa->type = fill_sa_type(prm);
+	sa->flowid = prm->flowid;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+	sa->sqn = 0;
+
+	sa->proto = prm->tun.next_proto;
+
+	if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB)
+		esp_inb_tun_init(sa);
+	else
+		esp_outb_tun_init(sa, prm);
+
+	sa->ctp.type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+	sa->ctp.status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+	sa->ctp.sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+
+	/* pass info required for inline outbound */
+	sa->session.sctx = prm->sec.sctx;
+	sa->session.sec = prm->sec.sses;
+	sa->session.ol_flags = prm->sec.ol_flags;
+
+	if ((sa->type & RTE_IPSEC_SATP_USE_MASK) != RTE_IPSEC_SATP_USE_INLN)
+		rc = crypto_session_init(sa, prm);
+	return rc;
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	int32_t rc;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* only esp inbound and outbound tunnel is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP ||
+			prm->ipsec_xform.mode !=
+			RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
+		return -EINVAL;
+
+	/* only inline crypto or none action type are supported */
+	if (!(prm->sec.type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
+			prm->sec.type == RTE_SECURITY_ACTION_TYPE_NONE))
+		return -EINVAL;
+
+	if (prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, prm);
+	if (rc != 0)
+		return rc;
+
+	rc = esp_tun_init(sa, prm, &cxf);
+	if (rc != 0)
+		rte_ipsec_sa_fini(sa);
+
+	return rc;
+}
+
+static inline void
+esp_outb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+
+	cop->type = sa->ctp.type;
+	cop->status = sa->ctp.status;
+	cop->sess_type = sa->ctp.sess_type;
+
+	sop = cop->sym;
+
+	/* fill sym op fields */
+	sop->session = (void *)(uintptr_t)&sa->session.crypto;
+	sop->m_src = mb;
+
+	sop->cipher.data.offset = sa->ctp.cipher.offset;
+	sop->cipher.data.length = sa->ctp.cipher.length + plen;
+	sop->auth.data.offset = sa->ctp.auth.offset;
+	sop->auth.data.length = sa->ctp.auth.length + plen;
+	sop->auth.digest.data = icv->va;
+	sop->auth.digest.phys_addr = icv->pa;
+
+	/* !!! fill sym op aead fields */
+}
+
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, pdlen, pdofs, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+	/* calculate padding and tail space required */
+
+	/* number of bytes to encrypt */
+	clen = mb->pkt_len + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - mb->pkt_len;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		struct ipv4_hdr *l3h;
+		l3h = (struct ipv4_hdr *)(ph + sa->hdr_l3_off);
+		l3h->packet_id = rte_cpu_to_be_16(sa->sqn);
+		l3h->total_length = rte_cpu_to_be_16(mb->pkt_len -
+			sa->hdr_l3_off);
+	} else {
+		struct ipv6_hdr *l3h;
+		l3h = (struct ipv6_hdr *)(ph + sa->hdr_l3_off);
+		l3h->payload_len = rte_cpu_to_be_16(mb->pkt_len -
+			sa->hdr_l3_off - sizeof(*l3h));
+	}
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+
+	esph->spi = sa->spi;
+	esph->seq = rte_cpu_to_be_32(sa->sqn);
+	rte_memcpy(iv, sa->iv_ptr, sa->iv_len);
+
+	/* offset for ICV */
+	pdofs += pdlen;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	/* !!! fill aad fields, if any (aad fields are placed after icv) */
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+static inline uint32_t
+esn_outb_check_sqn(struct rte_ipsec_sa *sa, uint32_t num)
+{
+	RTE_SET_USED(sa);
+	return num;
+}
+
+static inline int
+esn_inb_check_sqn(struct rte_ipsec_sa *sa, uint32_t sqn)
+{
+	RTE_SET_USED(sa);
+	RTE_SET_USED(sqn);
+	return 0;
+}
+
+static inline uint16_t
+esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, n;
+	union sym_op_data icv;
+
+	n = esn_outb_check_sqn(sa, num);
+
+	for (i = 0; i != n; i++) {
+
+		sa->sqn++;
+		sa->iv.v8 = rte_cpu_to_be_64(sa->sqn);
+
+		/* update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, mb[i], &icv);
+		if (rc < 0) {
+			rte_errno = -rc;
+			break;
+		}
+
+		/* update crypto op */
+		esp_outb_tun_cop_prepare(cop[i], sa, mb[i], &icv, rc);
+	}
+
+	return i;
+}
+
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	cop->type = sa->ctp.type;
+	cop->status = sa->ctp.status;
+	cop->sess_type = sa->ctp.sess_type;
+
+	sop = cop->sym;
+
+	/* fill sym op fields */
+	sop->session = (void *)(uintptr_t)&sa->session.crypto;
+	sop->m_src = mb;
+
+	sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+	sop->cipher.data.length = clen;
+	sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+	sop->auth.data.length = plen - sa->ctp.auth.length;
+	sop->auth.digest.data = icv->va;
+	sop->auth.digest.phys_addr = icv->pa;
+
+	/* !!! fill sym op aead fields */
+
+	/* copy iv from the input packet to the cop */
+	ivc = (uint64_t *)(sop + 1);
+	ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+	rte_memcpy(ivc, ivp, sa->iv_len);
+	return 0;
+}
+
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	struct rte_mbuf *ml;
+	uint32_t icv_ofs, plen;
+
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	return plen;
+}
+
+static inline uint16_t
+esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, hl;
+	union sym_op_data icv;
+
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, mb[i], hl, &icv);
+
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[i], sa, mb[i], &icv,
+				hl, rc);
+		if (rc < 0) {
+			rte_errno = -rc;
+			break;
+		}
+	}
+
+	return i;
+}
+
+uint16_t __rte_experimental
+rte_ipsec_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		return esp_inb_tun_prepare(sa, mb, cop, num);
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		return esp_outb_tun_prepare(sa, mb, cop, num);
+	default:
+		rte_errno = ENOTSUP;
+		return 0;
+	}
+}
+
+/*
+ * !!! create something more generic (and smarter)
+ * ideally in librte_mbuf
+ */
+static inline void
+free_mbuf_bulk(struct rte_mbuf *mb[], uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		rte_pktmbuf_free(mb[i]);
+}
+
+/* exclude NULLs from the final list of packets. */
+static inline uint32_t
+compress_pkt_list(struct rte_mbuf *pkt[], uint32_t nb_pkt, uint32_t nb_zero)
+{
+	uint32_t i, j, k, l;
+
+	for (j = nb_pkt; nb_zero != 0 && j-- != 0; ) {
+
+		/* found a hole. */
+		if (pkt[j] == NULL) {
+
+			/* find how big is it. */
+			for (i = j; i-- != 0 && pkt[i] == NULL; )
+				;
+			/* fill the hole. */
+			for (k = j + 1, l = i + 1; k != nb_pkt; k++, l++)
+				pkt[l] = pkt[k];
+
+			nb_pkt -= j - i;
+			nb_zero -= j - i;
+			j = i + 1;
+		}
+	}
+
+	return nb_pkt;
+}
+
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t icv_len)
+{
+	uint32_t hlen, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+
+	/*
+	 * check spi, sqn, padding and next proto.
+	 * drop packet if something is wrong.
+	 * ??? consider moving the spi and sqn checks to prepare.
+	 */
+
+	pd = (char *)espt - espt->pad_len;
+	if (esph->spi != sa->spi ||
+			esn_inb_check_sqn(sa, esph->seq) != 0 ||
+			 espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	return 0;
+}
+
+static inline uint16_t
+esp_inb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k, icv_len;
+
+	icv_len = sa->icv_len;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], icv_len)) {
+			dr[k++] = mb[i];
+			mb[i] = NULL;
+		}
+	}
+
+	if (k != 0)
+		compress_pkt_list(mb, num, k);
+
+	return num - k;
+}
+
+static inline uint16_t
+esp_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_mbuf *dr[], uint16_t num)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		return esp_inb_tun_pkt_process(sa, mb, dr, num);
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		return num;
+	default:
+		return 0;
+	}
+}
+
+static inline uint16_t
+esp_process(const struct rte_crypto_op *cop[], struct rte_mbuf *mb[],
+	struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t cnt, i, j, k, n;
+	uintptr_t ns, ps;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = 0;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = (uintptr_t)cop[i]->sym[0].session;
+
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
+			dr[k++] = m;
+			continue;
+		}
+
+		if (ps != ns) {
+
+			if (ps != 0) {
+				sa = cses2sa(ps);
+
+				/* setup count for current group */
+				grp[n].cnt = mb + j - grp[n].m;
+
+				/* do SA type specific processing */
+				cnt = esp_pkt_process(sa, grp[n].m, dr + k,
+					grp[n].cnt);
+
+				/* some packets were dropped */
+				cnt = grp[n].cnt - cnt;
+				if (cnt != 0) {
+					grp[n].cnt -= cnt;
+					j -= cnt;
+					k += cnt;
+				}
+
+				/* open new group */
+				n++;
+			}
+
+			grp[n].flowid = cses2sa(ns)->flowid;
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	if (ps != 0) {
+		sa = cses2sa(ps);
+		grp[n].cnt = mb + j - grp[n].m;
+		cnt = esp_pkt_process(sa, grp[n].m, dr + k, grp[n].cnt);
+		cnt = grp[n].cnt - cnt;
+		if (cnt != 0) {
+			grp[n].cnt -= cnt;
+			j -= cnt;
+			k += cnt;
+		}
+		n++;
+	}
+
+	if (k != 0)
+		free_mbuf_bulk(dr, k);
+
+	return n;
+}
+
+uint16_t __rte_experimental
+rte_ipsec_crypto_process(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	return esp_process(cop, mb, grp, num);
+}
+
+static inline uint16_t
+inline_outb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i;
+	struct rte_mbuf *m;
+	int rc;
+	union sym_op_data icv;
+
+	for (i = 0; i != num; i++) {
+		m = mb[i];
+
+		sa->sqn++;
+		sa->iv.v8 = rte_cpu_to_be_64(sa->sqn);
+
+		/* update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, m, &icv);
+		if (rc < 0) {
+			rte_errno = -rc;
+			break;
+		}
+
+		m->ol_flags |= PKT_TX_SEC_OFFLOAD;
+
+		if (sa->session.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+			rte_security_set_pkt_metadata(sa->session.sctx,
+				sa->session.sec, m, NULL);
+	}
+
+	return i;
+}
+
+static inline uint16_t
+inline_inb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, icv_len;
+	int rc;
+
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		rc = esp_inb_tun_single_pkt_process(sa, mb[i], icv_len);
+		if (rc != 0) {
+			rte_errno = -rc;
+			break;
+		}
+	}
+
+	return i;
+}
+
+uint16_t __rte_experimental
+rte_ipsec_inline_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		return inline_inb_tun_pkt_process(sa, mb, num);
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		return inline_outb_tun_pkt_process(sa, mb, num);
+	default:
+		rte_errno = ENOTSUP;
+	}
+
+	return 0;
+}
diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/meson.build b/lib/meson.build
index eb91f100b..bb07e67bd 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -21,6 +21,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'meter', 'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	# ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index de33883be..7f4344ecd 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -62,6 +62,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.13.6


* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2018-09-03 12:41 ` Joseph, Anoob
  2018-09-03 18:21   ` Ananyev, Konstantin
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 0/9] " Konstantin Ananyev
                   ` (19 subsequent siblings)
  20 siblings, 1 reply; 194+ messages in thread
From: Joseph, Anoob @ 2018-09-03 12:41 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: Mohammad Abdul Awal, Declan Doherty, Jerin Jacob, Narayana Prasad

Hi Konstantin,

Few comments. Please see inline.

Thanks,

Anoob

On 24-08-2018 22:23, Konstantin Ananyev wrote:
> External Email
>
> This RFC introduces a new library within DPDK: librte_ipsec.
> The aim is to provide DPDK native high performance library for IPsec
> data-path processing.
> The library is supposed to utilize existing DPDK crypto-dev and
> security API to provide application with transparent IPsec processing API.
> The library is concentrated on data-path protocols processing (ESP and AH),
> IKE protocol(s) implementation is out of scope for that library.
> Though hook/callback mechanisms will be defined to allow integrate it
> with existing IKE implementations.
> Due to quite complex nature of IPsec protocol suite and variety of user
> requirements and usage scenarios a few API levels will be provided:
> 1) Security Association (SA-level) API
>      Operates at SA level, provides functions to:
>      - initialize/teardown SA object
>      - process inbound/outbound ESP/AH packets associated with the given SA
>        (decrypt/encrypt, authenticate, check integrity,
>         add/remove ESP/AH related headers and data, etc.).
> 2) Security Association Database (SAD) API
>      API to create/manage/destroy IPsec SAD.
>      While DPDK IPsec library plans to have its own implementation,
>      the intention is to keep it as independent from the other parts
>      of IPsec library as possible.
>      That is supposed to give users the ability to provide their own
>      implementation of the SAD compatible with the other parts of the
>      IPsec library.
> 3) IPsec Context (CTX) API
>      This is supposed to be a high-level API, where each IPsec CTX is an
>      abstraction of 'independent copy of the IPsec stack'.
>      CTX owns set of SAs, SADs and assigned to it crypto-dev queues, etc.
>      and provides:
>      - de-multiplexing stream of inbound packets to particular SAs and
>        further IPsec related processing.
>      - IPsec related processing for the outbound packets.
>      - SA add/delete/update functionality
[Anoob]: Security Policy is an important aspect of IPsec. An IPsec 
library without a Security Policy API would be incomplete. For inline 
protocol offload, the final SP-SA check (selector check) is the only 
IPsec part being done by ipsec-secgw now. It would make sense to add 
that in the library as well.
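
To illustrate (purely hypothetical struct and function, not something from
the patch): for inline protocol offload the remaining IPsec work is
essentially a check that the decrypted inner packet matches the selectors
of the policy the SA was negotiated for, e.g. for an IPv4 src/dst selector:

/* hypothetical SP entry with one IPv4 src/dst selector (all fields BE) */
struct sp_selector {
	uint32_t src_addr;
	uint32_t src_mask;
	uint32_t dst_addr;
	uint32_t dst_mask;
};

/* h points to the inner IPv4 header (struct ipv4_hdr from rte_ip.h) */
static int
sp_selector_check(const struct sp_selector *sp, const struct ipv4_hdr *h)
{
	if ((h->src_addr & sp->src_mask) != sp->src_addr ||
			(h->dst_addr & sp->dst_mask) != sp->dst_addr)
		return -1; /* inner packet doesn't match the policy: drop */
	return 0;
}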
> Current RFC concentrates on SA-level API only (1),
> detailed discussion for 2) and 3) will be subjects for separate RFC(s).
>
> SA (low) level API
> ==================
>
> API described below operates on SA level.
> It provides functionality that allows user for given SA to process
> inbound and outbound IPsec packets.
> To be more specific:
> - for inbound ESP/AH packets perform decryption, authentication,
>    integrity checking, remove ESP/AH related headers
[Anoob] Anti-replay check would also be required.
> - for outbound packets perform payload encryption, attach ICV,
>    update/add IP headers, add ESP/AH headers/trailers,
>    setup related mbuf fields (ol_flags, tx_offloads, etc.).
[Anoob] Do we have any plans to handle ESN expiry? Some means to 
initiate an IKE renegotiation? I'm assuming application won't be aware 
of the sequence numbers, in this case.
> - initialize/un-initialize given SA based on user provided parameters.
>
> Processed inbound/outbound packets could be grouped by user provided
> flow id (opaque 64-bit number associated by user with given SA).
>
> SA-level API is based on top of crypto-dev/security API and relies on them
> to perform actual cipher and integrity checking.
> Due to the nature of crypto-dev API (enqueue/dequeue model) we use
> asynchronous API for IPsec packets destined to be processed
> by crypto-device:
> rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
> rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
> Though for packets destined for inline processing no extra overhead
> is required and simple and synchronous API: rte_ipsec_inline_process()
> is introduced for that case.
[Anoob] The API should include event-delivery as a crypto-op completion 
mechanism as well. The application could configure the event crypto 
adapter and then enqueue and dequeue to crypto device using events (via 
event dev).
> The following functionality:
>    - match inbound/outbound packets to particular SA
>    - manage crypto/security devices
>    - provide SAD/SPD related functionality
>    - determine what crypto/security device has to be used
>      for given packet(s)
> is out of scope for SA-level API.
>
> Below is the brief (and simplified) overview of expected SA-level
> API usage.
>
> /* allocate and initialize SA */
> size_t sz = rte_ipsec_sa_size();
> struct rte_ipsec_sa *sa = rte_malloc(sz);
> struct rte_ipsec_sa_prm prm;
> /* fill prm */
> rc = rte_ipsec_sa_init(sa, &prm);
> if (rc != 0) { /*handle error */}
> .....
>
> /* process inbound/outbound IPsec packets that belongs to given SA */
>
> /* inline IPsec processing was done for these packets */
> if (use_inline_ipsec)
>         n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
> /* use crypto-device to process the packets */
> else {
>       struct rte_crypto_op *cop[nb_pkts];
>       struct rte_ipsec_group grp[nb_pkts];
>
>        ....
>       /* prepare crypto ops */
>       n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
>       /* enqueue crypto ops to related crypto-dev */
>       n =  rte_cryptodev_enqueue_burst(..., cops, n);
>       if (n != nb_pkts) { /*handle failed packets */}
>       /* dequeue finished crypto ops from related crypto-dev */
>       n = rte_cryptodev_dequeue_burst(..., cops, nb_pkts);
>       /* finish IPsec processing for associated packets */
>       n = rte_ipsec_crypto_process(cop, pkts, grp, n);
[Anoob] Does the SA based grouping apply to both inbound and outbound?
>       /* now we have <n> group of packets grouped by SA flow id  */
>      ....
>   }
> ...
>
> /* uninit given SA */
> rte_ipsec_sa_fini(sa);
>
> Planned scope for 18.11:
> ========================
>
> - SA-level API definition
> - ESP tunnel mode support (both IPv4/IPv6)
> - Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
> - UT
[Anoob] What is UT?
> Note: Still WIP, so not all planned for 18.11 functionality is in place.
>
> Post 18.11:
> ===========
> - ESP transport mode support (both IPv4/IPv6)
> - update examples/ipsec-secgw to use librte_ipsec
> - SAD and high-level API definition and implementation
>
>
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   config/common_base                     |   5 +
>   lib/Makefile                           |   2 +
>   lib/librte_ipsec/Makefile              |  24 +
>   lib/librte_ipsec/meson.build           |  10 +
>   lib/librte_ipsec/pad.h                 |  45 ++
>   lib/librte_ipsec/rte_ipsec.h           | 245 +++++++++
>   lib/librte_ipsec/rte_ipsec_version.map |  13 +
>   lib/librte_ipsec/sa.c                  | 921 +++++++++++++++++++++++++++++++++
>   lib/librte_net/rte_esp.h               |  10 +-
>   lib/meson.build                        |   2 +
>   mk/rte.app.mk                          |   2 +
>   11 files changed, 1278 insertions(+), 1 deletion(-)
>   create mode 100644 lib/librte_ipsec/Makefile
>   create mode 100644 lib/librte_ipsec/meson.build
>   create mode 100644 lib/librte_ipsec/pad.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
>   create mode 100644 lib/librte_ipsec/sa.c
<snip>
> +static inline uint16_t
> +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> +       struct rte_crypto_op *cop[], uint16_t num)
> +{
> +       int32_t rc;
> +       uint32_t i, n;
> +       union sym_op_data icv;
> +
> +       n = esn_outb_check_sqn(sa, num);
> +
> +       for (i = 0; i != n; i++) {
> +
> +               sa->sqn++;
[Anoob] Shouldn't this be done atomically?
> +               sa->iv.v8 = rte_cpu_to_be_64(sa->sqn);
> +
> +               /* update the packet itself */
> +               rc = esp_outb_tun_pkt_prepare(sa, mb[i], &icv);
> +               if (rc < 0) {
> +                       rte_errno = -rc;
> +                       break;
> +               }
> +
> +               /* update crypto op */
> +               esp_outb_tun_cop_prepare(cop[i], sa, mb[i], &icv, rc);
> +       }
> +
> +       return i;
> +}
>
<snip>


* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-03 12:41 ` Joseph, Anoob
@ 2018-09-03 18:21   ` Ananyev, Konstantin
  2018-09-05 14:39     ` Joseph, Anoob
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-03 18:21 UTC (permalink / raw)
  To: Joseph, Anoob, dev
  Cc: Awal, Mohammad Abdul, Doherty, Declan, Jerin Jacob, Narayana Prasad

Hi Anoob,

> Hi Konstantin,
> 
> Few comments. Please see inline.
> 
> Thanks,
> 
> Anoob
> 
> On 24-08-2018 22:23, Konstantin Ananyev wrote:
> > External Email
> >
> > This RFC introduces a new library within DPDK: librte_ipsec.
> > The aim is to provide DPDK native high performance library for IPsec
> > data-path processing.
> > The library is supposed to utilize existing DPDK crypto-dev and
> > security API to provide application with transparent IPsec processing API.
> > The library is concentrated on data-path protocols processing (ESP and AH),
> > IKE protocol(s) implementation is out of scope for that library.
> > Though hook/callback mechanisms will be defined to allow integrate it
> > with existing IKE implementations.
> > Due to quite complex nature of IPsec protocol suite and variety of user
> > requirements and usage scenarios a few API levels will be provided:
> > 1) Security Association (SA-level) API
> >      Operates at SA level, provides functions to:
> >      - initialize/teardown SA object
> >      - process inbound/outbound ESP/AH packets associated with the given SA
> >        (decrypt/encrypt, authenticate, check integrity,
> >         add/remove ESP/AH related headers and data, etc.).
> > 2) Security Association Database (SAD) API
> >      API to create/manage/destroy IPsec SAD.
> >      While DPDK IPsec library plans to have its own implementation,
> >      the intention is to keep it as independent from the other parts
> >      of IPsec library as possible.
> >      That is supposed to give users the ability to provide their own
> >      implementation of the SAD compatible with the other parts of the
> >      IPsec library.
> > 3) IPsec Context (CTX) API
> >      This is supposed to be a high-level API, where each IPsec CTX is an
> >      abstraction of 'independent copy of the IPsec stack'.
> >      CTX owns set of SAs, SADs and assigned to it crypto-dev queues, etc.
> >      and provides:
> >      - de-multiplexing stream of inbound packets to particular SAs and
> >        further IPsec related processing.
> >      - IPsec related processing for the outbound packets.
> >      - SA add/delete/update functionality
> [Anoob]: Security Policy is an important aspect of IPsec. An IPsec
> library without Security Policy API would be incomplete. For inline
> protocol offload, the final SP-SA check(selector check) is the only
> IPsec part being done by ipsec-secgw now. Would make sense to add that
> also in the library.

You mean here, that we need some sort of SPD implementation, correct?

> > Current RFC concentrates on SA-level API only (1),
> > detailed discussion for 2) and 3) will be subjects for separate RFC(s).
> >
> > SA (low) level API
> > ==================
> >
> > API described below operates on SA level.
> > It provides functionality that allows user for given SA to process
> > inbound and outbound IPsec packets.
> > To be more specific:
> > - for inbound ESP/AH packets perform decryption, authentication,
> >    integrity checking, remove ESP/AH related headers
> [Anoob] Anti-replay check would also be required.

Yep, anti-replay and ESN support is implied as part of "integrity checking".
Probably I have to be more specific here.
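
For the record, a minimal sketch of the kind of sliding-window anti-replay
check implied here (classic RFC 4303 style, fixed 64-packet window; an
illustration only, not the code planned for the library):

static int
replay_check(uint64_t *window, uint32_t *last_sqn, uint32_t sqn)
{
	uint32_t diff;

	if (sqn == 0)
		return -1; /* first valid ESP sequence number is 1 */

	if (sqn > *last_sqn) {
		/* new largest sequence number: slide the window */
		diff = sqn - *last_sqn;
		*window = (diff < 64) ? (*window << diff) | 1ULL : 1ULL;
		*last_sqn = sqn;
		return 0;
	}

	diff = *last_sqn - sqn;
	if (diff >= 64)
		return -1; /* too old, outside the window */
	if ((*window >> diff) & 1ULL)
		return -1; /* already seen: replay */
	*window |= 1ULL << diff; /* mark as seen */
	return 0;
}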

> > - for outbound packets perform payload encryption, attach ICV,
> >    update/add IP headers, add ESP/AH headers/trailers,
> >    setup related mbuf fields (ol_flags, tx_offloads, etc.).
> [Anoob] Do we have any plans to handle ESN expiry? Some means to
> initiate an IKE renegotiation? I'm assuming application won't be aware
> of the sequence numbers, in this case.
> > - initialize/un-initialize given SA based on user provided parameters.
> >
> > Processed inbound/outbound packets could be grouped by user provided
> > flow id (opaque 64-bit number associated by user with given SA).
> >
> > SA-level API is based on top of crypto-dev/security API and relies on them
> > to perform actual cipher and integrity checking.
> > Due to the nature of crypto-dev API (enqueue/dequeue model) we use
> > asynchronous API for IPsec packets destined to be processed
> > by crypto-device:
> > rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
> > rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
> > Though for packets destined for inline processing no extra overhead
> > is required and simple and synchronous API: rte_ipsec_inline_process()
> > is introduced for that case.
> [Anoob] The API should include event-delivery as a crypto-op completion
> mechanism as well. The application could configure the event crypto
> adapter and then enqueue and dequeue to crypto device using events (via
> event dev).

Not sure what particular extra API you think is required here?
As I understand, in both cases (with or without event crypto-adapter) we still have to:
 1) fill crypto-op properly
 2) enqueue it to crypto-dev (via eventdev or directly)
 3) receive the crypto-op processed by crypto-dev (either via eventdev or directly)
 4) check crypto-op status, do further post-processing if any

So #1 and #4 (SA-level API responsibility) remain the same for both cases.
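
To illustrate: a minimal sketch of the same flow with eventdev in the
middle (assuming an event crypto-adapter configured in OP_NEW mode;
burst variables and setup are omitted, so treat it as a sketch, not a
new API proposal):

/* #1: fill crypto-ops - same as in the direct case */
n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);

/* #2: enqueue to crypto-dev; the adapter emits completions as events */
n = rte_cryptodev_enqueue_burst(cdev_id, qp_id, cops, n);

/* #3: receive completed crypto-ops as events instead of polling */
k = rte_event_dequeue_burst(evdev_id, port_id, ev, RTE_DIM(ev), 0);
for (i = 0; i != k; i++)
        cops[i] = ev[i].event_ptr;

/* #4: status check/post-processing - same as in the direct case */
k = rte_ipsec_crypto_process(cops, pkts, grp, k);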

> > The following functionality:
> >    - match inbound/outbound packets to particular SA
> >    - manage crypto/security devices
> >    - provide SAD/SPD related functionality
> >    - determine what crypto/security device has to be used
> >      for given packet(s)
> > is out of scope for SA-level API.
> >
> > Below is the brief (and simplified) overview of expected SA-level
> > API usage.
> >
> > /* allocate and initialize SA */
> > size_t sz = rte_ipsec_sa_size();
> > struct rte_ipsec_sa *sa = rte_malloc(sz);
> > struct rte_ipsec_sa_prm prm;
> > /* fill prm */
> > rc = rte_ipsec_sa_init(sa, &prm);
> > if (rc != 0) { /*handle error */}
> > .....
> >
> > /* process inbound/outbound IPsec packets that belong to the given SA */
> >
> > /* inline IPsec processing was done for these packets */
> > if (use_inline_ipsec)
> >         n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
> > /* use crypto-device to process the packets */
> > else {
> >       struct rte_crypto_op *cops[nb_pkts];
> >       struct rte_ipsec_group grp[nb_pkts];
> >
> >        ....
> >       /* prepare crypto ops */
> >       n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
> >       /* enqueue crypto ops to related crypto-dev */
> >       n =  rte_cryptodev_enqueue_burst(..., cops, n);
> >       if (n != nb_pkts) { /*handle failed packets */}
> >       /* dequeue finished crypto ops from related crypto-dev */
> >       n = rte_cryptodev_dequeue_burst(..., cops, nb_pkts);
> >       /* finish IPsec processing for associated packets */
> >       n = rte_ipsec_crypto_process(cops, pkts, grp, n);
> [Anoob] Does the SA based grouping apply to both inbound and outbound?

Yes, the plan is to have it available for both cases.

> >       /* now we have <n> group of packets grouped by SA flow id  */
> >      ....
> >   }
> > ...
> >
> > /* uninit given SA */
> > rte_ipsec_sa_fini(sa);
> >
> > Planned scope for 18.11:
> > ========================
> >
> > - SA-level API definition
> > - ESP tunnel mode support (both IPv4/IPv6)
> > - Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
> > - UT
> [Anoob] What is UT?

Unit-Test


> > Note: Still WIP, so not all planned for 18.11 functionality is in place.
> >
> > Post 18.11:
> > ===========
> > - ESP transport mode support (both IPv4/IPv6)
> > - update examples/ipsec-secgw to use librte_ipsec
> > - SAD and high-level API definition and implementation
> >
> >
> > Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> > Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >   config/common_base                     |   5 +
> >   lib/Makefile                           |   2 +
> >   lib/librte_ipsec/Makefile              |  24 +
> >   lib/librte_ipsec/meson.build           |  10 +
> >   lib/librte_ipsec/pad.h                 |  45 ++
> >   lib/librte_ipsec/rte_ipsec.h           | 245 +++++++++
> >   lib/librte_ipsec/rte_ipsec_version.map |  13 +
> >   lib/librte_ipsec/sa.c                  | 921 +++++++++++++++++++++++++++++++++
> >   lib/librte_net/rte_esp.h               |  10 +-
> >   lib/meson.build                        |   2 +
> >   mk/rte.app.mk                          |   2 +
> >   11 files changed, 1278 insertions(+), 1 deletion(-)
> >   create mode 100644 lib/librte_ipsec/Makefile
> >   create mode 100644 lib/librte_ipsec/meson.build
> >   create mode 100644 lib/librte_ipsec/pad.h
> >   create mode 100644 lib/librte_ipsec/rte_ipsec.h
> >   create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
> >   create mode 100644 lib/librte_ipsec/sa.c
> <snip>
> > +static inline uint16_t
> > +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> > +       struct rte_crypto_op *cop[], uint16_t num)
> > +{
> > +       int32_t rc;
> > +       uint32_t i, n;
> > +       union sym_op_data icv;
> > +
> > +       n = esn_outb_check_sqn(sa, num);
> > +
> > +       for (i = 0; i != n; i++) {
> > +
> > +               sa->sqn++;
> [Anoob] Shouldn't this be done atomically?

If we want to have an MT-safe SA-datapath API, then yes.
Though it would make things more complicated here, especially for inbound with anti-replay support.
I think it is doable (spin-lock?), but would cause extra overhead and complexity.
Right now I am not sure it is really worth it - comments/suggestions are welcome.
What probably could be a good compromise - a runtime decision on a per-SA basis (at sa_init()):
do we need ST or MT behavior for the given SA.
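
To illustrate the idea, a minimal sketch (the flag name and the sa
layout are hypothetical, not part of this RFC):

/* hypothetical per-SA flag, chosen once at rte_ipsec_sa_init() time */
#define IPSEC_SA_FLAG_MT_SQN	(1 << 0)

static inline uint64_t
sqn_next(struct rte_ipsec_sa *sa)
{
        /* MT behavior: atomic increment, SA shared across lcores */
        if (sa->flags & IPSEC_SA_FLAG_MT_SQN)
                return __atomic_add_fetch(&sa->sqn, 1, __ATOMIC_RELAXED);
        /* ST behavior: plain increment, no synchronization cost */
        return ++sa->sqn;
}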

> > +               sa->iv.v8 = rte_cpu_to_be_64(sa->sqn);
> > +
> > +               /* update the packet itself */
> > +               rc = esp_outb_tun_pkt_prepare(sa, mb[i], &icv);
> > +               if (rc < 0) {
> > +                       rte_errno = -rc;
> > +                       break;
> > +               }
> > +
> > +               /* update crypto op */
> > +               esp_outb_tun_cop_prepare(cop[i], sa, mb[i], &icv, rc);
> > +       }
> > +
> > +       return i;
> > +}
> >
> <snip>

Thanks
Konstantin



* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-03 18:21   ` Ananyev, Konstantin
@ 2018-09-05 14:39     ` Joseph, Anoob
       [not found]       ` <2601191342CEEE43887BDE71AB977258EA954BAD@irsmsx105.ger.corp.intel.com>
  0 siblings, 1 reply; 194+ messages in thread
From: Joseph, Anoob @ 2018-09-05 14:39 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Awal, Mohammad Abdul, Doherty, Declan, Jerin Jacob, Narayana Prasad

Hi Konstantin,

Please see inline.


On 03-09-2018 23:51, Ananyev, Konstantin wrote:
>
> Hi Anoob,
>
>> Hi Konstantin,
>>
>> Few comments. Please see inline.
>>
>> Thanks,
>>
>> Anoob
>>
>> On 24-08-2018 22:23, Konstantin Ananyev wrote:
>>>
>>> This RFC introduces a new library within DPDK: librte_ipsec.
>>> The aim is to provide DPDK native high performance library for IPsec
>>> data-path processing.
>>> The library is supposed to utilize existing DPDK crypto-dev and
>>> security API to provide application with transparent IPsec processing API.
>>> The library is concentrated on data-path protocols processing (ESP and AH),
>>> IKE protocol(s) implementation is out of scope for that library.
>>> Though hook/callback mechanisms will be defined to allow integrate it
>>> with existing IKE implementations.
>>> Due to quite complex nature of IPsec protocol suite and variety of user
>>> requirements and usage scenarios a few API levels will be provided:
>>> 1) Security Association (SA-level) API
>>>       Operates at SA level, provides functions to:
>>>       - initialize/teardown SA object
>>>       - process inbound/outbound ESP/AH packets associated with the given SA
>>>         (decrypt/encrypt, authenticate, check integrity,
>>>          add/remove ESP/AH related headers and data, etc.).
>>> 2) Security Association Database (SAD) API
>>>       API to create/manage/destroy IPsec SAD.
>>>       While DPDK IPsec library plans to have its own implementation,
>>>       the intention is to keep it as independent from the other parts
>>>       of IPsec library as possible.
>>>       That is supposed to give users the ability to provide their own
>>>       implementation of the SAD compatible with the other parts of the
>>>       IPsec library.
>>> 3) IPsec Context (CTX) API
>>>       This is supposed to be a high-level API, where each IPsec CTX is an
>>>       abstraction of 'independent copy of the IPsec stack'.
>>>       CTX owns set of SAs, SADs and assigned to it crypto-dev queues, etc.
>>>       and provides:
>>>       - de-multiplexing stream of inbound packets to particular SAs and
>>>         further IPsec related processing.
>>>       - IPsec related processing for the outbound packets.
>>>       - SA add/delete/update functionality
>> [Anoob]: Security Policy is an important aspect of IPsec. An IPsec
>> library without Security Policy API would be incomplete. For inline
>> protocol offload, the final SP-SA check (selector check) is the only
>> IPsec part being done by ipsec-secgw now. Would make sense to add that
>> also in the library.
> You mean here that we need some sort of SPD implementation, correct?
[Anoob] Yes.
>
>>> Current RFC concentrates on SA-level API only (1),
>>> detailed discussion for 2) and 3) will be subjects for separate RFC(s).
>>>
>>> SA (low) level API
>>> ==================
>>>
>>> API described below operates on SA level.
>>> It provides functionality that allows user for given SA to process
>>> inbound and outbound IPsec packets.
>>> To be more specific:
>>> - for inbound ESP/AH packets perform decryption, authentication,
>>>     integrity checking, remove ESP/AH related headers
>> [Anoob] Anti-replay check would also be required.
> Yep, anti-replay and ESN support is implied as part of "integrity checking".
> Probably I have to be more specific here.
[Anoob] This is fine.
>
>>> - for outbound packets perform payload encryption, attach ICV,
>>>     update/add IP headers, add ESP/AH headers/trailers,
>>>     setup related mbuf fields (ol_flags, tx_offloads, etc.).
>> [Anoob] Do we have any plans to handle ESN expiry? Some means to
>> initiate an IKE renegotiation? I'm assuming application won't be aware
>> of the sequence numbers, in this case.
[Anoob] What is your plan with events like ESN expiry? IPsec spec talks 
about byte and time expiry as well.
>>> - initialize/un-initialize given SA based on user provided parameters.
>>>
>>> Processed inbound/outbound packets could be grouped by user provided
>>> flow id (opaque 64-bit number associated by user with given SA).
>>>
>>> SA-level API is based on top of crypto-dev/security API and relies on them
>>> to perform actual cipher and integrity checking.
>>> Due to the nature of crypto-dev API (enqueue/dequeue model) we use
>>> asynchronous API for IPsec packets destined to be processed
>>> by crypto-device:
>>> rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
>>> rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
>>> Though for packets destined for inline processing no extra overhead
>>> is required and simple and synchronous API: rte_ipsec_inline_process()
>>> is introduced for that case.
>> [Anoob] The API should include event-delivery as a crypto-op completion
>> mechanism as well. The application could configure the event crypto
>> adapter and then enqueue and dequeue to crypto device using events (via
>> event dev).
> Not sure what particular extra API you think is required here?
> As I understand, in both cases (with or without event crypto-adapter) we still have to:
>   1) fill crypto-op properly
>   2) enqueue it to crypto-dev (via eventdev or directly)
>   3) receive the crypto-op processed by crypto-dev (either via eventdev or directly)
>   4) check crypto-op status, do further post-processing if any
>
> So #1 and #4 (SA-level API responsibility) remain the same for both cases.
[Anoob] rte_ipsec_inline_process works on packets not events. We might 
need a similar API which processes events.
>
>>> The following functionality:
>>>     - match inbound/outbound packets to particular SA
>>>     - manage crypto/security devices
>>>     - provide SAD/SPD related functionality
>>>     - determine what crypto/security device has to be used
>>>       for given packet(s)
>>> is out of scope for SA-level API.
>>>
>>> Below is the brief (and simplified) overview of expected SA-level
>>> API usage.
>>>
>>> /* allocate and initialize SA */
>>> size_t sz = rte_ipsec_sa_size();
>>> struct rte_ipsec_sa *sa = rte_malloc(sz);
>>> struct rte_ipsec_sa_prm prm;
>>> /* fill prm */
>>> rc = rte_ipsec_sa_init(sa, &prm);
>>> if (rc != 0) { /*handle error */}
>>> .....
>>>
>>> /* process inbound/outbound IPsec packets that belong to the given SA */
>>>
>>> /* inline IPsec processing was done for these packets */
>>> if (use_inline_ipsec)
>>>          n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
>>> /* use crypto-device to process the packets */
>>> else {
>>>        struct rte_crypto_op *cops[nb_pkts];
>>>        struct rte_ipsec_group grp[nb_pkts];
>>>
>>>         ....
>>>        /* prepare crypto ops */
>>>        n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
>>>        /* enqueue crypto ops to related crypto-dev */
>>>        n =  rte_cryptodev_enqueue_burst(..., cops, n);
>>>        if (n != nb_pkts) { /*handle failed packets */}
>>>        /* dequeue finished crypto ops from related crypto-dev */
>>>        n = rte_cryptodev_dequeue_burst(..., cops, nb_pkts);
>>>        /* finish IPsec processing for associated packets */
>>>        n = rte_ipsec_crypto_process(cops, pkts, grp, n);
>> [Anoob] Does the SA based grouping apply to both inbound and outbound?
> Yes, the plan is to have it available for both cases.
[Anoob] On the inbound, shouldn't the packets be grouped+ordered based 
on inner L3+inner L4?
>
>>>        /* now we have <n> group of packets grouped by SA flow id  */
>>>       ....
>>>    }
>>> ...
>>>
>>> /* uninit given SA */
>>> rte_ipsec_sa_fini(sa);
>>>
>>> Planned scope for 18.11:
>>> ========================
>>>
>>> - SA-level API definition
>>> - ESP tunnel mode support (both IPv4/IPv6)
>>> - Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
>>> - UT
>> [Anoob] What is UT?
> Unit-Test
>
>
>>> Note: Still WIP, so not all planned for 18.11 functionality is in place.
>>>
>>> Post 18.11:
>>> ===========
>>> - ESP transport mode support (both IPv4/IPv6)
>>> - update examples/ipsec-secgw to use librte_ipsec
>>> - SAD and high-level API definition and implementation
>>>
>>>
>> <snip>
>>> +static inline uint16_t
>>> +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
>>> +       struct rte_crypto_op *cop[], uint16_t num)
>>> +{
>>> +       int32_t rc;
>>> +       uint32_t i, n;
>>> +       union sym_op_data icv;
>>> +
>>> +       n = esn_outb_check_sqn(sa, num);
>>> +
>>> +       for (i = 0; i != n; i++) {
>>> +
>>> +               sa->sqn++;
>> [Anoob] Shouldn't this be done atomically?
> If we want to have an MT-safe SA-datapath API, then yes.
> Though it would make things more complicated here, especially for inbound with anti-replay support.
> I think it is doable (spin-lock?), but would cause extra overhead and complexity.
> Right now I am not sure it is really worth it - comments/suggestions are welcome.
> What probably could be a good compromise - a runtime decision on a per-SA basis (at sa_init()):
> do we need ST or MT behavior for the given SA.
[Anoob] Going with a single thread approach would significantly limit the
scope of this library. A single thread approach would mean one SA on one
core. This would not work in practical cases.

Suppose we have two flows which are supposed to use the same SA. With
RSS, these flows could end up on different cores. Now only one core
would be able to process them, as the SA will not be shared. We have the
same problem in ipsec-secgw too.

In case of ingress also, the same problem exists. We will not be able to
use RSS and spread the traffic to multiple cores. Considering that IPsec
is CPU intensive, this would limit the net output of the chip.

Thanks,

Anoob


* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
       [not found]       ` <2601191342CEEE43887BDE71AB977258EA954BAD@irsmsx105.ger.corp.intel.com>
@ 2018-09-12 18:09         ` Ananyev, Konstantin
  2018-09-15 17:06           ` Joseph, Anoob
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-12 18:09 UTC (permalink / raw)
  To: Joseph, Anoob, dev
  Cc: Awal, Mohammad Abdul, Doherty, Declan,
	Jerin Jacob (jerin.jacob@caviumnetworks.com),
	Narayana Prasad

Hi Anoob,

> 
> Hi Konstantin,
> Please see inline.
> 
> 
> This RFC introduces a new library within DPDK: librte_ipsec.
> The aim is to provide DPDK native high performance library for IPsec
> data-path processing.
> The library is supposed to utilize existing DPDK crypto-dev and
> security API to provide application with transparent IPsec processing API.
> The library is concentrated on data-path protocols processing (ESP and AH),
> IKE protocol(s) implementation is out of scope for that library.
> Though hook/callback mechanisms will be defined to allow integrate it
> with existing IKE implementations.
> Due to quite complex nature of IPsec protocol suite and variety of user
> requirements and usage scenarios a few API levels will be provided:
> 1) Security Association (SA-level) API
>      Operates at SA level, provides functions to:
>      - initialize/teardown SA object
>      - process inbound/outbound ESP/AH packets associated with the given SA
>        (decrypt/encrypt, authenticate, check integrity,
>         add/remove ESP/AH related headers and data, etc.).
> 2) Security Association Database (SAD) API
>      API to create/manage/destroy IPsec SAD.
>      While DPDK IPsec library plans to have its own implementation,
>      the intention is to keep it as independent from the other parts
>      of IPsec library as possible.
>      That is supposed to give users the ability to provide their own
>      implementation of the SAD compatible with the other parts of the
>      IPsec library.
> 3) IPsec Context (CTX) API
>      This is supposed to be a high-level API, where each IPsec CTX is an
>      abstraction of 'independent copy of the IPsec stack'.
>      CTX owns set of SAs, SADs and assigned to it crypto-dev queues, etc.
>      and provides:
>      - de-multiplexing stream of inbound packets to particular SAs and
>        further IPsec related processing.
>      - IPsec related processing for the outbound packets.
>      - SA add/delete/update functionality
> [Anoob]: Security Policy is an important aspect of IPsec. An IPsec
> library without Security Policy API would be incomplete. For inline
> protocol offload, the final SP-SA check (selector check) is the only
> IPsec part being done by ipsec-secgw now. Would make sense to add that
> also in the library.
> 
> You mean here that we need some sort of SPD implementation, correct?
> [Anoob] Yes.

Ok, I see.
Our thought was that just something based on librte_acl would be enough here...
But if you think that a specially defined SPD API (and implementation) is needed -
we can probably discuss it along with the SAD API (#2 above).
Though if you'd like to start work on an RFC for it right away - please feel free to do so :)
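
Just to illustrate what "something based on librte_acl" could look like
for an outbound SP lookup - a rough sketch (the spd_ctx build, the rule
field layout and the sa_table are assumed/omitted; rule userdata is
assumed to hold SA index + 1):

const uint8_t *data[BURST];
uint32_t res[BURST];

for (i = 0; i != n; i++)
        data[i] = rte_pktmbuf_mtod_offset(pkts[i], const uint8_t *,
                sizeof(struct ether_hdr));

rte_acl_classify(spd_ctx, data, res, n, 1);

for (i = 0; i != n; i++) {
        if (res[i] == 0)
                continue;       /* no SP matched - default policy */
        sa = sa_table[res[i] - 1];
        /* proceed with SA-level processing */
}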

> 
> 
> 
> Current RFC concentrates on SA-level API only (1),
> detailed discussion for 2) and 3) will be subjects for separate RFC(s).
> 
> SA (low) level API
> ==================
> 
> API described below operates on SA level.
> It provides functionality that allows user for given SA to process
> inbound and outbound IPsec packets.
> To be more specific:
> - for inbound ESP/AH packets perform decryption, authentication,
>    integrity checking, remove ESP/AH related headers
> [Anoob] Anti-replay check would also be required.
> 
> Yep, anti-replay and ESN support is implied as part of "integrity checking".
> Probably I have to be more specific here.
> [Anoob] This is fine.
> 
> 
> 
> - for outbound packets perform payload encryption, attach ICV,
>    update/add IP headers, add ESP/AH headers/trailers,
>    setup related mbuf fields (ol_flags, tx_offloads, etc.).
> [Anoob] Do we have any plans to handle ESN expiry? Some means to
> initiate an IKE renegotiation? I'm assuming application won't be aware
> of the sequence numbers, in this case.
> [Anoob] What is your plan with events like ESN expiry? IPsec spec talks about byte and time expiry as well.

At the current moment, at SA level, rte_ipsec_crypto_prepare()/rte_ipsec_inline_process() will set rte_errno
to a special value (EOVERFLOW) to signal the upper layer that the limit is reached.
The upper layer can decide to start re-negotiation, or just destroy the SA.

Future plans for IPsec Context (CTX) API (#3 above):
Introduce a special function, something like:
rte_ipsec_get_expired(rte_ipsec_ctx *ctx, rte_ipsec_sa *expired_sa[], uint32_t num);
It would return up to *num* SAs for the given ipsec context that are expired or have reached a limit.
Then the upper layer again might decide for each SA whether renegotiation should be started,
or just wipe the given SA.
It would be the upper layer's responsibility to call this function periodically.
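
So at SA level the upper layer would do something like the sketch below
(notify_ike_rekey() is a hypothetical application callback, not part of
the proposed API):

n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
if (n != nb_pkts && rte_errno == EOVERFLOW) {
        /* seqn/byte/time limit reached for this SA: stop feeding it
         * new packets and ask IKE to renegotiate (or destroy it) */
        notify_ike_rekey(sa);
}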
 
> 
> 
> - initialize/un-initialize given SA based on user provided parameters.
> 
> Processed inbound/outbound packets could be grouped by user provided
> flow id (opaque 64-bit number associated by user with given SA).
> 
> SA-level API is based on top of crypto-dev/security API and relies on them
> to perform actual cipher and integrity checking.
> Due to the nature of crypto-dev API (enqueue/dequeue model) we use
> asynchronous API for IPsec packets destined to be processed
> by crypto-device:
> rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
> rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
> Though for packets destined for inline processing no extra overhead
> is required and simple and synchronous API: rte_ipsec_inline_process()
> is introduced for that case.
> [Anoob] The API should include event-delivery as a crypto-op completion
> mechanism as well. The application could configure the event crypto
> adapter and then enqueue and dequeue to crypto device using events (via
> event dev).
> 
> Not sure what particular extra API you think is required here?
> As I understand, in both cases (with or without event crypto-adapter) we still have to:
>  1) fill crypto-op properly
>  2) enqueue it to crypto-dev (via eventdev or directly)
>  3) receive the crypto-op processed by crypto-dev (either via eventdev or directly)
>  4) check crypto-op status, do further post-processing if any
>
> So #1 and #4 (SA-level API responsibility) remain the same for both cases.
> [Anoob] rte_ipsec_inline_process works on packets not events. We might need a similar API which processes events.

Ok, I still don't get you here.
Could you specify what exact function you'd like to add to the API here, with a parameter list
and brief behavior description?

> 
> The following functionality:
>    - match inbound/outbound packets to particular SA
>    - manage crypto/security devices
>    - provide SAD/SPD related functionality
>    - determine what crypto/security device has to be used
>      for given packet(s)
> is out of scope for SA-level API.
> 
> Below is the brief (and simplified) overview of expected SA-level
> API usage.
> 
> /* allocate and initialize SA */
> size_t sz = rte_ipsec_sa_size();
> struct rte_ipsec_sa *sa = rte_malloc(sz);
> struct rte_ipsec_sa_prm prm;
> /* fill prm */
> rc = rte_ipsec_sa_init(sa, &prm);
> if (rc != 0) { /*handle error */}
> .....
> 
> /* process inbound/outbound IPsec packets that belong to the given SA */
> 
> /* inline IPsec processing was done for these packets */
> if (use_inline_ipsec)
>         n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
> /* use crypto-device to process the packets */
> else {
>       struct rte_crypto_op *cops[nb_pkts];
>       struct rte_ipsec_group grp[nb_pkts];
> 
>        ....
>       /* prepare crypto ops */
>       n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
>       /* enqueue crypto ops to related crypto-dev */
>       n =  rte_cryptodev_enqueue_burst(..., cops, n);
>       if (n != nb_pkts) { /*handle failed packets */}
>       /* dequeue finished crypto ops from related crypto-dev */
>       n = rte_cryptodev_dequeue_burst(..., cops, nb_pkts);
>       /* finish IPsec processing for associated packets */
>       n = rte_ipsec_crypto_process(cops, pkts, grp, n);
> [Anoob] Does the SA based grouping apply to both inbound and outbound?
> 
> Yes, the plan is to have it available for both cases.
> [Anoob] On the inbound, shouldn't the packets be grouped+ordered based on inner L3+inner L4?

I think that's up to the user to decide, based on what criteria he wants to group them and whether he wants
to do any grouping at all.
That's why the flowid is user-defined and totally transparent to the lib.
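
I.e. after rte_ipsec_crypto_process() the caller would just walk the
groups, roughly like below (assuming the group struct carries the flow
id, a pointer into the packet array and a count; handle_group() is an
application-level placeholder):

for (i = 0; i != n; i++) {
        /* grp[i].id is whatever 64-bit value the user attached to
         * the SA - an SA pointer, a table index, a hash, ... */
        struct rte_ipsec_sa *s =
                (struct rte_ipsec_sa *)(uintptr_t)grp[i].id;
        handle_group(s, grp[i].m, grp[i].cnt);
}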

> 
> 
> 
>       /* now we have <n> group of packets grouped by SA flow id  */
>      ....
>   }
> ...
> 
> /* uninit given SA */
> rte_ipsec_sa_fini(sa);
> 
> Planned scope for 18.11:
> ========================
> 
> - SA-level API definition
> - ESP tunnel mode support (both IPv4/IPv6)
> - Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
> - UT
> [Anoob] What is UT?
> 
> Unit-Test
> 
> 
> Note: Still WIP, so not all planned for 18.11 functionality is in place.
> 
> Post 18.11:
> ===========
> - ESP transport mode support (both IPv4/IPv6)
> - update examples/ipsec-secgw to use librte_ipsec
> - SAD and high-level API definition and implementation
> 
> 
> <snip>
> +static inline uint16_t
> +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> +       struct rte_crypto_op *cop[], uint16_t num)
> +{
> +       int32_t rc;
> +       uint32_t i, n;
> +       union sym_op_data icv;
> +
> +       n = esn_outb_check_sqn(sa, num);
> +
> +       for (i = 0; i != n; i++) {
> +
> +               sa->sqn++;
> [Anoob] Shouldn't this be done atomically?
> 
> If we want to have an MT-safe SA-datapath API, then yes.
> Though it would make things more complicated here, especially for inbound with anti-replay support.
> I think it is doable (spin-lock?), but would cause extra overhead and complexity.
> Right now I am not sure it is really worth it - comments/suggestions are welcome.
> What probably could be a good compromise - a runtime decision on a per-SA basis (at sa_init()):
> do we need ST or MT behavior for the given SA.
> [Anoob] Going with a single thread approach would significantly limit the scope of this library. A single thread approach would mean
> one SA on one core. This would not work in practical cases.
> Suppose we have two flows which are supposed to use the same SA. With RSS, these flows could end up on different cores. Now
> only one core would be able to process them, as the SA will not be shared. We have the same problem in ipsec-secgw too.

Just for my curiosity - how do you plan to use RSS for ipsec packet distribution?
Do you foresee a common situation when there would be packets that belong to the same SA
(same SPI) but with multiple source (destination) IP addresses?
If so, probably some examples would be helpful.
I think the IPsec RFCs don't prevent such a situation, but AFAIK the most common case is a single source/destination IP for the same SPI.

Anyway, let's pretend we found some smart way to distribute inbound packets for the same SA to multiple HW queues/CPU cores.
To make ipsec processing for such a case work correctly, just atomicity on check/update of seqn/replay_window is not enough.
I think it would require some extra synchronization:
make sure that we do final packet processing (seq check/update) in the same order as we received the packets
(packets entered ipsec processing).
I don't really like to introduce such heavy mechanisms on SA level, after all it is supposed to be light and simple.
Though we plan the CTX level API to support such a scenario.
What I think would be a useful addition for SA level API - the ability to do one seqn/replay_window update and multiple checks concurrently.
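
Something in the spirit of the sketch below (the replay_win structure
and the locking policy are illustrative only, not a proposed API):

struct replay_win {
        rte_spinlock_t lock;
        uint64_t top;           /* highest seqn seen so far */
        uint64_t win_sz;
        /* uint64_t bitmap[...]; */
};

/* many readers: check seqn against the replay window, no write */
static inline int
replay_check(const struct replay_win *rw, uint64_t seqn)
{
        uint64_t top = __atomic_load_n(&rw->top, __ATOMIC_ACQUIRE);

        if (seqn + rw->win_sz < top)
                return -EINVAL; /* too old, outside the window */
        /* bitmap test for "already seen" would go here */
        return 0;
}

/* single writer: advance the window after successful ICV check */
static inline void
replay_update(struct replay_win *rw, uint64_t seqn)
{
        rte_spinlock_lock(&rw->lock);
        if (seqn > rw->top)
                rw->top = seqn; /* plus bitmap shift/set */
        rte_spinlock_unlock(&rw->lock);
}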

> In case of ingress also, the same problem exists. We will not be able to use RSS and spread the traffic to multiple cores. Considering
> that IPsec is CPU intensive, this would limit the net output of the chip.

That's true - but from the other side, the implementation can offload the heavy part
(encrypt/decrypt, auth) to special HW (cryptodev).
In that case a single core might be enough for an SA, and extra synchronization would just slow things down.
That's why I think it should be configurable what behavior (ST or MT) to use.

Konstantin


* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-12 18:09         ` Ananyev, Konstantin
@ 2018-09-15 17:06           ` Joseph, Anoob
  2018-09-16 10:56             ` Jerin Jacob
  2018-09-17 10:36             ` Ananyev, Konstantin
  0 siblings, 2 replies; 194+ messages in thread
From: Joseph, Anoob @ 2018-09-15 17:06 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Awal, Mohammad Abdul, Doherty, Declan,
	Jerin Jacob (jerin.jacob@caviumnetworks.com),
	Narayana Prasad

Hi Konstantin

See inline.

Thanks,

Anoob


On 12-09-2018 23:39, Ananyev, Konstantin wrote:
>
> Hi Anoob,
>
>> Hi Konstantin,
>> Please see inline.
>>
>>
>> This RFC introduces a new library within DPDK: librte_ipsec.
>> The aim is to provide DPDK native high performance library for IPsec
>> data-path processing.
>> The library is supposed to utilize existing DPDK crypto-dev and
>> security API to provide application with transparent IPsec processing API.
>> The library is concentrated on data-path protocols processing (ESP and AH),
>> IKE protocol(s) implementation is out of scope for that library.
>> Though hook/callback mechanisms will be defined to allow integrate it
>> with existing IKE implementations.
>> Due to quite complex nature of IPsec protocol suite and variety of user
>> requirements and usage scenarios a few API levels will be provided:
>> 1) Security Association (SA-level) API
>>       Operates at SA level, provides functions to:
>>       - initialize/teardown SA object
>>       - process inbound/outbound ESP/AH packets associated with the given SA
>>         (decrypt/encrypt, authenticate, check integrity,
>>          add/remove ESP/AH related headers and data, etc.).
>> 2) Security Association Database (SAD) API
>>       API to create/manage/destroy IPsec SAD.
>>       While DPDK IPsec library plans to have its own implementation,
>>       the intention is to keep it as independent from the other parts
>>       of IPsec library as possible.
>>       That is supposed to give users the ability to provide their own
>>       implementation of the SAD compatible with the other parts of the
>>       IPsec library.
>> 3) IPsec Context (CTX) API
>>       This is supposed to be a high-level API, where each IPsec CTX is an
>>       abstraction of 'independent copy of the IPsec stack'.
>>       CTX owns set of SAs, SADs and assigned to it crypto-dev queues, etc.
>>       and provides:
>>       - de-multiplexing stream of inbound packets to particular SAs and
>>         further IPsec related processing.
>>       - IPsec related processing for the outbound packets.
>>       - SA add/delete/update functionality
>> [Anoob]: Security Policy is an important aspect of IPsec. An IPsec
>> library without Security Policy API would be incomplete. For inline
>> protocol offload, the final SP-SA check (selector check) is the only
>> IPsec part being done by ipsec-secgw now. Would make sense to add that
>> also in the library.
>>
>> You mean here that we need some sort of SPD implementation, correct?
>> [Anoob] Yes.
> Ok, I see.
> Our thought was that just something based on librte_acl would be enough here...
> But if you think that a specially defined SPD API (and implementation) is needed -
> we can probably discuss it along with the SAD API (#2 above).
> Though if you'd like to start work on an RFC for it right away - please feel free to do so :)
>
>>
>>
>> Current RFC concentrates on SA-level API only (1),
>> detailed discussion for 2) and 3) will be subjects for separate RFC(s).
>>
>> SA (low) level API
>> ==================
>>
>> API described below operates on SA level.
>> It provides functionality that allows user for given SA to process
>> inbound and outbound IPsec packets.
>> To be more specific:
>> - for inbound ESP/AH packets perform decryption, authentication,
>>     integrity checking, remove ESP/AH related headers
>> [Anoob] Anti-replay check would also be required.
>>
>> Yep, anti-replay and ESN support is implied as part of "integrity checking".
>> Probably I have to be more specific here.
>> [Anoob] This is fine.
>>
>>
>>
>> - for outbound packets perform payload encryption, attach ICV,
>>     update/add IP headers, add ESP/AH headers/trailers,
>>     setup related mbuf fields (ol_flags, tx_offloads, etc.).
>> [Anoob] Do we have any plans to handle ESN expiry? Some means to
>> initiate an IKE renegotiation? I'm assuming application won't be aware
>> of the sequence numbers, in this case.
>> [Anoob] What is your plan with events like ESN expiry? IPsec spec talks about byte and time expiry as well.
> At the current moment, at SA level, rte_ipsec_crypto_prepare()/rte_ipsec_inline_process() will set rte_errno
> to a special value (EOVERFLOW) to signal the upper layer that the limit is reached.
> The upper layer can decide to start re-negotiation, or just destroy the SA.
>
> Future plans for IPsec Context (CTX) API (#3 above):
> Introduce a special function, something like:
> rte_ipsec_get_expired(rte_ipsec_ctx *ctx, rte_ipsec_sa *expired_sa[], uint32_t num);
> It would return up to *num* SAs for the given ipsec context that are expired or have reached a limit.
> Then the upper layer again might decide for each SA whether renegotiation should be started,
> or just wipe the given SA.
> It would be the upper layer's responsibility to call this function periodically.
>
>>
>> - initialize/un-initialize given SA based on user provided parameters.
>>
>> Processed inbound/outbound packets could be grouped by user provided
>> flow id (opaque 64-bit number associated by user with given SA).
>>
>> SA-level API is based on top of crypto-dev/security API and relies on them
>> to perform actual cipher and integrity checking.
>> Due to the nature of crypto-dev API (enqueue/dequeue model) we use
>> asynchronous API for IPsec packets destined to be processed
>> by crypto-device:
>> rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
>> rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
>> Though for packets destined for inline processing no extra overhead
>> is required and simple and synchronous API: rte_ipsec_inline_process()
>> is introduced for that case.
>> [Anoob] The API should include event-delivery as a crypto-op completion
>> mechanism as well. The application could configure the event crypto
>> adapter and then enqueue and dequeue to crypto device using events (via
>> event dev).
>>
>> Not sure what particular extra API you think is required here?
>> As I understand, in both cases (with or without event crypto-adapter) we still have to:
>>   1) fill crypto-op properly
>>   2) enqueue it to crypto-dev (via eventdev or directly)
>>   3) receive the crypto-op processed by crypto-dev (either via eventdev or directly)
>>   4) check crypto-op status, do further post-processing if any
>>
>> So #1 and #4 (SA-level API responsibility) remain the same for both cases.
>> [Anoob] rte_ipsec_inline_process works on packets not events. We might need a similar API which processes events.
> Ok, I still don't get you here.
> Could you specify what exact function you'd like to add to the API here, with a parameter list
> and brief behavior description?
>
>> The following functionality:
>>     - match inbound/outbound packets to particular SA
>>     - manage crypto/security devices
>>     - provide SAD/SPD related functionality
>>     - determine what crypto/security device has to be used
>>       for given packet(s)
>> is out of scope for SA-level API.
>>
>> Below is the brief (and simplified) overview of expected SA-level
>> API usage.
>>
>> /* allocate and initialize SA */
>> size_t sz = rte_ipsec_sa_size();
>> struct rte_ipsec_sa *sa = rte_malloc(sz);
>> struct rte_ipsec_sa_prm prm;
>> /* fill prm */
>> rc = rte_ipsec_sa_init(sa, &prm);
>> if (rc != 0) { /*handle error */}
>> .....
>>
>> /* process inbound/outbound IPsec packets that belong to the given SA */
>>
>> /* inline IPsec processing was done for these packets */
>> if (use_inline_ipsec)
>>          n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
>> /* use crypto-device to process the packets */
>> else {
>>        struct rte_crypto_op *cops[nb_pkts];
>>        struct rte_ipsec_group grp[nb_pkts];
>>
>>         ....
>>        /* prepare crypto ops */
>>        n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
>>        /* enqueue crypto ops to related crypto-dev */
>>        n =  rte_cryptodev_enqueue_burst(..., cops, n);
>>        if (n != nb_pkts) { /*handle failed packets */}
>>        /* dequeue finished crypto ops from related crypto-dev */
>>        n = rte_cryptodev_dequeue_burst(..., cops, nb_pkts);
>>        /* finish IPsec processing for associated packets */
>>        n = rte_ipsec_crypto_process(cops, pkts, grp, n);
>> [Anoob] Does the SA based grouping apply to both inbound and outbound?
>>
>> Yes, the plan is to have it available for both cases.
>> [Anoob] On the inbound, shouldn't the packets be grouped+ordered based on inner L3+inner L4?
> I think that's up to the user to decide, based on what criteria he wants to group them and whether he wants
> to do any grouping at all.
> That's why the flowid is user-defined and totally transparent to the lib.
>
>>
>>
>>        /* now we have <n> group of packets grouped by SA flow id  */
>>       ....
>>    }
>> ...
>>
>> /* uninit given SA */
>> rte_ipsec_sa_fini(sa);
>>
>> Planned scope for 18.11:
>> ========================
>>
>> - SA-level API definition
>> - ESP tunnel mode support (both IPv4/IPv6)
>> - Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
>> - UT
>> [Anoob] What is UT?
>>
>> Unit-Test
>>
>>
>> Note: Still WIP, so not all planned for 18.11 functionality is in place.
>>
>> Post 18.11:
>> ===========
>> - ESP transport mode support (both IPv4/IPv6)
>> - update examples/ipsec-secgw to use librte_ipsec
>> - SAD and high-level API definition and implementation
>>
>>
>> <snip>
>> +static inline uint16_t
>> +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
>> +       struct rte_crypto_op *cop[], uint16_t num)
>> +{
>> +       int32_t rc;
>> +       uint32_t i, n;
>> +       union sym_op_data icv;
>> +
>> +       n = esn_outb_check_sqn(sa, num);
>> +
>> +       for (i = 0; i != n; i++) {
>> +
>> +               sa->sqn++;
>> [Anoob] Shouldn't this be done atomically?
>>
>> If we want to have an MT-safe SA-datapath API, then yes.
>> Though it would make things more complicated here, especially for inbound with anti-replay support.
>> I think it is doable (spin-lock?), but would cause extra overhead and complexity.
>> Right now I am not sure it is really worth it - comments/suggestions are welcome.
>> What probably could be a good compromise - a runtime decision on a per-SA basis (at sa_init()):
>> do we need ST or MT behavior for the given SA.
>> [Anoob] Going with a single thread approach would significantly limit the scope of this library. A single thread approach would mean
>> one SA on one core. This would not work in practical cases.
>> Suppose we have two flows which are supposed to use the same SA. With RSS, these flows could end up on different cores. Now
>> only one core would be able to process them, as the SA will not be shared. We have the same problem in ipsec-secgw too.
> Just for my curiosity - how do you plan to use RSS for ipsec packet distribution?
> Do you foresee a common situation when there would be packets that belong to the same SA
> (same SPI) but with multiple source (destination) IP addresses?
> If so, probably some examples would be helpful.
> I think the IPsec RFCs don't prevent such a situation, but AFAIK the most common case is a single source/destination IP for the same SPI.

sp ipv4 out esp protect 6 pri 1 dst 192.168.1.0/24 sport 0:65535 dport 0:65535
sp ipv4 out esp protect 6 pri 1 dst 192.168.2.0/24 sport 0:65535 dport 0:65535

sa out 6 cipher_algo aes-128-cbc cipher_key 22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11 auth_algo sha1-hmac auth_key 22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55 mode ipv4-tunnel src 172.16.2.1 dst 172.16.1.1

Isn't this a valid configuration? Wouldn't this be a common use case when we have site-to-site tunneling?

https://tools.ietf.org/html/rfc4301#section-4.4.1.1

>
> Anyway, let's pretend we found some smart way to distribute inbound packets for the same SA to multiple HW queues/CPU cores.
> To make ipsec processing for such a case work correctly, just atomicity on check/update of seqn/replay_window is not enough.
> I think it would require some extra synchronization:
> make sure that we do final packet processing (seq check/update) in the same order as we received the packets
> (packets entered ipsec processing).
> I don't really like to introduce such heavy mechanisms on SA level, after all it is supposed to be light and simple.
> Though we plan the CTX level API to support such a scenario.
> What I think would be a useful addition for SA level API - the ability to do one seqn/replay_window update and multiple checks concurrently.
>
>> In case of ingress also, the same problem exists. We will not be able to use RSS and spread the traffic to multiple cores. Considering
>> that IPsec is CPU intensive, this would limit the net output of the chip.
> That's true - but from the other side, the implementation can offload the heavy part
> (encrypt/decrypt, auth) to special HW (cryptodev).
> In that case a single core might be enough for an SA, and extra synchronization would just slow things down.
> That's why I think it should be configurable what behavior (ST or MT) to use.
I do agree that these are the issues that we need to address to make the 
library MT safe. Whether the extra synchronization would slow down 
things is a very subjective question and will heavily depend on the 
platform. The library should have enough provisions to be able to 
support MT without causing overheads to ST. Right now, the library 
assumes ST.
>
> Konstantin


* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-15 17:06           ` Joseph, Anoob
@ 2018-09-16 10:56             ` Jerin Jacob
  2018-09-17 18:12               ` Ananyev, Konstantin
  2018-09-17 10:36             ` Ananyev, Konstantin
  1 sibling, 1 reply; 194+ messages in thread
From: Jerin Jacob @ 2018-09-16 10:56 UTC (permalink / raw)
  To: Joseph, Anoob
  Cc: Ananyev, Konstantin, dev, Awal, Mohammad Abdul, Doherty, Declan,
	Narayana Prasad

-----Original Message-----
> Date: Sat, 15 Sep 2018 22:36:18 +0530
> From: "Joseph, Anoob" <Anoob.Joseph@caviumnetworks.com>
> To: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>, "dev@dpdk.org"
>  <dev@dpdk.org>
> Cc: "Awal, Mohammad Abdul" <mohammad.abdul.awal@intel.com>, "Doherty,
>  Declan" <declan.doherty@intel.com>, "Jerin Jacob
>  (jerin.jacob@caviumnetworks.com)" <jerin.jacob@caviumnetworks.com>,
>  Narayana Prasad <narayanaprasad.athreya@caviumnetworks.com>
> Subject: Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path
>  processing
> 
> > 
> > Anyway, let's pretend we found some smart way to distribute inbound packets for the same SA to multiple HW queues/CPU cores.
> > To make ipsec processing for such a case work correctly, just atomicity on check/update of seqn/replay_window is not enough.
> > I think it would require some extra synchronization:
> > make sure that we do final packet processing (seq check/update) in the same order as we received the packets
> > (packets entered ipsec processing).
> > I don't really like to introduce such heavy mechanisms on SA level, after all it is supposed to be light and simple.
> > Though we plan the CTX level API to support such a scenario.
> > What I think would be a useful addition for SA level API - the ability to do one seqn/replay_window update and multiple checks concurrently.
> > 
> > > In case of ingress also, the same problem exists. We will not be able to use RSS and spread the traffic to multiple cores. Considering
> > > that IPsec is CPU intensive, this would limit the net output of the chip.
> > That's true - but from the other side, the implementation can offload the heavy part
> > (encrypt/decrypt, auth) to special HW (cryptodev).
> > In that case a single core might be enough for an SA, and extra synchronization would just slow things down.
> > That's why I think it should be configurable what behavior (ST or MT) to use.
> I do agree that these are the issues that we need to address to make the
> library MT safe. Whether the extra synchronization would slow down things is
> a very subjective question and will heavily depend on the platform. The
> library should have enough provisions to be able to support MT without
> causing overheads to ST. Right now, the library assumes ST.


I agree with Anoob here.

I have two concerns with librte_ipsec as a separate library.

1) There is an overlap between rte_security and the newly proposed library.
For IPsec, if an application needs to use rte_security for the HW
implementation and librte_ipsec for the SW implementation, then that is
bad and a lot of duplication of work on the slow path too.

The rte_security spec can support both inline and look-aside IPsec
protocol processing.

2) This library is tuned with a fat CPU core in mind, like a single SA on a
core, etc. That is fine for the x86 server and arm64 server category of
machines, but it does not work very well with the NPU class of SoC or FPGA.

There are different ways to implement IPsec. For instance,
use of eventdev can help in situations handling millions of SAs:
sequence number update and anti-replay check can be done by leveraging
some of the HW-specific features like the
ORDERED and ATOMIC schedule types (mapped as eventdev features) in HW with the PIPELINE model.
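
For example, a rough sketch (not tied to any particular SoC; names like
SEQ_UPDATE_QUEUE and sa_index are app-defined): use the SA index as the
event flow_id on an ATOMIC queue, so the scheduler guarantees events of
one SA are held by one lcore at a time and the sequence number /
anti-replay state needs no locks:

struct rte_event ev;

/* stage N: crypto done, forward to an ATOMIC queue keyed by SA */
ev.op = RTE_EVENT_OP_FORWARD;
ev.queue_id = SEQ_UPDATE_QUEUE;         /* app-defined stage */
ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
ev.flow_id = sa_index;                  /* one SA -> one flow */
ev.event_ptr = pkt;
rte_event_enqueue_burst(evdev_id, port_id, &ev, 1);

/* stage N+1: only one lcore at a time holds a given flow_id, so
 * the SA's sqn/replay window can be updated without atomics/locks */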

# Issues with having one SA on one core:
- On the outbound side, there could be multiple flows using the same SA.
  Multiple flows could be processed in parallel on different lcores,
but tying one SA to one core would mean we won't be able to do that.

- On the inbound side, we will have a fat flow hitting one core. If the
  IPsec library assumes a single core, we will not be able to spread a
fat flow across multiple cores. And one SA-one core would mean all ports on
which we expect IPsec traffic have to be handled by that core.

I have made a simple presentation. This presentation details ONE WAY to
implement IPsec with HW support on an NPU.

https://docs.google.com/presentation/d/1e3IDf9R7ZQB8FN16Nvu7KINuLSWMdyKEw8_0H05rjj4/edit?usp=sharing

I am not saying this should be the ONLY way to do it, as it does not work
very well with the non-NPU/FPGA class of SoC.

So how about making the proposed IPsec library a plugin/driver to
rte_security?
This would give flexibility for each vendor/platform to choose a different
IPsec implementation based on HW support WITHOUT CHANGING THE APPLICATION
INTERFACE.

IMO, rte_security IPsec look-aside support can simply be added by
creating a virtual crypto device (i.e. move the proposed code to the virtual crypto device);
likewise, inline support
can be added by a virtual ethdev device. This would avoid the need for
updating the ipsec-secgw application as well, i.e. a unified interface to the application.

If you don't like the above idea, any scheme of plugin-based
implementation would be fine, so that a vendor or platform can choose its own implementation.
It can be based on a partial HW implementation too, i.e. SA lookup done in SW and the remaining stuff in HW
(for example, the IPsec inline case).
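
I.e. something like the below (all names purely illustrative):

/* hypothetical per-driver ops, selected at SA/session creation */
struct ipsec_ops {
        uint16_t (*prepare)(void *sess, struct rte_mbuf *mb[],
                struct rte_crypto_op *cop[], uint16_t num);
        uint16_t (*process)(void *sess, struct rte_mbuf *mb[],
                uint16_t num);
};

/* the SW fallback plugs in the proposed librte_ipsec code here; an
 * NPU/FPGA driver plugs in its own HW-backed implementation */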

# For protocols like UDP, it makes sense to create librte_udp, as there is
not much HW-specific offload beyond what ethdev provides.

# PDCP could be another library to offload to HW, so taking the
rte_security path makes more sense in that case too.

Jerin


* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-15 17:06           ` Joseph, Anoob
  2018-09-16 10:56             ` Jerin Jacob
@ 2018-09-17 10:36             ` Ananyev, Konstantin
  2018-09-17 14:41               ` Joseph, Anoob
  1 sibling, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-17 10:36 UTC (permalink / raw)
  To: Joseph, Anoob, dev
  Cc: Awal, Mohammad Abdul, Doherty, Declan,
	Jerin Jacob (jerin.jacob@caviumnetworks.com),
	Narayana Prasad

Hi Anoob,

> 
> Hi Konstantin,
> Please see inline.
> 
> 
> This RFC introduces a new library within DPDK: librte_ipsec.
> The aim is to provide DPDK native high performance library for IPsec
> data-path processing.
> The library is supposed to utilize existing DPDK crypto-dev and
> security API to provide application with transparent IPsec processing API.
> The library is concentrated on data-path protocols processing (ESP and AH),
> IKE protocol(s) implementation is out of scope for that library.
> Though hook/callback mechanisms will be defined to allow integrate it
> with existing IKE implementations.
> Due to quite complex nature of IPsec protocol suite and variety of user
> requirements and usage scenarios a few API levels will be provided:
> 1) Security Association (SA-level) API
>      Operates at SA level, provides functions to:
>      - initialize/teardown SA object
>      - process inbound/outbound ESP/AH packets associated with the given SA
>        (decrypt/encrypt, authenticate, check integrity,
>         add/remove ESP/AH related headers and data, etc.).
> 2) Security Association Database (SAD) API
>      API to create/manage/destroy IPsec SAD.
>      While DPDK IPsec library plans to have its own implementation,
>      the intention is to keep it as independent from the other parts
>      of IPsec library as possible.
>      That is supposed to give users the ability to provide their own
>      implementation of the SAD compatible with the other parts of the
>      IPsec library.
> 3) IPsec Context (CTX) API
>      This is supposed to be a high-level API, where each IPsec CTX is an
>      abstraction of 'independent copy of the IPsec stack'.
>      CTX owns set of SAs, SADs and assigned to it crypto-dev queues, etc.
>      and provides:
>      - de-multiplexing stream of inbound packets to particular SAs and
>        further IPsec related processing.
>      - IPsec related processing for the outbound packets.
>      - SA add/delete/update functionality
> [Anoob]: Security Policy is an important aspect of IPsec. An IPsec
> library without Security Policy API would be incomplete. For inline
> protocol offload, the final SP-SA check (selector check) is the only
> IPsec part being done by ipsec-secgw now. Would make sense to add that
> also in the library.
> 
> You mean here that we need some sort of SPD implementation, correct?
> [Anoob] Yes.
> 
> Ok, I see.
> Our thought was that just something based on librte_acl would be enough here...
> But if you think that a specially defined SPD API (and implementation) is needed -
> we can probably discuss it along with the SAD API (#2 above).
> Though if you'd like to start work on an RFC for it right away - please feel free to do so :)
> 
> 
> 
> 
> Current RFC concentrates on SA-level API only (1),
> detailed discussion for 2) and 3) will be subjects for separate RFC(s).
> 
> SA (low) level API
> ==================
> 
> API described below operates on SA level.
> It provides functionality that allows user for given SA to process
> inbound and outbound IPsec packets.
> To be more specific:
> - for inbound ESP/AH packets perform decryption, authentication,
>    integrity checking, remove ESP/AH related headers
> [Anoob] Anti-replay check would also be required.
> 
> Yep, anti-replay and ESN support is implied as part of "integrity checking".
> Probably I have to be more specific here.
> [Anoob] This is fine.
> 
> 
> 
> - for outbound packets perform payload encryption, attach ICV,
>    update/add IP headers, add ESP/AH headers/trailers,
>    setup related mbuf fields (ol_flags, tx_offloads, etc.).
> [Anoob] Do we have any plans to handle ESN expiry? Some means to
> initiate an IKE renegotiation? I'm assuming application won't be aware
> of the sequence numbers, in this case.
> [Anoob] What is your plan with events like ESN expiry? IPsec spec talks about byte and time expiry as well.
> 
> At current moment, for SA level: rte_ipsec_crypto_prepare()/rte_ipsec_inline_process() will set rte_errno
> to special value (EOVERFLOW) to signal upper layer that limit is reached.
> Upper layer can decide to start re-negotiation, or just destroy an SA.
> 
> Future plans for IPsec Context (CTX) API (#3 above):
> Introduce a special function, something like:
> rte_ipsec_get_expired(rte_ipsec_ctx *ctx, rte_ipsec_sa *expired_sa[], uint32_t num);
> It would return up to *num* SAs for the given ipsec context that are expired or have reached a limit.
> Then the upper layer again might decide for each SA whether renegotiation should be started,
> or whether to just wipe the given SA.
> It would be the upper layer's responsibility to call this function periodically.
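> 
> A minimal sketch of the intended upper-layer usage (rte_ipsec_get_expired() is
> only the proposal above, so treat all names as tentative; renegotiate() is an
> app-defined IKE hook):
> 
> struct rte_ipsec_sa *expired[64];
> uint32_t i, n;
> 
> /* periodic control-path poll for SAs that hit a limit */
> n = rte_ipsec_get_expired(ctx, expired, RTE_DIM(expired));
> for (i = 0; i != n; i++) {
>         if (renegotiate(expired[i]) != 0)
>                 rte_ipsec_sa_fini(expired[i]);
> }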
> 
> 
> 
> - initialize/un-initialize given SA based on user provided parameters.
> 
> Processed inbound/outbound packets could be grouped by user provided
> flow id (opaque 64-bit number associated by user with given SA).
> 
> SA-level API is based on top of crypto-dev/security API and relies on them
> to perform actual cipher and integrity checking.
> Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
> asynchronous API for IPsec packets destined to be processed
> by crypto-device:
> rte_ipsec_crypto_prepare()->rte_cryptodev_enqueue_burst()->
> rte_cryptodev_dequeue_burst()->rte_ipsec_crypto_process().
> Though for packets destined for inline processing no extra overhead
> is required, and a simple, synchronous API: rte_ipsec_inline_process()
> is introduced for that case.
> [Anoob] The API should include event-delivery as a crypto-op completion
> mechanism as well. The application could configure the event crypto
> adapter and then enqueue and dequeue to crypto device using events (via
> event dev).
> 
> Not sure what particular extra API you think is required here?
> As I understand in both cases (with or without event crypto-adapter) we still have to:
>  1) fill the crypto-op properly
>  2) enqueue it to crypto-dev (via eventdev or directly)
>  3) receive the processed crypto-op back from crypto-dev (either via eventdev or directly)
>  4) check the crypto-op status, do further post-processing if any
> 
> So #1 and #4 (SA-level API responsibility) remain the same for both cases.
> [Anoob] rte_ipsec_inline_process works on packets not events. We might need a similar API which processes events.
> 
> Ok, I still don't get you here.
> Could you specify what exactly function you'd like to add to the API here with parameter list
> and brief behavior description?
> 
> 
> The following functionality:
>    - match inbound/outbound packets to particular SA
>    - manage crypto/security devices
>    - provide SAD/SPD related functionality
>    - determine what crypto/security device has to be used
>      for given packet(s)
> is out of scope for SA-level API.
> 
> Below is the brief (and simplified) overview of expected SA-level
> API usage.
> 
> /* allocate and initialize SA */
> size_t sz = rte_ipsec_sa_size();
> struct rte_ipsec_sa *sa = rte_malloc(NULL, sz, RTE_CACHE_LINE_SIZE);
> struct rte_ipsec_sa_prm prm;
> /* fill prm */
> rc = rte_ipsec_sa_init(sa, &prm);
> if (rc != 0) { /*handle error */}
> .....
> 
> /* process inbound/outbound IPsec packets that belong to the given SA */
> 
> /* inline IPsec processing was done for these packets */
> if (use_inline_ipsec)
>         n = rte_ipsec_inline_process(sa, pkts, nb_pkts);
> /* use crypto-device to process the packets */
> else {
>       struct rte_crypto_op *cops[nb_pkts];
>       struct rte_ipsec_group grp[nb_pkts];
> 
>        ....
>       /* prepare crypto ops */
>       n = rte_ipsec_crypto_prepare(sa, pkts, cops, nb_pkts);
>       /* enqueue crypto ops to related crypto-dev */
>       n = rte_cryptodev_enqueue_burst(..., cops, n);
>       if (n != nb_pkts) { /* handle failed packets */ }
>       /* dequeue finished crypto ops from related crypto-dev */
>       n = rte_cryptodev_dequeue_burst(..., cops, nb_pkts);
>       /* finish IPsec processing for associated packets */
>       n = rte_ipsec_crypto_process(cops, pkts, grp, n);
> [Anoob] Does the SA based grouping apply to both inbound and outbound?
> 
> Yes, the plan is to have it available for both cases.
> [Anoob] On the inbound, shouldn't the packets be grouped+ordered based on inner L3+inner L4?
> 
> I think that's up to the user to decide what criteria to group by,
> and whether to do any grouping at all.
> That's why the flowid is user-defined and totally transparent to the lib.
> 
> 
> 
> 
>       /* now we have <n> groups of packets grouped by SA flow id */
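>       /*
>        * e.g. a sketch of walking those groups (the rte_ipsec_group field
>        * names 'id' and 'cnt' are assumptions here, and handle_flow() is
>        * app-defined):
>        *
>        * for (i = 0, k = 0; i != n; i++) {
>        *         handle_flow(grp[i].id, pkts + k, grp[i].cnt);
>        *         k += grp[i].cnt;
>        * }
>        */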
>      ....
>   }
> ...
> 
> /* uninit given SA */
> rte_ipsec_sa_fini(sa);
> 
> Planned scope for 18.11:
> ========================
> 
> - SA-level API definition
> - ESP tunnel mode support (both IPv4/IPv6)
> - Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
> - UT
> [Anoob] What is UT?
> 
> Unit-Test
> 
> 
> Note: Still WIP, so not all planned for 18.11 functionality is in place.
> 
> Post 18.11:
> ===========
> - ESP transport mode support (both IPv4/IPv6)
> - update examples/ipsec-secgw to use librte_ipsec
> - SAD and high-level API definition and implementation
> 
> 
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   config/common_base                     |   5 +
>   lib/Makefile                           |   2 +
>   lib/librte_ipsec/Makefile              |  24 +
>   lib/librte_ipsec/meson.build           |  10 +
>   lib/librte_ipsec/pad.h                 |  45 ++
>   lib/librte_ipsec/rte_ipsec.h           | 245 +++++++++
>   lib/librte_ipsec/rte_ipsec_version.map |  13 +
>   lib/librte_ipsec/sa.c                  | 921 +++++++++++++++++++++++++++++++++
>   lib/librte_net/rte_esp.h               |  10 +-
>   lib/meson.build                        |   2 +
>   mk/rte.app.mk                          |   2 +
>   11 files changed, 1278 insertions(+), 1 deletion(-)
>   create mode 100644 lib/librte_ipsec/Makefile
>   create mode 100644 lib/librte_ipsec/meson.build
>   create mode 100644 lib/librte_ipsec/pad.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
>   create mode 100644 lib/librte_ipsec/sa.c
> <snip>
> +static inline uint16_t
> +esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> +       struct rte_crypto_op *cop[], uint16_t num)
> +{
> +       int32_t rc;
> +       uint32_t i, n;
> +       union sym_op_data icv;
> +
> +       n = esn_outb_check_sqn(sa, num);
> +
> +       for (i = 0; i != n; i++) {
> +
> +               sa->sqn++;
> [Anoob] Shouldn't this be done atomically?
> 
> If we want to have MT-safe API for SA-datapath API, then yes.
> Though it would make things more complicated here, especially for inbound with anti-replay support.
> I think it is doable (spin-lock?), but would cause extra overhead and complexity.
> Right now I am not sure it is really worth it - comments/suggestions are welcome.
> What probably could be a good compromise - a runtime decision on a per-SA basis (at sa_init()):
> do we need ST or MT behavior for the given SA.
> [Anoob] Going with the single thread approach would significantly limit the scope of this library. The single thread approach would mean
> one SA on one core. This would not work in practical cases.
> Suppose we have two flows which are supposed to use the same SA. With RSS, these flows could end up on different cores. Now
> only one core would be able to process, as SA will not be shared. We have the same problem in ipsec-secgw too.
> 
> Just for my curiosity - how do you plan to use RSS for ipsec packet distribution?
> Do you foresee a common situation where there would be packets that belong to the same SA
> (same SPI) but with multiple source (destination) IP addresses?
> If so, probably some examples would be helpful.
> I think the IPsec RFCs don't prevent such a situation, but AFAIK the most common case is a single source/destination IP for the same SPI.
> 
> sp ipv4 out esp protect 6 pri 1 dst 192.168.1.0/24 sport 0:65535 dport 0:65535
> sp ipv4 out esp protect 6 pri 1 dst 192.168.2.0/24 sport 0:65535 dport 0:65535
> sa out 6 cipher_algo aes-128-cbc cipher_key 22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11 auth_algo sha1-hmac auth_key
> 22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55 mode ipv4-tunnel src 172.16.2.1 dst 172.16.1.1
> Isn't this a valid configuration? Wouldn't this be a common use case when we have site-to-site tunneling?
> https://tools.ietf.org/html/rfc4301#section-4.4.1.1

Ok, I think I understand what my confusion was here - above you were talking about using RSS to distribute incoming *outbound* traffic, correct?
If so, then yes I think such scheme would work without problems.
My original thought was that we are talking about inbound traffic distribution here - in that case standard RSS wouldn't help much.

> 
> 
> Anyway, let's pretend we found some smart way to distribute inbound packets for the same SA to multiple HW queues/CPU cores.
> To make ipsec processing for such a case work correctly, just atomicity on the check/update of seqn/replay_window is not enough.
> I think it would require some extra synchronization:
> make sure that we do final packet processing (seq check/update) at the same order as we received the packets
> (packets entered ipsec processing).
> I don't really like to introduce such heavy mechanisms at the SA level; after all it is supposed to be light and simple.
> Though we plan CTX level API to support such scenario.
> What I think would be a useful addition for the SA level API - the ability to do one seqn/replay_window update and multiple checks
> concurrently.
> 
> In case of ingress also, the same problem exists. We will not be able to use RSS and spread the traffic to multiple cores. Considering
> IPsec being CPU intensive, this would limit the net output of the chip.
> 
> That's true - but on the other side the implementation can offload the heavy part
> (encrypt/decrypt, auth) to special HW (cryptodev).
> In that case a single core might be enough for the SA, and extra synchronization would just slow things down.
> That's why I think it should be configurable what behavior (ST or MT) to use.
> I do agree that these are the issues that we need to address to make the library MT safe. Whether the extra synchronization would
> slow down things is a very subjective question and will heavily depend on the platform. The library should have enough provisions
> to be able to support MT without causing overheads to ST. Right now, the library assumes ST.

Ok, I suppose we both agree that we need ST and MT case supported.
I didn't want to introduce MT related code right now (for 18.11), but as you guys seem very concerned about it,
we will try to add MT related stuff into v1, so you can review it at early stages. 
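
For illustration only, one possible shape of such a per-SA ST/MT switch
(nothing below exists in the RFC code yet; the flag name is invented):

/* hypothetical flag in rte_ipsec_sa_prm, evaluated at sa_init() */
#define RTE_IPSEC_SAFLAG_MT_SAFE	(1ULL << 0)

prm.flags |= RTE_IPSEC_SAFLAG_MT_SAFE;	/* atomic seqn/replay_window updates */
rc = rte_ipsec_sa_init(sa, &prm);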
Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-17 10:36             ` Ananyev, Konstantin
@ 2018-09-17 14:41               ` Joseph, Anoob
  0 siblings, 0 replies; 194+ messages in thread
From: Joseph, Anoob @ 2018-09-17 14:41 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Awal, Mohammad Abdul, Doherty, Declan,
	Jerin Jacob (jerin.jacob@caviumnetworks.com),
	Narayana Prasad

Hi Konstantin,


On 17-09-2018 16:06, Ananyev, Konstantin wrote:
> External Email
>
> Hi Anoob,
>
>> Hi Konstantin,
>> Please see inline.
>>
>>
>> <snip>
> Ok, I think I understand what my confusion was here - above you were talking about using RSS to distribute incoming *outbound* traffic, correct?
> If so, then yes I think such scheme would work without problems.
> My original thought was that we are talking about inbound traffic distribution here - in that case standard RSS wouldn't help much.
Agreed. RSS won't be of much use in that case (inbound). But a fat flow
hitting one core would be a problem which we should solve. RSS will help
in solving the same problem with outbound, to an extent.
>> <snip>
> Ok, I suppose we both agree that we need ST and MT case supported.
> I didn't want to introduce MT related code right now (for 18.11), but as you guys seem very concerned about it,
> we will try to add MT related stuff into v1, so you can review it at early stages.
> Konstantin
Glad that we are on the same page. As you had pointed out, MT is not
just about adding some locks. It's more complicated than that. And again,
for solving it there could be multiple ways. Using locks and software
queues etc. would definitely be one way. But that's not the only
solution. As Jerin had pointed out, there are parts in the IPsec flow which
can be offloaded to hardware.

http://mails.dpdk.org/archives/dev/2018-September/111770.html

Can you share your thoughts on the above approach?

Anoob

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-16 10:56             ` Jerin Jacob
@ 2018-09-17 18:12               ` Ananyev, Konstantin
  2018-09-18 12:42                 ` Ananyev, Konstantin
  2018-09-18 17:54                 ` Jerin Jacob
  0 siblings, 2 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-17 18:12 UTC (permalink / raw)
  To: Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad

Hi Jerin,


> > > <snip>
> > I do agree that these are the issues that we need to address to make the
> > library MT safe. Whether the extra synchronization would slow down things is
> > a very subjective question and will heavily depend on the platform. The
> > library should have enough provisions to be able to support MT without
> > causing overheads to ST. Right now, the library assumes ST.
> 
> 
> I agree with Anoob here.
> 
> I have two concerns with librte_ipsec as a separate library
> 
> 1) There is an overlap with rte_security and new proposed library.

I don't think there really is an overlap.
rte_security is a 'framework for management and provisioning of security protocol operations offloaded to hardware based devices'.
While rte_ipsec is aimed to be a library for IPsec data-path processing.
There are no plans for rte_ipsec to 'obsolete' rte_security.
Quite the opposite: rte_ipsec is supposed to work with both the rte_cryptodev and rte_security APIs (devices).
It is possible to have an SA that would use both crypto and security devices.
Or to have an SA that would use multiple crypto devs
(though right now it is up to the user level to do the load-balancing logic).

> For IPsec, if an application needs to use rte_security for the HW
> implementation and librte_ipsec for the
> SW implementation, then it is bad and a lot of duplication of work on
> the slow path too.

The plan is that application would need to use just rte_ipsec API for all data-paths
(HW/SW, lookaside/inline). 
Let's say right now there is the rte_ipsec_inline_process() function if the user
prefers to use an inline security device to process a given group of packets,
and rte_ipsec_crypto_process(/prepare) if the user decides to use
lookaside security or a simple crypto device for it.

> 
> The rte_security spec can support both inline and look-aside IPSec
> protocol support.

AFAIK right now rte_security just provides API to create/free/manipulate security sessions.
I don't see how it can support all the functionality mentioned above,
plus SAD and SPD.

> 
> 2) This library is tuned with a fat CPU core in mind, like a single SA per core,
> etc. Which is fine for the x86 server and arm64 server category of machines,
> but it does not work very well with the NPU class of SoC or FPGA.
> 
> As there are different ways to implement IPSec. For instance,
> use of eventdev can help in situations handling millions of SAs, and
> sequence number update and anti-replay check can be done by leveraging
> some of the HW specific features like
> ORDERED, ATOMIC schedule types (mapped as eventdev features) in HW with the PIPELINE model.
> 
> # Issues with having one SA one core,
> - In the outbound side, there could be multiple flows using the same SA.
>   Multiple flows could be processed parallel on different lcores,
> but tying one SA to one core would mean we won't be able to do that.
> 
> - In the inbound side, we will have a fat flow hitting one core. If
>   the IPsec library assumes a single core, we will not be able to spread the
> fat flow to multiple cores. And one SA-one core would mean all ports on
> which we would expect IPsec traffic have to be handled by that core.

I suppose that all refers to the discussion about MT safe API for rte_ipsec, right?
If so, then as I said in my reply to Anoob: 
We will try to make API usable in MT environment for v1,
so you can review and provide comments at early stages.

> 
> I have made a simple presentation. This presentation details ONE WAY to
> implement the IPSec with HW support on NPU.
> 
> https://docs.google.com/presentation/d/1e3IDf9R7ZQB8FN16Nvu7KINuLSWMdyKEw8_0H05rjj4/edit?usp=sharing
> 

Thanks, quite helpful.
Actually from page 3, it looks like your expectations don't contradict the proposed API in general:

...
} else if (ev.event_type == RTE_EVENT_TYPE_LCORE && ev.sub_event_id == APP_STATE_SEQ_UPDATE) {
                        sa = ev.flow_queue_id;                                  
                        /* do critical section work per sa */               
                        do_critical_section_work(sa);  

[KA] that's the place where I expect either 
rte_ipsec_inline_process(sa, ...); OR rte_ipsec_crypto_prepare(sa, ...);      	
would be called.
                                                                               
                     /* Issue the crypto request and generate the following on crypto work completion */
[KA] that's the place where I expect rte_ipsec_crypto_process(...) be invoked.

                        ev.flow_queue_id = tx_port;                                  
                        ev.sub_event_id = tx_queue_id;            
                        ev.sched_sync = RTE_SCHED_SYNC_ATOMIC;                 
                        rte_cryptodev_event_enqueue(cryptodev, ev.mbuf, eventdev, ev);                
                }
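
Putting the two [KA] remarks together, that loop could look roughly like the
sketch below, using the existing eventdev crypto adapter instead of the
slide's pseudo-API (lookup_sa() and the surrounding setup are hypothetical):

struct rte_event ev;
struct rte_crypto_op *cop;
struct rte_ipsec_sa *sa;
struct rte_ipsec_group grp;
struct rte_mbuf *m;
uint16_t n;

while (rte_event_dequeue_burst(evdev_id, ev_port, &ev, 1, 0) != 0) {
        if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
                /* per-SA critical section: build the crypto op */
                m = ev.mbuf;
                sa = lookup_sa(m);                      /* app-defined */
                n = rte_ipsec_crypto_prepare(sa, &m, &cop, 1);
                ev.event_ptr = cop;
                rte_event_crypto_adapter_enqueue(evdev_id, ev_port, &ev, 1);
        } else if (ev.event_type == RTE_EVENT_TYPE_CRYPTODEV) {
                /* crypto work completed: finish IPsec processing, then TX */
                cop = ev.event_ptr;
                n = rte_ipsec_crypto_process(&cop, &m, &grp, 1);
                /* forward m to the TX stage */
        }
}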


> I am not saying this should be the ONLY way to do as it does not work
> very well with non NPU/FPGA class of SoC.
> 
> So how about making the proposed IPSec library as plugin/driver to
> rte_security.

As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
is the best possible approach.
Though I probably understand your concern:
In the RFC code we always do the whole prepare/process in SW (attach/remove ESP headers/trailers, do padding, etc.),
i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
Though there are devices where most of prepare/process can be done in HW
(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
Is that so?
To address that issue I suppose we can do:
1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
    security devices into ipsec.
    We planned to do it anyway, just don't have it done yet.
2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
    and add into rte_security_ops new functions:
    uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
    uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
    uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
    So for custom HW, PMD can overwrite normal prepare/process behavior.
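
    In code terms, the dispatch inside the library could then look roughly like
    this (a sketch of item 2 above; the sa->type, sa->sctx and sa->ss fields
    are invented for illustration):

    /* inside rte_ipsec_crypto_prepare(): delegate to the PMD for CUSTOM SAs */
    if (sa->type == RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM)
            return sa->sctx->ops->lookaside_prepare(sa->ss, mb, cop, num);
    /* otherwise fall back to the generic SW prepare path */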

> This would give flexibility for each vendor/platform to choose a different
> IPsec implementation based on HW support WITHOUT CHANGING THE APPLICATION
> INTERFACE.

Not sure what API changes you are referring to?
As I am aware we do introduce new API, but all existing APIs remain in place.

> 
> IMO, rte_security IPsec look aside support can be simply added by
> creating a virtual crypto device (i.e. move the proposed code to the virtual crypto device);
> likewise inline support
> can be added by the virtual ethdev device.

That's probably possible, and if someone would like to introduce such an abstraction - no problem in general
(though my suspicion is it might be too heavy to be really useful).
Though I don't think it should be the only possible way for the user to enable IPsec data-processing inside his app.
Again I guess such virtual-dev will still use rte_ipsec inside.

> This would avoid the need for
> updating the ipsec-secgw application as well, i.e. a unified interface to the application.

I think it would be really good to simplify the existing ipsec-secgw sample app.
Some parts of it seem unnecessarily complex to me.
One of the reasons for that - we don't really have a unified (and transparent) API for the ipsec data-path.
Let's look at ipsec_enqueue() and related code (examples/ipsec-secgw/ipsec.c:365).
It is huge (and ugly) - the user has to handle a dozen different cases just to enqueue a packet for IPsec processing.
One of the aims of the rte_ipsec library - hide all those complexities inside the library and provide
the upper layer a clean and transparent API.

> 
> If you don't like the above idea, any scheme of plugin based
> implementation would be fine so that vendor or platform can choose its own implementation.
> It can be based on a partial HW implementation too, i.e. SA lookup can be done in SW, the remaining stuff in HW
> (for example in the IPsec inline case)

I am surely ok with the idea to give vendors an ability to customize implementation
and enable their HW capabilities.
Do you think the proposed additions to rte_security would be enough,
or something extra is needed?

Konstantin


> 
> # For protocols like UDP, it makes sense to create librte_udp as there is
> not much HW specific offload other than what ethdev provides.
> 
> # PDCP could be another library to offload to HW, so taking the
> rte_security path makes more sense in that case too.
> 
> Jerin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-17 18:12               ` Ananyev, Konstantin
@ 2018-09-18 12:42                 ` Ananyev, Konstantin
  2018-09-20 14:26                   ` Akhil Goyal
  2018-09-18 17:54                 ` Jerin Jacob
  1 sibling, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-18 12:42 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad,
	Hemant Agrawal, shreyansh.jain


> > I am not saying this should be the ONLY way to do as it does not work
> > very well with non NPU/FPGA class of SoC.
> >
> > So how about making the proposed IPSec library as plugin/driver to
> > rte_security.
> 
> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> is the best possible approach.
> Though I probably understand your concern:
> In the RFC code we always do the whole prepare/process in SW (attach/remove ESP headers/trailers, do padding, etc.),
> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
> Though there are devices where most of prepare/process can be done in HW
> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> Is that so?
> To address that issue I suppose we can do:
> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>     security devices into ipsec.
>     We planned to do it anyway, just don't have it done yet.
> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
>     and add into rte_security_ops new functions:
>     uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>     uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>     uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>     So for custom HW, PMD can overwrite normal prepare/process behavior.
> 

Actually, after another thought:
My previous assumption (probably a wrong one) was that both
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
devices can do the whole data-path ipsec processing totally in HW - no need for any SW support (except init/config).
Now looking at the dpaa and dpaa2 devices (the only ones that support RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL right now)
I am not so sure about that - looks like some SW help might be needed for replay window updates, etc.  
Hemant, Shreyansh - can you guys confirm what is expected from RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL devices
(HW/SW roles/responsibilities)?
About RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL - I didn't find any driver inside the DPDK source tree that supports that capability.
So my question is: are there any devices/drivers that do support it?
If so, where could the source code be found, and what are the HW/SW roles/responsibilities for that type of device?
Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-17 18:12               ` Ananyev, Konstantin
  2018-09-18 12:42                 ` Ananyev, Konstantin
@ 2018-09-18 17:54                 ` Jerin Jacob
  2018-09-24  8:45                   ` Ananyev, Konstantin
  1 sibling, 1 reply; 194+ messages in thread
From: Jerin Jacob @ 2018-09-18 17:54 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: Joseph, Anoob, dev, Awal, Mohammad Abdul, Doherty, Declan,
	Narayana Prasad, akhil.goyal, hemant.agrawal, shreyansh.jain

-----Original Message-----
> Date: Mon, 17 Sep 2018 18:12:48 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>, "Joseph, Anoob"
>  <Anoob.Joseph@caviumnetworks.com>
> CC: "dev@dpdk.org" <dev@dpdk.org>, "Awal, Mohammad Abdul"
>  <mohammad.abdul.awal@intel.com>, "Doherty, Declan"
>  <declan.doherty@intel.com>, Narayana Prasad
>  <narayanaprasad.athreya@caviumnetworks.com>
> Subject: RE: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path
>  processing
> 
> 
> Hi Jerin,


Hi Konstantin,

> 
> 
> > > <snip>
> > > I do agree that these are the issues that we need to address to make the
> > > library MT safe. Whether the extra synchronization would slow down things is
> > > a very subjective question and will heavily depend on the platform. The
> > > library should have enough provisions to be able to support MT without
> > > causing overheads to ST. Right now, the library assumes ST.
> >
> >
> > I agree with Anoob here.
> >
> > I have two concerns with librte_ipsec as a separate library
> >
> > 1) There is an overlap with rte_security and new proposed library.
> 
> I don't think there really is an overlap.

As mentioned in your other email, IMO there is an overlap, as
RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL can support almost everything
in HW, or HW + SW if some PMD wishes to do so.

Answering some of the questions you have asked in the other thread, based on
my understanding.

Regarding RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL support,
Marvell/Cavium CPT hardware on next generation HW (planning to upstream
around v19.02) can support RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL, and
Anoob has already pushed the application changes in ipsec-secgw.

Our understanding of the HW/SW roles/responsibilities for that type of
device is:

INLINE_PROTOCOL
----------------
In control path, security session is created with the given SA and
rte_flow configuration etc.
 
For outbound traffic, the application will have to do SA lookup and
identify the security action (inline/look aside crypto/protocol). For
packets identified for inline protocol processing, the application would
submit as plain packets to the ethernet device and the security capable
ethernet device would perform IPSec and send out the packet. For PMDs
which would need extra metadata (capability flag), the set_pkt_metadata
function pointer would be called (from the application). This can be used to
set some per-packet field to identify the security session to be used to
process the packet. Sequence number update will be done by the PMD.

For inbound traffic, the packets for IPSec would be identified by using
rte_flow (hardware accelerated packet filtering). For the packets
identified for inline offload (SECURITY action), hardware would perform
the processing. For inline protocol processed IPSec packets, PMD would
set “user data” so that application can get the details of the security
processing done on the packet. Once the plain packet (after IPSec
processing) is received, a selector check needs to be performed to make
sure we have a valid packet after IPSec processing. The user data is used
for that. The anti-replay check is handled by the PMD. The PMD would raise
an eth event in case of sequence number expiry or any SA expiry.
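
For reference, the control-path setup described above maps onto the existing
rte_security/rte_flow calls roughly as follows (a sketch; sess_mp, attr,
pattern, the error struct and the mbuf m are assumed to be set up elsewhere):

/* inline-protocol IPsec session on a security-capable ethdev */
struct rte_security_ctx *sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
struct rte_security_session_conf conf;

memset(&conf, 0, sizeof(conf));
conf.action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
conf.protocol = RTE_SECURITY_PROTOCOL_IPSEC;
/* conf.ipsec = SA parameters (spi, mode, direction, ...) */
conf.userdata = sa;     /* returned with inbound packets for the selector check */
sess = rte_security_session_create(sec_ctx, &conf, sess_mp);

/* inbound: steer ESP packets with the SA's SPI to the session */
const struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_SECURITY, .conf = sess },
        { .type = RTE_FLOW_ACTION_TYPE_END },
};
flow = rte_flow_create(port_id, &attr, pattern, actions, &flow_err);

/* outbound: PMDs that need per-packet metadata get it attached before TX */
rte_security_set_pkt_metadata(sec_ctx, sess, m, NULL);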


LOOKASIDE_PROTOCOL
------------------
In control path, security session is created with the given SA.
 
Enqueue/dequeue is similar to what is done for regular crypto
(RTE_SECURITY_ACTION_TYPE_NONE) but all the protocol related processing
would be offloaded. The application will need to do the SA lookup and identify
the processing to be done (both in the outbound & inbound case), and
submit the packet to the crypto device. The application need not do any IPSec
related transformations other than the lookup. Anti-replay needs to be
handled in the PMD (the spec says the device "may" do the anti-replay check,
but a complete protocol offload would need the anti-replay check also).
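
The corresponding data path is then a plain crypto enqueue/dequeue with the
security session attached to the op (a sketch; op_pool, dev_id, qp_id, sess
and the mbuf m are assumed):

struct rte_crypto_op *op =
        rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC);

op->sym->m_src = m;
rte_security_attach_session(op, sess); /* sets RTE_CRYPTO_OP_SECURITY_SESSION */
rte_cryptodev_enqueue_burst(dev_id, qp_id, &op, 1);
/* ... */
if (rte_cryptodev_dequeue_burst(dev_id, qp_id, &op, 1) == 1) {
        /* ESP encap/decap and (per the above) anti-replay were done by the
         * device; op->status tells whether the processing succeeded */
}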


> <snip>
> 
> >
> > The rte_security spec can support both inline and look-aside IPSec
> > protocol support.
> 
> AFAIK right now rte_security just provides API to create/free/manipulate security sessions.
> I don't see how it can support all the functionality mentioned above,
> plus SAD and SPD.


At least in the INLINE_PROTOCOL case, the SA lookup for inbound traffic is
done by HW.

> 
> >
> > <snip>
> 
> I suppose that all refers to the discussion about MT safe API for rte_ipsec, right?
> If so, then as I said in my reply to Anoob:
> We will try to make API usable in MT environment for v1,
> so you can review and provide comments at early stages.

OK

> 
> >
> > I have made a simple presentation. This presentation details ONE WAY to
> > implement the IPSec with HW support on NPU.
> >
> > https://docs.google.com/presentation/d/1e3IDf9R7ZQB8FN16Nvu7KINuLSWMdyKEw8_0H05rjj4/edit?usp=sharing
> >
> 
> Thanks, quite helpful.
> Actually from page 3, it looks like your expectations don't contradict the proposed API in general:
> 
> ...
> } else if (ev.event_type == RTE_EVENT_TYPE_LCORE && ev.sub_event_id == APP_STATE_SEQ_UPDATE) {
>                         sa = ev.flow_queue_id;
>                         /* do critical section work per sa */
>                         do_critical_section_work(sa);
> 
> [KA] that's the place where I expect either
> rte_ipsec_inline_process(sa, ...); OR rte_ipsec_crypto_prepare(sa, ...);
> would be called.

Makes sense. But currently the library defines what
rte_ipsec_inline_process() and rte_ipsec_crypto_prepare() are, while that
should be based on the underlying security device or crypto device.

So, IMO, for better control these functions should be function-pointer
based; based on the underlying device, the library can fill in the
implementation.

IMO, it is not possible to create a "static inline function" with all the "if"
checks. I think we can have four ipsec functions with a function pointer
scheme.

rte_ipsec_inbound_prepare()
rte_ipsec_inbound_process()
rte_ipsec_outbound_prepare()
rte_ipsec_outbound_process()

Some of the other concerns:
1) For a HW implementation, rte_ipsec_sa needs to be opaque, like rte_security,
as some of the structure is defined by HW or microcode. We can keep
absolutely generic items common, and the device/rte_security specific part can be opaque.

2) I think, in order to accommodate the event-driven model, we need to pass
void ** to the prepare() and process() functions, with an additional
type argument (TYPE_EVENT/TYPE_MBUF) to detect the packet object
type, as some of the functions in prepare() and process() may need an
rte_event to operate on. A rough sketch of this scheme follows below.
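
For illustration, that could look like the following (all names below are
invented; nothing here exists in the RFC code):

/* hypothetical object-type tag for the event-driven model */
enum rte_ipsec_obj_type {
        RTE_IPSEC_OBJ_MBUF,
        RTE_IPSEC_OBJ_EVENT,
};

/* hypothetical per-SA ops, filled in by the library (or a rte_security
 * plugin) based on the underlying device */
struct rte_ipsec_sa_func {
        uint16_t (*in_prepare)(struct rte_ipsec_sa *sa, void *obj[],
                enum rte_ipsec_obj_type type, uint16_t num);
        uint16_t (*in_process)(struct rte_ipsec_sa *sa, void *obj[],
                enum rte_ipsec_obj_type type, uint16_t num);
        uint16_t (*out_prepare)(struct rte_ipsec_sa *sa, void *obj[],
                enum rte_ipsec_obj_type type, uint16_t num);
        uint16_t (*out_process)(struct rte_ipsec_sa *sa, void *obj[],
                enum rte_ipsec_obj_type type, uint16_t num);
};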

> 
>                      /* Issue the crypto request and generate the following on crypto work completion */
> [KA] that's the place where I expect rte_ipsec_crypto_process(...) be invoked.
> 
>                         ev.flow_queue_id = tx_port;
>                         ev.sub_event_id = tx_queue_id;
>                         ev.sched_sync = RTE_SCHED_SYNC_ATOMIC;
>                         rte_cryptodev_event_enqueue(cryptodev, ev.mbuf, eventdev, ev);
>                 }
> 
> 
> > I am not saying this should be the ONLY way to do as it does not work
> > very well with non NPU/FPGA class of SoC.
> >
> > So how about making the proposed IPSec library as plugin/driver to
> > rte_security.
> 
> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> is the best possible approach.
> Though I probably understand your concern:
> In the RFC code we always do the whole prepare/process in SW (attach/remove ESP headers/trailers, do padding, etc.),
> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
> Though there are devices where most of prepare/process can be done in HW
> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> Is that so?
> To address that issue I suppose we can do:
> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>     security devices into ipsec.
>     We planned to do it anyway, just don't have it done yet.
> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM

The problem is, CUSTOM may have different variants, and "if" conditions won't
scale if we choose a non-function-pointer scheme. Otherwise, it
looks OK to create a new SECURITY TYPE and an associated plugin for the prepare() and process()
functions in the librte_ipsec library.


>     and add into rte_security_ops new functions:
>     uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>     uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>     uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>     So for custom HW, PMD can overwrite normal prepare/process behavior.
> 
> > This would give flexibility for each vendor/platform to choose a different
> > IPsec implementation based on HW support WITHOUT CHANGING THE APPLICATION
> > INTERFACE.
> 
> Not sure what API changes you are referring to?
> As I am aware we do introduce new API, but all existing APIs remain in place.


What I meant was: a single application programming interface to enable IPSec processing for the
application.


> 
> >
> > IMO, rte_security IPsec look aside support can be simply added by
> > creating a virtual crypto device (i.e. move the proposed code to the virtual crypto device);
> > likewise inline support
> > can be added by the virtual ethdev device.
> 
> That's probably possible, and if someone would like to introduce such an abstraction - no problem in general
> (though my suspicion is it might be too heavy to be really useful).
> Though I don't think it should be the only possible way for the user to enable IPsec data-processing inside his app.
> Again I guess such virtual-dev will still use rte_ipsec inside.

I don't have a strong opinion on virtual devices vs. function-pointer based
prepare() and process() functions in the librte_ipsec library.

> 
> <snip>
> I am surely ok with the idea to give vendors an ability to customize implementation
> and enable their HW capabilities.

I think we are on the same page; just the fine details of the "framework"
for customizing the implementation based on HW capabilities need to
be ironed out.

> Do you think proposed additions to the rte_security would be  enough,
> or something extra is needed?

See above.

Jerin

> 
> Konstantin
> 
> 
> >
> > # For protocols like UDP, it makes sense to create librte_udp as there is
> > not much HW specific offload other than what ethdev provides.
> >
> > # PDCP could be another library to offload to HW, so taking the
> > rte_security path makes more sense in that case too.
> >
> > Jerin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-18 12:42                 ` Ananyev, Konstantin
@ 2018-09-20 14:26                   ` Akhil Goyal
  2018-09-24 10:51                     ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-09-20 14:26 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad,
	Hemant Agrawal, shreyansh.jain

Hi Konstantin,

On 9/18/2018 6:12 PM, Ananyev, Konstantin wrote:
>>> I am not saying this should be the ONLY way to do as it does not work
>>> very well with non NPU/FPGA class of SoC.
>>>
>>> So how about making the proposed IPSec library as plugin/driver to
>>> rte_security.
>> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
>> is the best possible approach.
>> Though I probably understand your concern:
>> In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, so paddings etc.),
>> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
>> Though there are devices where most of prepare/process can be done in HW
>> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
>> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
>> Is that so?
>> To address that issue I suppose we can do:
>> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>>      security devices into ipsec.
>>      We planned to do it anyway, just don't have it done yet.
>> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
>>      and add into rte_security_ops   new functions:
>>      uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>>      uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>>      uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>>      So for custom HW, PMD can overwrite normal prepare/process behavior.
>>
> Actually, after another thought:
> My previous assumption (probably a wrong one) was that for both
> RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> devices can do the whole data-path ipsec processing totally in HW - no need for any SW support (except init/config).
> Now looking at dpaa and dpaa2 devices (the only ones that support RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL right now)
> I am not so sure about that - looks like some SW help might be needed for replay window updates, etc.
> Hemant, Shreyansh - can you guys confirm what is expected from RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL devices
> (HW/SW roles/responsibilities)?
> About RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL  - I didn't find any driver inside DPDK source tree that does support that capability.
> So my question is: are there any devices/drivers that do support it?
> If so, where could the source code be found, and what are the HW/SW roles/responsibilities for that type of devices?
> Konstantin
>
>
In case of LOOKASIDE, the protocol errors like anti-replay and sequence
number overflow shall be the responsibility of either the PMD or the HW.
It should notify the application that the error has occurred and
the application needs to decide what to do next.
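
As a rough sketch of what the application side could look like on dequeue
(assuming a generic error status for now, since dedicated anti-replay /
overflow status codes are not defined yet):

    /* after rte_cryptodev_dequeue_burst() on a LOOKASIDE session */
    for (i = 0; i != n; i++) {
            if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
                    /* protocol error (anti-replay, SQN overflow, ...)
                     * reported by the PMD/HW; the application decides
                     * what to do next (drop, log, re-key, ...) */
                    rte_pktmbuf_free(cop[i]->sym->m_src);
                    continue;
            }
            /* packet was fully IPsec-processed by the device */
    }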

As Jerin said in the other email, given the roles/responsibilities of the PMD in
the inline proto and lookaside cases, nothing much is required from
the application to do any processing for ipsec.

As per my understanding, the proposed RFC is to make the application
code cleaner for the protocol processing.
1. For inline proto and lookaside there won't be any change in the data 
path. The main changes would be in the control path.

2. But in case of inline crypto and RTE_SECURITY_ACTION_TYPE_NONE, the 
protocol processing will be done in the library and there would be 
changes in both control and data path.

As rte_security currently provides generic APIs for the control path only,
we may have it expanded for protocol specific datapath processing.
So for the application, working with inline crypto / inline proto would
be quite similar, and it won't need to do any extra processing for
inline crypto.
Same will be the case for RTE_SECURITY_ACTION_TYPE_NONE and lookaside.

We may have the protocol specific APIs reside inside the rte_security 
and we can use either the crypto/net PMD underneath it.

Moving the SPD lookup inside the ipsec library may not be beneficial in
terms of performance as well as configurability for the application. It
would just be based on the RSS hash.

Please let me know if my understanding is not correct anywhere.

-Akhil

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-18 17:54                 ` Jerin Jacob
@ 2018-09-24  8:45                   ` Ananyev, Konstantin
  2018-09-26 18:02                     ` Jerin Jacob
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-24  8:45 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Joseph, Anoob, dev, Awal, Mohammad Abdul, Doherty, Declan,
	Narayana Prasad, akhil.goyal, hemant.agrawal, shreyansh.jain


Hi Jerin,

> > > > >
> > > > > Anyway, let's pretend we found some smart way to distribute inbound packets for the same SA to multiple HW queues/CPU
> > > cores.
> > > > > To make ipsec processing for such case to work correctly just atomicity on check/update seqn/replay_window is not enough.
> > > > > I think it would require some extra synchronization:
> > > > > make sure that we do final packet processing (seq check/update) at the same order as we received the packets
> > > > > (packets entered ipsec processing).
> > > > > I don't really like to introduce such heavy mechanisms on SA level,  after all it supposed to be light and simple.
> > > > > Though we plan CTX level API to support such scenario.
> > > > > What I think would be useful addition for SA level API - have an ability to do one update seqn/replay_window and multiple checks
> > > concurrently.
> > > > >
> > > > > > In case of ingress also, the same problem exists. We will not be able to use RSS and spread the traffic to multiple cores.
> > > Considering
> > > > > > IPsec being CPU intensive, this would limit the net output of the chip.
> > > > > That's true - but from other side implementation can offload heavy part
> > > > > (encrypt/decrypt, auth) to special HW (cryptodev).
> > > > > In that case single core might be enough for SA and extra synchronization would just slowdown things.
> > > > > That's why I think it should be configurable  what behavior (ST or MT) to use.
> > > > I do agree that these are the issues that we need to address to make the
> > > > library MT safe. Whether the extra synchronization would slow down things is
> > > > a very subjective question and will heavily depend on the platform. The
> > > > library should have enough provisions to be able to support MT without
> > > > causing overheads to ST. Right now, the library assumes ST.
> > >
> > >
> > > I agree with Anoob here.
> > >
> > > I have two concerns with librte_ipsec as a separate library
> > >
> > > 1) There is an overlap with rte_security and new proposed library.
> >
> > I don't think there really is an overlap.
> 
> As mentioned in your other email. IMO, There is an overlap as
> RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL can support almost everything
> in HW or HW + SW if some PMD wishes to do so.
> 
> Answering some of the questions, you have asked in other thread based on
> my understanding.
> 
> Regarding RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL support,
> Marvell/Cavium CPT hardware on next generation HW(Planning to upstream
> around v19.02) can support RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and
> Anoob already pushed the application changes in ipsec-gw.

Ok good to know.

> 
> In our understanding of HW/SW roles/responsibilities for that type of
> devices are:
> 
> INLINE_PROTOCOL
> ----------------
> In control path, security session is created with the given SA and
> rte_flow configuration etc.
> 
> For outbound traffic, the application will have to do SA lookup and
> identify the security action (inline/look aside crypto/protocol). For
> packets identified for inline protocol processing, the application would
> submit as plain packets to the ethernet device and the security capable
> ethernet device would perform IPSec and send out the packet. For PMDs
> which would need extra metadata (capability flag), set_pkt_metadata
> function pointer would be called (from application).
> This can be used to set some per packet field to identify the security session to be used to
> process the packet.

Yes, as I can see, that's what ipsec-gw is doing right now and it wouldn't be
a problem to do the same in ipsec lib.  

> Sequence number updation will be done by the PMD.

Ok, so for INLINE_PROTOCOL the upper layer wouldn't need to keep track of SQN values at all?
You don't consider a possibility that for some reason that SA would need to
be moved from a device that supports INLINE_PROTOCOL to a device that doesn't?

> For inbound traffic, the packets for IPSec would be identified by using
> rte_flow (hardware accelerated packet filtering). For the packets
> identified for inline offload (SECURITY action), hardware would perform
> the processing. For inline protocol processed IPSec packets, PMD would
> set “user data” so that application can get the details of the security
> processing done on the packet. Once the plain packet (after IPSec
> processing) is received, a selector check need to be performed to make
> sure we have a valid packet after IPSec processing. The user data is used
> for that. Anti-replay check is handled by the PMD. The PMD would raise
> an eth event in case of sequence number expiry or any SA expiry.

Few questions here:
1) if I understand things right - to specify that it was an IPsec packet -
PKT_RX_SEC_OFFLOAD will be set in mbuf ol_flags?
2) Basically 'userdata' will contain just the pointer the user provided at rte_security_session_create
(most likely a pointer to the SA, as it is done right now in ipsec-secgw), correct?
3) in the current rte_security API is there a way to get/set replay window size, etc?
4) Same question as for TX: you don't plan to support fallback to other types of devices/SW?
I.E. HW was not able to process an ipsec packet for some reason (let's say a fragmented packet)
and now it is the SW's responsibility to do so?
The reason I am asking for that - it seems right now there is no defined way
to share SQN related information between HW/PMD and upper layer SW.
Is that ok, or would we need such capability?
If we would, and upper layer SW would need to keep track of SQN anyway,
then there is probably no point to do the same thing in the PMD itself?
In that case the PMD just needs to provide SQN information to the upper layer
(probably one easy way to do it - reuse rte_mbuf.seqn for that purpose,
though for that we will probably need to make it 64-bit long).
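
For reference, if the answers to 1) and 2) are 'yes', I'd expect the inbound
RX side to look roughly like below (following what ipsec-secgw does today;
using udata64 as the metadata source is just an assumption here):

    struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
    struct ipsec_sa *sa;

    n = rte_eth_rx_burst(port_id, queue_id, mb, RTE_DIM(mb));
    for (i = 0; i != n; i++) {
            if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD) == 0)
                    continue;       /* not inline-processed */
            /* retrieve the pointer supplied at session create time;
             * where the metadata comes from is PMD specific */
            sa = rte_security_get_userdata(ctx, mb[i]->udata64);
            /* selector check against 'sa' goes here */
    }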

> 
> 
> LOOKASIDE_PROTOCOL
> ------------------
> In control path, security session is created with the given SA.
> 
> Enqueue/dequeue is similar to what is done for regular crypto
> (RTE_SECURITY_ACTION_TYPE_NONE) but all the protocol related processing
> would be offloaded. Application will need to do SA lookup and identify
> the processing to be done (both in case of outbound & inbound), and
> submit packet to crypto device. Application need not do any IPSec
> related transformations other than the lookup. Anti-replay need to be
> handled in the PMD (the spec says the device “may handle” the anti-replay check,
> but a complete protocol offload would need the anti-replay check also).

Same question here - wouldn't there be a situations when HW/PMD would need to
share SQN information with upper layer?
Let say if upper layer SW would need to do load balancing between crypto-devices 
with LOOKASIDE_PROTOCOL and without?

> 
> 
> > rte_security is a 'framework for management and provisioning of security protocol operations offloaded to hardware based devices'.
> > While rte_ipsec is aimed to be a library for IPsec data-path processing.
> > There are no plans for rte_ipsec to 'obsolete' rte_security.
> > Quite the opposite, rte_ipsec is supposed to work with both rte_cryptodev and rte_security APIs (devices).
> > It is possible to have an SA that would use both crypto and security devices.
> > Or to have an SA that would use multiple crypto devs
> > (though right now it is up to the user level to do the load-balancing logic).
> >
> > > For IPsec, if an application needs to use rte_security for the HW
> > > implementation and the application needs to use librte_ipsec for
> > > the SW implementation, then it is bad and a lot of duplication of work on
> > > the slow path too.
> >
> > The plan is that application would need to use just rte_ipsec API for all data-paths
> > (HW/SW, lookaside/inline).
> > Let say right now there is rte_ipsec_inline_process() function if user
> > prefers to use inline security device to process given group packets,
> > and rte_ipsec_crypto_process(/prepare) if user decides to use
> > lookaside security or simple crypto device for it.
> >
> > >
> > > The rte_security spec can support both inline and look-aside IPSec
> > > protocol support.
> >
> > AFAIK right now rte_security just provides API to create/free/manipulate security sessions.
> > I don't see how it can support all the functionality mentioned above,
> > plus SAD and SPD.
> 
> 
> At least for the INLINE_PROTOCOL case, SA lookup for inbound traffic is done by
> HW.

For inbound yes, for outbound I suppose you still would need to do a lookup in SW.

> 
> >
> > >
> > > 2) This library is tuned with fat CPU cores in mind, like single SA per core
> > > etc. Which is fine for x86 servers and arm64 server category of machines
> > > but it does not work very well with NPU class of SoC or FPGA.
> > >
> > > As there are different ways to implement IPsec - for instance,
> > > use of eventdev can help in situations handling millions of SAs, and
> > > sequence number update and anti-replay check can be done by leveraging
> > > some of the HW specific features like
> > > ORDERED, ATOMIC schedule type(mapped as eventdev feature)in HW with PIPELINE model.
> > >
> > > # Issues with having one SA one core,
> > > - In the outbound side, there could be multiple flows using the same SA.
> > >   Multiple flows could be processed parallel on different lcores,
> > > but tying one SA to one core would mean we won't be able to do that.
> > >
> > > - In the inbound side, we will have a fat flow hitting one core. If
> > >   IPsec library assumes single core, we will not be able to spread
> > > fat flow to multiple cores. And one SA-one core would mean all ports on
> > > which we would expect IPsec traffic has to be handled by that core.
> >
> > I suppose that all refers to the discussion about MT safe API for rte_ipsec, right?
> > If so, then as I said in my reply to Anoob:
> > We will try to make API usable in MT environment for v1,
> > so you can review and provide comments at early stages.
> 
> OK
> 
> >
> > >
> > > I have made a simple presentation. This presentation details ONE WAY to
> > > implement the IPSec with HW support on NPU.
> > >
> > > https://docs.google.com/presentation/d/1e3IDf9R7ZQB8FN16Nvu7KINuLSWMdyKEw8_0H05rjj4/edit?usp=sharing
> > >
> >
> > Thanks, quite helpful.
> > Actually from page 3, it looks like your expectations don't contradict in general with proposed API:
> >
> > ...
> > } else if (ev.event_type == RTE_EVENT_TYPE_LCORE && ev.sub_event_id == APP_STATE_SEQ_UPDATE) {
> >                         sa = ev.flow_queue_id;
> >                         /* do critical section work per sa */
> >                         do_critical_section_work(sa);
> >
> > [KA] that's the place where I expect either
> > rte_ipsec_inline_process(sa, ...); OR rte_ipsec_crypto_prepare(sa, ...);
> > would be called.
> 
> Makes sense. But currently, the library defines what is
> rte_ipsec_inline_process() and rte_ipsec_crypto_prepare(), but it should
> be based on underneath security device or crypto device.

Reason for that - their code-paths are quite different:
for inline devices we can do the whole processing synchronously (within the process() function),
while for crypto it is sort of split into two parts -
we first have to do prepare(); enqueue() them to the crypto-dev, and then dequeue(); process().
Another good thing with that way - it allows the same SA to work with different devices.
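
To illustrate, the two flows would look roughly like (following the RFC
naming; exact argument lists are assumptions):

    /* lookaside/crypto path: split, asynchronous */
    k = rte_ipsec_crypto_prepare(sa, mb, cop, n);
    k = rte_cryptodev_enqueue_burst(cdev_id, qid, cop, k);
    ...
    n = rte_cryptodev_dequeue_burst(cdev_id, qid, cop, num);
    k = rte_ipsec_crypto_process(sa, mb, cop, n);

    /* inline path: whole processing done synchronously */
    k = rte_ipsec_inline_process(sa, mb, n);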
 
> 
> So, IMO for better control, these functions should be the function pointer
> based and based on underlying device, library can fill the
> implementation.
> 
> IMO, it is not possible to create "static inline function" with all "if"
> checks. I think, we can have four ipsec functions with function pointer
> scheme.
> 
> rte_ipsec_inbound_prepare()
> rte_ipsec_inbound_process()
> rte_ipsec_outbound_prepare()
> rte_ipsec_outbound_process()
> 
> Some of the other concerns:
> 1) For HW implementation, rte_ipsec_sa needs to be opaque like rte_security,
> as some of the structure is defined by HW or microcode. We can choose
> absolutely generic items as common, and device/rte_security specific parts can be opaque.

I don't think it would be a problem, rte_ipsec_sa  does contain a pointer to
rte_security_session, so it can provide it as an argument to these functions.
 
> 
> 2) I think, in order to accommodate the event driven model, we need to pass
> void ** in the prepare() and process() functions, with an additional argument
> of type (TYPE_EVENT/TYPE_MBUF) passed to detect the packet object
> type, as some of the functions in prepare() and process() may need
> rte_event to operate on.

You are talking here about security device specific functions described below, correct?

> 
> >
> >                      /* Issue the crypto request and generate the following on crypto work completion */
> > [KA] that's the place where I expect rte_ipsec_crypto_process(...) be invoked.
> >
> >                         ev.flow_queue_id = tx_port;
> >                         ev.sub_event_id = tx_queue_id;
> >                         ev.sched_sync = RTE_SCHED_SYNC_ATOMIC;
> >                         rte_cryptodev_event_enqueue(cryptodev, ev.mbuf, eventdev, ev);
> >                 }
> >
> >
> > > I am not saying this should be the ONLY way to do as it does not work
> > > very well with non NPU/FPGA class of SoC.
> > >
> > > So how about making the proposed IPSec library as plugin/driver to
> > > rte_security.
> >
> > As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> > is the best possible approach.
> > Though I probably understand your concern:
> > In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, so paddings etc.),
> > i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
> > Though there are devices where most of prepare/process can be done in HW
> > (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> > plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> > Is that so?
> > To address that issue I suppose we can do:
> > 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> >     security devices into ipsec.
> >     We planned to do it anyway, just don't have it done yet.
> > 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
> 
> The problem is, CUSTOM may have different variants and "if" conditions won't
> scale if we choose to have non function pointer scheme. Otherwise, it
> looks OK to create new SECURITY TYPE and associated plugin for prepare() and process()
> function in librte_ipsec library.

In principle, I don't mind always using function pointers for prepare()/process(), but:
from your description above of INLINE_PROTOCOL and LOOKASIDE_PROTOCOL,
the process()/prepare() for such devices looks well defined and
straightforward to implement.
Not sure we'll need a function pointer for such a simple and lightweight case:
set/check ol_flags, set/read the userdata value.
I think an extra function call here is kind of overkill and will only slow things down.
But if that would be the majority preference - I wouldn't argue.
BTW, if we agree to always use function pointers for process/prepare,
then there is no point to have all the existing action types -
all we need is an indication whether it is an inline or lookaside device, and
function pointers for prepare()/process().

Konstantin

> 
> 
> >     and add into rte_security_ops   new functions:
> >     uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct struct rte_crypto_op *cop[], uint16_t num);
> >     uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct struct rte_crypto_op *cop[], uint16_t num);
> >     uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct struct rte_crypto_op *cop[], uint16_t num);
> >     So for custom HW, PMD can overwrite normal prepare/process behavior.
> >
> > > This would give flexibility for each vendor/platform to choose a different
> > > IPsec implementation based on HW support WITHOUT CHANGING THE APPLICATION
> > > INTERFACE.
> >
> > Not sure what API changes you are referring to?
> > As I am aware we do introduce new API, but all existing APIs remain in place.
> 
> 
> What I meant was a single application programming interface to enable IPsec processing for the
> application.
> 
> 
> >
> > >
> > > IMO, rte_security IPsec look aside support can be simply added by
> > > creating the virtual crypto device(i.e move the proposed code to the virtual crypto device)
> > > likewise inline support
> > > can be added by the virtual ethdev device.
> >
> > That's probably possible and if someone would like to introduce such abstraction - NP in general
> > (though my suspicion - it might be too heavy to be really useful).
> > Though I don't think it should be the only possible way for the user to enable IPsec data-processing inside his app.
> > Again I guess such virtual-dev will still use rte_ipsec inside.
> 
> I don't have a strong opinion on virtual devices VS function pointer based
> prepare() and process() functions in the librte_ipsec library.
> 
> >
> > > This would avoid the need for
> > > updating ipsec-gw application as well i.e unified interface to application.
> >
> > I think it would be really good to simplify the existing ipsec-secgw sample app.
> > Some parts of it seem unnecessarily complex to me.
> > One of the reasons for that - we don't really have a unified (and transparent) API for the ipsec data-path.
> > Let's look at ipsec_enqueue() and related code (examples/ipsec-secgw/ipsec.c:365)
> > It is huge (and ugly) - the user has to handle a dozen different cases just to enqueue a packet for IPsec processing.
> > One of the aims of the rte_ipsec library is to hide all those complexities inside the library and provide to
> > the upper layer a clean and transparent API.
> >
> > >
> > > If you don't like the above idea, any scheme of plugin based
> > > implementation would be fine so that vendor or platform can choose its own implementation.
> > > It can be based on partial HW implementation too, i.e. SA lookup can be done in SW, remaining stuff in HW
> > > (for example IPsec inline case)
> >
> > I am surely ok with the idea to give vendors an ability to customize implementation
> > and enable their HW capabilities.
> 
> I think we are on the same page, just that the fine details of the "framework"
> for customizing implementations based on HW capabilities need to
> be ironed out.
> 
> > Do you think proposed additions to the rte_security would be  enough,
> > or something extra is needed?
> 
> See above.
> 
> Jerin
> 
> >
> > Konstantin
> >
> >
> > >
> > > # For protocols like UDP, it makes sense to create librte_udp as there is
> > > not much HW specific offload other than what ethdev provides.
> > >
> > > # PDCP could be another library to offload to HW, so taking the
> > > rte_security path makes more sense in that case too.
> > >
> > > Jerin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-20 14:26                   ` Akhil Goyal
@ 2018-09-24 10:51                     ` Ananyev, Konstantin
  2018-09-25  7:48                       ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-24 10:51 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad,
	Hemant Agrawal, shreyansh.jain

Hi Akhil,

> 
> Hi Konstantin,
> 
> On 9/18/2018 6:12 PM, Ananyev, Konstantin wrote:
> >>> I am not saying this should be the ONLY way to do as it does not work
> >>> very well with non NPU/FPGA class of SoC.
> >>>
> >>> So how about making the proposed IPSec library as plugin/driver to
> >>> rte_security.
> >> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> >> is the best possible approach.
> >> Though I probably understand your concern:
> >> In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, so paddings etc.),
> >> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
> >> Though there are devices where most of prepare/process can be done in HW
> >> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> >> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> >> Is that so?
> >> To address that issue I suppose we can do:
> >> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> >>      security devices into ipsec.
> >>      We planned to do it anyway, just don't have it done yet.
> >> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and
> RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
> >>      and add into rte_security_ops   new functions:
> >>      uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> num);
> >>      uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> num);
> >>      uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
> >>      So for custom HW, PMD can overwrite normal prepare/process behavior.
> >>
> > Actually, after another thought:
> > My previous assumption (probably a wrong one) was that for both
> > RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> > devices can do the whole data-path ipsec processing totally in HW - no need for any SW support (except init/config).
> > Now looking at dpaa and dpaa2 devices (the only ones that support RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL right now)
> > I am not so sure about that - looks like some SW help might be needed for replay window updates, etc.
> > Hemant, Shreyansh - can you guys confirm what is expected from RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL devices
> > (HW/SW roles/responsibilities)?
> > About RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL  - I didn't find any driver inside DPDK source tree that does support that
> capability.
> > So my question is: are there any devices/drivers that do support it?
> > If so, where could the source code be found, and what are the HW/SW roles/responsibilities for that type of devices?
> > Konstantin
> >
> >
> In case of LOOKASIDE, the protocol errors like antireplay and sequence
> number overflow shall be the responsibility of either PMD or the HW.
> It should notify the application that the error has occurred and
> application need to decide what it needs to decide next.

Ok, thanks for clarification.
Just to confirm -  do we have a defined way for it right now in rte_security?

> 
> As Jerin said in other email, the roles/responsibility of the PMD in
> case of inline proto and lookaside case, nothing much is required from
> the application to do any processing for ipsec.
> 
> As per my understanding, the proposed RFC is to make the application
> code cleaner for  the protocol processing.

Yes, unified data-path API is definitely one of the main goals. 

> 1. For inline proto and lookaside there won't be any change in the data
> path. The main changes would be in the control path.

Yes, from your and Jerin's descriptions the data-path processing looks
really lightweight for these cases.
For the control path - there is not much change, the user would have to call
rte_ipsec_sa_init() to start using a given SA.

> 
> 2. But in case of inline crypto and RTE_SECURITY_ACTION_TYPE_NONE, the
> protocol processing will be done in the library and there would be
> changes in both control and data path.

Yes.

> 
> As the rte_security currently provide generic APIs for control path only
> and we may have it expanded for protocol specific datapath processing.
> So for the application, working with inline crypto/ inline proto would
> be quite similar and it won't need to do some extra processing for
> inline crypto.
> Same will be the case for RTE_SECURITY_ACTION_TYPE_NONE and lookaside.
> 
> We may have the protocol specific APIs reside inside the rte_security
> and we can use either the crypto/net PMD underneath it.

As I understand, you suggest, instead of introducing a new library,
introducing similar data-path functions inside rte_security.
Probably something like:

uint16_t rte_security_process(struct rte_security_session *s, struct rte_mbuf *mb[], uint16_t num);
uint16_t rte_security_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[], 
                                                                      struct rte_crypto_op *cop[], uint16_t num);
...
Is that correct?

I thought about that approach too, and indeed from one side it looks cleaner and easier
to customize - each of these functions would just call related function inside rte_security_ops.
The problem with that approach - it would mean that each SA would be able to work with one
device only.
So if someone needs an SA that could be processed by multiple cores and multiple crypto-devices
in parallel, such an approach wouldn't fit.
That was the main reason to keep rte_security as it is right now and go ahead with new library.
One thing that worries me -  do we need a way to share SQN and replay window information
between rte_security and upper layer (rte_ipsec)?
If 'no', then ok, if 'yes' then probably we need to discuss how to do it now?

> 
> Moving the SPD lookup inside the ipsec library may not be beneficial in
> terms of performance as well as configurability for the application. It
> would just be based on the rss hash.

If SPD lookup can be done completely in HW - that's fine.
I just don't think there are many devices these days that wouldn't require
any SW intervention for SPD lookup, and I don't think RSS would be enough here
(though flow-director might be).
As I said before, my thought was that maybe the ACL library would be enough
here as a SW fallback, but if people think we need a special API/implementation
for it - that's ok by me too.
Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-24 10:51                     ` Ananyev, Konstantin
@ 2018-09-25  7:48                       ` Akhil Goyal
  2018-09-30 21:00                         ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-09-25  7:48 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad,
	Hemant Agrawal, shreyansh.jain

Hi Konstantin,

On 9/24/2018 4:21 PM, Ananyev, Konstantin wrote:
> Hi Akhil,
>
>> Hi Konstantin,
>>
>> On 9/18/2018 6:12 PM, Ananyev, Konstantin wrote:
>>>>> I am not saying this should be the ONLY way to do as it does not work
>>>>> very well with non NPU/FPGA class of SoC.
>>>>>
>>>>> So how about making the proposed IPSec library as plugin/driver to
>>>>> rte_security.
>>>> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
>>>> is the best possible approach.
>>>> Though I probably understand your concern:
>>>> In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, so paddings etc.),
>>>> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
>>>> Though there are devices where most of prepare/process can be done in HW
>>>> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
>>>> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
>>>> Is that so?
>>>> To address that issue I suppose we can do:
>>>> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>>>>       security devices into ipsec.
>>>>       We planned to do it anyway, just don't have it done yet.
>>>> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and
>> RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
>>>>       and add into rte_security_ops   new functions:
>>>>       uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
>> num);
>>>>       uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
>> num);
>>>>       uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
>>>>       So for custom HW, PMD can overwrite normal prepare/process behavior.
>>>>
>>> Actually, after another thought:
>>> My previous assumption (probably a wrong one) was that for both
>>> RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>>> devices can do the whole data-path ipsec processing totally in HW - no need for any SW support (except init/config).
>>> Now looking at dpaa and dpaa2 devices (the only ones that support RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL right now)
>>> I am not so sure about that - looks like some SW help might be needed for replay window updates, etc.
>>> Hemant, Shreyansh - can you guys confirm what is expected from RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL devices
>>> (HW/SW roles/responsibilities)?
>>> About RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL  - I didn't find any driver inside DPDK source tree that does support that
>> capability.
>>> So my question is: are there any devices/drivers that do support it?
>>> If so, where could the source code be found, and what are the HW/SW roles/responsibilities for that type of devices?
>>> Konstantin
>>>
>>>
>> In case of LOOKASIDE, the protocol errors like anti-replay and sequence
>> number overflow shall be the responsibility of either the PMD or the HW.
>> It should notify the application that the error has occurred and
>> the application needs to decide what to do next.
> Ok, thanks for clarification.
> Just to confirm -  do we have a defined way for it right now in rte_security?
As of now, there are no macros defined for anti-replay / sequence number overflow
errors in the crypto errors (rte_crypto_op_status), but they will be added soon.
For inline cases, the ipsec-secgw application gets error notification via
rte_eth_event.
>
>> As Jerin said in the other email, given the roles/responsibilities of the PMD in
>> the inline proto and lookaside cases, nothing much is required from
>> the application to do any processing for ipsec.
>>
>> As per my understanding, the proposed RFC is to make the application
>> code cleaner for the protocol processing.
> Yes, unified data-path API is definitely one of the main goals.
>
>> 1. For inline proto and lookaside there won't be any change in the data
>> path. The main changes would be in the control path.
> Yes, from your and Jerin's descriptions the data-path processing looks
> really lightweight for these cases.
> For the control path - there is not much change, the user would have to call
> rte_ipsec_sa_init() to start using a given SA.
>
>> 2. But in case of inline crypto and RTE_SECURITY_ACTION_TYPE_NONE, the
>> protocol processing will be done in the library and there would be
>> changes in both control and data path.
> Yes.
>
>> As rte_security currently provides generic APIs for the control path only,
>> we may have it expanded for protocol specific datapath processing.
>> So for the application, working with inline crypto / inline proto would
>> be quite similar, and it won't need to do any extra processing for
>> inline crypto.
>> Same will be the case for RTE_SECURITY_ACTION_TYPE_NONE and lookaside.
>>
>> We may have the protocol specific APIs reside inside the rte_security
>> and we can use either the crypto/net PMD underneath it.
> As I understand, you suggest instead of introducing new library,
> introduce similar data-path functions inside rte_security.
> Probably something like:
>
> uint16_t rte_security_process(struct rte_security_session *s, struct rte_mbuf *mb[], uint16_t num);
> uint16_t rte_security_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
>                                                                        struct rte_crypto_op *cop[], uint16_t num);
> ...
> Is that correct?

"rte_security_process_ipsec" and "rte_security_crypto_prepare_ipsec" will be better.
We can have such APIs for other protocols as well.
Also, we should leave the existing functionality as is and we should let the user decide whether
it needs to manage the ipsec on it's own or with the new APIs.

>
> I thought about that approach too, and indeed from one side it looks cleaner and easier
> to customize - each of these functions would just call related function inside rte_security_ops.
> The problem with that approach - it would mean that each SA would be able to work with one
> device only.
> So if someone needs an SA that could be processed by multiple cores and multiple crypto-devices
> in parallel such approach wouldn’t fit.
One SA should be processed by a single core, or else we need to have an
event based application which supports ordered queues,
because if we process packets of a single SA on multiple cores, then
packets will get re-ordered and we will get anti-replay late errors
on the decap side.
And if we have an event based solution, then the scheduler will be able to
handle the load balancing accordingly.
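
For reference, with eventdev that would mean configuring the SA's queue as
ORDERED, e.g. (a minimal sketch):

    struct rte_event_queue_conf qconf = {
            .schedule_type = RTE_SCHED_TYPE_ORDERED,
            .nb_atomic_order_sequences = 1024,
    };

    /* packets of one SA can then be processed on multiple lcores while
     * the eventdev restores their original order afterwards, so the
     * decap side does not see re-ordering */
    rte_event_queue_setup(evdev_id, queue_id, &qconf);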

> That was the main reason to keep rte_security as it is right now and go ahead with new library.
> One thing that worries me -  do we need a way to share SQN and replay window information
> between rte_security and upper layer (rte_ipsec)?
> If 'no', then ok, if 'yes' then probably we need to discuss how to do it now?
The anti-replay window size shall be a parameter in ipsec_xform, which shall
be added.
And for the error notification:
- in case of using crypto, use rte_crypto_op_status
- in case of inline, use rte_eth_event callbacks.
I don't see that rte_ipsec needs to take care of that in your initial approach.
However, if you plan to include session reset inside rte_ipsec, then you
may need that inside rte_ipsec.
And yes, that would be tricky.
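
For the inline cases, that would be along the lines of the following (using
the RTE_ETH_EVENT_IPSEC event that ethdev already defines; the callback body
is illustrative):

    static int
    ipsec_event_cb(uint16_t port_id, enum rte_eth_event_type type,
                    void *param, void *ret_param)
    {
            struct rte_eth_event_ipsec_desc *desc = ret_param;

            if (desc->subtype == RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW) {
                    /* application decides: re-key the SA, drop, ... */
            }
            return 0;
    }

    rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_IPSEC,
            ipsec_event_cb, NULL);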

-Akhil

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-24  8:45                   ` Ananyev, Konstantin
@ 2018-09-26 18:02                     ` Jerin Jacob
  2018-10-02 23:56                       ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Jerin Jacob @ 2018-09-26 18:02 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: Joseph, Anoob, dev, Awal, Mohammad Abdul, Doherty, Declan,
	Narayana Prasad, akhil.goyal, hemant.agrawal, shreyansh.jain

-----Original Message-----
> Date: Mon, 24 Sep 2018 08:45:48 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "Joseph, Anoob" <Anoob.Joseph@caviumnetworks.com>, "dev@dpdk.org"
>  <dev@dpdk.org>, "Awal, Mohammad Abdul" <mohammad.abdul.awal@intel.com>,
>  "Doherty, Declan" <declan.doherty@intel.com>, Narayana Prasad
>  <narayanaprasad.athreya@caviumnetworks.com>, "akhil.goyal@nxp.com"
>  <akhil.goyal@nxp.com>, "hemant.agrawal@nxp.com" <hemant.agrawal@nxp.com>,
>  "shreyansh.jain@nxp.com" <shreyansh.jain@nxp.com>
> Subject: RE: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path
>  processing
> 
> Hi Jerin,

Hi Konstantin,

> 
> > > > > >
> > > > > > Anyway, let's pretend we found some smart way to distribute inbound packets for the same SA to multiple HW queues/CPU
> > > > cores.
> > > > > > To make ipsec processing for such case to work correctly just atomicity on check/update seqn/replay_window is not enough.
> > > > > > I think it would require some extra synchronization:
> > > > > > make sure that we do final packet processing (seq check/update) at the same order as we received the packets
> > > > > > (packets entered ipsec processing).
> > > > > > I don't really like to introduce such heavy mechanisms on SA level,  after all it supposed to be light and simple.
> > > > > > Though we plan CTX level API to support such scenario.
> > > > > > What I think would be useful addition for SA level API - have an ability to do one update seqn/replay_window and multiple checks
> > > > concurrently.
> > > > > >
> > > > > > > In case of ingress also, the same problem exists. We will not be able to use RSS and spread the traffic to multiple cores.
> > > > Considering
> > > > > > > IPsec being CPU intensive, this would limit the net output of the chip.
> > > > > > That's true - but from other side implementation can offload heavy part
> > > > > > (encrypt/decrypt, auth) to special HW (cryptodev).
> > > > > > In that case single core might be enough for SA and extra synchronization would just slowdown things.
> > > > > > That's why I think it should be configurable  what behavior (ST or MT) to use.
> > > > > I do agree that these are the issues that we need to address to make the
> > > > > library MT safe. Whether the extra synchronization would slow down things is
> > > > > a very subjective question and will heavily depend on the platform. The
> > > > > library should have enough provisions to be able to support MT without
> > > > > causing overheads to ST. Right now, the library assumes ST.
> > > >
> > > >
> > > > I agree with Anoob here.
> > > >
> > > > I have two concerns with librte_ipsec as a separate library
> > > >
> > > > 1) There is an overlap with rte_security and new proposed library.
> > >
> > > I don't think there really is an overlap.
> >
> > As mentioned in your other email. IMO, There is an overlap as
> > RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL can support almost everything
> > in HW or HW + SW if some PMD wishes to do so.
> >
> > Answering some of the questions, you have asked in other thread based on
> > my understanding.
> >
> > Regarding RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL support,
> > Marvell/Cavium CPT hardware on next generation HW(Planning to upstream
> > around v19.02) can support RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and
> > Anoob already pushed the application changes in ipsec-gw.
> 
> Ok good to know.
> 
> >
> > In our understanding of HW/SW roles/responsibilities for that type of
> > devices are:
> >
> > INLINE_PROTOCOL
> > ----------------
> > In control path, security session is created with the given SA and
> > rte_flow configuration etc.
> >
> > For outbound traffic, the application will have to do SA lookup and
> > identify the security action (inline/look aside crypto/protocol). For
> > packets identified for inline protocol processing, the application would
> > submit as plain packets to the ethernet device and the security capable
> > ethernet device would perform IPSec and send out the packet. For PMDs
> > which would need extra metadata (capability flag), set_pkt_metadata
> > function pointer would be called (from application).
> > This can be used to set some per packet field to identify the security session to be used to
> > process the packet.
> 
> Yes, as I can see, that's what ipsec-gw is doing right now and it wouldn't be
> a problem to do the same in ipsec lib.
> 
> > Sequence number updation will be done by the PMD.
> 
> Ok, so for INLINE_PROTOCOL the upper layer wouldn't need to keep track of SQN values at all?
> You don't consider a possibility that for some reason that SA would need to
> be moved from a device that supports INLINE_PROTOCOL to a device that doesn't?

For INLINE_PROTOCOL, the application won't have any control over such
per packet fields. As for moving the SA to a different device, right now
the rte_security spec doesn't allow that. Maybe we should fix the spec to
allow multiple devices to share the same security session. That way, if
there is an error in the inline processing, the application will be able to
submit the packet to a LOOKASIDE_PROTOCOL crypto device (sharing the
session) and get the packet processed.


> 
> > For inbound traffic, the packets for IPSec would be identified by using
> > rte_flow (hardware accelerated packet filtering). For the packets
> > identified for inline offload (SECURITY action), hardware would perform
> > the processing. For inline protocol processed IPSec packets, PMD would
> > set “user data” so that application can get the details of the security
> > processing done on the packet. Once the plain packet (after IPSec
> > processing) is received, a selector check need to be performed to make
> > sure we have a valid packet after IPSec processing. The user data is used
> > for that. Anti-replay check is handled by the PMD. The PMD would raise
> > an eth event in case of sequence number expiry or any SA expiry.
> 
> Few questions here:
> 1) if I understand things right - to specify that it was an IPsec packet -
> PKT_RX_SEC_OFFLOAD will be set in mbuf ol_flags?
> 2) Basically 'userdata' will contain just the pointer the user provided at rte_security_session_create
> (most likely a pointer to the SA, as it is done right now in ipsec-secgw), correct?

Yes to 1 & 2.


> 3) in the current rte_security API is there a way to get/set replay window size, etc?

Not right now. But Akhil mentioned that it will be added soon.


> 4) Same question as for TX: you don't plan to support fallback to other types of devices/SW?
> I.E. HW was not able to process an ipsec packet for some reason (let's say a fragmented packet)
> and now it is the SW's responsibility to do so?
> The reason I am asking for that - it seems right now there is no defined way
> to share SQN related information between HW/PMD and upper layer SW.
> Is that ok, or would we need such capability?
> If we would, and upper layer SW would need to keep track of SQN anyway,
> then there is probably no point to do the same thing in the PMD itself?
> In that case the PMD just needs to provide SQN information to the upper layer
> (probably one easy way to do it - reuse rte_mbuf.seqn for that purpose,
> though for that we will probably need to make it 64-bit long).

The spec doesn't allow doing IPsec partially on HW & SW. The way the spec is
written (and implemented in ipsec-secgw), it allows one kind of
RTE_SECURITY_ACTION_TYPE for one SA. If HW is not able to process a packet
received on an INLINE_PROTOCOL SA, then it is treated as an error. Handling
fragmentation is a very valid scenario. We will have to edit the spec if
we need to handle this scenario.

> 
> >
> >
> > LOOKASIDE_PROTOCOL
> > ------------------
> > In control path, security session is created with the given SA.
> >
> > Enqueue/dequeue is similar to what is done for regular crypto
> > (RTE_SECURITY_ACTION_TYPE_NONE) but all the protocol related processing
> > would be offloaded. Application will need to do SA lookup and identify
> > the processing to be done (both in case of outbound & inbound), and
> > submit packet to crypto device. Application need not do any IPSec
> > related transformations other than the lookup. Anti-replay need to be
> > handled in the PMD (the spec says the device “may handle” the anti-replay check,
> > but a complete protocol offload would need the anti-replay check also).
> 
> Same question here - wouldn't there be a situations when HW/PMD would need to
> share SQN information with upper layer?
> Let say if upper layer SW would need to do load balancing between crypto-devices
> with LOOKASIDE_PROTOCOL and without?

Same answer as above. The ACTION is tied to the security session, which is tied
to the SA. SQN etc. is internal to the session, and so load balancing between
crypto-devices is not supported.

> 
> >
> >
> > > rte_security is a 'framework for management and provisioning of security protocol operations offloaded to hardware based devices'.
> > > While rte_ipsec is aimed to be a library for IPsec data-path processing.
> > > There are no plans for rte_ipsec to 'obsolete' rte_security.
> > > Quite the opposite, rte_ipsec is supposed to work with both rte_cryptodev and rte_security APIs (devices).
> > > It is possible to have an SA that would use both crypto and security devices.
> > > Or to have an SA that would use multiple crypto devs
> > > (though right now it is up to the user level to do the load-balancing logic).
> > >
> > > > For IPsec, if an application needs to use rte_security for the HW
> > > > implementation and the application needs to use librte_ipsec for
> > > > the SW implementation, then it is bad and a lot of duplication of work on
> > > > the slow path too.
> > >
> > > The plan is that application would need to use just rte_ipsec API for all data-paths
> > > (HW/SW, lookaside/inline).
> > > Let say right now there is rte_ipsec_inline_process() function if user
> > > prefers to use inline security device to process given group packets,
> > > and rte_ipsec_crypto_process(/prepare) if user decides to use
> > > lookaside security or simple crypto device for it.
> > >
> > > >
> > > > The rte_security spec can support both inline and look-aside IPSec
> > > > protocol support.
> > >
> > > AFAIK right now rte_security just provides API to create/free/manipulate security sessions.
> > > I don't see how it can support all the functionality mentioned above,
> > > plus SAD and SPD.
> >
> >
> > At least for the INLINE_PROTOCOL case, SA lookup for inbound traffic is done by
> > HW.
> 
> For inbound yes, for outbound I suppose you still would need to do a lookup in SW.

Yes

> 
> >
> > >
> > > >
> > > > 2) This library is tuned with fat CPU cores in mind, like single SA per core
> > > > etc. Which is fine for x86 servers and arm64 server category of machines
> > > > but it does not work very well with NPU class of SoC or FPGA.
> > > >
> > > > As there are different ways to implement IPsec - for instance,
> > > > use of eventdev can help in situations handling millions of SAs, and
> > > > sequence number update and anti-replay check can be done by leveraging
> > > > some of the HW specific features like
> > > > ORDERED, ATOMIC schedule type(mapped as eventdev feature)in HW with PIPELINE model.
> > > >
> > > > # Issues with having one SA one core,
> > > > - In the outbound side, there could be multiple flows using the same SA.
> > > >   Multiple flows could be processed parallel on different lcores,
> > > > but tying one SA to one core would mean we won't be able to do that.
> > > >
> > > > - In the inbound side, we will have a fat flow hitting one core. If
> > > >   IPsec library assumes single core, we will not be able to spread
> > > > fat flow to multiple cores. And one SA-one core would mean all ports on
> > > > which we would expect IPsec traffic has to be handled by that core.
> > >
> > > I suppose that all refers to the discussion about MT safe API for rte_ipsec, right?
> > > If so, then as I said in my reply to Anoob:
> > > We will try to make API usable in MT environment for v1,
> > > so you can review and provide comments at early stages.
> >
> > OK
> >
> > >
> > > >
> > > > I have made a simple presentation. This presentation details ONE WAY to
> > > > implement the IPSec with HW support on NPU.
> > > >
> > > > https://docs.google.com/presentation/d/1e3IDf9R7ZQB8FN16Nvu7KINuLSWMdyKEw8_0H05rjj4/edit?usp=sharing
> > > >
> > >
> > > Thanks, quite helpful.
> > > Actually from page 3, it looks like your expectations don't contradict in general with proposed API:
> > >
> > > ...
> > > } else if (ev.event_type == RTE_EVENT_TYPE_LCORE && ev.sub_event_id == APP_STATE_SEQ_UPDATE) {
> > >                         sa = ev.flow_queue_id;
> > >                         /* do critical section work per sa */
> > >                         do_critical_section_work(sa);
> > >
> > > [KA] that's the place where I expect either
> > > rte_ipsec_inline_process(sa, ...); OR rte_ipsec_crypto_prepare(sa, ...);
> > > would be called.
> >
> > Makes sense. But currently, the library defines what is
> > rte_ipsec_inline_process() and rte_ipsec_crypto_prepare(), but it should
> > be based on underneath security device or crypto device.
> 
> Reason for that - their code-paths are quite different:
> for inline devices we can do the whole processing synchronously (within the process() function),
> while for crypto it is sort of split into two parts -
> we first have to do prepare(); enqueue() them to the crypto-dev, and then dequeue(); process().
> Another good thing with that way - it allows the same SA to work with different devices.
> 
> >
> > So, IMO for better control, these functions should be the function pointer
> > based and based on underlying device, library can fill the
> > implementation.
> >
> > IMO, it is not possible to create "static inline function" with all "if"
> > checks. I think, we can have four ipsec functions with function pointer
> > scheme.
> >
> > rte_ipsec_inbound_prepare()
> > rte_ipsec_inbound_process()
> > rte_ipsec_outbound_prepare()
> > rte_ipsec_outbound_process()
> >
> > Some of the other concerns:
> > 1) For HW implementation, rte_ipsec_sa needs to be opaque like rte_security,
> > as some of the structure is defined by HW or microcode. We can choose
> > absolutely generic items as common, and device/rte_security specific parts can be opaque.
> 
> I don't think it would be a problem, rte_ipsec_sa  does contain a pointer to
> rte_security_session, so it can provide it as an argument to these functions.

The rte_ipsec_sa would need some private space for the application to store
its metadata. There can be SA implementations with additional fields
for faster lookups. To rephrase, the application should be given some
provision to store the metadata it would need for faster lookups.
Maybe the sa_init API can report the amount of private size required, as sketched below.
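
Something along these lines could work (rte_ipsec_sa_size() and the layout
are hypothetical):

    /* library reports the size it needs for the given SA parameters */
    sz = rte_ipsec_sa_size(&prm);

    /* application appends its own lookup metadata after that area */
    sa = rte_zmalloc(NULL, sz + app_priv_size, RTE_CACHE_LINE_SIZE);
    rte_ipsec_sa_init(sa, &prm);
    app_priv = (uint8_t *)sa + sz;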


> 
> >
> > 2) I think, in order to accommodate the event driven model, we need to pass
> > void ** in the prepare() and process() functions, with an additional argument
> > of type (TYPE_EVENT/TYPE_MBUF) passed to detect the packet object
> > type, as some of the functions in prepare() and process() may need
> > rte_event to operate on.
> 
> You are talking here about security device specific functions described below, correct?
> 
> >
> > >
> > >                      /* Issue the crypto request and generate the following on crypto work completion */
> > > [KA] that's the place where I expect rte_ipsec_crypto_process(...) be invoked.
> > >
> > >                         ev.flow_queue_id = tx_port;
> > >                         ev.sub_event_id = tx_queue_id;
> > >                         ev.sched_sync = RTE_SCHED_SYNC_ATOMIC;
> > >                         rte_cryptodev_event_enqueue(cryptodev, ev.mbuf, eventdev, ev);
> > >                 }
> > >
> > >
> > > > I am not saying this should be the ONLY way to do it, as it does not work
> > > > very well with the non NPU/FPGA class of SoC.
> > > >
> > > > So how about making the proposed IPSec library a plugin/driver to
> > > > rte_security.
> > >
> > > As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> > > is the best possible approach.
> > > Though I probably understand your concern:
> > > In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, do paddings etc.),
> > > i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are covered.
> > > Though there are devices where most of prepare/process can be done in HW
> > > (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> > > plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> > > Is that so?
> > > To address that issue I suppose we can do:
> > > 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> > >     security devices into ipsec.
> > >     We planned to do it anyway, just don't have it done yet.
> > > 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
> >
> > The problem is, CUSTOM may have different variants and "if" conditions won't
> > scale if we choose a non-function-pointer scheme. Otherwise, it
> > looks OK to create a new SECURITY TYPE and associated plugin for the prepare() and process()
> > functions in the librte_ipsec library.
> 
> In principle, I don't mind always using function pointers for prepare()/process(), but:
> from your description above of INLINE_PROTOCOL and LOOKASIDE_PROTOCOL
> the process()/prepare() for such devices looks well defined and
> straightforward to implement.
> Not sure we'll need a function pointer for such a simple and lightweight case:
> set/check ol_flags, set/read the userdata value.
> I think an extra function call here is kind of overkill and will only slow things down.
> But if that would be the majority preference - I wouldn't argue.
> BTW if we agree to always use function pointers for process/prepare,
> then there is no point in having all the existing action types -
> all we need is an indication whether it is an inline or lookaside device, plus
> function pointers for prepare()/process().

I am also not a fan of the function pointer scheme. But options are limited.

Though the generic usage seems straightforward, the implementation of 
the above modes can be very different. Vendors could optimize various 
operations (SQN update for example) for better performance on their 
hardware. Sticking to one approach would negate that advantage.

Another option would be to use the multiple-worker model that Anoob had 
proposed some time back.
https://mails.dpdk.org/archives/dev/2018-June/103808.html

The idea would be to make all lib_ipsec helper functions available as 
static inline functions, e.g.:

static inline int rte_ipsec_add_tunnel_hdr(struct rte_mbuf *mbuf);
static inline int rte_ipsec_update_sqn(struct rte_mbuf *mbuf, uint64_t *seq_no);
...

For the regular use case, a fat 
rte_ipsec_(inbound/outbound)_(prepare/process) can be provided. The 
worker implemented for that case can directly call the function and 
forget about the other modes. For other vendors with varying 
capabilities, there can be multiple workers taking advantage of the HW 
features. For such workers, the static inline functions can be used as 
required. This gives vendors the opportunity to pick and choose what they 
want from the ipsec lib. The worker to be used for that case will be 
determined based on the capabilities exposed by the PMDs.

https://mails.dpdk.org/archives/dev/2018-June/103828.html

The above email explains how multiple workers can be used with l2fwd.

For this to work, the application & library code need to be modularised. 
Like what is being done in the following series,
https://mails.dpdk.org/archives/dev/2018-June/103786.html

This way one application can be made to run on multiple platforms, with 
the app being optimized for the platform on which it would run.

/* ST SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - NO EVENTDEV */
worker1()
{
     while (true) {
         nb_pkts = rte_eth_rx_burst();

         if (nb_pkts != 0) {
             /* Do lookup */
             rte_ipsec_inbound_prepare();
             rte_cryptodev_enqueue_burst();
             /* Update in-flight */
         }

         if (in_flight) {
             rte_cryptodev_dequeue_burst();
             rte_ipsec_inbound_process();
         }
         /* route packet */
     }
}

#include <rte_ipsec.h>   /* For IPsec lib static inlines */

static inline void rte_event_enqueue(struct rte_event *ev)
{
     ...
}

/* MT safe SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - EVENTDEV */
worker2()
{
     while (true) {
         nb_pkts = rte_eth_rx_burst();

         if (nb_pkts != 0) {
             /* Do lookup */
             rte_ipsec_add_tunnel_hdr(ev->mbuf);
             rte_event_enqueue(ev);
             rte_cryptodev_enqueue_burst(ev->mbuf);
             /* Update in-flight */
         }

         if (in_flight) {
             rte_cryptodev_dequeue_burst();
             rte_ipsec_outbound_process();
         }
         /* route packet */
     }
}

In short,

1) Have separate small inline functions in the library.
2) If something can be grouped, it can be exposed as a specific function
to address a specific use case (see the sketch after this list).
3) The remaining code can go in the application as different worker()
functions to address all the use cases.

> 
> Konstantin
> 
> >
> >
> > >     and add into rte_security_ops new functions:
> > >     uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
> > >     uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
> > >     uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
> > >     So for custom HW, PMD can overwrite normal prepare/process behavior.
> > >
> > > > This would give flexibility for each vendor/platform to choose a different
> > > > IPsec implementation based on HW support WITHOUT CHANGING THE APPLICATION
> > > > INTERFACE.
> > >
> > > Not sure what API changes you are referring to?
> > > As far as I am aware, we do introduce a new API, but all existing APIs remain in place.
> >
> >
> > What I meant was a single application programming interface to enable IPSec processing
> > for the application.
> >
> >
> > >
> > > >
> > > > IMO, rte_security IPsec look-aside support can simply be added by
> > > > creating a virtual crypto device (i.e. move the proposed code to the virtual crypto device);
> > > > likewise inline support
> > > > can be added by a virtual ethdev device.
> > >
> > > That's probably possible and if someone would like to introduce such abstraction - NP in general
> > > (though my suspicion - it might be too heavy to be really useful).
> > > Though I don't think it should be the only possible way for the user to enable IPsec data-processing inside his app.
> > > Again I guess such virtual-dev will still use rte_ipsec inside.
> >
> > I don't have a strong opinion on virtual devices vs. function pointer based
> > prepare() and process() functions in the librte_ipsec library.
> >
> > >
> > > > This would avoid the need for
> > > > updating the ipsec-gw application as well, i.e. a unified interface to the application.
> > >
> > > I think it would be really good to simplify the existing ipsec-secgw sample app.
> > > Some parts of it seem unnecessarily complex to me.
> > > One of the reasons for it - we don't really have a unified (and transparent) API for the ipsec data-path.
> > > Let's look at ipsec_enqueue() and related code (examples/ipsec-secgw/ipsec.c:365)
> > > It is huge (and ugly) - the user has to handle a dozen different cases just to enqueue a packet for IPsec processing.
> > > One of the aims of the rte_ipsec library - hide all those complexities inside the library and provide to
> > > the upper layer a clean and transparent API.
> > >
> > > >
> > > > If you don't like the above idea, any scheme of plugin based
> > > > implementation would be fine so that a vendor or platform can choose its own implementation.
> > > > It can be based on a partial HW implementation too, i.e. SA lookup can be done in SW, remaining stuff in HW
> > > > (for example the IPsec inline case)
> > >
> > > I am surely ok with the idea to give vendors an ability to customize implementation
> > > and enable their HW capabilities.
> >
> > I think we are on the same page, just that the fine details of the "framework"
> > for customizing the implementation based on HW capabilities need to
> > be ironed out.
> >
> > > Do you think proposed additions to the rte_security would be  enough,
> > > or something extra is needed?
> >
> > See above.
> >
> > Jerin
> >
> > >
> > > Konstantin
> > >
> > >
> > > >
> > > > # For protocols like UDP, it makes sense to create librte_udp as there
> > > > is not much HW specific offload other than what ethdev provides.
> > > >
> > > > # PDCP could be another library to offload to HW, so taking the
> > > > rte_security path makes more sense in that case too.
> > > >
> > > > Jerin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-25  7:48                       ` Akhil Goyal
@ 2018-09-30 21:00                         ` Ananyev, Konstantin
  2018-10-01 12:49                           ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-09-30 21:00 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad,
	Hemant Agrawal, shreyansh.jain



Hi Akhil,

> 
> Hi Konstantin,
> 
> On 9/24/2018 4:21 PM, Ananyev, Konstantin wrote:
> > Hi Akhil,
> >
> >> Hi Konstantin,
> >>
> >> On 9/18/2018 6:12 PM, Ananyev, Konstantin wrote:
> >>>>> I am not saying this should be the ONLY way to do it, as it does not work
> >>>>> very well with the non NPU/FPGA class of SoC.
> >>>>>
> >>>>> So how about making the proposed IPSec library a plugin/driver to
> >>>>> rte_security.
> >>>> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> >>>> is the best possible approach.
> >>>> Though I probably understand your concern:
> >>>> In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, do paddings etc.),
> >>>> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are
> covered.
> >>>> Though there are devices where most of prepare/process can be done in HW
> >>>> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> >>>> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> >>>> Is that so?
> >>>> To address that issue I suppose we can do:
> >>>> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> >>>>       security devices into ipsec.
> >>>>       We planned to do it anyway, just don't have it done yet.
> >>>> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and
> >> RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
> >>>>       and add into rte_security_ops   new functions:
> >>>>       uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> >> num);
> >>>>       uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> >> num);
> >>>>       uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> num);
> >>>>       So for custom HW, PMD can overwrite normal prepare/process behavior.
> >>>>
> >>> Actually  after another thought:
> >>> My previous assumption (probably wrong one) was that for both
> >>> RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> >>> devices can do whole data-path ipsec processing totally in HW - no need for any SW support (except init/config).
> >>> Now looking at dpaa and dpaa2 devices (the only ones that supports RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL right now)
> >>> I am not so sure about that - looks like some SW help might be needed for replay window updates, etc.
> >>> Hemant, Shreyansh - can you guys confirm what is expected from RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL devices
> >>> (HW/SW roles/responsibilities)?
> >>> About RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL  - I didn't find any driver inside DPDK source tree that does support that
> >> capability.
> >>> So my question - are there any devices/drivers that do support it?
> >>> If so, where could the source code be found, and what are the HW/SW roles/responsibilities for that type of device?
> >>> Konstantin
> >>>
> >>>
> >> In case of LOOKASIDE, the protocol errors like antireplay and sequence
> >> number overflow shall be the responsibility of either the PMD or the HW.
> >> It should notify the application that the error has occurred and
> >> the application needs to decide what to do next.
> > Ok, thanks for clarification.
> > Just to confirm -  do we have a defined way for it right now in rte_security?
> As of now, there are no macros defined for antireplay/seq. no. overflow
> errors in crypto errors (rte_crypto_op_status), but they will be added soon.
> For inline cases, ipsec-secgw application gets error notification via
> rte_eth_event.

Ok.


> >
> >> As Jerin said in the other email, given the roles/responsibilities of the PMD in
> >> the inline proto and lookaside cases, nothing much is required from
> >> the application to do any processing for ipsec.
> >>
> >> As per my understanding, the proposed RFC is to make the application
> >> code cleaner for  the protocol processing.
> > Yes, unified data-path API is definitely one of the main goals.
> >
> >> 1. For inline proto and lookaside there won't be any change in the data
> >> path. The main changes would be in the control path.
> > Yes, from your and Jerin description data-path processing looks
> > really lightweight for these cases.
> > For the control path - there is not much change, the user would have to call
> > rte_ipsec_sa_init() to start using a given SA.
> >
> >> 2. But in case of inline crypto and RTE_SECURITY_ACTION_TYPE_NONE, the
> >> protocol processing will be done in the library and there would be
> >> changes in both control and data path.
> > Yes.
> >
> >> As rte_security currently provides generic APIs for the control path only,
> >> we may have it expanded for protocol specific datapath processing.
> >> So for the application, working with inline crypto/ inline proto would
> >> be quite similar and it won't need to do some extra processing for
> >> inline crypto.
> >> Same will be the case for RTE_SECURITY_ACTION_TYPE_NONE and lookaside.
> >>
> >> We may have the protocol specific APIs reside inside the rte_security
> >> and we can use either the crypto/net PMD underneath it.
> > As I understand, you suggest instead of introducing new library,
> > introduce similar data-path functions inside rte_security.
> > Probably something like:
> >
> > uint16_t rte_security_process(struct rte_security_session *s, struct rte_mbuf *mb[], uint16_t num);
> > uint16_t rte_security_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> >                                                                        struct rte_crypto_op *cop[], uint16_t num);
> > ...
> > Is that correct?
> 
> "rte_security_process_ipsec" and "rte_security_crypto_prepare_ipsec" will be better.
> We can have such APIs for other protocols as well.
> Also, we should leave the existing functionality as is and we should let the user decide whether
> it needs to manage the ipsec on its own or with the new APIs.

There are no plans to void any existing API.
If the user has working code that uses rte_cryptodev/rte_security directly and wants to keep it,
that's fine.

> 
> >
> > I thought about that approach too, and indeed from one side it looks cleaner and easier
> > to customize - each of these functions would just call related function inside rte_security_ops.
> > The problem with that approach - it would mean that each SA would be able to work with one
> > device only.
> > So if someone needs an SA that could be processed by multiple cores and multiple crypto-devices
> > in parallel such approach wouldn’t fit.
> One SA should be processed by a single core, or else we need to have an
> event based application which supports ordered queues,
> because if we process packets of a single SA on multiple cores, then
> packets will get re-ordered and we will get anti-replay late errors
> on the decap side.

I suppose in some cases one core would be enough to handle the SA traffic,
for some not; as I said before, I think it should be configurable.
Of course for the MT case some entity that would guarantee proper ordering
for final packet processing would be needed.
It could be some eventdev, a SW FIFO queue, or something else.
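
For the SW FIFO flavour, librte_reorder could be one such entity - a minimal
sketch, assuming a single core does the insert/drain (rte_reorder itself
is not MT safe) and that mbuf.seqn is stamped at a single entry point:

struct rte_reorder_buffer *rob;
struct rte_mbuf *out[32];
unsigned int nb;

rob = rte_reorder_create("sa_rob", rte_socket_id(), 8192);

mb->seqn = next_sqn++;          /* stamp before handing off to workers */
/* ... mbufs complete ipsec/crypto processing, possibly out of order ... */
rte_reorder_insert(rob, mb);    /* funneled back to one draining core */
nb = rte_reorder_drain(rob, out, RTE_DIM(out));
/* out[0..nb-1] are now back in the original arrival order */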

> And if we have event based solution, then the scheduler will be able to
> handle the load balancing accordingly.

Didn't understand that sentence.

> 
> > That was the main reason to keep rte_security as it is right now and go ahead with new library.
> > One thing that worries me -  do we need a way to share SQN and replay window information
> > between rte_security and upper layer (rte_ipsec)?
> > If 'no', then ok, if 'yes' then probably we need to discuss how to do it now?
> anti-replay window size shall be a parameter in ipsec_xform, which shall
> be added.
> And for the error notification:
>   - in case of using crypto, use rte_crypto_op_status
>   - in case of inline, use rte_eth_event callbacks.
> I don't see that rte_ipsec needs to take care of that in your initial approach.
> However, if you plan to include session reset inside rte_ipsec, then you
> may need that inside rte_ipsec.

I am not talking about rte_ipsec, my concern here is rte_security.
Suppose you need to switch from a device that can do inline_proto to a device that doesn't.
Right now the only way - renegotiate all SAs that were handled by the inline_proto device
(because there is no way to retrieve SQN information from the rte_security device).
Renegotiation should work, but it looks like quite an expensive approach.
If rte_security had a way to share its SQN status with SW, then I think it would
be possible to do such a switch without SA termination.
Again with such info available - load-balancing for the same SA on multiple devices
might be possible.
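
Something as small as the following would do - to be clear, this is a
hypothetical extension for discussion, nothing like it exists in
rte_security today:

/* hypothetical rte_security addition for exporting per-SA SQN state */
struct rte_security_ipsec_sqn_info {
        uint64_t last_seq;              /* highest SQN sent/validated */
        uint32_t win_size;              /* replay window size */
        uint64_t win[REPLAY_WIN_WORDS]; /* replay window bitmap
                                           (REPLAY_WIN_WORDS hypothetical) */
};

int rte_security_ipsec_sqn_get(struct rte_security_session *sess,
        struct rte_security_ipsec_sqn_info *info);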
Konstantin


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-30 21:00                         ` Ananyev, Konstantin
@ 2018-10-01 12:49                           ` Akhil Goyal
  2018-10-02 23:24                             ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-10-01 12:49 UTC (permalink / raw)
  To: Ananyev, Konstantin, Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad,
	Hemant Agrawal, shreyansh.jain

Hi Konstantin,

On 10/1/2018 2:30 AM, Ananyev, Konstantin wrote:
>
> Hi Akhil,
>
>> Hi Konstantin,
>>
>> On 9/24/2018 4:21 PM, Ananyev, Konstantin wrote:
>>> Hi Akhil,
>>>
>>>> Hi Konstantin,
>>>>
>>>> On 9/18/2018 6:12 PM, Ananyev, Konstantin wrote:
>>>>>>> I am not saying this should be the ONLY way to do it, as it does not work
>>>>>>> very well with the non NPU/FPGA class of SoC.
>>>>>>>
>>>>>>> So how about making the proposed IPSec library a plugin/driver to
>>>>>>> rte_security.
>>>>>> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
>>>>>> is the best possible approach.
>>>>>> Though I probably understand your concern:
>>>>>> In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, do paddings etc.),
>>>>>> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are
>> covered.
>>>>>> Though there are devices where most of prepare/process can be done in HW
>>>>>> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
>>>>>> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
>>>>>> Is that so?
>>>>>> To address that issue I suppose we can do:
>>>>>> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>>>>>>        security devices into ipsec.
>>>>>>        We planned to do it anyway, just don't have it done yet.
>>>>>> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and
>>>> RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
>>>>>>        and add into rte_security_ops   new functions:
>>>>>>        uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
>>>> num);
>>>>>>        uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
>>>> num);
>>>>>>        uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
>> num);
>>>>>>        So for custom HW, PMD can overwrite normal prepare/process behavior.
>>>>>>
>>>>> Actually  after another thought:
>>>>> My previous assumption (probably wrong one) was that for both
>>>>> RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>>>>> devices can do whole data-path ipsec processing totally in HW - no need for any SW support (except init/config).
>>>>> Now looking at dpaa and dpaa2 devices (the only ones that supports RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL right now)
>>>>> I am not so sure about that - looks like some SW help might be needed for replay window updates, etc.
>>>>> Hemant, Shreyansh - can you guys confirm what is expected from RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL devices
>>>>> (HW/SW roles/responsibilities)?
>>>>> About RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL  - I didn't find any driver inside DPDK source tree that does support that
>>>> capability.
>>>>> So my question - are there any devices/drivers that do support it?
>>>>> If so, where could the source code be found, and what are the HW/SW roles/responsibilities for that type of device?
>>>>> Konstantin
>>>>>
>>>>>
>>>> In case of LOOKASIDE, the protocol errors like antireplay and sequence
>>>> number overflow shall be the responsibility of either the PMD or the HW.
>>>> It should notify the application that the error has occurred and
>>>> the application needs to decide what to do next.
>>> Ok, thanks for clarification.
>>> Just to confirm -  do we have a defined way for it right now in rte_security?
>> As of now, there are no macros defined for antireplay/seq. no. overflow
>> errors in crypto errors(rte_crypto_op_status), but it will be added soon.
>> For inline cases, ipsec-secgw application gets error notification via
>> rte_eth_event.
> Ok.
>
>
>>>> As Jerin said in the other email, given the roles/responsibilities of the PMD in
>>>> the inline proto and lookaside cases, nothing much is required from
>>>> the application to do any processing for ipsec.
>>>>
>>>> As per my understanding, the proposed RFC is to make the application
>>>> code cleaner for  the protocol processing.
>>> Yes, unified data-path API is definitely one of the main goals.
>>>
>>>> 1. For inline proto and lookaside there won't be any change in the data
>>>> path. The main changes would be in the control path.
>>> Yes, from your and Jerin description data-path processing looks
>>> really lightweight for these cases.
>>> For the control path - there is not much change, the user would have to call
>>> rte_ipsec_sa_init() to start using a given SA.
>>>
>>>> 2. But in case of inline crypto and RTE_SECURITY_ACTION_TYPE_NONE, the
>>>> protocol processing will be done in the library and there would be
>>>> changes in both control and data path.
>>> Yes.
>>>
>>>> As rte_security currently provides generic APIs for the control path only,
>>>> we may have it expanded for protocol specific datapath processing.
>>>> So for the application, working with inline crypto/ inline proto would
>>>> be quite similar and it won't need to do some extra processing for
>>>> inline crypto.
>>>> Same will be the case for RTE_SECURITY_ACTION_TYPE_NONE and lookaside.
>>>>
>>>> We may have the protocol specific APIs reside inside the rte_security
>>>> and we can use either the crypto/net PMD underneath it.
>>> As I understand, you suggest instead of introducing new library,
>>> introduce similar data-path functions inside rte_security.
>>> Probably something like:
>>>
>>> uint16_t rte_security_process(struct rte_security_session *s, struct rte_mbuf *mb[], uint16_t num);
>>> uint16_t rte_security_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
>>>                                                                         struct rte_crypto_op *cop[], uint16_t num);
>>> ...
>>> Is that correct?
>> "rte_security_process_ipsec" and "rte_security_crypto_prepare_ipsec" will be better.
>> We can have such APIs for other protocols as well.
>> Also, we should leave the existing functionality as is and we should let the user decide whether
>> it needs to manage the ipsec on its own or with the new APIs.
> There are no plans to void any existing API.
> If the user has working code that uses rte_cryptodev/rte_security directly and wants to keep it,
> that's fine.
>
>>> I thought about that approach too, and indeed from one side it looks cleaner and easier
>>> to customize - each of these functions would just call related function inside rte_security_ops.
>>> The problem with that approach - it would mean that each SA would be able to work with one
>>> device only.
>>> So if someone needs an SA that could be processed by multiple cores and multiple crypto-devices
>>> in parallel such approach wouldn’t fit.
>> One SA should be processed by a single core, or else we need to have an
>> event based application which supports ordered queues,
>> because if we process packets of a single SA on multiple cores, then
>> packets will get re-ordered and we will get anti-replay late errors
>> on the decap side.
> I suppose in some cases one core would be enough to handle the SA traffic,
> for some not; as I said before, I think it should be configurable.
> Of course for the MT case some entity that would guarantee proper ordering
> for final packet processing would be needed.
> It could be some eventdev, a SW FIFO queue, or something else.
>
>> And if we have event based solution, then the scheduler will be able to
>> handle the load balancing accordingly.
> Didn't understand that sentence.
I mean the event device, which has an inbuilt scheduler, will be able to 
balance the load of a single SA,
and if the queues are ordered and it supports order restoration, then it 
will be able to maintain the ordering. And for that
you would not have to bother about giving the same SA to different 
cryptodevs on multiple cores.
>>> That was the main reason to keep rte_security as it is right now and go ahead with new library.
>>> One thing that worries me -  do we need a way to share SQN and replay window information
>>> between rte_security and upper layer (rte_ipsec)?
>>> If 'no', then ok, if 'yes' then probably we need to discuss how to do it now?
>> anti-replay window size shall be a parameter in ipsec_xform, which shall
>> be added.
>> And for the error notification:
>>   - in case of using crypto, use rte_crypto_op_status
>>   - in case of inline, use rte_eth_event callbacks.
>> I don't see that rte_ipsec needs to take care of that in your initial approach.
>> However, if you plan to include session reset inside rte_ipsec, then you
>> may need that inside rte_ipsec.
> I am not talking about rte_ipsec, my concern here is rte_security.
> Suppose you need to switch from a device that can do inline_proto to a device that doesn't.
In what use case would you need such switching?
> Right now the only way - renegotiate all SAs that were handled by the inline_proto device
> (because there is no way to retrieve SQN information from the rte_security device).
> Renegotiation should work, but it looks like quite an expensive approach.
This will be only for the first packet.
> If rte_security had a way to share its SQN status with SW, then I think it would
> be possible to do such a switch without SA termination.
what kind of SQN status are you looking for? overflow? If yes, the 
application needs to re-negotiate the session,
which will be done periodically anyway.
> Again with such info available - load-balancing for the same SA on multiple devices
> might be possible.
> Konstantin
>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-10-01 12:49                           ` Akhil Goyal
@ 2018-10-02 23:24                             ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-10-02 23:24 UTC (permalink / raw)
  To: Akhil Goyal, Jerin Jacob, Joseph, Anoob
  Cc: dev, Awal, Mohammad Abdul, Doherty, Declan, Narayana Prasad,
	Hemant Agrawal, shreyansh.jain

Hi Akhil,

> 
> Hi Konstantin,
> 
> On 10/1/2018 2:30 AM, Ananyev, Konstantin wrote:
> >
> > Hi Akhil,
> >
> >> Hi Konstantin,
> >>
> >> On 9/24/2018 4:21 PM, Ananyev, Konstantin wrote:
> >>> Hi Akhil,
> >>>
> >>>> Hi Konstantin,
> >>>>
> >>>> On 9/18/2018 6:12 PM, Ananyev, Konstantin wrote:
> >>>>>>> I am not saying this should be the ONLY way to do it, as it does not work
> >>>>>>> very well with the non NPU/FPGA class of SoC.
> >>>>>>>
> >>>>>>> So how about making the proposed IPSec library a plugin/driver to
> >>>>>>> rte_security.
> >>>>>> As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> >>>>>> is the best possible approach.
> >>>>>> Though I probably understand your concern:
> >>>>>> In RFC code we always do whole prepare/process in SW (attach/remove ESP headers/trailers, do paddings etc.),
> >>>>>> i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are
> >> covered.
> >>>>>> Though there are devices where most of prepare/process can be done in HW
> >>>>>> (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> >>>>>> plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> >>>>>> Is that so?
> >>>>>> To address that issue I suppose we can do:
> >>>>>> 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> >>>>>>        security devices into ipsec.
> >>>>>>        We planned to do it anyway, just don't have it done yet.
> >>>>>> 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and
> >>>> RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
> >>>>>>        and add into rte_security_ops   new functions:
> >>>>>>        uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[],
> uint16_t
> >>>> num);
> >>>>>>        uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[],
> uint16_t
> >>>> num);
> >>>>>>        uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> >> num);
> >>>>>>        So for custom HW, PMD can overwrite normal prepare/process behavior.
> >>>>>>
> >>>>> Actually  after another thought:
> >>>>> My previous assumption (probably wrong one) was that for both
> >>>>> RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> >>>>> devices can do whole data-path ipsec processing totally in HW - no need for any SW support (except init/config).
> >>>>> Now looking at dpaa and dpaa2 devices (the only ones that supports RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL right
> now)
> >>>>> I am not so sure about that - looks like some SW help might be needed for replay window updates, etc.
> >>>>> Hemant, Shreyansh - can you guys confirm what is expected from RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL devices
> >>>>> (HW/SW roles/responsibilities)?
> >>>>> About RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL  - I didn't find any driver inside DPDK source tree that does support that
> >>>> capability.
> >>>>> So my question - are there any devices/drivers that do support it?
> >>>>> If so, where could the source code be found, and what are the HW/SW roles/responsibilities for that type of device?
> >>>>> Konstantin
> >>>>>
> >>>>>
> >>>> In case of LOOKASIDE, the protocol errors like antireplay and sequence
> >>>> number overflow shall be the responsibility of either the PMD or the HW.
> >>>> It should notify the application that the error has occurred and
> >>>> the application needs to decide what to do next.
> >>> Ok, thanks for clarification.
> >>> Just to confirm -  do we have a defined way for it right now in rte_security?
> >> As of now, there are no macros defined for antireplay/seq. no. overflow
> >> errors in crypto errors(rte_crypto_op_status), but it will be added soon.
> >> For inline cases, ipsec-secgw application gets error notification via
> >> rte_eth_event.
> > Ok.

Actually looking at it a bit closer - you are talking about RTE_ETH_EVENT_IPSEC, right?
I do see the struct/type definitions, and I do see code in ipsec-secgw to handle it,
but I don't see any driver that supports it.
Is that what was intended?

> >
> >
> >>>> As Jerin said in the other email, given the roles/responsibilities of the PMD in
> >>>> the inline proto and lookaside cases, nothing much is required from
> >>>> the application to do any processing for ipsec.
> >>>>
> >>>> As per my understanding, the proposed RFC is to make the application
> >>>> code cleaner for  the protocol processing.
> >>> Yes, unified data-path API is definitely one of the main goals.
> >>>
> >>>> 1. For inline proto and lookaside there won't be any change in the data
> >>>> path. The main changes would be in the control path.
> >>> Yes, from your and Jerin description data-path processing looks
> >>> really lightweight for these cases.
> >>> For the control path - there is not much change, the user would have to call
> >>> rte_ipsec_sa_init() to start using a given SA.
> >>>
> >>>> 2. But in case of inline crypto and RTE_SECURITY_ACTION_TYPE_NONE, the
> >>>> protocol processing will be done in the library and there would be
> >>>> changes in both control and data path.
> >>> Yes.
> >>>
> >>>> As rte_security currently provides generic APIs for the control path only,
> >>>> we may have it expanded for protocol specific datapath processing.
> >>>> So for the application, working with inline crypto/ inline proto would
> >>>> be quite similar and it won't need to do some extra processing for
> >>>> inline crypto.
> >>>> Same will be the case for RTE_SECURITY_ACTION_TYPE_NONE and lookaside.
> >>>>
> >>>> We may have the protocol specific APIs reside inside the rte_security
> >>>> and we can use either the crypto/net PMD underneath it.
> >>> As I understand, you suggest instead of introducing new library,
> >>> introduce similar data-path functions inside rte_security.
> >>> Probably something like:
> >>>
> >>> uint16_t rte_security_process(struct rte_security_session *s, struct rte_mbuf *mb[], uint16_t num);
> >>> uint16_t rte_security_crypto_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
> >>>                                                                         struct rte_crypto_op *cop[], uint16_t num);
> >>> ...
> >>> Is that correct?
> >> "rte_security_process_ipsec" and "rte_security_crypto_prepare_ipsec" will be better.
> >> We can have such APIs for other protocols as well.
> >> Also, we should leave the existing functionality as is and we should let the user decide whether
> >> it needs to manage the ipsec on its own or with the new APIs.
> > There are no plans to void any existing API.
> > If the user has working code that uses rte_cryptodev/rte_security directly and wants to keep it,
> > that's fine.
> >
> >>> I thought about that approach too, and indeed from one side it looks cleaner and easier
> >>> to customize - each of these functions would just call related function inside rte_security_ops.
> >>> The problem with that approach - it would mean that each SA would be able to work with one
> >>> device only.
> >>> So if someone needs an SA that could be processed by multiple cores and multiple crypto-devices
> >>> in parallel such approach wouldn’t fit.
> >> One SA should be processed by a single core, or else we need to have an
> >> event based application which supports ordered queues,
> >> because if we process packets of a single SA on multiple cores, then
> >> packets will get re-ordered and we will get anti-replay late errors
> >> on the decap side.
> > I suppose in some cases one core would be enough to handle the SA traffic,
> > for some not; as I said before, I think it should be configurable.
> > Of course for the MT case some entity that would guarantee proper ordering
> > for final packet processing would be needed.
> > It could be some eventdev, a SW FIFO queue, or something else.
> >
> >> And if we have event based solution, then the scheduler will be able to
> >> handle the load balancing accordingly.
> > Didn't understand that sentence.
> I mean the event device, which has an inbuilt scheduler, will be able to
> balance the load of a single SA,
> and if the queues are ordered and it supports order restoration, then it
> will be able to maintain the ordering. And for that
> you would not have to bother about giving the same SA to different
> cryptodevs on multiple cores.

If such an event device is available to the user, and it is the user's preference to use it -
that's fine.
In such a case there is no need for MT support, just the ST version of the SA code could be used.
But I suppose such a scheduler shouldn't be the only option.

> >>> That was the main reason to keep rte_security as it is right now and go ahead with new library.
> >>> One thing that worries me -  do we need a way to share SQN and replay window information
> >>> between rte_security and upper layer (rte_ipsec)?
> >>> If 'no', then ok, if 'yes' then probably we need to discuss how to do it now?
> >> anti-replay window size shall be a parameter in ipsec_xform, which shall
> >> be added.
> >> And for the error notification:
> >>   - in case of using crypto, use rte_crypto_op_status
> >>   - in case of inline, use rte_eth_event callbacks.
> >> I don't see that rte_ipsec needs to take care of that in your initial approach.
> >> However, if you plan to include session reset inside rte_ipsec, then you
> >> may need that inside rte_ipsec.
> > I am not talking about rte_ipsec, my concern here is rte_security.
> > Suppose you need to switch from a device that can do inline_proto to a device that doesn't.
> In what use case would you need such switching?

As an example - device detach, VM live migration, in some cases even changes in the routing table.
As another example - limitations in the HW offload supported.
Let's say ixgbe doesn't support IP reassembly.

> > Right now the only way - renegotiate all SAs that were handled by the inline_proto device
> > (because there is no way to retrieve SQN information from the rte_security device).
> > Renegotiation should work, but it looks like quite an expensive approach.
> This will be only for the first packet.

Sure, and now imagine you have 1M SAs on an inline-proto device and the sysadmin wants
to detach that device.
How long would it take to re-negotiate all of them?

> > If rte_security had a way to share its SQN status with SW, then I think it would
> > be possible to do such a switch without SA termination.
> what kind of SQN status are you looking for? overflow?

Nope, I am talking about last-seq and replay-window state:
https://tools.ietf.org/html/rfc4303#section-3.4.3
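
For reference, that state is essentially just a top sequence number plus a
bitmap; a minimal 64-packet-window sketch of the RFC 4303 check/update:

struct replay_win {
        uint64_t top;   /* highest seq validated so far */
        uint64_t bmp;   /* bit 0 == top, bit N == top - N */
};

static int
replay_check_update(struct replay_win *w, uint64_t seq)
{
        uint64_t d;

        if (seq > w->top) {             /* advances the window */
                d = seq - w->top;
                w->bmp = (d >= 64) ? 1 : ((w->bmp << d) | 1);
                w->top = seq;
                return 0;
        }
        d = w->top - seq;
        if (d >= 64)
                return -1;              /* too old, outside the window */
        if (w->bmp & (1ULL << d))
                return -1;              /* duplicate - replay */
        w->bmp |= 1ULL << d;
        return 0;
}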

Konstantin

> If yes,
> the application needs to re-negotiate the session,
> which will be done periodically anyway.
> > Again with such info available - load-balancing for the same SA on multiple devices
> > might be possible.
> > Konstantin
> >


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-09-26 18:02                     ` Jerin Jacob
@ 2018-10-02 23:56                       ` Ananyev, Konstantin
  2018-10-03  9:37                         ` Jerin Jacob
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-10-02 23:56 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Joseph, Anoob, dev, Awal, Mohammad Abdul, Doherty, Declan,
	Narayana Prasad, akhil.goyal, hemant.agrawal, shreyansh.jain

Hi Jerin,

> > > > > > >
> > > > > > > Anyway, let's pretend we found some smart way to distribute inbound packets for the same SA to multiple HW queues/CPU
> > > > > cores.
> > > > > > > To make ipsec processing for such a case work correctly, just atomicity on check/update of sqn/replay_window is not enough.
> > > > > > > I think it would require some extra synchronization:
> > > > > > > make sure that we do the final packet processing (seq check/update) in the same order as we received the packets
> > > > > > > (packets entered ipsec processing).
> > > > > > > I don't really like to introduce such heavy mechanisms at the SA level, after all it is supposed to be light and simple.
> > > > > > > Though we plan the CTX level API to support such a scenario.
> > > > > > > What I think would be a useful addition to the SA level API - the ability to do one seqn/replay_window update and multiple checks
> > > > > concurrently.
> > > > > > >
> > > > > > > > In case of ingress also, the same problem exists. We will not be able to use RSS and spread the traffic to multiple cores.
> > > > > Considering
> > > > > > > > IPsec being CPU intensive, this would limit the net output of the chip.
> > > > > > > That's true - but on the other side the implementation can offload the heavy part
> > > > > > > (encrypt/decrypt, auth) to special HW (cryptodev).
> > > > > > > In that case a single core might be enough for the SA and extra synchronization would just slow things down.
> > > > > > > That's why I think it should be configurable what behavior (ST or MT) to use.
> > > > > > I do agree that these are the issues that we need to address to make the
> > > > > > library MT safe. Whether the extra synchronization would slow down things is
> > > > > > a very subjective question and will heavily depend on the platform. The
> > > > > > library should have enough provisions to be able to support MT without
> > > > > > causing overheads to ST. Right now, the library assumes ST.
> > > > >
> > > > >
> > > > > I agree with Anoob here.
> > > > >
> > > > > I have two concerns with librte_ipsec as a separate library
> > > > >
> > > > > 1) There is an overlap between rte_security and the new proposed library.
> > > >
> > > > I don't think there really is an overlap.
> > >
> > > As mentioned in your other email. IMO, There is an overlap as
> > > RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL can support almost everything
> > > in HW or HW + SW if some PMD wishes to do so.
> > >
> > > Answering some of the questions you have asked in the other thread, based on
> > > my understanding.
> > >
> > > Regarding RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL support,
> > > Marvell/Cavium CPT hardware on next generation HW(Planning to upstream
> > > around v19.02) can support RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and
> > > Anoob already pushed the application changes in ipsec-gw.
> >
> > Ok good to know.
> >
> > >
> > > In our understanding, the HW/SW roles/responsibilities for that type of
> > > device are:
> > >
> > > INLINE_PROTOCOL
> > > ----------------
> > > In control path, security session is created with the given SA and
> > > rte_flow configuration etc.
> > >
> > > For outbound traffic, the application will have to do SA lookup and
> > > identify the security action (inline/look aside crypto/protocol). For
> > > packets identified for inline protocol processing, the application would
> > > submit them as plain packets to the ethernet device and the security capable
> > > ethernet device would perform IPSec and send out the packet. For PMDs
> > > which would need extra metadata (capability flag), set_pkt_metadata
> > > function pointer would be called (from application).
> > > This can be used to set some per packet field to identify the security session to be used to
> > > process the packet.
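
[KA] For reference, a sketch of that outbound path as described (sa_lookup()
and struct app_sa are hypothetical application-side pieces; the rte_security
and ethdev calls exist today):

struct rte_security_ctx *ctx = rte_eth_dev_get_sec_ctx(port_id);
struct app_sa *sa = sa_lookup(mb);      /* application SA lookup */

mb->ol_flags |= PKT_TX_SEC_OFFLOAD;
/* only needed for PMDs advertising RTE_SECURITY_TX_OLOAD_NEED_MDATA */
rte_security_set_pkt_metadata(ctx, sa->sess, mb, NULL);
rte_eth_tx_burst(port_id, queue_id, &mb, 1);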
> >
> > Yes, as I can see, that's what ipsec-gw is doing right now and it wouldn't be
> > a problem to do the same in ipsec lib.
> >
> > > Sequence number update will be done by the PMD.
> >
> > Ok, so for INLINE_PROTOCOL the upper layer wouldn't need to keep track of SQN values at all?
> > You don't consider a possibility that for some reason that SA would need to
> > be moved from a device that supports INLINE_PROTOCOL to a device that doesn't?
> 
> For INLINE_PROTOCOL, the application won't have any control over such
> per packet fields. As for moving the SA to a different device, right now
> rte_security spec doesn't allow that. Maybe we should fix the spec to
> allow multiple devices to share the same security session. That way, if
> there is error in the inline processing, application will be able to
> submit the packet to LOOKASIDE_PROTOCOL crypto device (sharing the
> session) and get the packet processed.
> 

Yep, that's my thought too.
If we want to support such scenarios with lookaside-proto and inline-proto devices,
then rte_security need to be changed/extended.

> 
> >
> > > For inbound traffic, the packets for IPSec would be identified by using
> > > rte_flow (hardware accelerated packet filtering). For the packets
> > > identified for inline offload (SECURITY action), hardware would perform
> > > the processing. For inline protocol processed IPSec packets, PMD would
> > > set “user data” so that application can get the details of the security
> > > processing done on the packet. Once the plain packet (after IPSec
> > > processing) is received, a selector check need to be performed to make
> > > sure we have a valid packet after IPSec processing. The user data is used
> > > for that. Anti-replay check is handled by the PMD. The PMD would raise
> > > an eth event in case of sequence number expiry or any SA expiry.
> >
> > Few questions here:
> > 1) if I understand things right - to specify that it was an IPsec packet -
> > PKT_RX_SEC_OFFLOAD will be set in mbuf ol_flags?
> > 2) Basically 'userdata' will just contain a user pointer provided at rte_security_session_create
> > (most likely a pointer to the SA, as is done right now in ipsec-secgw), correct?
> 
> Yes to 1 & 2.
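
[KA] So on the RX side the lib part would boil down to something like this
sketch (where exactly the PMD leaves its metadata - udata64 below - is an
assumption):

uint16_t i, nb;

nb = rte_eth_rx_burst(port_id, queue_id, pkts, RTE_DIM(pkts));
for (i = 0; i != nb; i++) {
        struct rte_mbuf *mb = pkts[i];

        if (mb->ol_flags & PKT_RX_SEC_OFFLOAD) {
                /* map device metadata back to the SA ('userdata') */
                sa = rte_security_get_userdata(ctx, mb->udata64);
                /* inbound selector check against sa would follow */
        }
}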
> 
> 
> > 3) in the current rte_security API is there a way to get/set the replay window size, etc?
> 
> Not right now. But Akhil mentioned that it will be added soon.
> 
> 
> > 4) Same question as for TX: you don't plan to support fallback to other types of devices/SW?
> > I.e. HW was not able to process an ipsec packet for some reason (let's say a fragmented packet)
> > and now it is the SW's responsibility to do so?
> > The reason I am asking - it seems right now there is no defined way
> > to share SQN related information between the HW/PMD and the upper layer SW.
> > Is that ok, or would we need such a capability?
> > If we would, and the upper layer SW would need to keep track of SQN anyway,
> > then there is probably no point in doing the same thing in the PMD itself?
> > In that case the PMD just needs to provide SQN information to the upper layer
> > (probably one easy way to do it - reuse rte_mbuf.seqn for that purpose,
> > though for that we would probably need to make it 64-bit long).
> 
> The spec doesn't allow doing IPsec partially in HW & SW. The way the spec is
> written (and implemented in ipsec-secgw) allows one kind of
> RTE_SECURITY_ACTION_TYPE for one SA. If HW is not able to process a packet
> received on an INLINE_PROTOCOL SA, then it is treated as an error. Handling
> fragmentation is a very valid scenario. We will have to edit the spec if
> we need to handle this scenario.
> 
> >
> > >
> > >
> > > LOOKASIDE_PROTOCOL
> > > ------------------
> > > In control path, security session is created with the given SA.
> > >
> > > Enqueue/dequeue is similar to what is done for regular crypto
> > > (RTE_SECURITY_ACTION_TYPE_NONE) but all the protocol related processing
> > > would be offloaded. Application will need to do SA lookup and identify
> > > the processing to be done (both in case of outbound & inbound), and
> > > submit the packet to the crypto device. The application need not do any IPSec
> > > related transformations other than the lookup. Anti-replay needs to be
> > > handled in the PMD (the spec says the device "may" do the anti-replay check,
> > > but a complete protocol offload would need the anti-replay check also).
> >
> > Same question here - wouldn't there be situations when the HW/PMD would need to
> > share SQN information with the upper layer?
> > Let's say the upper layer SW would need to do load balancing between crypto-devices
> > with LOOKASIDE_PROTOCOL and without?
> 
> Same answer as above. ACTION is tied to security session which is tied
> to SA. SQN etc is internal to the session and so load balancing between
> crypto-devices is not supported.
> 
> >
> > >
> > >
> > > > rte_security is a 'framework for management and provisioning of security protocol operations offloaded to hardware based devices'.
> > > > While rte_ipsec is aimed to be a library for IPsec data-path processing.
> > > > There are no plans for rte_ipsec to 'obsolete' rte_security.
> > > > Quite opposite rte_ipsec supposed to work with both rte_cryptodev and rte_security APIs (devices).
> > > > It is possible to have an SA that would use both crypto and  security devices.
> > > > Or to have an SA that would use multiple crypto devs
> > > > (though right now it is up to the user level to do the load-balancing logic).
> > > >
> > > > > For IPsec, if an application needs to use rte_security for the HW
> > > > > implementation and the application needs to use librte_ipsec for
> > > > > the SW implementation, then it is bad and a lot of duplication of work on
> > > > > the slow path too.
> > > >
> > > > The plan is that the application would need to use just the rte_ipsec API for all data-paths
> > > > (HW/SW, lookaside/inline).
> > > > Let's say right now there is the rte_ipsec_inline_process() function if the user
> > > > prefers to use an inline security device to process a given group of packets,
> > > > and rte_ipsec_crypto_process(/prepare) if the user decides to use
> > > > a lookaside security or simple crypto device for it.
> > > >
> > > > >
> > > > > The rte_security spec can support both inline and look-aside IPSec
> > > > > protocol support.
> > > >
> > > > AFAIK right now rte_security just provides API to create/free/manipulate security sessions.
> > > > I don't see how it can support all the functionality mentioned above,
> > > > plus SAD and SPD.
> > >
> > >
> > > At least for the INLINE_PROTOCOL case, SA lookup for inbound traffic is done by
> > > HW.
> >
> > For inbound yes, for outbound I suppose you still would need to do a lookup in SW.
> 
> Yes
> 
> >
> > >
> > > >
> > > > >
> > > > > 2) This library is tuned with fat CPU cores in mind, like a single SA per core,
> > > > > etc. Which is fine for x86 servers and the arm64 server category of machines,
> > > > > but it does not work very well with the NPU class of SoC or FPGA.
> > > > >
> > > > > As there are different ways to implement IPSec. For instance,
> > > > > use of eventdev can help in situations handling millions of SAs, and
> > > > > sequence number update and anti-replay check can be done by leveraging
> > > > > some of the HW specific features like
> > > > > ORDERED, ATOMIC schedule type (mapped as eventdev features) in HW with the PIPELINE model.
> > > > >
> > > > > # Issues with having one SA per core:
> > > > > - On the outbound side, there could be multiple flows using the same SA.
> > > > >   Multiple flows could be processed in parallel on different lcores,
> > > > > but tying one SA to one core would mean we won't be able to do that.
> > > > >
> > > > > - On the inbound side, we will have a fat flow hitting one core. If
> > > > >   the IPsec library assumes a single core, we will not be able to spread
> > > > > a fat flow to multiple cores. And one SA-one core would mean all ports on
> > > > > which we would expect IPsec traffic have to be handled by that core.
> > > >
> > > > I suppose that all refers to the discussion about MT safe API for rte_ipsec, right?
> > > > If so, then as I said in my reply to Anoob:
> > > > We will try to make API usable in MT environment for v1,
> > > > so you can review and provide comments at early stages.
> > >
> > > OK
> > >
> > > >
> > > > >
> > > > > I have made a simple presentation. This presentation details ONE WAY to
> > > > > implement the IPSec with HW support on NPU.
> > > > >
> > > > > https://docs.google.com/presentation/d/1e3IDf9R7ZQB8FN16Nvu7KINuLSWMdyKEw8_0H05rjj4/edit?usp=sharing
> > > > >
> > > >
> > > > Thanks, quite helpful.
> > > > Actually from page 3, it looks like your expectations don't contradict in general with proposed API:
> > > >
> > > > ...
> > > > } else if (ev.event_type == RTE_EVENT_TYPE_LCORE && ev.sub_event_id == APP_STATE_SEQ_UPDATE) {
> > > >                         sa = ev.flow_queue_id;
> > > >                         /* do critical section work per sa */
> > > >                         do_critical_section_work(sa);
> > > >
> > > > [KA] that's the place where I expect either
> > > > rte_ipsec_inline_process(sa, ...); OR rte_ipsec_crypto_prepare(sa, ...);
> > > > would be called.
> > >
> > > Makes sense. But currently, the library defines what
> > > rte_ipsec_inline_process() and rte_ipsec_crypto_prepare() are, but it should
> > > be based on the underlying security device or crypto device.
> >
> > Reason for that - their code-paths are quite different:
> > for inline devices we can do the whole processing synchronously (within the process() function),
> > while for crypto it is sort of split into two parts -
> > we first have to do prepare(); enqueue() them to crypto-dev, and then dequeue(); process().
> > Another good thing about that approach - it allows the same SA to work with different devices.
> >
> > >
> > > So, IMO for better control, these functions should be function pointer
> > > based, and based on the underlying device, the library can fill in the
> > > implementation.
> > >
> > > IMO, it is not possible to create a "static inline function" with all the "if"
> > > checks. I think we can have four ipsec functions with a function pointer
> > > scheme.
> > >
> > > rte_ipsec_inbound_prepare()
> > > rte_ipsec_inbound_process()
> > > rte_ipsec_outbound_prepare()
> > > rte_ipsec_outbound_process()
> > >
> > > Some of the other concerns:
> > > 1) For HW implementation, rte_ipsec_sa needs to be opaque like rte_security,
> > > as some of the structure is defined by HW or Microcode. We can choose
> > > absolutely generic items as common and the device/rte_security specific parts can be opaque.
> >
> > I don't think it would be a problem, rte_ipsec_sa  does contain a pointer to
> > rte_security_session, so it can provide it as an argument to these functions.
> 
> The rte_ipsec_sa would need some private space for the application to store
> its metadata. There can be SA implementations with additional fields
> for faster lookups. To rephrase, the application should be given some
> provision to store the metadata it would need for faster lookups.
> Maybe the sa_init API can report the amount of private space required.
> 
> 
> >
> > >
> > > 2) I think, in order to accommodate the event-driven model, we need to pass
> > > void ** in the prepare() and process() functions, with an additional
> > > type argument (TYPE_EVENT/TYPE_MBUF) to detect the packet object
> > > type, as some of the functions in prepare() and process() may need
> > > rte_event to operate on.
> >
> > You are talking here about security device specific functions described below, correct?
> >
> > >
> > > >
> > > >                      /* Issue the crypto request and generate the following on crypto work completion */
> > > > [KA] that's the place where I expect rte_ipsec_crypto_process(...) be invoked.
> > > >
> > > >                         ev.flow_queue_id = tx_port;
> > > >                         ev.sub_event_id = tx_queue_id;
> > > >                         ev.sched_sync = RTE_SCHED_SYNC_ATOMIC;
> > > >                         rte_cryptodev_event_enqueue(cryptodev, ev.mbuf, eventdev, ev);
> > > >                 }
> > > >
> > > >
> > > > > I am not saying this should be the ONLY way to do it, as it does not work
> > > > > very well with the non-NPU/FPGA class of SoC.
> > > > >
> > > > > So how about making the proposed IPSec library a plugin/driver to
> > > > > rte_security?
> > > >
> > > > As I mentioned above, I don't think that pushing whole IPSec data-path into rte_security
> > > > is the best possible approach.
> > > > Though I probably understand your concern:
> > > > In the RFC code we always do the whole prepare/process in SW (attach/remove ESP headers/trailers, add padding, etc.),
> > > > i.e. right now only device types: RTE_SECURITY_ACTION_TYPE_NONE and RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO are
> covered.
> > > > Though there are devices where most of prepare/process can be done in HW
> > > > (RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL/RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL),
> > > > plus in future could be devices where prepare/process would be split between HW/SW in a custom way.
> > > > Is that so?
> > > > To address that issue I suppose we can do:
> > > > 1. Add support for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL and RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> > > >     security devices into ipsec.
> > > >     We planned to do it anyway, just don't have it done yet.
> > > > 2. For custom case - introduce RTE_SECURITY_ACTION_TYPE_INLINE_CUSTOM and
> RTE_SECURITY_ACTION_TYPE_LOOKASIDE_CUSTOM
> > >
> > > The problem is, CUSTOM may have different variants, and "if" conditions won't
> > > scale if we choose a non-function-pointer scheme. Otherwise, it
> > > looks OK to create a new SECURITY TYPE and an associated plugin for the prepare() and process()
> > > functions in the librte_ipsec library.
> >
> > In principle, I don't mind always using function pointers for prepare()/process(), but:
> > from your description above of INLINE_PROTOCOL and LOOKASIDE_PROTOCOL,
> > the process()/prepare() for such devices look well defined and
> > straightforward to implement.
> > Not sure we'll need a function pointer for such a simple and lightweight case:
> > set/check ol_flags, set/read the userdata value.
> > I think an extra function call here is kind of overkill and will only slow things down.
> > But if that would be the majority preference - I wouldn't argue.
> > BTW, if we agree to always use function pointers for process/prepare,
> > then there is no point in having all the existing action types -
> > all we need is an indication whether it is an inline or lookaside device, plus
> > function pointers for prepare()/process().
> 
> I am also not a fan of the function pointer scheme. But options are limited.
> 
> Though the generic usage seems straightforward, the implementation of
> the above modes can be very different. Vendors could optimize various
> operations (SQN update for example) for better performance on their
> hardware. Sticking to one approach would negate that advantage.
> 
> Another option would be to use the multiple-worker model that Anoob had
> proposed some time back.
> https://mails.dpdk.org/archives/dev/2018-June/103808.html
> 
> The idea would be to have all lib_ipsec functions added as static inline
> functions.
> 
> static inline void rte_ipsec_add_tunnel_hdr(struct rte_mbuf *mbuf);
> static inline void rte_ipsec_update_sqn(struct rte_mbuf *mbuf, uint64_t *seq_no);
> ...
> 
> For the regular use case, a fat
> rte_ipsec_(inbound/outbound)_(prepare/process) can be provided. The
> worker implemented for that case can directly call the function and
> forget about the other modes. For other vendors with varying
> capabilities, there can be multiple workers taking advantage of the hw
> features. For such workers, the static inline functions can be used as
> required. This gives vendors the opportunity to pick and choose what they
> want from the ipsec lib. The worker to be used for that case will be
> determined based on the capabilities exposed by the PMDs.
> 
> https://mails.dpdk.org/archives/dev/2018-June/103828.html
> 
> The above email explains how multiple workers can be used with l2fwd.
> 
> For this to work, the application & library code need to be modularised.
> Like what is being done in the following series,
> https://mails.dpdk.org/archives/dev/2018-June/103786.html
> 
> This way one application can be made to run on multiple platforms, with
> the app being optimized for the platform on which it would run.
> 
> /* ST SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - NO EVENTDEV*/
> worker1()
> {
>      while(true) {
>          nb_pkts = rte_eth_rx_burst();
> 
>          if (nb_pkts != 0) {
>              /* Do lookup */
>              rte_ipsec_inbound_prepare();
>              rte_cryptodev_enqueue_burst();
>              /* Update in-flight */
>          }
> 
>          if (in_flight) {
>              rte_cryptodev_dequeue_burst();
>              rte_ipsec_outbound_process();
>          }
>          /* route packet */
>      }
> }
> 
> #include <rte_ipsec.h>   /* For IPsec lib static inlines */
> 
> static inline rte_event_enqueue(struct rte_event *ev)
> {
>      ...
> }
> 
> /* MT safe SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - EVENTDEV */
> worker2()
> {
>      while(true) {
>          nb_pkts = rte_eth_rx_burst();
> 
>          if (nb_pkts != 0) {
>              /* Do lookup */
>             rte_ipsec_add_tunnel_hdr(ev->mbuf);
>             rte_event_enqueue(ev)
>             rte_cryptodev_enqueue_burst(ev->mbuf);
>              /* Update in-flight */
>          }
> 
>          if (in_flight) {
>              rte_cryptodev_dequeue_burst();
>              rte_ipsec_outbound_process();
>          }
>          /* route packet */
>      }
> }

Hmm, not sure how these 2 cases really differ in terms of ipsec processing.
I do understand that in the second one we use events to propagate packets through the system,
and that eventdev might be smart enough to preserve packet ordering, etc.
But in terms of ipsec processing we have to do exactly the same for both cases.
Let's say for the example above (outbound, cryptodev):
a) lookup an SA
b) increment SA.SQN and check for overflow
c) generate IV
d) generate & fill ESP header/trailer, tunnel header
e) perform actual encrypt, generate digest

So crypto_prepare() deals with b)-d).
e) is handled by cryptodev.
Yes, step b) might need to be atomic, or might not -
depends on the particular application design.
But in both cases (polling/eventdev) we do need all these steps to be performed.
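
In pseudo-code the split is roughly (a sketch only; the helper names are
illustrative, not part of the API):

	/* crypto_prepare(): steps b)-d) */
	sqn = sa->sqn + 1;			/* b) atomic or not - the app's choice */
	if (sqn_overflow(sa, sqn))
		return -EOVERFLOW;
	sa->sqn = sqn;
	gen_iv(iv, sa, sqn);			/* c) */
	add_esp_hdr_trailer(mb, sa, sqn, iv);	/* d) */
	fill_crypto_op(cop, sa, mb);		/* e) is left to the cryptodev */
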
Konstantin

> 
> In short,
> 
> 1) Have separate small inline functions in the library
> 2) If something can be grouped, it can be exposed as a specific function
> to address a specific use case
> 3) Let the remaining code go into the application as different worker()s to
> address all the use cases.
> 
> >
> > Konstantin
> >
> > >
> > >
> > > >     and add into rte_security_ops   new functions:
> > > >     uint16_t lookaside_prepare(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> num);
> > > >     uint16_t lookaside_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t
> num);
> > > >     uint16_t inline_process(struct rte_security_session *sess, struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);
> > > >     So for custom HW, PMD can overwrite normal prepare/process behavior.
> > > >
> > > > > This would give flexibly for each vendor/platform choose to different
> > > > > IPse implementation based on HW support WITHOUT CHANGING THE APPLICATION
> > > > > INTERFACE.
> > > >
> > > > Not sure what API changes you are referring to?
> > > > As I am aware we do introduce new API, but all existing APIs remain in place.
> > >
> > >
> > > What I meant was, a single application programming interface to expose IPSec
> > > processing to the application.
> > >
> > >
> > > >
> > > > >
> > > > > IMO, rte_security IPsec look aside support can be simply added by
> > > > > creating the virtual crypto device(i.e move the proposed code to the virtual crypto device)
> > > > > likewise inline support
> > > > > can be added by the virtual ethdev device.
> > > >
> > > > That's probably possible, and if someone would like to introduce such an abstraction - no problem in general
> > > > (though my suspicion is it might be too heavy to be really useful).
> > > > Though I don't think it should be the only possible way for the user to enable IPsec data-processing inside his app.
> > > > Again, I guess such a virtual-dev would still use rte_ipsec inside.
> > >
> > > I don't have strong opinion on virtual devices VS function pointer based
> > > prepare() and process() function in librte_ipsec library.
> > >
> > > >
> > > > > This would avoid the need for
> > > > > updating ipsec-gw application as well i.e unified interface to application.
> > > >
> > > > I think it would be really good to simplify the existing ipsec-secgw sample app.
> > > > Some parts of it seem unnecessarily complex to me.
> > > > One of the reasons for that - we don't really have a unified (and transparent) API for the ipsec data-path.
> > > > Let's look at ipsec_enqueue() and related code (examples/ipsec-secgw/ipsec.c:365).
> > > > It is huge (and ugly) - the user has to handle a dozen different cases just to enqueue a packet for IPsec processing.
> > > > One of the aims of the rte_ipsec library - hide all these complexities inside the library and provide
> > > > the upper layer with a clean and transparent API.
> > > >
> > > > >
> > > > > If you don't like the above idea, any scheme of plugin-based
> > > > > implementation would be fine, so that a vendor or platform can choose its own implementation.
> > > > > It can be based on a partial HW implementation too, i.e. SA lookup can be done in SW, the remaining stuff in HW
> > > > > (for example the IPsec inline case).
> > > >
> > > > I am surely ok with the idea to give vendors an ability to customize implementation
> > > > and enable their HW capabilities.
> > >
> > > I think we are on the same page, just that the fine details of the "framework"
> > > for customizing the implementation based on HW capabilities need to
> > > be ironed out.
> > >
> > > > Do you think the proposed additions to rte_security would be enough,
> > > > or is something extra needed?
> > >
> > > See above.
> > >
> > > Jerin
> > >
> > > >
> > > > Konstantin
> > > >
> > > >
> > > > >
> > > > > # For protocols like UDP, it makes sense to create librte_udp, as there
> > > > > is not much HW-specific offload beyond what ethdev provides.
> > > > >
> > > > > # PDCP could be another library to offload to HW, so taking the
> > > > > rte_security path makes more sense in that case too.
> > > > >
> > > > > Jerin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-10-02 23:56                       ` Ananyev, Konstantin
@ 2018-10-03  9:37                         ` Jerin Jacob
  2018-10-09 18:24                           ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Jerin Jacob @ 2018-10-03  9:37 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: Joseph, Anoob, dev, Awal, Mohammad Abdul, Doherty, Declan,
	Narayana Prasad, akhil.goyal, hemant.agrawal, shreyansh.jain

-----Original Message-----
> Date: Tue, 2 Oct 2018 23:56:23 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "Joseph, Anoob" <Anoob.Joseph@caviumnetworks.com>, "dev@dpdk.org"
>  <dev@dpdk.org>, "Awal, Mohammad Abdul" <mohammad.abdul.awal@intel.com>,
>  "Doherty, Declan" <declan.doherty@intel.com>, Narayana Prasad
>  <narayanaprasad.athreya@caviumnetworks.com>, "akhil.goyal@nxp.com"
>  <akhil.goyal@nxp.com>, "hemant.agrawal@nxp.com" <hemant.agrawal@nxp.com>,
>  "shreyansh.jain@nxp.com" <shreyansh.jain@nxp.com>
> Subject: RE: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path
>  processing
> 
> Hi Jerin,

Hi Konstantin,

> 
> > static inline void rte_ipsec_add_tunnel_hdr(struct rte_mbuf *mbuf);
> > static inline void rte_ipsec_update_sqn(struct rte_mbuf *mbuf, uint64_t *seq_no);
> > ...
> >
> > For the regular use case, a fat
> > rte_ipsec_(inbound/outbound)_(prepare/process) can be provided. The
> > worker implemented for that case can directly call the function and
> > forget about the other modes. For other vendors with varying
> > capabilities, there can be multiple workers taking advantage of the hw
> > features. For such workers, the static inline functions can be used as
> > required. This gives vendors the opportunity to pick and choose what they
> > want from the ipsec lib. The worker to be used for that case will be
> > determined based on the capabilities exposed by the PMDs.
> >
> > https://mails.dpdk.org/archives/dev/2018-June/103828.html
> >
> > The above email explains how multiple workers can be used with l2fwd.
> >
> > For this to work, the application & library code need to be modularised.
> > Like what is being done in the following series,
> > https://mails.dpdk.org/archives/dev/2018-June/103786.html
> >
> > This way one application can be made to run on multiple platforms, with
> > the app being optimized for the platform on which it would run.
> >
> > /* ST SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - NO EVENTDEV*/
> > worker1()
> > {
> >      while(true) {
> >          nb_pkts = rte_eth_rx_burst();
> >
> >          if (nb_pkts != 0) {
> >              /* Do lookup */
> >              rte_ipsec_inbound_prepare();
> >              rte_cryptodev_enqueue_burst();
> >              /* Update in-flight */
> >          }
> >
> >          if (in_flight) {
> >              rte_cryptodev_dequeue_burst();
> >              rte_ipsec_outbound_process();
> >          }
> >          /* route packet */
> >      }
> > }
> >
> > #include <rte_ipsec.h>   /* For IPsec lib static inlines */
> >
> > static inline rte_event_enqueue(struct rte_event *ev)
> > {
> >      ...
> > }
> >
> > /* MT safe SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - EVENTDEV */
> > worker2()
> > {
> >      while(true) {
> >          nb_pkts = rte_eth_rx_burst();
> >
> >          if (nb_pkts != 0) {
> >              /* Do lookup */
> >             rte_ipsec_add_tunnel_hdr(ev->mbuf);
> >             rte_event_enqueue(ev)
> >             rte_cryptodev_enqueue_burst(ev->mbuf);
> >              /* Update in-flight */
> >          }
> >
> >          if (in_flight) {
> >              rte_cryptodev_dequeue_burst();
> >              rte_ipsec_outbound_process();
> >          }
> >          /* route packet */
> >      }
> > }
> 
> Hmm, not sure how these 2 cases really differ in terms of ipsec processing.
> I do understand that in the second one we use events to propagate packets through the system,
> and that eventdev might be smart enough to preserve packet ordering, etc.
> But in terms of ipsec processing we have to do exactly the same for both cases.
> Let's say for the example above (outbound, cryptodev):
> a) lookup an SA
> b) increment SA.SQN and check for overflow
> c) generate IV
> d) generate & fill ESP header/trailer, tunnel header
> e) perform actual encrypt, generate digest
> 
> So crypto_prepare() deals with b)-d).
> e) is handled by cryptodev.
> Yes, step b) might need to be atomic, or might not -
> depends on the particular application design.
> But in both cases (polling/eventdev) we do need all these steps to be performed.

The real question is: should the new library be aware of eventdev, or should
the application decide?

If it is the former, then in order to complete step (b) we need rte_event also passed to
the _process() API, and the process() API needs to be a function pointer in order to
accommodate all combinations of different HW/SW capabilities.



> Konstantin
> 
> >
> > In short,
> >
> > 1) Have separate small inline functions in the library
> > 2) If something can be grouped, it can be exposed as a specific function
> > to address a specific use case
> > 3) Let the remaining code go into the application as different worker()s to
> > address all the use cases.
> >
> > >

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 0/9] ipsec: new library for IPsec data-path processing
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
  2018-09-03 12:41 ` Joseph, Anoob
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

This RFC targets the 19.02 release.

This RFC introduces a new library within DPDK: librte_ipsec.
The aim is to provide DPDK native high performance library for IPsec
data-path processing.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec processing API.
The library is concentrated on data-path protocols processing (ESP and AH),
IKE protocol(s) implementation is out of scope for that library.
Though hook/callback mechanisms might be defined in future to allow
integration with existing IKE implementations.
Due to quite complex nature of IPsec protocol suite and variety of user
requirements and usage scenarios a few API levels will be provided:
1) Security Association (SA-level) API
    Operates at SA level, provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
       add/remove ESP/AH related headers and data, etc.).
2) Security Association Database (SAD) API
    API to create/manage/destroy IPsec SAD.
    While DPDK IPsec library plans to have its own implementation,
    the intention is to keep it as independent from the other parts
    of IPsec library as possible.
    That is supposed to give users the ability to provide their own
    implementation of the SAD compatible with the other parts of the
    IPsec library.
3) IPsec Context (CTX) API
    This is supposed to be a high-level API, where each IPsec CTX is an
    abstraction of 'independent copy of the IPsec stack'.
    CTX owns set of SAs, SADs and assigned to it crypto-dev queues, etc.
    and provides:
    - de-multiplexing stream of inbound packets to particular SAs and
      further IPsec related processing.
    - IPsec related processing for the outbound packets.
    - SA add/delete/update functionality

Current RFC concentrates on SA-level API only (1),
detailed discussion for 2) and 3) will be subjects for separate RFC(s).

SA (low) level API
==================

API described below operates on SA level.
It provides functionality that allows the user, for a given SA, to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is based on top of crypto-dev/security API and relies on them
to perform actual cipher and integrity checking.
To make it possible to easily map crypto/security sessions to the related
IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
asynchronous API for IPsec packets destined to be processed
by crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_crypto_group(...); /* optional */
  rte_ipsec_process(...);
  
Though for packets destined for inline processing no extra overhead
is required and synchronous API call: rte_ipsec_process()
is sufficient for that case.
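
A minimal polling loop for the lookaside case might then look like
(a sketch only; port/device/queue ids and the SA lookup are placeholders):

  n = rte_eth_rx_burst(port, rxq, mb, RTE_DIM(mb));
  /* SA lookup and session selection for mb[] done by the application */
  k = rte_ipsec_crypto_prepare(ss, mb, cop, n);
  rte_cryptodev_enqueue_burst(cdev_id, qp_id, cop, k);
  ...
  k = rte_cryptodev_dequeue_burst(cdev_id, qp_id, cop, RTE_DIM(cop));
  /* optionally rte_ipsec_crypto_group(), then: */
  k = rte_ipsec_process(ss, mb, k);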

The current implementation supports all four currently defined rte_security action types.
To accommodate future custom implementations, a function pointer model is
used for both rte_ipsec_crypto_prepare() and rte_ipsec_process().

Implemented:
------------
- ESP tunnel mode support (both IPv4/IPv6)
- Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL
- Anti-Replay window and ESN support
- Unit Test (few basic cases for now)

TODO list:
----------
- ESP transport mode support (both IPv4/IPv6)
- update examples/ipsec-secgw to use librte_ipsec
- extend Unit Test

Konstantin Ananyev (9):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test

 config/common_base                     |    5 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |   74 ++
 lib/librte_ipsec/ipsec_sqn.h           |  315 ++++++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 ++
 lib/librte_ipsec/rte_ipsec.h           |  156 ++++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++++
 lib/librte_ipsec/rte_ipsec_sa.h        |  166 ++++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/rwl.h                 |   68 ++
 lib/librte_ipsec/sa.c                  | 1005 ++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |   92 +++
 lib/librte_ipsec/ses.c                 |   45 ++
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 1329 ++++++++++++++++++++++++++++++++
 23 files changed, 3528 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/rwl.h
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
  2018-09-03 12:41 ` Joseph, Anoob
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 0/9] " Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Add 'uint64_t userdata' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined
data with the session.
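
For example, an IPsec layer can use it to map a completed crypto op
back to its own session context (a sketch):

	ss = (struct rte_ipsec_session *)(uintptr_t)
		cop->sym->session->userdata;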

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..a150876b9 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t userdata;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 2/9] security: add opaque userdata pointer into security session
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (2 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 3/9] net: add ESP trailer structure definition Konstantin Ananyev
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Add 'uint64_t userdata' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_security/rte_security.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index b0d1b97ee..a945dc515 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -257,6 +257,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t userdata;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 3/9] net: add ESP trailer structure definition
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (3 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 4/9] lib: introduce ipsec library Konstantin Ananyev
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev
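
Add a structure describing the fixed fields of the ESP trailer:
pad length and next header protocol. For example, inbound processing
can locate it as follows (a sketch; assumes a contiguous mbuf and an
ICV length known from the SA):

	struct esp_tail *espt;

	espt = rte_pktmbuf_mtod_offset(mb, struct esp_tail *,
		mb->pkt_len - icv_len - sizeof(*espt));
	next_proto = espt->next_proto;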

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 4/9] lib: introduce ipsec library
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (4 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 3/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API Konstantin Ananyev
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec processing API.
This initial commit provides some base API to manage
IPsec Security Association (SA) objects.
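
Expected usage (a sketch; error handling omitted):

	sz = rte_ipsec_sa_size(&prm);
	sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	rte_ipsec_sa_init(sa, &prm, sz);
	...
	rte_ipsec_sa_fini(sa);
	rte_free(sa);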

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 +++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++++
 lib/librte_ipsec/meson.build           |  10 ++
 lib/librte_ipsec/rte_ipsec_sa.h        | 139 ++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 ++
 lib/librte_ipsec/sa.c                  | 282 +++++++++++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  75 +++++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 11 files changed, 599 insertions(+)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/config/common_base b/config/common_base
index acc5211bc..e7e66390b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -885,6 +885,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
 #
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
+#
 # Compile the test application
 #
 CONFIG_RTE_APP_TEST=y
diff --git a/lib/Makefile b/lib/Makefile
index 8c839425d..8cfc59054 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -105,6 +105,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 
 ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
 DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..7758dcc6d
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..d0d122824
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/**
+ * For a given window size, calculate the required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size for the
+ * structure that holds the replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..0efda33de
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	uint32_t replay_win_sz;
+	/**< window size to enable sequence replay attack handling.
+	 * Replay checking is disabled if the window size is 0.
+	 */
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG_IPV,
+	RTE_SATP_LOG_PROTO,
+	RTE_SATP_LOG_DIR,
+	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
+
+/**
+ * get type of given SA
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   size of the provided buffer for SA.
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * cleanup SA
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..913856a3d
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+
+static int
+check_crypto_xform(struct crypto_xform *xform)
+{
+	uintptr_t p;
+
+	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
+
+	/* either aead or both auth and cipher should be not NULLs */
+	if (xform->aead) {
+		if (p)
+			return -EINVAL;
+	} else if (p == (uintptr_t)xform->auth) {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+fill_crypto_xform(struct crypto_xform *xform,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf;
+
+	memset(xform, 0, sizeof(*xform));
+
+	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
+		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->auth != NULL)
+				return -EINVAL;
+			xform->auth = &xf->auth;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->cipher != NULL)
+				return -EINVAL;
+			xform->cipher = &xf->cipher;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+			if (xform->aead != NULL)
+				return -EINVAL;
+			xform->aead = &xf->aead;
+		} else
+			return -EINVAL;
+	}
+
+	return check_crypto_xform(xform);
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static uint64_t
+fill_sa_type(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	} else {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	}
+
+	return tp;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->ctp.auth.length + sa->ctp.cipher.offset;
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->sqn.outb = 1;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = sa->hdr_len;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static int
+esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = 4;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = 4;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_MAX_IV_SIZE;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->aad_len = 0;
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	sa->proto = prm->tun.next_proto;
+
+	if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB)
+		esp_inb_tun_init(sa);
+	else
+		esp_outb_tun_init(sa, prm);
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	type = fill_sa_type(prm);
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	type = fill_sa_type(prm);
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp inbound and outbound tunnel is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP ||
+			prm->ipsec_xform.mode !=
+			RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
+		return -EINVAL;
+
+	if (prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, prm);
+	if (rc != 0)
+		return rc;
+
+	sa->type = type;
+	sa->size = sz;
+
+	rc = esp_tun_init(sa, prm, &cxf);
+	if (rc != 0)
+		return rc;
+
+	/* check for ESN flag */
+	if (prm->ipsec_xform.options.esn == 0)
+		sa->sqn_mask = UINT32_MAX;
+	else
+		sa->sqn_mask = UINT64_MAX;
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..ef030334c
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* helper structures to store/update crypto session/op data */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+	__uint128_t raw;
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+/* Inbound replay window and last sequence number */
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+	uint8_t proto;    /* next proto */
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index 3acc67e6e..4b0c13148 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -21,6 +21,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'meter', 'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	#ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 32579e4b7..5756ffe40 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -62,6 +62,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (5 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 4/9] lib: introduce ipsec library Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-18 17:37   ` Jerin Jacob
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 6/9] ipsec: implement " Konstantin Ananyev
                   ` (13 subsequent siblings)
  20 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce the Security Association (SA-level) data-path API.
It operates at SA level and provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).
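
Expected usage for a lookaside (RTE_SECURITY_ACTION_TYPE_NONE) session
(a sketch; crypto session creation is not shown):

	struct rte_ipsec_session ss = {
		.sa = sa,
		.type = RTE_SECURITY_ACTION_TYPE_NONE,
		.crypto = { .ses = crypto_ses, },
	};

	rte_ipsec_session_prepare(&ss);
	k = rte_ipsec_crypto_prepare(&ss, mb, cop, num);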

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 154 +++++++++++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  98 ++++++++++++++++++++-
 lib/librte_ipsec/sa.h                  |   3 +
 lib/librte_ipsec/ses.c                 |  45 ++++++++++
 7 files changed, 306 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 7758dcc6d..79f187fae 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..5c9a1ed0b
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ * IKEv2 protocol support right now is out of scope of this draft.
+ * Though it tries to define the related API in such a way that it could be
+ * adopted by an IKEv2 implementation.
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them or process packets that are subject to inline IPsec offload
+ *   (see rte_ipsec_process for more details).
+ */
+struct rte_ipsec_sa_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_func func;
+};
+
+/**
+ * Checks that inside the given rte_ipsec_session the crypto/security fields
+ * are filled correctly and sets up function pointers based on these values.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond last valid mbuf in the *mb* array.
+ * It is a user responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them or
+ * process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly setup, if necessary - IP headers updated, ESP(AH) fields added,
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond last valid mbuf in the *mb* array.
+ * It is a user responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..47620cef5 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,9 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_crypto_prepare;
+	rte_ipsec_session_prepare;
+	rte_ipsec_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 913856a3d..ad2aa29df 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -280,3 +280,99 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+static uint16_t
+lksd_none_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	RTE_SET_USED(ss);
+	RTE_SET_USED(mb);
+	RTE_SET_USED(cop);
+	RTE_SET_USED(num);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	RTE_SET_USED(ss);
+	RTE_SET_USED(mb);
+	RTE_SET_USED(cop);
+	RTE_SET_USED(num);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+lksd_none_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	RTE_SET_USED(ss);
+	RTE_SET_USED(mb);
+	RTE_SET_USED(num);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+inline_crypto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	RTE_SET_USED(ss);
+	RTE_SET_USED(mb);
+	RTE_SET_USED(num);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+inline_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	RTE_SET_USED(ss);
+	RTE_SET_USED(mb);
+	RTE_SET_USED(num);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+lksd_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	RTE_SET_USED(ss);
+	RTE_SET_USED(mb);
+	RTE_SET_USED(num);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+const struct rte_ipsec_sa_func *
+ipsec_sa_func_select(const struct rte_ipsec_session *ss)
+{
+	static const struct rte_ipsec_sa_func tfunc[] = {
+		[RTE_SECURITY_ACTION_TYPE_NONE] = {
+			.prepare = lksd_none_prepare,
+			.process = lksd_none_process,
+		},
+		[RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO] = {
+			.prepare = NULL,
+			.process = inline_crypto_process,
+		},
+		[RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL] = {
+			.prepare = NULL,
+			.process = inline_proto_process,
+		},
+		[RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL] = {
+			.prepare = lksd_proto_prepare,
+			.process = lksd_proto_process,
+		},
+	};
+
+	if (ss->type >= RTE_DIM(tfunc))
+		return NULL;
+
+	return tfunc + ss->type;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index ef030334c..13a5a68f3 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -72,4 +72,7 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+const struct rte_ipsec_sa_func *
+ipsec_sa_func_select(const struct rte_ipsec_session *ss);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..afefda937
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	const struct rte_ipsec_sa_func *fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	fp = ipsec_sa_func_select(ss);
+	if (fp == NULL)
+		return -ENOTSUP;
+
+	ss->func = fp[0];
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->userdata = (uintptr_t)ss;
+	else
+		ss->security.ses->userdata = (uintptr_t)ss;
+
+	return 0;
+}
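
For illustration, a sketch of filling a session before calling
rte_ipsec_session_prepare() for the plain crypto-dev case; *sa* and *cs* are
assumed to be created beforehand (via rte_ipsec_sa_init() and
rte_cryptodev_sym_session_create()), and the session object has to stay
valid for as long as the SA is in use:

static int
lksd_session_init(struct rte_ipsec_session *ss, struct rte_ipsec_sa *sa,
	struct rte_cryptodev_sym_session *cs)
{
	memset(ss, 0, sizeof(*ss));

	ss->sa = sa;
	ss->type = RTE_SECURITY_ACTION_TYPE_NONE;
	ss->crypto.ses = cs;

	/* selects the func table entry and stores ss in session userdata */
	return rte_ipsec_session_prepare(ss);
}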
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 6/9] ipsec: implement SA data-path API
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (6 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Provide implementation for rte_ipsec_crypto_prepare() and
rte_ipsec_process().
Current implementation:
 - supports ESP protocol tunnel mode only.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
	- RTE_SECURITY_ACTION_TYPE_NONE
	- RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
	- RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
	- RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types SQN check/update is done by SW (inside the library).
For the last two types it is the HW/PMD responsibility.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/crypto.h    |  74 +++++
 lib/librte_ipsec/ipsec_sqn.h | 144 ++++++++-
 lib/librte_ipsec/pad.h       |  45 +++
 lib/librte_ipsec/sa.c        | 681 ++++++++++++++++++++++++++++++++++++++++---
 4 files changed, 909 insertions(+), 35 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..6ff995c59
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+ /*
+  * AES-GCM devices have some specific requirements for IV and AAD formats.
+  * Ideally that should be done by the driver itself.
+  */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32;
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, 5  AAD Construction
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, const struct gcm_esph_iv *hiv,
+	int esn)
+{
+	aad->spi = hiv->esph.spi;
+	if (esn)
+		aad->sqn.u64 = hiv->iv;
+	else
+		aad->sqn.u32 = hiv->esph.seq;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], uint64_t sqn)
+{
+	iv[0] = rte_cpu_to_be_64(sqn);
+	iv[1] = 0;
+}
+
+#endif /* _CRYPTO_H_ */
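
As a worked example of the layout above (a sketch, not part of the patch;
*sa* stands for the rte_ipsec_sa instance): for ESP sequence number 1,
gen_iv() yields iv[0] == be64(1), and aead_gcm_iv_fill() produces the 16B
IV area the AES-GCM PMDs expect - 4B salt, 8B IV, 4B counter pre-set to 1:

uint64_t iv[IPSEC_MAX_IV_QWORD];
struct aead_gcm_iv gcm;

gen_iv(iv, 1);			/* iv[0] = rte_cpu_to_be_64(1), iv[1] = 0 */
aead_gcm_iv_fill(&gcm, iv[0], sa->salt);
/* gcm = {.salt = sa->salt, .iv = be64(1), .cnt = be32(1)}, 16 bytes */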
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index d0d122824..7477b8d59 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,7 +15,7 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
-/**
+/*
  * for given size, calculate required number of buckets.
  */
 static uint32_t
@@ -30,6 +30,148 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC4303 A2.1, determine the high-order bits of the sequence
+ * number. Use 32-bit arithmetic inside, return uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* seq not valid (first or wrapped) */
+	if (sqn == 0)
+		return -EINVAL;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if ((sqn + sa->replay.win_sz) < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if ((sqn + sa->replay.win_sz) < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
 /**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds replay window and sequence number (RSN) information.
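
A quick numeric check of reconstruct_esn() above, with window size w = 64
and last seen ESN t = 0x100000010 (th = 1, tl = 0x10, so the window spans
two 32-bit subspaces, case B):

/*
 * reconstruct_esn(0x100000010, 0xfffffff0, 64) == 0xfffffff0
 *	(seq at the bottom of the window: high-order word is decremented,
 *	 the value belongs to the previous 32-bit subspace)
 * reconstruct_esn(0x100000010, 0x00000020, 64) == 0x100000020
 *	(seq ahead of the last seen one: current subspace)
 */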
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index ad2aa29df..ae8ce4f24 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,12 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -174,11 +177,13 @@ esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
 		sa->pad_align = 4;
 	} else {
+		sa->aad_len = 0;
 		sa->icv_len = cxf->auth->digest_length;
 		sa->iv_ofs = cxf->cipher->iv.offset;
 		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
@@ -191,7 +196,6 @@ esp_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 			return -EINVAL;
 	}
 
-	sa->aad_len = 0;
 	sa->udata = prm->userdata;
 	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
 	sa->salt = prm->ipsec_xform.salt;
@@ -281,72 +285,681 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+esp_outb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, uint64_t sqn,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, pdlen, pdofs, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct aead_gcm_aad *aad;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = mb->pkt_len + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - mb->pkt_len;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		struct ipv4_hdr *l3h;
+		l3h = (struct ipv4_hdr *)(ph + sa->hdr_l3_off);
+		l3h->packet_id = rte_cpu_to_be_16(sqn);
+		l3h->total_length = rte_cpu_to_be_16(mb->pkt_len -
+			sa->hdr_l3_off);
+	} else {
+		struct ipv6_hdr *l3h;
+		l3h = (struct ipv6_hdr *)(ph + sa->hdr_l3_off);
+		l3h->payload_len = rte_cpu_to_be_16(mb->pkt_len -
+			sa->hdr_l3_off - sizeof(*l3h));
+	}
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = rte_cpu_to_be_32(sqn);
+
+	/* offset for ICV */
+	pdofs += pdlen;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, (const struct gcm_esph_iv *)esph,
+			IS_ESN(sa));
+	}
+
+	return clen;
+}
+
+static inline uint16_t
+esp_outb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], struct rte_mbuf *dr[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		gen_iv(iv, sqn + i);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqn + i, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			esp_outb_tun_cop_prepare(cop[k], sa, iv, &icv, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	return k;
+}
+
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint32_t icv_ofs, plen, sqn;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct aead_gcm_aad *aad;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	sqn = rte_be_to_cpu_32(esph->seq);
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len;
+
+	/* we have to allocate space for AAD somewhere,
+	 * right now - just use free trailing space at the last segment.
+	 * Would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (as is already done for the IV, whose space is reserved inside cop).
+	 */
+	if (sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, (const struct gcm_esph_iv *)esph,
+			IS_ESN(sa));
+	}
+
+	return plen;
+}
+
+static inline uint16_t
+esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], struct rte_mbuf *dr[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	return k;
+}
+
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
 static uint16_t
 lksd_none_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_crypto_op *cop[], uint16_t num)
 {
-	RTE_SET_USED(ss);
-	RTE_SET_USED(mb);
-	RTE_SET_USED(cop);
-	RTE_SET_USED(num);
-	rte_errno = ENOTSUP;
-	return 0;
+	uint32_t n;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	sa = ss->sa;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		n = esp_inb_tun_prepare(sa, mb, cop, dr, num);
+		lksd_none_cop_prepare(ss, mb, cop, n);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		n = esp_outb_tun_prepare(sa, mb, cop, dr, num);
+		lksd_none_cop_prepare(ss, mb, cop, n);
+		break;
+	default:
+		rte_errno = ENOTSUP;
+		n = 0;
+	}
+
+	/* copy the unprepared mbufs beyond the good ones */
+	if (n != num && n != 0)
+		mbuf_bulk_copy(mb + n, dr, num - n);
+
+	return n;
+}
+
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
 }
 
 static uint16_t
-lksd_proto_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
-	struct rte_crypto_op *cop[], uint16_t num)
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
 {
-	RTE_SET_USED(ss);
-	RTE_SET_USED(mb);
-	RTE_SET_USED(cop);
-	RTE_SET_USED(num);
-	rte_errno = ENOTSUP;
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	*sqn = rte_be_to_cpu_32(esph->seq);
 	return 0;
 }
 
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+static inline uint16_t
+esp_inb_tun_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	if (k != num)
+		rte_errno = EBADMSG;
+	return k;
+}
+
+/*
+ * helper routine: puts packets with PKT_RX_SEC_OFFLOAD_FAILED set
+ * into the death-row.
+ */
+static inline uint16_t
+pkt_flag_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
+	struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+
+	RTE_SET_USED(sa);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	if (k != num)
+		rte_errno = EBADMSG;
+	return k;
+}
+
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+static inline uint16_t
+inline_outb_tun_pkt_process(struct rte_ipsec_sa *sa,
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		gen_iv(iv, sqn + i);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqn + i, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	return k;
+}
+
 static uint16_t
 lksd_none_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num)
 {
-	RTE_SET_USED(ss);
-	RTE_SET_USED(mb);
-	RTE_SET_USED(num);
-	rte_errno = ENOTSUP;
-	return 0;
+	uint32_t n;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	sa = ss->sa;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		n = esp_inb_tun_pkt_process(sa, mb, dr, num);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		n = pkt_flag_process(sa, mb, dr, num);
+		break;
+	default:
+		n = 0;
+		rte_errno = ENOTSUP;
+	}
+
+	/* copy the unprepared mbufs beyond the good ones */
+	if (n != num && n != 0)
+		mbuf_bulk_copy(mb + n, dr, num - n);
+
+	return n;
 }
 
 static uint16_t
 inline_crypto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num)
 {
-	RTE_SET_USED(ss);
-	RTE_SET_USED(mb);
-	RTE_SET_USED(num);
-	rte_errno = ENOTSUP;
-	return 0;
+	uint32_t n;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		n = esp_inb_tun_pkt_process(sa, mb, dr, num);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		n = inline_outb_tun_pkt_process(sa, mb, dr, num);
+		inline_outb_mbuf_prepare(ss, mb, n);
+		break;
+	default:
+		n = 0;
+		rte_errno = ENOTSUP;
+	}
+
+	/* copy the unprocessed mbufs beyond the good ones */
+	if (n != num && n != 0)
+		mbuf_bulk_copy(mb + n, dr, num - n);
+
+	return n;
 }
 
 static uint16_t
 inline_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num)
 {
-	RTE_SET_USED(ss);
-	RTE_SET_USED(mb);
-	RTE_SET_USED(num);
-	rte_errno = ENOTSUP;
-	return 0;
+	uint32_t n;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* outbound, just set flags and metadata */
+	if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_OB) {
+		inline_outb_mbuf_prepare(ss, mb, num);
+		return num;
+	}
+
+	/* inbound, check that HW successfully processed packets */
+	n = pkt_flag_process(sa, mb, dr, num);
+
+	/* copy the bad ones after the good ones */
+	if (n != num && n != 0)
+		mbuf_bulk_copy(mb + n, dr, num - n);
+	return n;
 }
 
 static uint16_t
 lksd_proto_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	uint16_t num)
 {
-	RTE_SET_USED(ss);
-	RTE_SET_USED(mb);
-	RTE_SET_USED(num);
-	rte_errno = ENOTSUP;
-	return 0;
+	uint32_t n;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* check that HW successfully processed packets */
+	n = pkt_flag_process(sa, mb, dr, num);
+
+	/* copy the bad ones after the good ones */
+	if (n != num && n != 0)
+		mbuf_bulk_copy(mb + n, dr, num - n);
+	return n;
 }
 
 const struct rte_ipsec_sa_func *
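
The death-row convention used throughout the code above implies a simple
caller-side pattern: everything past the returned count is a failed packet
and can be drained in one pass. A sketch (variables as in the data-path
examples earlier; the logging is purely illustrative):

n = rte_ipsec_process(ss, mb, num);
if (n != num) {
	/* mb[n]..mb[num - 1] now hold the packets that failed */
	RTE_LOG(DEBUG, USER1, "ipsec: %u packet(s) dropped, rte_errno=%d\n",
		num - n, rte_errno);
	for (i = n; i != num; i++)
		rte_pktmbuf_free(mb[i]);
}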
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 7/9] ipsec: rework SA replay window/SQN for MT environment
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (7 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 6/9] ipsec: implement " Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

With these changes functions:
  - rte_ipsec_crypto_prepare
  - rte_ipsec_process
 can be safely used in an MT environment, as long as the user can guarantee
 that they obey the multiple readers/single writer model for SQN+replay_window
 operations.
 To be more specific:
 for outbound SA there are no restrictions.
 for inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_process() for a given SA.
 Note that it is the caller's responsibility to maintain the correct order
 of packets to be processed.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 129 +++++++++++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  27 +++++++++
 lib/librte_ipsec/rwl.h          |  68 +++++++++++++++++++++
 lib/librte_ipsec/sa.c           |  22 +++++--
 lib/librte_ipsec/sa.h           |  22 +++++--
 5 files changed, 258 insertions(+), 10 deletions(-)
 create mode 100644 lib/librte_ipsec/rwl.h
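
In practice this contract maps to a deployment where
rte_ipsec_crypto_prepare() may run on any lcore (replay-window readers),
while rte_ipsec_process() for a given inbound SA is pinned to one lcore or
otherwise serialized (the single replay-window writer). A rough sketch:

/* any number of lcores: replay check only (RSN readers) */
k = rte_ipsec_crypto_prepare(ss, mb, cop, num);

/* exactly one lcore (or serialized callers) per inbound SA: RSN writer */
k = rte_ipsec_process(ss, mb, num);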

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 7477b8d59..a3c993a52 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -5,6 +5,8 @@
 #ifndef _IPSEC_SQN_H_
 #define _IPSEC_SQN_H_
 
+#include "rwl.h"
+
 #define WINDOW_BUCKET_BITS		6 /* uint64_t */
 #define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
 #define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
@@ -15,6 +17,9 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
+
 /*
  * for given size, calculate required number of buckets.
  */
@@ -109,8 +114,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -173,6 +182,19 @@ esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
 }
 
 /**
+ * To achieve multiple readers/single writer semantics for the
+ * SA replay window information and sequence number (RSN),
+ * a basic RCU scheme is used:
+ * the SA has 2 copies of the RSN (one for readers, another for the writer).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and writer.
+ * The writer is responsible for making a copy of the reader RSN, updating it
+ * and marking the newly updated RSN as the readers' one.
+ * That approach is intended to minimize contention and cache sharing
+ * between writer and readers.
+ */
+
+/**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds replay window and sequence number (RSN) information.
  */
@@ -187,4 +209,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rwl_try_read_lock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rwl_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rwl_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rwl_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
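
Putting the helpers together, the intended per-SA calling pattern is
roughly the following (a sketch; the real call sites are in sa.c further
down this patch):

/* data-path readers, e.g. inside rte_ipsec_crypto_prepare() */
rsn = rsn_acquire(sa);
rc = esn_inb_check_sqn(rsn, sa, sqn);
rsn_release(sa, rsn);

/* the single writer, e.g. inside rte_ipsec_process() */
rsn = rsn_update_start(sa);
rc = esn_inb_update_sqn(rsn, sa, sqn);
rsn_update_finish(sa, rsn);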
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index 0efda33de..3324cbedb 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -54,12 +54,34 @@ struct rte_ipsec_sa_prm {
 };
 
 /**
+ * Indicates whether the SA will (or will not) need 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means:
+ * functions:
+ *  - rte_ipsec_crypto_prepare
+ *  - rte_ipsec_process
+ * can be safely used in an MT environment, as long as the user can guarantee
+ * that they obey the multiple readers/single writer model for
+ * SQN+replay_window operations.
+ * To be more specific:
+ * for outbound SA there are no restrictions.
+ * for inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_process() for a given SA.
+ * Note that it is the caller's responsibility to maintain the correct order
+ * of packets to be processed.
+ * In other words, it is the caller's responsibility to serialize process()
+ * invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
+/**
  * SA type is a 64-bit value that contains the following information:
  * - IP version (IPv4/IPv6)
  * - IPsec proto (ESP/AH)
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
  * ...
  */
 
@@ -68,6 +90,7 @@ enum {
 	RTE_SATP_LOG_PROTO,
 	RTE_SATP_LOG_DIR,
 	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_SQN = RTE_SATP_LOG_MODE + 2,
 	RTE_SATP_LOG_NUM
 };
 
@@ -88,6 +111,10 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG_SQN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/rwl.h b/lib/librte_ipsec/rwl.h
new file mode 100644
index 000000000..fc44d1e9f
--- /dev/null
+++ b/lib/librte_ipsec/rwl.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RWL_H_
+#define _RWL_H_
+
+/**
+ * @file rwl.h
+ *
+ * Analog of read-write locks, very much in favour of the read side.
+ * Assumes that there are no more than INT32_MAX concurrent readers.
+ * Consider moving it into librte_eal.
+ */
+
+/**
+ * release read-lock.
+ * @param p
+ *   pointer to atomic variable.
+ */
+static inline void
+rwl_read_unlock(rte_atomic32_t *p)
+{
+	rte_atomic32_sub(p, 1);
+}
+
+/**
+ * try to grab read-lock.
+ * @param p
+ *   pointer to atomic variable.
+ * @return
+ *   positive value on success
+ */
+static inline int
+rwl_try_read_lock(rte_atomic32_t *p)
+{
+	int32_t rc;
+
+	rc = rte_atomic32_add_return(p, 1);
+	if (rc < 0)
+		rwl_read_unlock(p);
+	return rc;
+}
+
+/**
+ * grab write-lock.
+ * @param p
+ *   pointer to atomic variable.
+ */
+static inline void
+rwl_write_lock(rte_atomic32_t *p)
+{
+	while (rte_atomic32_cmpset((volatile uint32_t *)p, 0, INT32_MIN) == 0)
+		rte_pause();
+}
+
+/**
+ * release write-lock.
+ * @param p
+ *   pointer to atomic variable.
+ */
+static inline void
+rwl_write_unlock(rte_atomic32_t *p)
+{
+	rte_atomic32_sub(p, INT32_MIN);
+}
+
+#endif /* _RWL_H_ */
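
A minimal usage sketch for these primitives (assuming a statically
initialized counter):

static rte_atomic32_t lock = RTE_ATOMIC32_INIT(0);

/* reader side: only fails while a writer holds the lock */
if (rwl_try_read_lock(&lock) > 0) {
	/* ... read the shared state ... */
	rwl_read_unlock(&lock);
}

/* writer side: waits until all active readers are gone */
rwl_write_lock(&lock);
/* ... update the shared state ... */
rwl_write_unlock(&lock);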
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index ae8ce4f24..e2852b020 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -89,6 +89,9 @@ ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -135,6 +138,12 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm)
 			tp |= RTE_IPSEC_SATP_IPV4;
 	}
 
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	return tp;
 }
 
@@ -151,7 +160,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa)
 static void
 esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 	sa->hdr_len = prm->tun.hdr_len;
 	sa->hdr_l3_off = prm->tun.hdr_l3_off;
 	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
@@ -279,7 +288,10 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		sa->replay.win_sz = prm->replay_win_sz;
 		sa->replay.nb_bucket = nb;
 		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+		sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+		if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+			sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+				((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb));
 	}
 
 	return sz;
@@ -564,7 +576,7 @@ esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 	struct replay_sqn *rsn;
 	union sym_op_data icv;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -583,6 +595,7 @@ esp_inb_tun_prepare(struct rte_ipsec_sa *sa, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
 	return k;
 }
 
@@ -732,7 +745,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -742,6 +755,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 13a5a68f3..9fe1f8483 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -9,7 +9,7 @@
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
 
-/* helper structures to store/update crypto session/op data */
+/* these definitions probably have to be in rte_crypto_sym.h */
 union sym_op_ofslen {
 	uint64_t raw;
 	struct {
@@ -26,8 +26,11 @@ union sym_op_data {
 	};
 };
 
-/* Inbound replay window and last sequence number */
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_atomic32_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -64,10 +67,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case the SA is handled by multiple threads, the *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
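
To request the MT-safe behaviour, the flag introduced by this patch has to
be set at SA initialization time. A sketch (the remaining prm fields -
xforms, tunnel header, etc. - are assumed to be filled as before):

struct rte_ipsec_sa_prm prm;

memset(&prm, 0, sizeof(prm));
/* ... fill crypto/ipsec xforms, tunnel header, etc. ... */
prm.replay_win_sz = 128;
prm.flags = RTE_IPSEC_SAFLAG_SQN_ATOM;	/* allocate 2 RSN copies + lock */

rc = rte_ipsec_sa_init(sa, &prm, size);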
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 8/9] ipsec: helper functions to group completed crypto-ops
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (8 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 9/9] test/ipsec: introduce functional test Konstantin Ananyev
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 79f187fae..98c52f388 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 5c9a1ed0b..aa17c78e3 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -147,6 +147,8 @@ rte_ipsec_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..df6f4fdd1
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take a crypto-op as input and extract a pointer to the related ipsec
+ * session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)ss->userdata;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)cs->userdata;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs and
+ * group them by the rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond mbufs for the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* to output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_crypto_group(const struct rte_crypto_op *cop[], struct rte_mbuf *mb[],
+	struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalise it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
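
A typical usage sketch for the helpers above, combining them with the
data-path API ((dev_id, qid) and the BURST size are assumptions):

#define BURST	32

struct rte_crypto_op *cop[BURST];
struct rte_mbuf *mb[BURST];
struct rte_ipsec_group grp[BURST];
uint16_t i, n, ng;

n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, RTE_DIM(cop));

/* sort the completed ops back into per-session mbuf groups */
ng = rte_ipsec_crypto_group((const struct rte_crypto_op **)cop, mb, grp, n);

for (i = 0; i != ng; i++) {
	struct rte_ipsec_session *ss = grp[i].id.ptr;

	grp[i].cnt = rte_ipsec_process(ss, grp[i].m, grp[i].cnt);
	/* grp[i].m[grp[i].cnt]... now hold the failed packets */
}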
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 47620cef5..b025b636c 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_crypto_group;
 	rte_ipsec_crypto_prepare;
 	rte_ipsec_session_prepare;
 	rte_ipsec_process;
@@ -8,6 +9,7 @@ EXPERIMENTAL {
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 
 	local: *;
 };
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [RFC v2 9/9] test/ipsec: introduce functional test
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (9 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-10-09 18:23 ` Konstantin Ananyev
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-10-09 18:23 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal, Bernard Iremonger

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 1329 ++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1335 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index dcea4410d..2be25808c 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -204,6 +204,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index bacb5b144..803f2e28d 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -47,6 +47,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_scaling.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -113,6 +114,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'pipeline',
@@ -172,6 +174,7 @@ test_names = [
 	'hash_multiwriter_autotest',
 	'hash_perf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..6922cbb7e
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,1329 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE 100
+#define MAX_NB_SESSIONS            8
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct crypto_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *op_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct crypto_unittest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss;
+
+	struct rte_crypto_op *op;
+
+	struct rte_mbuf *obuf, *ibuf, *testbuf;
+
+	uint8_t *digest;
+};
+
+static struct crypto_testsuite_params testsuite_params = { NULL };
+static struct crypto_unittest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->op_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->op_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Use the first of the valid devices found; only qp 0 is set up below */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->op_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->op_mpool));
+		rte_mempool_free(ts_params->op_mpool);
+		ts_params->op_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	/* free crypto operation structure */
+	if (ut_params->op)
+		rte_crypto_op_free(ut_params->op);
+
+	/*
+	 * free mbuf - obuf and ibuf often point at the same mbuf,
+	 * so a check whether they are equal is necessary
+	 * to avoid freeing the same mbuf twice.
+	 */
+	if (ut_params->obuf) {
+		rte_pktmbuf_free(ut_params->obuf);
+		if (ut_params->ibuf == ut_params->obuf)
+			ut_params->ibuf = 0;
+		ut_params->obuf = 0;
+	}
+	if (ut_params->ibuf) {
+		rte_pktmbuf_free(ut_params->ibuf);
+		ut_params->ibuf = 0;
+	}
+
+	if (ut_params->testbuf) {
+		rte_pktmbuf_free(ut_params->testbuf);
+		ut_params->testbuf = 0;
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+#define DATA_64_BYTES		(64)
+#define DATA_80_BYTES		(80)
+#define DATA_100_BYTES		(100)
+#define INBOUND_SPI			(7)
+#define OUTBOUND_SPI		(17)
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(1);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len - hdrlen);
+
+	return m;
+}
+
+static int
+check_cryptodev_capability(const struct crypto_unittest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
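+	/* verify the auth algorithm capability first, then the cipher */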
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct crypto_unittest_params *ut,
+	struct rte_mempool *pool)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss.security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss.security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss.security.ctx = &dummy_sec_ctx;
+	ut->ss.security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct crypto_unittest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capability(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss.crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct crypto_unittest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num)
+{
+	if (ut->ss.type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num);
+	else
+		return create_dummy_sec_session(ut, pool);
+}
+
+static void
+fill_crypto_xform(struct crypto_unittest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->crypto_xforms = &ut_params->auth_xform;
+}
+
+static int
+fill_ipsec_param(uint32_t spi, enum rte_security_ipsec_sa_direction direction,
+		uint32_t replay_win_sz, uint64_t flags)
+{
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform.spi = spi;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+	prm->ipsec_xform.direction = direction;
+	prm->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	prm->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+
+	/* setup tunnel related fields */
+	prm->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+		if (auth_algo == NULL || cipher_algo == NULL)
+			return TEST_FAILED;
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(uint32_t spi, enum rte_security_ipsec_sa_direction direction,
+		enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags)
+{
+	struct crypto_testsuite_params *ts = &testsuite_params;
+	struct crypto_unittest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	const struct rte_ipsec_sa_prm prm = {
+		.flags = flags,
+		.ipsec_xform.direction = direction,
+		.replay_win_sz = replay_win_sz,
+	};
+
+	memset(&ut->ss, 0, sizeof(ut->ss));
+
+	/* create rte_ipsec_sa*/
+	sz = rte_ipsec_sa_size(&prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+	ut->ss.sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss.sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	rc = fill_ipsec_param(spi, direction, replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	ut->ss.type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss.sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc != 0)
+		return TEST_FAILED;
+
+	return rte_ipsec_session_prepare(&ut->ss);
+}
+
+static int
+crypto_ipsec(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	uint32_t k, n;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_crypto_prepare(&ut_params->ss, &ut_params->ibuf,
+		&ut_params->op, 1);
+	if (k != 1) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		&ut_params->op, k);
+	if (k != 1) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	n = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		&ut_params->op, 1);
+	if (n != 1) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
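+	/* group crypto-ops by the ipsec session they belong to */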
+	n = rte_ipsec_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)&ut_params->op,
+		&ut_params->obuf, grp, n);
+	if (n != 1 || grp[0].m[0] != ut_params->obuf || grp[0].cnt != 1 ||
+		 grp[0].id.ptr != &ut_params->ss) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	n = rte_ipsec_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+
+	if (n != 1) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_ipsec_crypto_op_alloc(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int rc = 0;
+
+	ut_params->op = rte_crypto_op_alloc(ts_params->op_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+	if (ut_params->op != NULL)
+		ut_params->op->sym[0].m_src = ut_params->ibuf;
+	else {
+		RTE_LOG(ERR, USER1,
+			"Failed to allocate symmetric crypto operation struct\n");
+		rc = TEST_FAILED;
+	}
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct crypto_unittest_params *ut_params)
+{
+	if (ut_params->ibuf) {
+		printf("ibuf data:\n");
+		rte_pktmbuf_dump(stdout, ut_params->ibuf,
+			ut_params->ibuf->data_len);
+	}
+	if (ut_params->obuf) {
+		printf("obuf data:\n");
+		rte_pktmbuf_dump(stdout, ut_params->obuf,
+			ut_params->obuf->data_len);
+	}
+	if (ut_params->testbuf) {
+		printf("testbuf data:\n");
+		rte_pktmbuf_dump(stdout, ut_params->testbuf,
+			ut_params->testbuf->data_len);
+	}
+}
+
+static int
+crypto_inb_null_null_check(struct crypto_unittest_params *ut_params)
+{
+	/* compare the data buffers */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf, void *),
+			DATA_64_BYTES,
+			"input and output data does not match\n");
+	TEST_ASSERT_EQUAL(ut_params->obuf->data_len, ut_params->obuf->pkt_len,
+		"data_len is not equal to pkt_len");
+	TEST_ASSERT_EQUAL(ut_params->obuf->data_len, DATA_64_BYTES,
+		"data_len is not equal to input data");
+	return 0;
+}
+
+static void
+destroy_sa(void)
+{
+	struct crypto_unittest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss.sa);
+	rte_free(ut->ss.sa);
+	rte_cryptodev_sym_session_free(ut->ss.crypto.ses);
+	memset(&ut->ss, 0, sizeof(ut->ss));
+}
+
+static int
+test_ipsec_crypto_inb_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			RTE_SECURITY_ACTION_TYPE_NONE, 0, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "create_sa failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, DATA_64_BYTES, INBOUND_SPI, 1);
+
+	rc = test_ipsec_crypto_op_alloc();
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec();
+		if (rc == 0)
+			rc = crypto_inb_null_null_check(ut_params);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+	return rc;
+}
+
+static int
+crypto_outb_null_null_check(struct crypto_unittest_params *ut_params)
+{
+	void *obuf_data;
+	void *testbuf_data;
+
+	/* compare the buffer data */
+	testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf, void *);
+	obuf_data = rte_pktmbuf_mtod(ut_params->obuf, void *);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+		ut_params->obuf->pkt_len,
+		"test and output data does not match\n");
+	TEST_ASSERT_EQUAL(ut_params->obuf->data_len,
+		ut_params->testbuf->data_len,
+		"obuf data_len is not equal to testbuf data_len");
+	TEST_ASSERT_EQUAL(ut_params->obuf->pkt_len,
+		ut_params->testbuf->pkt_len,
+		"obuf pkt_len is not equal to testbuf pkt_len");
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(OUTBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			RTE_SECURITY_ACTION_TYPE_NONE, 0, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "create_sa failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		null_plain_data, DATA_80_BYTES, 0);
+
+	/* Generate test mbuf data */
+	ut_params->testbuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_plain_data, DATA_80_BYTES, OUTBOUND_SPI, 1);
+
+	rc = test_ipsec_crypto_op_alloc();
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec();
+		if (rc == 0)
+			rc = crypto_outb_null_null_check(ut_params);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+	return rc;
+}
+
+static int
+inline_inb_null_null_check(struct crypto_unittest_params *ut_params)
+{
+	void *ibuf_data;
+	void *obuf_data;
+
+	/* compare the buffer data */
+	ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf, void *);
+	obuf_data = rte_pktmbuf_mtod(ut_params->obuf, void *);
+
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+		ut_params->ibuf->data_len,
+		"input and output data does not match\n");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->data_len, ut_params->obuf->data_len,
+		"ibuf data_len is not equal to obuf data_len");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->pkt_len, ut_params->obuf->pkt_len,
+		"ibuf pkt_len is not equal to obuf pkt_len");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->data_len, DATA_100_BYTES,
+		"data_len is not equal to input data length");
+
+	return 0;
+}
+
+static int
+test_ipsec_inline_inb_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, 0, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "create_sa failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, INBOUND_SPI, 1);
+
+	/* Generate test mbuf data */
+	ut_params->obuf = setup_test_string(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, 0);
+
+	n = rte_ipsec_process(&ut_params->ss, &ut_params->ibuf, 1);
+	if (n == 1)
+		rc = inline_inb_null_null_check(ut_params);
+	else {
+		RTE_LOG(ERR, USER1, "rte_ipsec_process failed\n");
+		rc = TEST_FAILED;
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+	return rc;
+}
+
+static int
+inline_outb_null_null_check(struct crypto_unittest_params *ut_params)
+{
+	void *obuf_data;
+	void *ibuf_data;
+
+	/* compare the buffer data */
+	ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf, void *);
+	obuf_data = rte_pktmbuf_mtod(ut_params->obuf, void *);
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+		ut_params->ibuf->data_len,
+		"input and output data does not match\n");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->data_len,
+		ut_params->obuf->data_len,
+		"ibuf data_len is not equal to obuf data_len");
+	TEST_ASSERT_EQUAL(ut_params->ibuf->pkt_len,
+		ut_params->obuf->pkt_len,
+		"ibuf pkt_len is not equal to obuf pkt_len");
+
+	return 0;
+}
+
+static int
+test_ipsec_inline_outb_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(OUTBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+			RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO, 0, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "create_sa failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf = setup_test_string(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, 0);
+
+	/* Generate test tunneled mbuf data for comparison*/
+	ut_params->obuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_plain_data, DATA_100_BYTES, OUTBOUND_SPI, 1);
+
+	n = rte_ipsec_process(&ut_params->ss, &ut_params->ibuf, 1);
+	if (n == 1)
+		rc = inline_outb_null_null_check(ut_params);
+	else {
+		RTE_LOG(ERR, USER1, "rte_ipsec_process failed\n");
+		rc = TEST_FAILED;
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+	return rc;
+}
+
+#define REPLAY_WIN_64	64
+
+static int
+replay_inb_null_null_check(struct crypto_unittest_params *ut_params)
+{
+	/* compare the buffer data */
+	TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+		rte_pktmbuf_mtod(ut_params->obuf, void *),
+		DATA_64_BYTES,
+		"input and output data does not match\n");
+	TEST_ASSERT_EQUAL(ut_params->obuf->data_len, ut_params->obuf->pkt_len,
+		"data_len is not equal to pkt_len");
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			RTE_SECURITY_ACTION_TYPE_NONE, REPLAY_WIN_64, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "create_sa failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, DATA_64_BYTES, INBOUND_SPI, 1);
+
+	rc = test_ipsec_crypto_op_alloc();
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec();
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+			rc = TEST_FAILED;
+		}
+	} else {
+		RTE_LOG(ERR, USER1,
+			"Failed to allocate symmetric crypto operation struct\n");
+		rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf) {
+			rte_pktmbuf_free(ut_params->ibuf);
+			ut_params->ibuf = 0;
+		}
+
+		ut_params->ibuf = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			DATA_64_BYTES, INBOUND_SPI, REPLAY_WIN_64);
+
+		rc = test_ipsec_crypto_op_alloc();
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec();
+			if (rc == 0)
+				rc = replay_inb_null_null_check(ut_params);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			RTE_SECURITY_ACTION_TYPE_NONE, REPLAY_WIN_64, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "create_sa failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, DATA_64_BYTES, INBOUND_SPI,
+		REPLAY_WIN_64 + 2);
+
+	rc = test_ipsec_crypto_op_alloc();
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec();
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf) {
+			rte_pktmbuf_free(ut_params->ibuf);
+			ut_params->ibuf = 0;
+		}
+		ut_params->ibuf = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			DATA_64_BYTES, INBOUND_SPI, 1);
+
+		rc = test_ipsec_crypto_op_alloc();
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec();
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not outside the replay window\n");
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window\n");
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	struct crypto_unittest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(INBOUND_SPI, RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+			RTE_SECURITY_ACTION_TYPE_NONE, REPLAY_WIN_64, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "create_sa failed\n");
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, DATA_64_BYTES, INBOUND_SPI, 1);
+
+	rc = test_ipsec_crypto_op_alloc();
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec();
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf) {
+			rte_pktmbuf_free(ut_params->ibuf);
+			ut_params->ibuf = 0;
+		}
+		ut_params->ibuf = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			DATA_64_BYTES, INBOUND_SPI, 1);
+
+		rc = test_ipsec_crypto_op_alloc();
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec();
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window\n");
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window\n");
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params);
+
+	destroy_sa();
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite  = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_ipsec_crypto_inb_null_null),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_ipsec_crypto_outb_null_null),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_ipsec_inline_inb_null_null),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_ipsec_inline_outb_null_null),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_ipsec_replay_inb_inside_null_null),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_ipsec_replay_inb_outside_null_null),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_ipsec_replay_inb_repeat_null_null),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.13.6

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing
  2018-10-03  9:37                         ` Jerin Jacob
@ 2018-10-09 18:24                           ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-10-09 18:24 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Joseph, Anoob, dev, Awal, Mohammad Abdul, Doherty, Declan,
	Narayana Prasad, akhil.goyal, hemant.agrawal, shreyansh.jain

Hi Jerin,

> >
> > > static inline rte_ipsec_add_tunnel_hdr(struct rte_mbuf *mbuf);
> > > static inline rte_ipsec_update_sqn(struct rte_mbuf *mbuf, &seq_no);
> > > ...
> > >
> > > For the regular use case, a fat
> > > rte_ipsec_(inbound/outbound)_(prepare/process) can be provided. The
> > > worker implemented for that case can directly call the function and
> > > forget about the other modes. For other vendors with varying
> > > capabilities, there can be multiple workers taking advantage of the hw
> > > features. For such workers, the static inline functions can be used as
> > > required. This gives vendors opportunity to pick and choose what they
> > > want from the ipsec lib. The worker to be used for that case will be
> > > determined based on the capabilities exposed by the PMDs.
> > >
> > > https://mails.dpdk.org/archives/dev/2018-June/103828.html
> > >
> > > The above email explains how multiple workers can be used with l2fwd.
> > >
> > > For this to work, the application & library code need to be modularised.
> > > Like what is being done in the following series,
> > > https://mails.dpdk.org/archives/dev/2018-June/103786.html
> > >
> > > This way one application can be made to run on multiple platforms, with
> > > the app being optimized for the platform on which it would run.
> > >
> > > /* ST SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - NO EVENTDEV*/
> > > worker1()
> > > {
> > >      while(true) {
> > >          nb_pkts = rte_eth_rx_burst();
> > >
> > >          if (nb_pkts != 0) {
> > >              /* Do lookup */
> > >              rte_ipsec_inbound_prepare();
> > >              rte_cryptodev_enqueue_burst();
> > >              /* Update in-flight */
> > >          }
> > >
> > >          if (in_flight) {
> > >              rte_cryptodev_dequeue_burst();
> > >              rte_ipsec_outbound_process();
> > >          }
> > >          /* route packet */
> > > }
> > >
> > > #include <rte_ipsec.h>   /* For IPsec lib static inlines */
> > >
> > > static inline rte_event_enqueue(struct rte_event *ev)
> > > {
> > >      ...
> > > }
> > >
> > > /* MT safe SA - RTE_SECURITY_ACTION_TYPE_NONE - CRYPTODEV - EVENTDEV)
> > > worker2()
> > > {
> > >      while(true) {
> > >          nb_pkts = rte_eth_rx_burst();
> > >
> > >          if (nb_pkts != 0) {
> > >              /* Do lookup */
> > >             rte_ipsec_add_tunnel_hdr(ev->mbuf);
> > >             rte_event_enqueue(ev)
> > >             rte_cryptodev_enqueue_burst(ev->mbuf);
> > >              /* Update in-flight */
> > >          }
> > >
> > >          if (in_flight) {
> > >              rte_cryptodev_dequeue_burst();
> > >              rte_ipsec_outbound_process();
> > >          }
> > >          /* route packet */
> > > }
> >
> > Hmm, not sure how these 2 cases really differ in terms of ipsec processing.
> > I do understand that in the second one we use events to propagate packets through the system,
> > and that eventdev might be smart enough to preserve packet ordering, etc.
> > But in terms of ipsec processing we have to do exactly the same for both cases.
> > Let's say for the example above (outbound, cryptodev):
> > a) lookup an SA
> > b) increment SA.SQN and check for overflow
> > c) generate IV
> > d) generate & fill ESP header/trailer, tunnel header
> > e) perform actual encrypt, generate digest
> >
> > So crypto_prepare() - deals with b)-d).
> > e) is handled by cryptodev.
> > Yes, step b) might need to be atomic, or might not -
> > it depends on the particular application design.
> > But in both cases (polling/eventdev) we do need all these steps to be performed.
> 
> The real question is: should the new library be aware of eventdev,
> or should the application decide it?

My thought right now - it shouldn't.
Looking at rte_event_crypto_adapter - right now it accepts crypto-ops as input
for both new and forward modes,
which means that prepare() has to be called by the app before doing the enqueue
(either straight to cryptodev or to eventdev).
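I.e. from the app's perspective it would be something like the following
(just a rough sketch; cdev_id/qid/evdev_id/port_id are placeholders):

	k = rte_ipsec_crypto_prepare(ss, mb, cop, num);
	/* poll mode: enqueue straight to the cryptodev */
	rte_cryptodev_enqueue_burst(cdev_id, qid, cop, k);
	/* or event mode: wrap each crypto-op into an event first */
	for (i = 0; i != k; i++)
		ev[i].event_ptr = cop[i];
	rte_event_enqueue_burst(evdev_id, port_id, ev, k);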
Anyway, I just submitted RFC v2 with process/prepare as function pointers
inside ipsec_session, please have a look.
Konstantin


> 
> If it is the former, then in order to complete step (b) we need rte_event
> also passed to the _process() API, and process() needs to be a function
> pointer in order to accommodate all combinations of different HW/SW
> capabilities.
> 
> 

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-10-18 17:37   ` Jerin Jacob
  2018-10-21 22:01     ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Jerin Jacob @ 2018-10-18 17:37 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dev, Mohammad Abdul Awal, Joseph, Anoob, Athreya, Narayana Prasad

-----Original Message-----
> Date: Tue, 9 Oct 2018 19:23:36 +0100
> From: Konstantin Ananyev <konstantin.ananyev@intel.com>
> To: dev@dpdk.org
> CC: Konstantin Ananyev <konstantin.ananyev@intel.com>, Mohammad Abdul Awal
>  <mohammad.abdul.awal@intel.com>
> Subject: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
> X-Mailer: git-send-email 1.7.0.7
> 

Hi Konstantin,

Overall it looks good, but I have some comments on integrating event mode
in a performance-effective way.

> 
> Introduce Security Association (SA-level) data-path API
> Operates at SA level, provides functions to:
>     - initialize/teardown SA object
>     - process inbound/outbound ESP/AH packets associated with the given SA
>       (decrypt/encrypt, authenticate, check integrity,
>       add/remove ESP/AH related headers and data, etc.).
> 
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  lib/librte_ipsec/Makefile              |   2 +
>  lib/librte_ipsec/meson.build           |   4 +-
>  lib/librte_ipsec/rte_ipsec.h           | 154 +++++++++++++++++++++++++++++++++
>  lib/librte_ipsec/rte_ipsec_version.map |   3 +
>  lib/librte_ipsec/sa.c                  |  98 ++++++++++++++++++++-
>  lib/librte_ipsec/sa.h                  |   3 +
>  lib/librte_ipsec/ses.c                 |  45 ++++++++++
>  7 files changed, 306 insertions(+), 3 deletions(-)
>  create mode 100644 lib/librte_ipsec/rte_ipsec.h
>  create mode 100644 lib/librte_ipsec/ses.c
> 
> diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
> index 7758dcc6d..79f187fae 100644
> --- a/lib/librte_ipsec/Makefile
> +++ b/lib/librte_ipsec/Makefile
> @@ -17,8 +17,10 @@ LIBABIVER := 1
> 
>  # all source are stored in SRCS-y
>  SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
> +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
> 
>  # install header files
> +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
>  SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
> 
>  include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
> index 52c78eaeb..6e8c6fabe 100644
> --- a/lib/librte_ipsec/meson.build
> +++ b/lib/librte_ipsec/meson.build
> @@ -3,8 +3,8 @@
> 
>  allow_experimental_apis = true
> 
> -sources=files('sa.c')
> +sources=files('sa.c', 'ses.c')
> 
> -install_headers = files('rte_ipsec_sa.h')
> +install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
> 
>  deps += ['mbuf', 'net', 'cryptodev', 'security']
> diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
> new file mode 100644
> index 000000000..5c9a1ed0b
> --- /dev/null
> +++ b/lib/librte_ipsec/rte_ipsec.h
> @@ -0,0 +1,154 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _RTE_IPSEC_H_
> +#define _RTE_IPSEC_H_
> +
> +/**
> + * @file rte_ipsec.h
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * RTE IPsec support.
> + * librte_ipsec provides a framework for data-path IPsec protocol
> + * processing (ESP/AH).
> + * IKEv2 protocol support right now is out of scope of that draft.
> + * Though it tries to define related API in such way, that it could be adopted
> + * by IKEv2 implementation.
> + */
> +
> +#include <rte_ipsec_sa.h>
> +#include <rte_mbuf.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +struct rte_ipsec_session;
> +
> +/**
> + * IPsec session specific functions that will be used to:
> + * - prepare - for input mbufs and given IPsec session prepare crypto ops
> + *   that can be enqueued into the cryptodev associated with given session
> + *   (see *rte_ipsec_crypto_prepare* below for more details).
> + * - process - finalize processing of packets after crypto-dev finished
> + *   with them or process packets that are subjects to inline IPsec offload
> + *   (see rte_ipsec_process for more details).
> + */
> +struct rte_ipsec_sa_func {
> +       uint16_t (*prepare)(const struct rte_ipsec_session *ss,
> +                               struct rte_mbuf *mb[],
> +                               struct rte_crypto_op *cop[],
> +                               uint16_t num);
> +       uint16_t (*process)(const struct rte_ipsec_session *ss,
> +                               struct rte_mbuf *mb[],
> +                               uint16_t num);

IMO, it makes sense to have separate function pointers for inbound and
outbound, so that the implementation would be clean and we can avoid an
"if" check.

> +};
> +
> +/**
> + * rte_ipsec_session is an aggregate structure that defines particular
> + * IPsec Security Association IPsec (SA) on given security/crypto device:
> + * - pointer to the SA object
> + * - security session action type
> + * - pointer to security/crypto session, plus other related data
> + * - session/device specific functions to prepare/process IPsec packets.
> + */
> +struct rte_ipsec_session {
> +
> +       /**
> +        * SA that session belongs to.
> +        * Note that multiple sessions can belong to the same SA.
> +        */
> +       struct rte_ipsec_sa *sa;
> +       /** session action type */
> +       enum rte_security_session_action_type type;
> +       /** session and related data */
> +       union {
> +               struct {
> +                       struct rte_cryptodev_sym_session *ses;
> +               } crypto;
> +               struct {
> +                       struct rte_security_session *ses;
> +                       struct rte_security_ctx *ctx;
> +                       uint32_t ol_flags;
> +               } security;
> +       };
> +       /** functions to prepare/process IPsec packets */
> +       struct rte_ipsec_sa_func func;
> +};

IMO, it can be cache aligned as it is used in the fast path.

> +
> +/**
> + * Checks that inside given rte_ipsec_session crypto/security fields
> + * are filled correctly and setups function pointers based on these values.
> + * @param ss
> + *   Pointer to the *rte_ipsec_session* object
> + * @return
> + *   - Zero if operation completed successfully.
> + *   - -EINVAL if the parameters are invalid.
> + */
> +int __rte_experimental
> +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> +
> +/**
> + * For input mbufs and given IPsec session prepare crypto ops that can be
> + * enqueued into the cryptodev associated with given session.
> + * expects that for each input packet:
> + *      - l2_len, l3_len are setup correctly
> + * Note that erroneous mbufs are not freed by the function,
> + * but are placed beyond last valid mbuf in the *mb* array.
> + * It is a user responsibility to handle them further.
> + * @param ss
> + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> + * @param mb
> + *   The address of an array of *num* pointers to *rte_mbuf* structures
> + *   which contain the input packets.
> + * @param cop
> + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> + *   structures.
> + * @param num
> + *   The maximum number of packets to process.
> + * @return
> + *   Number of successfully processed packets, with error code set in rte_errno.
> + */
> +static inline uint16_t __rte_experimental
> +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
> +       struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> +{
> +       return ss->func.prepare(ss, mb, cop, num);
> +}
> +
> +/**
> + * Finalise processing of packets after crypto-dev finished with them or
> + * process packets that are subjects to inline IPsec offload.
> + * Expects that for each input packet:
> + *      - l2_len, l3_len are setup correctly
> + * Output mbufs will be:
> + * inbound - decrypted & authenticated, ESP(AH) related headers removed,
> + * *l2_len* and *l3_len* fields are updated.
> + * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
> + * properly setup, if necessary - IP headers updated, ESP(AH) fields added,
> + * Note that erroneous mbufs are not freed by the function,
> + * but are placed beyond last valid mbuf in the *mb* array.
> + * It is a user responsibility to handle them further.
> + * @param ss
> + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> + * @param mb
> + *   The address of an array of *num* pointers to *rte_mbuf* structures
> + *   which contain the input packets.
> + * @param num
> + *   The maximum number of packets to process.
> + * @return
> + *   Number of successfully processed packets, with error code set in rte_errno.
> + */
> +static inline uint16_t __rte_experimental
> +rte_ipsec_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +       uint16_t num)
> +{
> +       return ss->func.process(ss, mb, num);
> +}

Since we have separate functions and different application paths for different
modes, and the arguments also differ, I think it is better to integrate
event mode like the following:

static inline uint16_t __rte_experimental
rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
{
       return ss->func.event_process(ss, ev, num);
}

This is to:
1) Avoid event mode application code duplication.
2) Get better I$ utilization, rather than scattering event-specific and
mbuf-specific handling across different code locations.
3) Get better performance, as inside one function pointer we can do things
in one shot rather than splitting the work between the application and the
library.
4) Event mode has different modes like ATQ, non-ATQ etc.; these
we can abstract through the existing function pointer scheme.
5) Atomicity & ordering problems can be sorted out internally with the events;
having one function pointer for event would be enough.

We will need some event-related info (event dev, port, atomic queue to
use, etc.) which needs to be added to rte_ipsec_session *ss as a union, so it
won't impact the normal mode - for example, something like the sketch below.
This way, all the required functionality of this library can be used with the
event-based model.
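
For example (hypothetical layout, field names made up):

struct rte_ipsec_session {
        ...
        /** event mode specific data, unused in poll mode */
        union {
                struct {
                        uint8_t dev_id;    /* event device */
                        uint8_t port_id;   /* event port to enqueue on */
                        uint32_t queue_id; /* atomic queue for ordering */
                } event;
                uint64_t reserved;
        };
        ...
};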

See below some implementation thoughts on this.

> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_IPSEC_H_ */
> diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
> +const struct rte_ipsec_sa_func *
> +ipsec_sa_func_select(const struct rte_ipsec_session *ss)
> +{
> +       static const struct rte_ipsec_sa_func tfunc[] = {
> +               [RTE_SECURITY_ACTION_TYPE_NONE] = {
> +                       .prepare = lksd_none_prepare,
> +                       .process = lksd_none_process,
> +               },
> +               [RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO] = {
> +                       .prepare = NULL,
> +                       .process = inline_crypto_process,
> +               },
> +               [RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL] = {
> +                       .prepare = NULL,
> +                       .process = inline_proto_process,
> +               },
> +               [RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL] = {
> +                       .prepare = lksd_proto_prepare,
> +                       .process = lksd_proto_process,
> +               },

             [RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL][EVENT] = {
                    .prepare = NULL,
                    .process = NULL,
                    .process_evt = lksd_event_process,
             },
             [RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL][EVENT] = {
                    .prepare = NULL,
                    .process = NULL,
                    .process_evt = inline_event_process,
             },

Probably add one more dimension to the array to choose event/poll?


> +       };
> +
> +       if (ss->type >= RTE_DIM(tfunc))
> +               return NULL;
> +
> +       return tfunc + ss->type;
> +}
> diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
> index ef030334c..13a5a68f3 100644
> --- a/lib/librte_ipsec/sa.h
> +++ b/lib/librte_ipsec/sa.h
> @@ -72,4 +72,7 @@ struct rte_ipsec_sa {
> 
>  } __rte_cache_aligned;
> 
> +const struct rte_ipsec_sa_func *
> +ipsec_sa_func_select(const struct rte_ipsec_session *ss);
> +
>  #endif /* _SA_H_ */
> diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
> new file mode 100644
> index 000000000..afefda937
> --- /dev/null
> +++ b/lib/librte_ipsec/ses.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#include <rte_ipsec.h>
> +#include "sa.h"
> +
> +static int
> +session_check(struct rte_ipsec_session *ss)
> +{
> +       if (ss == NULL)
> +               return -EINVAL;
> +
> +       if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
> +               if (ss->crypto.ses == NULL)
> +                       return -EINVAL;
> +       } else if (ss->security.ses == NULL || ss->security.ctx == NULL)
> +               return -EINVAL;
> +
> +       return 0;
> +}
> +
> +int __rte_experimental
> +rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
> +{

Probably add one more argument to choose event vs poll, so that the
above function pointers can be selected,

or have a different API like rte_ipsec_use_mode(EVENT) or some other
slow-path scheme to select the mode.
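
e.g. (hypothetical prototypes, just to illustrate):

enum rte_ipsec_op_mode {
        RTE_IPSEC_OP_POLL,
        RTE_IPSEC_OP_EVENT,
};

int rte_ipsec_session_prepare(struct rte_ipsec_session *ss,
        enum rte_ipsec_op_mode mode);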

> +       int32_t rc;
> +       const struct rte_ipsec_sa_func *fp;
> +
> +       rc = session_check(ss);
> +       if (rc != 0)
> +               return rc;
> +
> +       fp = ipsec_sa_func_select(ss);
> +       if (fp == NULL)
> +               return -ENOTSUP;
> +
> +       ss->func = fp[0];
> +
> +       if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
> +               ss->crypto.ses->userdata = (uintptr_t)ss;
> +       else
> +               ss->security.ses->userdata = (uintptr_t)ss;
> +
> +       return 0;
> +}
> --
> 2.13.6
> 

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-10-18 17:37   ` Jerin Jacob
@ 2018-10-21 22:01     ` Ananyev, Konstantin
  2018-10-24 12:03       ` Jerin Jacob
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-10-21 22:01 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, Awal, Mohammad Abdul, Joseph, Anoob, Athreya, Narayana Prasad


Hi Jerin,

> 
> Hi Konstantin,
> 
> Overall it looks good, but I have some comments on integrating event mode
> in a performance-effective way.
> 
> >
> > Introduce Security Association (SA-level) data-path API
> > Operates at SA level, provides functions to:
> >     - initialize/teardown SA object
> >     - process inbound/outbound ESP/AH packets associated with the given SA
> >       (decrypt/encrypt, authenticate, check integrity,
> >       add/remove ESP/AH related headers and data, etc.).
> >
> > Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >  lib/librte_ipsec/Makefile              |   2 +
> >  lib/librte_ipsec/meson.build           |   4 +-
> >  lib/librte_ipsec/rte_ipsec.h           | 154 +++++++++++++++++++++++++++++++++
> >  lib/librte_ipsec/rte_ipsec_version.map |   3 +
> >  lib/librte_ipsec/sa.c                  |  98 ++++++++++++++++++++-
> >  lib/librte_ipsec/sa.h                  |   3 +
> >  lib/librte_ipsec/ses.c                 |  45 ++++++++++
> >  7 files changed, 306 insertions(+), 3 deletions(-)
> >  create mode 100644 lib/librte_ipsec/rte_ipsec.h
> >  create mode 100644 lib/librte_ipsec/ses.c
> >
> > diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
> > index 7758dcc6d..79f187fae 100644
> > --- a/lib/librte_ipsec/Makefile
> > +++ b/lib/librte_ipsec/Makefile
> > @@ -17,8 +17,10 @@ LIBABIVER := 1
> >
> >  # all source are stored in SRCS-y
> >  SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
> >
> >  # install header files
> > +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
> >  SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
> >
> >  include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
> > index 52c78eaeb..6e8c6fabe 100644
> > --- a/lib/librte_ipsec/meson.build
> > +++ b/lib/librte_ipsec/meson.build
> > @@ -3,8 +3,8 @@
> >
> >  allow_experimental_apis = true
> >
> > -sources=files('sa.c')
> > +sources=files('sa.c', 'ses.c')
> >
> > -install_headers = files('rte_ipsec_sa.h')
> > +install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
> >
> >  deps += ['mbuf', 'net', 'cryptodev', 'security']
> > diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
> > new file mode 100644
> > index 000000000..5c9a1ed0b
> > --- /dev/null
> > +++ b/lib/librte_ipsec/rte_ipsec.h
> > @@ -0,0 +1,154 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018 Intel Corporation
> > + */
> > +
> > +#ifndef _RTE_IPSEC_H_
> > +#define _RTE_IPSEC_H_
> > +
> > +/**
> > + * @file rte_ipsec.h
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * RTE IPsec support.
> > + * librte_ipsec provides a framework for data-path IPsec protocol
> > + * processing (ESP/AH).
> > + * IKEv2 protocol support right now is out of scope of that draft.
> > + * Though it tries to define related API in such way, that it could be adopted
> > + * by IKEv2 implementation.
> > + */
> > +
> > +#include <rte_ipsec_sa.h>
> > +#include <rte_mbuf.h>
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +struct rte_ipsec_session;
> > +
> > +/**
> > + * IPsec session specific functions that will be used to:
> > + * - prepare - for input mbufs and given IPsec session prepare crypto ops
> > + *   that can be enqueued into the cryptodev associated with given session
> > + *   (see *rte_ipsec_crypto_prepare* below for more details).
> > + * - process - finalize processing of packets after crypto-dev finished
> > + *   with them or process packets that are subjects to inline IPsec offload
> > + *   (see rte_ipsec_process for more details).
> > + */
> > +struct rte_ipsec_sa_func {
> > +       uint16_t (*prepare)(const struct rte_ipsec_session *ss,
> > +                               struct rte_mbuf *mb[],
> > +                               struct rte_crypto_op *cop[],
> > +                               uint16_t num);
> > +       uint16_t (*process)(const struct rte_ipsec_session *ss,
> > +                               struct rte_mbuf *mb[],
> > +                               uint16_t num);
> 
> IMO, it makes sense to have separate function pointers for inbound and
> outbound, so that the implementation would be clean and we can avoid an
> "if" check.

The SA object by itself is unidirectional (either inbound or outbound), so
I don't think we need 2 function pointers here.
Yes, right now, inside the ipsec lib we select functions by action_type only,
but it doesn't have to stay that way.
It could be action_type, direction, mode (tunnel/transport), event/poll, etc.
Let's say inline_proto_process() could be split into
inline_proto_outb_process() and inline_proto_inb_process(), and
rte_ipsec_sa_func.process will point to the appropriate one.
I will probably change things that way for the next version.
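For illustration, a minimal sketch of such a split and the selection logic
(the sa_is_inbound() helper and function names are hypothetical, not part
of the patch):

static uint16_t
inline_proto_inb_process(const struct rte_ipsec_session *ss,
	struct rte_mbuf *mb[], uint16_t num);
static uint16_t
inline_proto_outb_process(const struct rte_ipsec_session *ss,
	struct rte_mbuf *mb[], uint16_t num);

static void
inline_proto_func_select(struct rte_ipsec_session *ss)
{
	/* SA is unidirectional, so a single 'process' pointer suffices */
	if (sa_is_inbound(ss->sa))
		ss->func.process = inline_proto_inb_process;
	else
		ss->func.process = inline_proto_outb_process;
}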

> 
> > +};
> > +
> > +/**
> > + * rte_ipsec_session is an aggregate structure that defines a particular
> > + * IPsec Security Association (SA) on a given security/crypto device:
> > + * - pointer to the SA object
> > + * - security session action type
> > + * - pointer to security/crypto session, plus other related data
> > + * - session/device specific functions to prepare/process IPsec packets.
> > + */
> > +struct rte_ipsec_session {
> > +
> > +       /**
> > +        * SA that session belongs to.
> > +        * Note that multiple sessions can belong to the same SA.
> > +        */
> > +       struct rte_ipsec_sa *sa;
> > +       /** session action type */
> > +       enum rte_security_session_action_type type;
> > +       /** session and related data */
> > +       union {
> > +               struct {
> > +                       struct rte_cryptodev_sym_session *ses;
> > +               } crypto;
> > +               struct {
> > +                       struct rte_security_session *ses;
> > +                       struct rte_security_ctx *ctx;
> > +                       uint32_t ol_flags;
> > +               } security;
> > +       };
> > +       /** functions to prepare/process IPsec packets */
> > +       struct rte_ipsec_sa_func func;
> > +};
> 
> IMO, it can be cache aligned as it is used in fast path.

Good point, will add.

> 
> > +
> > +/**
> > + * Checks that inside the given rte_ipsec_session the crypto/security fields
> > + * are filled correctly and sets up function pointers based on these values.
> > + * @param ss
> > + *   Pointer to the *rte_ipsec_session* object
> > + * @return
> > + *   - Zero if operation completed successfully.
> > + *   - -EINVAL if the parameters are invalid.
> > + */
> > +int __rte_experimental
> > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> > +
> > +/**
> > + * For input mbufs and a given IPsec session, prepare crypto ops that can
> > + * be enqueued into the cryptodev associated with the given session.
> > + * Expects that for each input packet:
> > + *      - l2_len, l3_len are set up correctly
> > + * Note that erroneous mbufs are not freed by the function,
> > + * but are placed beyond the last valid mbuf in the *mb* array.
> > + * It is the user's responsibility to handle them further.
> > + * @param ss
> > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > + * @param mb
> > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > + *   which contain the input packets.
> > + * @param cop
> > + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> > + *   structures.
> > + * @param num
> > + *   The maximum number of packets to process.
> > + * @return
> > + *   Number of successfully processed packets, with error code set in rte_errno.
> > + */
> > +static inline uint16_t __rte_experimental
> > +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
> > +       struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> > +{
> > +       return ss->func.prepare(ss, mb, cop, num);
> > +}
> > +
> > +/**
> > + * Finalise processing of packets after the crypto-dev has finished with
> > + * them, or process packets that are subject to inline IPsec offload.
> > + * Expects that for each input packet:
> > + *      - l2_len, l3_len are set up correctly
> > + * Output mbufs will be:
> > + * inbound - decrypted & authenticated, ESP(AH) related headers removed,
> > + * *l2_len* and *l3_len* fields updated.
> > + * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
> > + * properly set up; if necessary, IP headers updated, ESP(AH) fields added.
> > + * Note that erroneous mbufs are not freed by the function,
> > + * but are placed beyond the last valid mbuf in the *mb* array.
> > + * It is the user's responsibility to handle them further.
> > + * @param ss
> > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > + * @param mb
> > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > + *   which contain the input packets.
> > + * @param num
> > + *   The maximum number of packets to process.
> > + * @return
> > + *   Number of successfully processed packets, with error code set in rte_errno.
> > + */
> > +static inline uint16_t __rte_experimental
> > +rte_ipsec_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> > +       uint16_t num)
> > +{
> > +       return ss->func.process(ss, mb, num);
> > +}
> 
> Since we have separate functions and different application paths for
> different modes, and the arguments also differ, I think it is better to
> integrate event mode like the following:
> 
> static inline uint16_t __rte_experimental
> rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> {
>        return ss->func.event_process(ss, ev, num);
> }

To fulfill that, we can either have 2 separate function pointers:
uint16_t (*pkt_process)( const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],uint16_t num);
uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);

Or we can keep one function pointer, but change it to accept just an array of pointers:
uint16_t (*process)( const struct rte_ipsec_session *ss, void *in[],uint16_t num);
and then make session_prepare() choose a proper function based on the input.

I am ok with both schemes, but the second one seems a bit nicer to me.
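In code form, the two alternatives would roughly be (illustrative
declarations; only one of the two layouts would exist):

/* scheme 1: two typed pointers - strict compile-time input checking */
uint16_t (*pkt_process)(const struct rte_ipsec_session *ss,
	struct rte_mbuf *mb[], uint16_t num);
uint16_t (*event_process)(const struct rte_ipsec_session *ss,
	struct rte_event *ev[], uint16_t num);

/* scheme 2: one generic pointer - rte_ipsec_session_prepare() binds it
 * to an mbuf- or event-specific implementation, which casts in[] back
 * to its real type internally */
uint16_t (*process)(const struct rte_ipsec_session *ss,
	void *in[], uint16_t num);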

> 
> This is to:
> 1) Avoid event mode application code duplication.
> 2) Get better I$ utilization, rather than placing event-specific and
> mbuf-specific code at different locations.
> 3) Get better performance, as inside one function pointer we can do things
> in one shot rather than splitting the work between application and library.
> 4) Event mode has different variants like ATQ, non-ATQ etc.; these things
> we can abstract through the existing function pointer scheme.
> 5) Atomicity & ordering problems can be sorted out internally with the events;
> having one function pointer for events would be enough.
> 
> We will need some event related info (event dev, port, atomic queue to
> use etc.) which needs to be added in rte_ipsec_session *ss as a union so it
> won't impact the normal mode. This way, all the required functionality of this
> library can be used with an event-based model.
 
Yes, to support the event model, I imagine ipsec_session might need to
contain some event-specific data.
I am absolutely ok with that idea in general.
Related fields can be added to the ipsec_session structure as needed,
together with the actual event mode implementation.
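Purely as an illustration of that idea (field names are hypothetical and
not part of this patch):

struct rte_ipsec_session {
	/* ... existing fields: sa, type, crypto/security union ... */

	/* event-mode data; could be placed in a union so that it does
	 * not impact poll-mode usage */
	struct {
		uint8_t ev_dev;    /* event device id */
		uint8_t ev_port;   /* event port to enqueue on */
		uint8_t ev_queue;  /* atomic queue id for ordering */
	} event;

	struct rte_ipsec_sa_func func;
};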

>
> See below some implementation thoughts on this.
> 
> > +
> > +#ifdef __cplusplus
> > +}
> > +#endif
> > +
> > +#endif /* _RTE_IPSEC_H_ */
> > diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
> > [...]
> > +const struct rte_ipsec_sa_func *
> > +ipsec_sa_func_select(const struct rte_ipsec_session *ss)
> > +{
> > +       static const struct rte_ipsec_sa_func tfunc[] = {
> > +               [RTE_SECURITY_ACTION_TYPE_NONE] = {
> > +                       .prepare = lksd_none_prepare,
> > +                       .process = lksd_none_process,
> > +               },
> > +               [RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO] = {
> > +                       .prepare = NULL,
> > +                       .process = inline_crypto_process,
> > +               },
> > +               [RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL] = {
> > +                       .prepare = NULL,
> > +                       .process = inline_proto_process,
> > +               },
> > +               [RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL] = {
> > +                       .prepare = lksd_proto_prepare,
> > +                       .process = lksd_proto_process,
> > +               },
> 
>              [RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL][EVENT] = {
>                     .prepare = NULL,
>                     .process = NULL,
>                     .process_evt = lksd_event_process,
>              },
>              [RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL][EVENT] = {
>                     .prepare = NULL,
>                     .process = NULL,
>                     .process_evt = inline_event_process,
>              },
> 
> Probably add one more dimension in array to choose event/poll?

That's a static function/array; surely we can have as many dimensions here
as we need. As I said below, we will probably need to select a function
based on direction, mode, etc. anyway.
No problem to have extra logic to select event/mbuf based functions.
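A possible shape for such a table (the input-type dimension and
lksd_event_process are illustrative only):

enum { IPSEC_INPUT_PKT, IPSEC_INPUT_EVENT, IPSEC_INPUT_NUM };

static const struct rte_ipsec_sa_func
tfunc[RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL + 1][IPSEC_INPUT_NUM] = {
	[RTE_SECURITY_ACTION_TYPE_NONE][IPSEC_INPUT_PKT] = {
		.prepare = lksd_none_prepare,
		.process = lksd_none_process,
	},
	[RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL][IPSEC_INPUT_EVENT] = {
		.process = lksd_event_process,
	},
};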

> 
> 
> > +       };
> > +
> > +       if (ss->type >= RTE_DIM(tfunc))
> > +               return NULL;
> > +
> > +       return tfunc + ss->type;
> > +}
> > diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
> > index ef030334c..13a5a68f3 100644
> > --- a/lib/librte_ipsec/sa.h
> > +++ b/lib/librte_ipsec/sa.h
> > @@ -72,4 +72,7 @@ struct rte_ipsec_sa {
> >
> >  } __rte_cache_aligned;
> >
> > +const struct rte_ipsec_sa_func *
> > +ipsec_sa_func_select(const struct rte_ipsec_session *ss);
> > +
> >  #endif /* _SA_H_ */
> > diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
> > new file mode 100644
> > index 000000000..afefda937
> > --- /dev/null
> > +++ b/lib/librte_ipsec/ses.c
> > @@ -0,0 +1,45 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018 Intel Corporation
> > + */
> > +
> > +#include <rte_ipsec.h>
> > +#include "sa.h"
> > +
> > +static int
> > +session_check(struct rte_ipsec_session *ss)
> > +{
> > +       if (ss == NULL)
> > +               return -EINVAL;
> > +
> > +       if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
> > +               if (ss->crypto.ses == NULL)
> > +                       return -EINVAL;
> > +       } else if (ss->security.ses == NULL || ss->security.ctx == NULL)
> > +               return -EINVAL;
> > +
> > +       return 0;
> > +}
> > +
> > +int __rte_experimental
> > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
> > +{
> 
> Probably add one more argument to choose event vs poll so that
> above function pointers can be selected.
> 
> or have a different API like rte_ipsec_use_mode(EVENT) or some
> other slow-path scheme to select the mode.

Yes, we would need something here.
I think we can have some field inside ipsec_session that defines
which input type (mbuf/event) the session expects.
I suppose we would need such a field anyway - as you said above,
ipsec_session will most likely contain a union for event/non-event related data.
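E.g. something along these lines (the enum and field are hypothetical):

enum rte_ipsec_input_type {
	RTE_IPSEC_INPUT_PKT,	/* rte_mbuf based datapath */
	RTE_IPSEC_INPUT_EVENT,	/* rte_event based datapath */
};

/* inside struct rte_ipsec_session, next to the existing union */
enum rte_ipsec_input_type input_type;

/* rte_ipsec_session_prepare() would then key the function selection
 * on both ss->type (action type) and ss->input_type */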

Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-10-21 22:01     ` Ananyev, Konstantin
@ 2018-10-24 12:03       ` Jerin Jacob
  2018-10-28 20:37         ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Jerin Jacob @ 2018-10-24 12:03 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Awal, Mohammad Abdul, Joseph, Anoob, Athreya, Narayana Prasad

-----Original Message-----
> Date: Sun, 21 Oct 2018 22:01:48 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "dev@dpdk.org" <dev@dpdk.org>, "Awal, Mohammad Abdul"
>  <mohammad.abdul.awal@intel.com>, "Joseph, Anoob"
>  <Anoob.Joseph@cavium.com>, "Athreya, Narayana Prasad"
>  <NarayanaPrasad.Athreya@cavium.com>
> Subject: RE: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
> 
> 
> Hi Jerin,

Hi Konstantin,

> 
> >
> > > +
> > > +/**
> > > + * IPsec session specific functions that will be used to:
> > > + * - prepare - for input mbufs and given IPsec session prepare crypto ops
> > > + *   that can be enqueued into the cryptodev associated with given session
> > > + *   (see *rte_ipsec_crypto_prepare* below for more details).
> > > + * - process - finalize processing of packets after crypto-dev finished
> > > + *   with them or process packets that are subjects to inline IPsec offload
> > > + *   (see rte_ipsec_process for more details).
> > > + */
> > > +struct rte_ipsec_sa_func {
> > > +       uint16_t (*prepare)(const struct rte_ipsec_session *ss,
> > > +                               struct rte_mbuf *mb[],
> > > +                               struct rte_crypto_op *cop[],
> > > +                               uint16_t num);
> > > +       uint16_t (*process)(const struct rte_ipsec_session *ss,
> > > +                               struct rte_mbuf *mb[],
> > > +                               uint16_t num);
> >
> > IMO, It makes sense to have separate function pointers for inbound and
> > outbound so that, implementation would be clean and we can avoid a
> > "if" check.
> 
> SA object by itself is unidirectional (either inbound or outbound), so
> I don't think we need 2 function pointers here.
> Yes, right now, inside ipsec lib we select functions by action_type only,
> but it doesn't have to stay that way.
> It could be action_type, direction, mode (tunnel/transport), event/poll, etc.
> Let say inline_proto_process() could be split into:
> inline_proto_outb_process() and inline_proto_inb_process() and
> rte_ipsec_sa_func.process will point to appropriate one.
> I probably will change things that way for next version.

OK

> 
> >
> > > +};
> > > +
> > > +/**
> > > + * rte_ipsec_session is an aggregate structure that defines particular
> > > + * IPsec Security Association IPsec (SA) on given security/crypto device:
> > > + * - pointer to the SA object
> > > + * - security session action type
> > > + * - pointer to security/crypto session, plus other related data
> > > + * - session/device specific functions to prepare/process IPsec packets.
> > > + */
> > > +struct rte_ipsec_session {
> > > +
> > > +       /**
> > > +        * SA that session belongs to.
> > > +        * Note that multiple sessions can belong to the same SA.
> > > +        */
> > > +       struct rte_ipsec_sa *sa;
> > > +       /** session action type */
> > > +       enum rte_security_session_action_type type;
> > > +       /** session and related data */
> > > +       union {
> > > +               struct {
> > > +                       struct rte_cryptodev_sym_session *ses;
> > > +               } crypto;
> > > +               struct {
> > > +                       struct rte_security_session *ses;
> > > +                       struct rte_security_ctx *ctx;
> > > +                       uint32_t ol_flags;
> > > +               } security;
> > > +       };
> > > +       /** functions to prepare/process IPsec packets */
> > > +       struct rte_ipsec_sa_func func;
> > > +};
> >
> > IMO, it can be cache aligned as it is used in fast path.
> 
> Good point, will add.

OK

> 
> >
> > > +
> > > +/**
> > > + * Checks that inside given rte_ipsec_session crypto/security fields
> > > + * are filled correctly and setups function pointers based on these values.
> > > + * @param ss
> > > + *   Pointer to the *rte_ipsec_session* object
> > > + * @return
> > > + *   - Zero if operation completed successfully.
> > > + *   - -EINVAL if the parameters are invalid.
> > > + */
> > > +int __rte_experimental
> > > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> > > +
> > > +/**
> > > + * For input mbufs and given IPsec session prepare crypto ops that can be
> > > + * enqueued into the cryptodev associated with given session.
> > > + * expects that for each input packet:
> > > + *      - l2_len, l3_len are setup correctly
> > > + * Note that erroneous mbufs are not freed by the function,
> > > + * but are placed beyond last valid mbuf in the *mb* array.
> > > + * It is a user responsibility to handle them further.
> > > + * @param ss
> > > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > > + * @param mb
> > > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > > + *   which contain the input packets.
> > > + * @param cop
> > > + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> > > + *   structures.
> > > + * @param num
> > > + *   The maximum number of packets to process.
> > > + * @return
> > > + *   Number of successfully processed packets, with error code set in rte_errno.
> > > + */
> > > +static inline uint16_t __rte_experimental
> > > +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
> > > +       struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> > > +{
> > > +       return ss->func.prepare(ss, mb, cop, num);
> > > +}
> > > +
> > static inline uint16_t __rte_experimental
> > rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> > {
> >        return ss->func.event_process(ss, ev, num);
> > }
> 
> To fulfill that, we can either have 2 separate function pointers:
> uint16_t (*pkt_process)( const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],uint16_t num);
> uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> 
> Or we can keep one function pointer, but change it to accept just array of pointers:
> uint16_t (*process)( const struct rte_ipsec_session *ss, void *in[],uint16_t num);
> and then make session_prepare() to choose a proper function based on input.
> 
> I am ok with both schemes, but second one seems a bit nicer to me.

How about the best of both worlds, i.e. save space and enable compile-time
checks with an anonymous union of both functions:

RTE_STD_C11
union {
	uint16_t (*pkt_process)( const struct rte_ipsec_session *ss,struct rte_mbuf *mb[],uint16_t num);
	uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
};

> 
> >
> > This is to,
> > 1) Avoid Event mode application code duplication
> > 2) Better I$ utilization rather moving event specific and mbuff
> > specific at different code locations
> > 3) Better performance as inside one function pointer we can do things
> > in one shot rather splitting the work to application and library.
> > 4) Event mode has different modes like ATQ, non ATQ etc, These things
> > we can abstract through exiting function pointer scheme.
> > 5) atomicity & ordering problems can be sorted out internally with the events,
> > having one function pointer for event would be enough.
> >
> > We will need some event related info (event dev, port, atomic queue to
> > use etc) which need to be added in rte_ipsec_session *ss as UNION so it
> > wont impact the normal mode. This way, all the required functionality of this library
> > can be used with event-based model.
> 
> Yes, to support event model, I imagine ipsec_session might need to
> contain some event specific data.
> I am absolutely ok with that idea in general.
> Related fields can be added to the ipsec_session structure as needed,
> together with actual event mode implementation.

OK

> 
> >
> > See below some implementation thoughts on this.
> >
> > > +
> > > +#ifdef __cplusplus
> > > +}
> > > +#endif
> > > +
> > > +#endif /* _RTE_IPSEC_H_ */
> > > diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
> > > +const struct rte_ipsec_sa_func *
> > > +ipsec_sa_func_select(const struct rte_ipsec_session *ss)
> > > +{
> > > +       static const struct rte_ipsec_sa_func tfunc[] = {
> > > +               [RTE_SECURITY_ACTION_TYPE_NONE] = {
> > > +                       .prepare = lksd_none_prepare,
> > > +                       .process = lksd_none_process,
> > > +               },
> > > +               [RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO] = {
> > > +                       .prepare = NULL,
> > > +                       .process = inline_crypto_process,
> > > +               },
> > > +               [RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL] = {
> > > +                       .prepare = NULL,
> > > +                       .process = inline_proto_process,
> > > +               },
> > > +               [RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL] = {
> > > +                       .prepare = lksd_proto_prepare,
> > > +                       .process = lksd_proto_process,
> > > +               },
> >
> >              [RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL][EVENT] = {
> >                     .prepare = NULL,
> >                     .process = NULL,
> >                     .process_evt = lksd_event_process,
> >              },
> >              [RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL][EVENT] = {
> >                     .prepare = NULL,
> >                     .process = NULL,
> >                     .process_evt = inline_event_process,
> >              },
> >
> > Probably add one more dimension in array to choose event/poll?
> 
> That's a static function/array, surely we can have here as many dimensions as we need to.
> As I said below, will probably need to select a function based on direction, mode, etc. anyway.
> NP to have extra logic to select event/mbuf based functions.

OK

> 
> >
> >
> > > +       };
> > > +
> > > +       if (ss->type >= RTE_DIM(tfunc))
> > > +               return NULL;
> > > > [...]
> > > > +int __rte_experimental
> > > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
> > > +{
> >
> > Probably add one more argument to choose event vs poll so that
> > above function pointers can be selected.
> >
> > or have different API like rte_ipsec_use_mode(EVENT) or API
> > other slow path scheme to select the mode.
> 
> Yes, we would need something here.
> I think we can have some field inside ipsec_session that defines
> which input types (mbuf/event) session expects.
> I suppose we would need such field anyway - as you said above,
> ipsec_session most likely will contain a union for event/non-event related data.

OK

> 
> Konstantin
> 

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-10-24 12:03       ` Jerin Jacob
@ 2018-10-28 20:37         ` Ananyev, Konstantin
  2018-10-29 10:19           ` Jerin Jacob
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-10-28 20:37 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, Awal, Mohammad Abdul, Joseph, Anoob, Athreya, Narayana Prasad


Hi Jerin,

> > > > +
> > > > +/**
> > > > + * Checks that inside given rte_ipsec_session crypto/security fields
> > > > + * are filled correctly and setups function pointers based on these values.
> > > > + * @param ss
> > > > + *   Pointer to the *rte_ipsec_session* object
> > > > + * @return
> > > > + *   - Zero if operation completed successfully.
> > > > + *   - -EINVAL if the parameters are invalid.
> > > > + */
> > > > +int __rte_experimental
> > > > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> > > > +
> > > > +/**
> > > > + * For input mbufs and given IPsec session prepare crypto ops that can be
> > > > + * enqueued into the cryptodev associated with given session.
> > > > + * expects that for each input packet:
> > > > + *      - l2_len, l3_len are setup correctly
> > > > + * Note that erroneous mbufs are not freed by the function,
> > > > + * but are placed beyond last valid mbuf in the *mb* array.
> > > > + * It is a user responsibility to handle them further.
> > > > + * @param ss
> > > > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > > > + * @param mb
> > > > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > > > + *   which contain the input packets.
> > > > + * @param cop
> > > > + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> > > > + *   structures.
> > > > + * @param num
> > > > + *   The maximum number of packets to process.
> > > > + * @return
> > > > + *   Number of successfully processed packets, with error code set in rte_errno.
> > > > + */
> > > > +static inline uint16_t __rte_experimental
> > > > +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
> > > > +       struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> > > > +{
> > > > +       return ss->func.prepare(ss, mb, cop, num);
> > > > +}
> > > > +
> > > static inline uint16_t __rte_experimental
> > > rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> > > {
> > >        return ss->func.event_process(ss, ev, num);
> > > }
> >
> > To fulfill that, we can either have 2 separate function pointers:
> > uint16_t (*pkt_process)( const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],uint16_t num);
> > uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> >
> > Or we can keep one function pointer, but change it to accept just array of pointers:
> > uint16_t (*process)( const struct rte_ipsec_session *ss, void *in[],uint16_t num);
> > and then make session_prepare() to choose a proper function based on input.
> >
> > I am ok with both schemes, but second one seems a bit nicer to me.
> 
> How about best of both worlds, i.e save space and enable compile check
> by anonymous union of both functions
> 
> RTE_STD_C11
> union {
> 	uint16_t (*pkt_process)( const struct rte_ipsec_session *ss,struct rte_mbuf *mb[],uint16_t num);
> 	uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> };
> 

Yes, it is definitely possible, but then we still need 2 API functions,
depending on the input type, i.e.:

static inline uint16_t __rte_experimental
rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
 {
        return ss->func.event_process(ss, ev, num);
}

static inline uint16_t __rte_experimental
rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num)
 {
        return ss->func.pkt_process(ss, mb, num);
}

While if we'll have void *[], we can have just one function for both cases:

static inline uint16_t __rte_experimental
rte_ipsec_process(const struct rte_ipsec_session *ss, void *in[], uint16_t num)
 {
        return ss->func.process(ss, in, num);
}

Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-10-28 20:37         ` Ananyev, Konstantin
@ 2018-10-29 10:19           ` Jerin Jacob
  2018-10-30 13:53             ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Jerin Jacob @ 2018-10-29 10:19 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Awal, Mohammad Abdul, Joseph, Anoob, Athreya, Narayana Prasad

-----Original Message-----
> Date: Sun, 28 Oct 2018 20:37:23 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "dev@dpdk.org" <dev@dpdk.org>, "Awal, Mohammad Abdul"
>  <mohammad.abdul.awal@intel.com>, "Joseph, Anoob"
>  <Anoob.Joseph@cavium.com>, "Athreya, Narayana Prasad"
>  <NarayanaPrasad.Athreya@cavium.com>
> Subject: RE: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
> 

> 
> Hi Jerin,

Hi Konstantin,

> 
> > > > > +
> > > > > +/**
> > > > > + * Checks that inside given rte_ipsec_session crypto/security fields
> > > > > + * are filled correctly and setups function pointers based on these values.
> > > > > + * @param ss
> > > > > + *   Pointer to the *rte_ipsec_session* object
> > > > > + * @return
> > > > > + *   - Zero if operation completed successfully.
> > > > > + *   - -EINVAL if the parameters are invalid.
> > > > > + */
> > > > > +int __rte_experimental
> > > > > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> > > > > +
> > > > > +/**
> > > > > + * For input mbufs and given IPsec session prepare crypto ops that can be
> > > > > + * enqueued into the cryptodev associated with given session.
> > > > > + * expects that for each input packet:
> > > > > + *      - l2_len, l3_len are setup correctly
> > > > > + * Note that erroneous mbufs are not freed by the function,
> > > > > + * but are placed beyond last valid mbuf in the *mb* array.
> > > > > + * It is a user responsibility to handle them further.
> > > > > + * @param ss
> > > > > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > > > > + * @param mb
> > > > > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > > > > + *   which contain the input packets.
> > > > > + * @param cop
> > > > > + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> > > > > + *   structures.
> > > > > + * @param num
> > > > > + *   The maximum number of packets to process.
> > > > > + * @return
> > > > > + *   Number of successfully processed packets, with error code set in rte_errno.
> > > > > + */
> > > > > +static inline uint16_t __rte_experimental
> > > > > +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
> > > > > +       struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> > > > > +{
> > > > > +       return ss->func.prepare(ss, mb, cop, num);
> > > > > +}
> > > > > +
> > > > static inline uint16_t __rte_experimental
> > > > rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> > > > {
> > > >        return ss->func.event_process(ss, ev, num);
> > > > }
> > >
> > > To fulfill that, we can either have 2 separate function pointers:
> > > uint16_t (*pkt_process)( const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],uint16_t num);
> > > uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> > >
> > > Or we can keep one function pointer, but change it to accept just array of pointers:
> > > uint16_t (*process)( const struct rte_ipsec_session *ss, void *in[],uint16_t num);
> > > and then make session_prepare() to choose a proper function based on input.
> > >
> > > I am ok with both schemes, but second one seems a bit nicer to me.
> >
> > How about best of both worlds, i.e save space and enable compile check
> > by anonymous union of both functions
> >
> > RTE_STD_C11
> > union {
> >       uint16_t (*pkt_process)( const struct rte_ipsec_session *ss,struct rte_mbuf *mb[],uint16_t num);
> >       uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> > };
> >
> 
> Yes, it is definitely possible, but then we still need 2 API functions,
> Depending on input type, i.e:
> 
> static inline uint16_t __rte_experimental
> rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
>  {
>         return ss->func.event_process(ss, ev, num);
> }
> 
> static inline uint16_t __rte_experimental
> rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num)
>  {
>         return ss->func.pkt_process(ss, mb, num);
> }
> 
> While if we'll have void *[], we can have just one function for both cases:
> 
> static inline uint16_t __rte_experimental
> rte_ipsec_process(const struct rte_ipsec_session *ss, void *in[], uint16_t num)
>  {
>         return ss->func.process(ss, in, num);
> }

Since they will be called from different application code paths, I would
prefer to have separate functions to allow strict compiler checks.
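To illustrate the point (BURST and n are placeholders): with typed
prototypes the compiler rejects a mismatched input array, while the
void *[] variant compiles either way:

struct rte_event *ev[BURST];

rte_ipsec_pkt_process(ss, ev, n);	/* error: incompatible pointer type */
rte_ipsec_process(ss, (void **)ev, n);	/* compiles, even if the session
					 * was prepared for mbuf input */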



> 
> Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-10-29 10:19           ` Jerin Jacob
@ 2018-10-30 13:53             ` Ananyev, Konstantin
  2018-10-31  6:37               ` Jerin Jacob
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-10-30 13:53 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, Awal, Mohammad Abdul, Joseph, Anoob, Athreya, Narayana Prasad


Hi Jerin,
 

> > > > > > +
> > > > > > +/**
> > > > > > + * Checks that inside given rte_ipsec_session crypto/security fields
> > > > > > + * are filled correctly and setups function pointers based on these values.
> > > > > > + * @param ss
> > > > > > + *   Pointer to the *rte_ipsec_session* object
> > > > > > + * @return
> > > > > > + *   - Zero if operation completed successfully.
> > > > > > + *   - -EINVAL if the parameters are invalid.
> > > > > > + */
> > > > > > +int __rte_experimental
> > > > > > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> > > > > > +
> > > > > > +/**
> > > > > > + * For input mbufs and given IPsec session prepare crypto ops that can be
> > > > > > + * enqueued into the cryptodev associated with given session.
> > > > > > + * expects that for each input packet:
> > > > > > + *      - l2_len, l3_len are setup correctly
> > > > > > + * Note that erroneous mbufs are not freed by the function,
> > > > > > + * but are placed beyond last valid mbuf in the *mb* array.
> > > > > > + * It is a user responsibility to handle them further.
> > > > > > + * @param ss
> > > > > > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > > > > > + * @param mb
> > > > > > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > > > > > + *   which contain the input packets.
> > > > > > + * @param cop
> > > > > > + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> > > > > > + *   structures.
> > > > > > + * @param num
> > > > > > + *   The maximum number of packets to process.
> > > > > > + * @return
> > > > > > + *   Number of successfully processed packets, with error code set in rte_errno.
> > > > > > + */
> > > > > > +static inline uint16_t __rte_experimental
> > > > > > +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
> > > > > > +       struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> > > > > > +{
> > > > > > +       return ss->func.prepare(ss, mb, cop, num);
> > > > > > +}
> > > > > > +
> > > > > static inline uint16_t __rte_experimental
> > > > > rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> > > > > {
> > > > >        return ss->func.event_process(ss, ev, num);
> > > > > }
> > > >
> > > > To fulfill that, we can either have 2 separate function pointers:
> > > > uint16_t (*pkt_process)( const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],uint16_t num);
> > > > uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> > > >
> > > > Or we can keep one function pointer, but change it to accept just array of pointers:
> > > > uint16_t (*process)( const struct rte_ipsec_session *ss, void *in[],uint16_t num);
> > > > and then make session_prepare() to choose a proper function based on input.
> > > >
> > > > I am ok with both schemes, but second one seems a bit nicer to me.
> > >
> > > How about best of both worlds, i.e save space and enable compile check
> > > by anonymous union of both functions
> > >
> > > RTE_STD_C11
> > > union {
> > >       uint16_t (*pkt_process)( const struct rte_ipsec_session *ss,struct rte_mbuf *mb[],uint16_t num);
> > >       uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> > > };
> > >
> >
> > Yes, it is definitely possible, but then we still need 2 API functions,
> > Depending on input type, i.e:
> >
> > static inline uint16_t __rte_experimental
> > rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> >  {
> >         return ss->func.event_process(ss, ev, num);
> > }
> >
> > static inline uint16_t __rte_experimental
> > rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num)
> >  {
> >         return ss->func.pkt_process(ss, mb, num);
> > }
> >
> > While if we'll have void *[], we can have just one function for both cases:
> >
> > static inline uint16_t __rte_experimental
> > rte_ipsec_process(const struct rte_ipsec_session *ss, void *in[], uint16_t num)
> >  {
> >         return ss->func.process(ss, in, num);
> > }
> 
> Since it will be called from different application code path. I would
> prefer to have separate functions to allow strict compiler check.
> 

Ok, let's keep them separate, no problem with that.
I'll rename ipsec_(prepare|process) to ipsec_pkt_(prepare|process),
so you guys can add '_event_' functions later.
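The resulting prototypes, as they then appear in the v1 series below
(the '_event_' counterpart is hypothetical, to be added later):

static inline uint16_t
rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num);

static inline uint16_t
rte_ipsec_pkt_process(const struct rte_ipsec_session *ss,
	struct rte_mbuf *mb[], uint16_t num);

static inline uint16_t
rte_ipsec_event_process(const struct rte_ipsec_session *ss,
	struct rte_event *ev[], uint16_t num);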
Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
  2018-10-30 13:53             ` Ananyev, Konstantin
@ 2018-10-31  6:37               ` Jerin Jacob
  0 siblings, 0 replies; 194+ messages in thread
From: Jerin Jacob @ 2018-10-31  6:37 UTC (permalink / raw)
  To: Ananyev, Konstantin
  Cc: dev, Awal, Mohammad Abdul, Joseph, Anoob, Athreya, Narayana Prasad

-----Original Message-----
> Date: Tue, 30 Oct 2018 13:53:30 +0000
> From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> CC: "dev@dpdk.org" <dev@dpdk.org>, "Awal, Mohammad Abdul"
>  <mohammad.abdul.awal@intel.com>, "Joseph, Anoob"
>  <Anoob.Joseph@cavium.com>, "Athreya, Narayana Prasad"
>  <NarayanaPrasad.Athreya@cavium.com>
> Subject: RE: [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API
> 
> 
> Hi Jerin,
> 
> 
> > > > > > > +
> > > > > > > +/**
> > > > > > > + * Checks that inside given rte_ipsec_session crypto/security fields
> > > > > > > + * are filled correctly and setups function pointers based on these values.
> > > > > > > + * @param ss
> > > > > > > + *   Pointer to the *rte_ipsec_session* object
> > > > > > > + * @return
> > > > > > > + *   - Zero if operation completed successfully.
> > > > > > > + *   - -EINVAL if the parameters are invalid.
> > > > > > > + */
> > > > > > > +int __rte_experimental
> > > > > > > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> > > > > > > +
> > > > > > > +/**
> > > > > > > + * For input mbufs and given IPsec session prepare crypto ops that can be
> > > > > > > + * enqueued into the cryptodev associated with given session.
> > > > > > > + * expects that for each input packet:
> > > > > > > + *      - l2_len, l3_len are setup correctly
> > > > > > > + * Note that erroneous mbufs are not freed by the function,
> > > > > > > + * but are placed beyond last valid mbuf in the *mb* array.
> > > > > > > + * It is a user responsibility to handle them further.
> > > > > > > + * @param ss
> > > > > > > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > > > > > > + * @param mb
> > > > > > > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > > > > > > + *   which contain the input packets.
> > > > > > > + * @param cop
> > > > > > > + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> > > > > > > + *   structures.
> > > > > > > + * @param num
> > > > > > > + *   The maximum number of packets to process.
> > > > > > > + * @return
> > > > > > > + *   Number of successfully processed packets, with error code set in rte_errno.
> > > > > > > + */
> > > > > > > +static inline uint16_t __rte_experimental
> > > > > > > +rte_ipsec_crypto_prepare(const struct rte_ipsec_session *ss,
> > > > > > > +       struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> > > > > > > +{
> > > > > > > +       return ss->func.prepare(ss, mb, cop, num);
> > > > > > > +}
> > > > > > > +
> > > > > > static inline uint16_t __rte_experimental
> > > > > > rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> > > > > > {
> > > > > >        return ss->func.event_process(ss, ev, num);
> > > > > > }
> > > > >
> > > > > To fulfill that, we can either have 2 separate function pointers:
> > > > > uint16_t (*pkt_process)( const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],uint16_t num);
> > > > > uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> > > > >
> > > > > Or we can keep one function pointer, but change it to accept just array of pointers:
> > > > > uint16_t (*process)( const struct rte_ipsec_session *ss, void *in[],uint16_t num);
> > > > > and then make session_prepare() to choose a proper function based on input.
> > > > >
> > > > > I am ok with both schemes, but second one seems a bit nicer to me.
> > > >
> > > > How about best of both worlds, i.e save space and enable compile check
> > > > by anonymous union of both functions
> > > >
> > > > RTE_STD_C11
> > > > union {
> > > >       uint16_t (*pkt_process)( const struct rte_ipsec_session *ss,struct rte_mbuf *mb[],uint16_t num);
> > > >       uint16_t (*event_process)( const struct rte_ipsec_session *ss, struct rte_event *ev[],uint16_t num);
> > > > };
> > > >
> > >
> > > Yes, it is definitely possible, but then we still need 2 API functions,
> > > Depending on input type, i.e:
> > >
> > > static inline uint16_t __rte_experimental
> > > rte_ipsec_event_process(const struct rte_ipsec_session *ss, struct rte_event *ev[], uint16_t num)
> > >  {
> > >         return ss->func.event_process(ss, ev, num);
> > > }
> > >
> > > static inline uint16_t __rte_experimental
> > > rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[], uint16_t num)
> > >  {
> > >         return ss->func.pkt_process(ss, mb, num);
> > > }
> > >
> > > While if we'll have void *[], we can have just one function for both cases:
> > >
> > > static inline uint16_t __rte_experimental
> > > rte_ipsec_process(const struct rte_ipsec_session *ss, void *in[], uint16_t num)
> > >  {
> > >         return ss->func.process(ss, in, num);
> > > }
> >
> > Since it will be called from different application code path. I would
> > prefer to have separate functions to allow strict compiler check.
> >
> 
> Ok, let's keep them separate, NP with that.
> I'll rename ipsec_(prepare|process) to ipsec_pkt_(prepare|process),
> so you guys can add '_event_' functions later.

OK

> Konstantin
> 

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 0/9] ipsec: new library for IPsec data-path processing
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (10 preceding siblings ...)
  2018-10-09 18:23 ` [dpdk-dev] [RFC v2 9/9] test/ipsec: introduce functional test Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

This patch series targets 19.02 release.

This patch series depends on the patch:
http://patches.dpdk.org/patch/48044/
to be applied first.

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch introduces a new library within DPDK: librte_ipsec.
The aim is to provide DPDK native high performance library for IPsec
data-path processing.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec processing API.
The library is concentrated on data-path protocol processing (ESP and AH);
IKE protocol(s) implementation is out of scope for that library.
Current patch introduces SA-level API.

SA (low) level API
==================

The API described below operates at the SA level.
It provides functionality that allows the user, for a given SA, to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  set up related mbuf fields (ol_flags, tx_offload, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is based on top of crypto-dev/security API and relies on them
to perform actual cipher and integrity checking.
To provide the ability to easily map crypto/security sessions to the related
IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of crypto-dev API (enqueue/deque model) we use
asynchronous API for IPsec packets destined to be processed
by crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

For packets destined for inline processing, though, no extra overhead
is required, and the synchronous API call rte_ipsec_pkt_process()
is sufficient for that case.
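Slightly expanded, the lookaside sequence above might look like the
following sketch (dev_id/qp_id setup and error handling omitted):

uint16_t k, n;

/* prepare and hand the packets over to the crypto device */
k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);

/* later: collect finished ops and finalize IPsec processing;
 * in general the mbufs are recovered from cop[i]->sym->m_src */
n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, num);
n = rte_ipsec_pkt_process(ss, mb, n);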

The current implementation supports all four currently defined rte_security
action types. To accommodate future custom implementations, a function
pointer model is used for both the *crypto_prepare* and *process*
implementations.

Implemented:
------------
- ESP tunnel mode support (both IPv4/IPv6)
- ESP transport mode support (both IPv4/IPv6)
- Supported algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL
- Anti-Replay window and ESN support
- Unit Test

TODO list
---------
- update examples/ipsec-secgw to use librte_ipsec
  (will be subject of a separate patch).

Konstantin Ananyev (9):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test

 config/common_base                     |    5 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  119 ++
 lib/librte_ipsec/iph.h                 |   63 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  156 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  166 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1387 +++++++++++++++
 lib/librte_ipsec/sa.h                  |   98 ++
 lib/librte_ipsec/ses.c                 |   45 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2209 ++++++++++++++++++++++++
 23 files changed, 4864 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (11 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-16 10:23   ` Mohammad Abdul Awal
                     ` (10 more replies)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
                   ` (7 subsequent siblings)
  20 siblings, 11 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
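As a usage illustration (not part of the patch; my_sa is a hypothetical
application type), the upper layer can stash a pointer in the new field
and recover it on the crypto-op completion path:

/* at session setup time */
ses->opaque_data = (uintptr_t)my_sa;

/* on completion, map the op back to the application's SA object */
struct my_sa *sa =
	(struct my_sa *)(uintptr_t)cop->sym->session->opaque_data;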
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 2/9] security: add opaque userdata pointer into security session
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (12 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-16 10:24   ` Mohammad Abdul Awal
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 3/9] net: add ESP trailer structure definition Konstantin Ananyev
                   ` (6 subsequent siblings)
  20 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, akhil.goyal, declan.doherty

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_security/rte_security.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 1431b4df1..07b315512 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -318,6 +318,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 3/9] net: add ESP trailer structure definition
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (13 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-16 10:22   ` Mohammad Abdul Awal
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 4/9] lib: introduce ipsec library Konstantin Ananyev
                   ` (5 subsequent siblings)
  20 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
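As an illustration of how the new structure might be used on the inbound
path (a sketch; assumes the trailer sits in the first segment and icv_len
is the SA's ICV size):

struct esp_tail *espt;

espt = rte_pktmbuf_mtod_offset(mb, struct esp_tail *,
	mb->pkt_len - icv_len - sizeof(*espt));
/* espt->pad_len padding bytes precede the trailer; espt->next_proto
 * identifies the inner header (e.g. IPPROTO_IPIP for tunnelled IPv4) */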
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 4/9] lib: introduce ipsec library
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (14 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 3/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 5/9] ipsec: add SA data-path API Konstantin Ananyev
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec processing API.
That initial commit provides some base API to manage
IPsec Security Association (SA) object.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 139 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 307 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  77 +++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 11 files changed, 626 insertions(+)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/config/common_base b/config/common_base
index d12ae98bc..32499d772 100644
--- a/config/common_base
+++ b/config/common_base
@@ -925,6 +925,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index b7370ef97..5dc774604 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -106,6 +106,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..7758dcc6d
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..4471814f9
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * for a given window size, calculate the required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size for the
+ * structure that holds the replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
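
A quick numeric walk-through of the two helpers above (illustrative only,
assuming a 64B cache line):

    wsz = 128:  RTE_ALIGN_MUL_CEIL(128, WINDOW_BUCKET_SIZE) / 64 = 2
                rte_align32pow2(2) = 2, RTE_MAX(2, WINDOW_BUCKET_MIN) = 2
    rsn_size(2) = RTE_ALIGN_CEIL(8 + 2 * 8, RTE_CACHE_LINE_SIZE) = 64 bytes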
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..4e36fd99b
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	uint32_t replay_win_sz;
+	/**< window size to enable sequence replay attack handling.
+	 * Replay checking is disabled if the window size is 0.
+	 */
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG_IPV,
+	RTE_SATP_LOG_PROTO,
+	RTE_SATP_LOG_DIR,
+	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
+
+/**
+ * Get the type of the given SA.
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise the SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * Initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   Size of the provided buffer for the SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * Cleanup the given SA.
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
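
A small sketch of how these type bits are meant to be consumed
(illustrative only; sa_is_inbound_tunnel is a hypothetical helper):

    static int
    sa_is_inbound_tunnel(const struct rte_ipsec_sa *sa)
    {
        uint64_t type = rte_ipsec_sa_type(sa);

        /* direction is a single bit, mode is a 2-bit field */
        return (type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB &&
            (type & RTE_IPSEC_SATP_MODE_MASK) != RTE_IPSEC_SATP_MODE_TRANS;
    }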
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..c814e5384
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,307 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+
+static int
+check_crypto_xform(struct crypto_xform *xform)
+{
+	/* either aead, or both auth and cipher, must be non-NULL */
+	if (xform->aead != NULL) {
+		if (xform->auth != NULL || xform->cipher != NULL)
+			return -EINVAL;
+	} else if (xform->auth == NULL || xform->cipher == NULL) {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+fill_crypto_xform(struct crypto_xform *xform,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf;
+
+	memset(xform, 0, sizeof(*xform));
+
+	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
+		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->auth != NULL)
+				return -EINVAL;
+			xform->auth = &xf->auth;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->cipher != NULL)
+				return -EINVAL;
+			xform->cipher = &xf->cipher;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+			if (xform->aead != NULL)
+				return -EINVAL;
+			xform->aead = &xf->aead;
+		} else
+			return -EINVAL;
+	}
+
+	return check_crypto_xform(xform);
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static uint64_t
+fill_sa_type(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	} else {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	}
+
+	return tp;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may change when new algorithms are added */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may change when new algorithms are added */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = 4;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = 4;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_MAX_IV_SIZE;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	type = fill_sa_type(prm);
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	type = fill_sa_type(prm);
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, prm);
+	if (rc != 0)
+		return rc;
+
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0) {
+		rte_ipsec_sa_fini(sa);
+		return rc;
+	}
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
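
For reference, a sketch of the two crypto_xform layouts that
fill_crypto_xform() accepts (illustrative only; key and other algorithm
parameters are omitted):

    /* AEAD case: a single xform */
    struct rte_crypto_sym_xform gcm = {
        .type = RTE_CRYPTO_SYM_XFORM_AEAD,
        .aead.algo = RTE_CRYPTO_AEAD_AES_GCM,
        .next = NULL,
    };

    /* CRYPT+AUTH case: cipher and auth xforms chained via .next */
    struct rte_crypto_sym_xform auth = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .next = NULL,
    };
    struct rte_crypto_sym_xform cipher = {
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC,
        .next = &auth,
    };

    /* then either: prm->crypto_xform = &gcm;
     * or:          prm->crypto_xform = &cipher; */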
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..5d113891a
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index bb7f443f9..69684ef14 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'meter', 'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	# ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d979d..f4cd75252 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 5/9] ipsec: add SA data-path API
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (15 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 4/9] lib: introduce ipsec library Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 6/9] ipsec: implement " Konstantin Ananyev
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce the Security Association (SA-level) data-path API.
It operates at the SA level and provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).
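
As a rough sketch of the intended lookaside call flow (illustrative only;
BURST, cop_pool, dev_id and qp_id are placeholders, and op allocation
failures and partial enqueues are ignored):

    struct rte_mbuf *mb[BURST];
    struct rte_crypto_op *cop[BURST];
    uint16_t n, k;

    /* n = number of input packets that belong to the session ss */
    rte_crypto_op_bulk_alloc(cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC, cop, n);
    k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, n);
    rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);

    /* ... later, after the crypto-dev has done its job ... */
    n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, BURST);
    /* gather mbufs back via cop[i]->sym->m_src, then: */
    k = rte_ipsec_pkt_process(ss, mb, n);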

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 154 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  45 ++++++++
 7 files changed, 230 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 7758dcc6d..79f187fae 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..429d4bf38
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ * IKEv2 protocol support is currently out of scope of this library,
+ * though the API is defined in such a way that it could be adopted
+ * by an IKEv2 implementation.
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after the crypto-dev has
+ *   finished with them, or process packets that are subject to inline
+ *   IPsec offload (see *rte_ipsec_pkt_process* below for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that the crypto/security fields inside the given rte_ipsec_session
+ * are filled correctly and sets up function pointers based on these values.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len and l3_len are set up correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets; on failure rte_errno is set.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after the crypto-dev has finished with them,
+ * or process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len and l3_len are set up correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up; if necessary, IP headers updated and ESP(AH) fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets; on failure rte_errno is set.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
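
A minimal sketch of tying an SA to a plain cryptodev session (illustrative
only; sa and crypto_ses are assumed to have been created beforehand, and
the per-type handlers are provided by the follow-up patch):

    struct rte_ipsec_session ss = { 0 };

    ss.sa = sa;
    ss.type = RTE_SECURITY_ACTION_TYPE_NONE; /* lookaside crypto-dev */
    ss.crypto.ses = crypto_ses;

    if (rte_ipsec_session_prepare(&ss) != 0)
        return -1; /* invalid session, pkt_func is not set */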
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..d1c52d7ca 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,9 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_session_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index c814e5384..7f9baa602 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -305,3 +305,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 5d113891a..050a6d7ae 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -74,4 +74,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..562c1423e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
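
The opaque_data assignment above lets the application map each dequeued
crypto op back to its IPsec session, so ops can be grouped per session
before calling rte_ipsec_pkt_process(). A hedged sketch for the
RTE_SECURITY_ACTION_TYPE_NONE case (cop is a dequeued op):

    struct rte_ipsec_session *ss;
    struct rte_mbuf *mb;

    ss = (struct rte_ipsec_session *)(uintptr_t)
        cop->sym->session->opaque_data;
    mb = cop->sym->m_src;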
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 6/9] ipsec: implement SA data-path API
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (16 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 5/9] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-20  1:03   ` Zhang, Qi Z
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                   ` (2 subsequent siblings)
  20 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types, the SQN check/update is done in SW (inside the
library). For the last two types, it is the HW/PMD responsibility.
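
For the inline action types the call pattern is simpler (an illustrative
sketch): no crypto ops are involved and the library is invoked once per
burst, e.g. for inbound packets received on a port with inline crypto
enabled:

    /* n packets from rte_eth_rx_burst() that belong to the session ss */
    k = rte_ipsec_pkt_process(ss, mb, n);
    /* mb[0..k-1] were processed; mb[k..n-1] failed and rte_errno is set */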

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/crypto.h    |  119 ++++
 lib/librte_ipsec/iph.h       |   63 ++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1050 +++++++++++++++++++++++++++++++++-
 5 files changed, 1461 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..98f9989af
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+ /*
+  * AES-GCM devices have some specific requirements for IV and AAD formats.
+  * Ideally this would be handled by the driver itself.
+  */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32;
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, section 5: AAD Construction.
+ * spi and sqn should already be converted into network byte order.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else
+		aad->sqn.u32 = sqn_low32(sqn);
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves the ICV down by 4B and inserts SQN.hibits.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves the ICV up by 4B and removes SQN.hibits.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
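
For reference, the layout that aead_gcm_iv_fill() produces follows the
RFC 4106 nonce format (4B salt from the SA plus the 8B per-packet IV),
with the 32-bit block counter set to 1 as GCM requires for 96-bit IVs.
An illustrative sketch (giv, ivp and sa as in the surrounding code):

    struct aead_gcm_iv giv;

    aead_gcm_iv_fill(&giv, ivp[0], sa->salt);
    /* memory layout: | salt (4B) | iv (8B) | cnt = htonl(1) (4B) | */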
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..c85bd2866
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len - l3len);
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 4471814f9..a33ff9cca 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * gets SQN.hi32 bits, SQN supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * gets SQN.low32 bits, SQN supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * gets SQN.low16 bits, SQN supposed to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * for given size, calculate required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC 4303 A2.1, determine the high-order bits of the sequence number.
+ * use 32bit arithmetic inside, return uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64-bit unsigned integers.
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To achieve ability to do multiple readers single writer for
+ * SA replay window information and sequence number (RSN)
+ * basic RCU schema is used:
+ * SA have 2 copies of RSN (one for readers, another for writers).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and writer.
+ * Writer is responsible to make a copy or reader RSN, update it
+ * and mark newly updated RSN as readers one.
+ * That approach is intended to minimize contention and cache sharing
+ * between writer and readers.
+ */
+
 /**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds the replay window and sequence number (RSN) information.
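
A quick numeric example of reconstruct_esn() above (illustrative):

    t   = 0x1_0000_0010  ->  th = 1, tl = 0x10
    w   = 64             ->  bl = tl - w + 1 = 0xffff_ffd1 (32-bit wrap)
    sqn = 0xffff_fff0    ->  tl < w - 1 and th != 0, so case B applies;
                             sqn >= bl, hence th -= 1
    result: (0 << 32) | 0xffff_fff0, i.e. a packet from just below the
    subspace boundary.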
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 7f9baa602..00b3c8044 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -192,6 +196,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -306,18 +311,1059 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, pdlen, pdofs, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = mb->pkt_len + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - mb->pkt_len;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		struct ipv4_hdr *l3h;
+		l3h = (struct ipv4_hdr *)(ph + sa->hdr_l3_off);
+		l3h->packet_id = sqn_low16(sqc);
+		l3h->total_length = rte_cpu_to_be_16(mb->pkt_len -
+			sa->hdr_l3_off);
+	} else {
+		struct ipv6_hdr *l3h;
+		l3h = (struct ipv6_hdr *)(ph + sa->hdr_l3_off);
+		l3h->payload_len = rte_cpu_to_be_16(mb->pkt_len -
+			sa->hdr_l3_off - sizeof(*l3h));
+	}
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM .
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy unprepared mbufs beyond the good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update ip header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy unprepared mbufs beyond the good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* we have to allocate space for AAD somewhere,
+	 * right now - just use free trailing space at the last segment.
+	 * Would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (again for IV space is already reserved inside cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy unprepared mbufs beyond the good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	return 0;
+}
+
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing was already done by the HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
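+		/* assign this packet its own SQN and derive the IV from it */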
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy mbufs that were not processed beyond the good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy mbufs that were not processed beyond the good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 7/9] ipsec: rework SA replay window/SQN for MT environment
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (17 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 6/9] ipsec: implement " Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 9/9] test/ipsec: introduce functional test Konstantin Ananyev
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

With these changes the functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
can be safely used in an MT environment, as long as the user can
guarantee that they obey the multiple readers/single writer model for
SQN and replay window operations.
To be more specific:
for an outbound SA there are no restrictions;
for an inbound SA the caller has to guarantee that at any given moment
only one thread is executing rte_ipsec_pkt_process() for a given SA.
Note that it is the caller's responsibility to maintain the correct
order of packets to be processed.
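
For illustration, a minimal sketch of this usage model from the
application side (the app_inb_sa wrapper, its lock and app_inb_process()
below are hypothetical application-side constructs, not library API):

	#include <rte_spinlock.h>
	#include <rte_mbuf.h>
	#include <rte_ipsec.h>

	/* hypothetical application wrapper around a shared inbound SA */
	struct app_inb_sa {
		struct rte_ipsec_session ss;	/* prepared ipsec session */
		rte_spinlock_t lock;		/* serializes process() calls */
	};

	/*
	 * crypto_prepare() may run on several threads concurrently,
	 * but process() for an inbound SA has to be serialized by the caller.
	 */
	static uint16_t
	app_inb_process(struct app_inb_sa *sa, struct rte_mbuf *mb[],
		uint16_t num)
	{
		uint16_t k;

		rte_spinlock_lock(&sa->lock);
		k = rte_ipsec_pkt_process(&sa->ss, mb, num);
		rte_spinlock_unlock(&sa->lock);
		return k;
	}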

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  27 ++++++++
 lib/librte_ipsec/sa.c           |  23 +++++--
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 176 insertions(+), 8 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index a33ff9cca..ee5e35978 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
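+	/* switch the writer to the shadow (inactive) copy of the RSN */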
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index 4e36fd99b..35a0afec1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -53,6 +53,27 @@ struct rte_ipsec_sa_prm {
 	 */
 };
 
+/**
+ * Indicates whether the SA needs 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means that the functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in an MT environment, as long as the user can
+ * guarantee that they obey the multiple readers/single writer model
+ * for SQN and replay window operations.
+ * To be more specific:
+ * for an outbound SA there are no restrictions;
+ * for an inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_pkt_process() for a given SA.
+ * Note that it is the caller's responsibility to maintain the correct
+ * order of packets to be processed.
+ * In other words, it is the caller's responsibility to serialize
+ * process() invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is an 64-bit value that contain the following information:
  * - IP version (IPv4/IPv6)
@@ -60,6 +81,7 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
  * ...
  */
 
@@ -68,6 +90,7 @@ enum {
 	RTE_SATP_LOG_PROTO,
 	RTE_SATP_LOG_DIR,
 	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_SQN = RTE_SATP_LOG_MODE + 2,
 	RTE_SATP_LOG_NUM
 };
 
@@ -88,6 +111,10 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG_SQN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 00b3c8044..d35ed836b 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -90,6 +90,9 @@ ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -136,6 +139,12 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm)
 			tp |= RTE_IPSEC_SATP_IPV4;
 	}
 
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	return tp;
 }
 
@@ -159,7 +168,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -305,7 +314,10 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		sa->replay.win_sz = prm->replay_win_sz;
 		sa->replay.nb_bucket = nb;
 		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+		sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+		if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+			sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+				((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb));
 	}
 
 	return sz;
@@ -810,7 +822,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -829,6 +841,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -973,7 +987,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -983,6 +997,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 050a6d7ae..7dc9933f1 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -28,7 +30,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -66,10 +72,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case the SA is handled by multiple threads, the *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 8/9] ipsec: helper functions to group completed crypto-ops
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (18 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 9/9] test/ipsec: introduce functional test Konstantin Ananyev
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.
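
For illustration, a minimal sketch of the intended usage after dequeueing
completed crypto-ops (devid, qid and BURST_SIZE are assumed to be set up
by the application):

	struct rte_crypto_op *cop[BURST_SIZE];
	struct rte_mbuf *mb[BURST_SIZE];
	struct rte_ipsec_group grp[BURST_SIZE];
	uint16_t i, n, ng;

	n = rte_cryptodev_dequeue_burst(devid, qid, cop, BURST_SIZE);

	/* sort mbufs from completed crypto-ops into per-session groups */
	ng = rte_ipsec_pkt_crypto_group(
		(const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, n);

	/* finish IPsec processing for each group with its own session */
	for (i = 0; i != ng; i++)
		rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);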

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 79f187fae..98c52f388 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 429d4bf38..0df7ea907 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -147,6 +147,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..d264d7e78
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take a crypto-op as input and extract a pointer to the
+ * related IPsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond mbufs for the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalise it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index d1c52d7ca..0f91fb134 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_session_prepare;
 	rte_ipsec_pkt_process;
@@ -8,6 +9,7 @@ EXPERIMENTAL {
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 
 	local: *;
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH 9/9] test/ipsec: introduce functional test
  2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                   ` (19 preceding siblings ...)
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-11-15 23:53 ` Konstantin Ananyev
  20 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-15 23:53 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal, Bernard Iremonger

Create functional test for librte_ipsec.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2209 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2215 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 554e9945f..d4f689417 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -48,6 +48,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -115,6 +116,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -179,6 +181,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..95a447174
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - obuf and ibuf usually point at the same mbuf,
+		 * so a check whether they share the same address is necessary
+		 * to avoid freeing the mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capablity(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capablity(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, pool, j);
+}
+
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->crypto_xforms = &ut_params->auth_xform;
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa*/
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc != 0)
+		return TEST_FAILED;
+
+	return rte_ipsec_session_prepare(&ut->ss[j]);
+}
+
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* group packets into 4 groups of different sizes, 2 per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j =  PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%lx\n", test_cfg[i].flags);
+	printf("	pkt_sz %lu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		r = j % 2;
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH 3/9] net: add ESP trailer structure definition
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 3/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-11-16 10:22   ` Mohammad Abdul Awal
  0 siblings, 0 replies; 194+ messages in thread
From: Mohammad Abdul Awal @ 2018-11-16 10:22 UTC (permalink / raw)
  To: Konstantin Ananyev, dev



On 15/11/2018 23:53, Konstantin Ananyev wrote:
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   lib/librte_net/rte_esp.h | 10 +++++++++-
>   1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
> index f77ec2eb2..8e1b3d2dd 100644
> --- a/lib/librte_net/rte_esp.h
> +++ b/lib/librte_net/rte_esp.h
> @@ -11,7 +11,7 @@
>    * ESP-related defines
>    */
>   
> -#include <stdint.h>
> +#include <rte_byteorder.h>
>   
>   #ifdef __cplusplus
>   extern "C" {
> @@ -25,6 +25,14 @@ struct esp_hdr {
>   	rte_be32_t seq;  /**< packet sequence number */
>   } __attribute__((__packed__));
>   
> +/**
> + * ESP Trailer
> + */
> +struct esp_tail {
> +	uint8_t pad_len;     /**< number of pad bytes (0-255) */
> +	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
> +} __attribute__((__packed__));
> +
>   #ifdef __cplusplus
>   }
>   #endif
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
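
For reference, a minimal sketch of how the new trailer is typically used
on the outbound side to size the padding, mirroring the clen/pdlen math
in esp_outb_tun_pkt_prepare() (an illustrative helper, not part of the
patch; pad_align is assumed to be the power-of-two cipher alignment,
4 for the NULL cipher per RFC 4303):

#include <stdint.h>
#include <rte_common.h>
#include <rte_esp.h>

/* bytes of padding + ESP trailer needed for a payload of plen bytes */
static inline uint32_t
esp_pad_tail_len(uint32_t plen, uint32_t pad_align)
{
	uint32_t clen;

	/* round payload + trailer up to the cipher alignment */
	clen = RTE_ALIGN_CEIL(plen + sizeof(struct esp_tail), pad_align);
	return clen - plen;
}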

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-11-16 10:23   ` Mohammad Abdul Awal
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                     ` (9 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Mohammad Abdul Awal @ 2018-11-16 10:23 UTC (permalink / raw)
  To: dev



On 15/11/2018 23:53, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> That allows the upper layer to easily associate some user-defined
> data with the session.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   lib/librte_cryptodev/rte_cryptodev.h | 2 ++
>   1 file changed, 2 insertions(+)
>
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index 4099823f1..009860e7b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>    * has a fixed algo, key, op-type, digest_len etc.
>    */
>   struct rte_cryptodev_sym_session {
> +	uint64_t opaque_data;
> +	/**< Opaque user defined data */
>   	__extension__ void *sess_private_data[0];
>   	/**< Private symmetric session material */
>   };
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH 2/9] security: add opaque userdata pointer into security session
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-11-16 10:24   ` Mohammad Abdul Awal
  0 siblings, 0 replies; 194+ messages in thread
From: Mohammad Abdul Awal @ 2018-11-16 10:24 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: akhil.goyal, declan.doherty



On 15/11/2018 23:53, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_security_session.
> That allows the upper layer to easily associate some user-defined
> data with the session.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   lib/librte_security/rte_security.h | 2 ++
>   1 file changed, 2 insertions(+)
>
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 1431b4df1..07b315512 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -318,6 +318,8 @@ struct rte_security_session_conf {
>   struct rte_security_session {
>   	void *sess_private_data;
>   	/**< Private session material */
> +	uint64_t opaque_data;
> +	/**< Opaque user defined data */
>   };
>   
>   /**
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH 6/9] ipsec: implement SA data-path API
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 6/9] ipsec: implement " Konstantin Ananyev
@ 2018-11-20  1:03   ` Zhang, Qi Z
  2018-11-20  9:44     ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Zhang, Qi Z @ 2018-11-20  1:03 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Ananyev, Konstantin, Awal, Mohammad Abdul

Hi Konstantin and Awal:
	
	I have a couple of questions about this patch.
	Please forgive me if they are obvious, since I don't have much insight into IPsec, but I may work on related stuff in the future :)

> +static inline int32_t
> +esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> +	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> +	union sym_op_data *icv)
> +{
> +	uint32_t clen, hlen, pdlen, pdofs, tlen;
> +	struct rte_mbuf *ml;
> +	struct esp_hdr *esph;
> +	struct esp_tail *espt;
> +	char *ph, *pt;
> +	uint64_t *iv;
> +
> +	/* calculate extra header space required */
> +	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
> +
> +	/* number of bytes to encrypt */
> +	clen = mb->pkt_len + sizeof(*espt);
> +	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> +	/* pad length + esp tail */
> +	pdlen = clen - mb->pkt_len;
> +	tlen = pdlen + sa->icv_len;
> +
> +	/* do append and prepend */
> +	ml = rte_pktmbuf_lastseg(mb);
> +	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
> +		return -ENOSPC;
> +
> +	/* prepend header */
> +	ph = rte_pktmbuf_prepend(mb, hlen);
> +	if (ph == NULL)
> +		return -ENOSPC;
> +
> +	/* append tail */
> +	pdofs = ml->data_len;
> +	ml->data_len += tlen;
> +	mb->pkt_len += tlen;
> +	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
> +
> +	/* update pkt l2/l3 len */
> +	mb->l2_len = sa->hdr_l3_off;
> +	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
> +
> +	/* copy tunnel pkt header */
> +	rte_memcpy(ph, sa->hdr, sa->hdr_len);

I didn't get this; my understanding is:

for tunnel mode if an original packet is

	Eth + IP + UDP/TCP + data, 	

after encap, it should become
	
	Eth + encap header (IP or IP + UDP) + ESP Header + IP + UDP/TCP + Data + ESP Trailer...

So after rte_pktmbuf_prepend shouldn't we do below

1) shift L2 HEAD (Eth) ahead 
2) copy encap header and ESP header to the hole.
?

But now we just copy sa->hdr into the prepended space directly? What is sa->hdr supposed to be? And no matter what it is, do we encap everything before the packet?
BTW, is UDP encapsulation also considered here? I didn't figure out how an IP + UDP header should be configured with sa->hdr, sa->hdr_l3_off, sa->hdr_len for this case.

> +static inline int
> +esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf
> *mb,
> +	uint32_t *sqn)
> +{
> +	uint32_t hlen, icv_len, tlen;
> +	struct esp_hdr *esph;
> +	struct esp_tail *espt;
> +	struct rte_mbuf *ml;
> +	char *pd;
> +
> +	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
> +		return -EBADMSG;
> +
> +	icv_len = sa->icv_len;
> +
> +	ml = rte_pktmbuf_lastseg(mb);
> +	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
> +		ml->data_len - icv_len - sizeof(*espt));

What kind of mechanism is there to guarantee that the last segment will always cover the ESP tail (data_len >= icv_len + sizeof(*espt))?
Is it possible that the ESP tail is split across multiple segments in the jumbo frame case?

Thanks
Qi

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH 6/9] ipsec: implement SA data-path API
  2018-11-20  1:03   ` Zhang, Qi Z
@ 2018-11-20  9:44     ` Ananyev, Konstantin
  2018-11-20 10:02       ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-11-20  9:44 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: Awal, Mohammad Abdul


Hi Qi,

> 
> Hi Konstantin and Awal:
> 
> 	I have a couple of questions about this patch.
> 	Please forgive me if they are obvious, since I don't have much insight into IPsec, but I may work on related stuff in the future :)
> 
> > +static inline int32_t
> > +esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> > +	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> > +	union sym_op_data *icv)
> > +{
> > +	uint32_t clen, hlen, pdlen, pdofs, tlen;
> > +	struct rte_mbuf *ml;
> > +	struct esp_hdr *esph;
> > +	struct esp_tail *espt;
> > +	char *ph, *pt;
> > +	uint64_t *iv;
> > +
> > +	/* calculate extra header space required */
> > +	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
> > +
> > +	/* number of bytes to encrypt */
> > +	clen = mb->pkt_len + sizeof(*espt);
> > +	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> > +
> > +	/* pad length + esp tail */
> > +	pdlen = clen - mb->pkt_len;
> > +	tlen = pdlen + sa->icv_len;
> > +
> > +	/* do append and prepend */
> > +	ml = rte_pktmbuf_lastseg(mb);
> > +	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
> > +		return -ENOSPC;
> > +
> > +	/* prepend header */
> > +	ph = rte_pktmbuf_prepend(mb, hlen);
> > +	if (ph == NULL)
> > +		return -ENOSPC;
> > +
> > +	/* append tail */
> > +	pdofs = ml->data_len;
> > +	ml->data_len += tlen;
> > +	mb->pkt_len += tlen;
> > +	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
> > +
> > +	/* update pkt l2/l3 len */
> > +	mb->l2_len = sa->hdr_l3_off;
> > +	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
> > +
> > +	/* copy tunnel pkt header */
> > +	rte_memcpy(ph, sa->hdr, sa->hdr_len);
> 
> I didn't get this; my understanding is:
> 
> for tunnel mode if an original packet is
> 
> 	Eth + IP + UDP/TCP + data,

It is assumed that the input mbuf doesn't contain an L2 header already (only L3/L4/...).
That's why we don't shift the L2 header.
Probably we have to put that into the public API comments.
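
For a caller that starts from full Ethernet frames, a minimal sketch of
that preparation step (an illustrative helper, assuming untagged
Ethernet and the v1 semantics described above):

#include <rte_ether.h>
#include <rte_mbuf.h>

/* drop the L2 header so the mbuf starts at the inner IP header
 * before it is handed to the outbound tunnel path */
static inline int
strip_l2_for_outb_tun(struct rte_mbuf *mb)
{
	if (rte_pktmbuf_adj(mb, sizeof(struct ether_hdr)) == NULL)
		return -1;
	mb->l2_len = 0;
	return 0;
}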

> 
> after encap, it should become
> 
> 	Eth + encap header (IP or IP + UDP) + ESP Header + IP + UDP/TCP + Data + ESP Trailer...
> 
> So after rte_pktmbuf_prepend shouldn't we do below
> 
> 1) shift L2 HEAD (Eth) ahead
> 2) copy encap header and ESP header to the hole.
> ?
> 
> But now we just copy sa->hdr into the prepended space directly? What is sa->hdr supposed to be?

Optional L2 header and new L3 header.

> And no matter what it is, do we encap
> everything before the packet?
> BTW, is UDP encapsulation also considered here?

Right now, no.
Maybe later, if there is a request for it.

> I didn't figure out how an IP + UDP header should be configured with sa->hdr, sa->hdr_l3_off, sa->hdr_len for this case
> 
> > +static inline int
> > +esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf
> > *mb,
> > +	uint32_t *sqn)
> > +{
> > +	uint32_t hlen, icv_len, tlen;
> > +	struct esp_hdr *esph;
> > +	struct esp_tail *espt;
> > +	struct rte_mbuf *ml;
> > +	char *pd;
> > +
> > +	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
> > +		return -EBADMSG;
> > +
> > +	icv_len = sa->icv_len;
> > +
> > +	ml = rte_pktmbuf_lastseg(mb);
> > +	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
> > +		ml->data_len - icv_len - sizeof(*espt));
> 
> What kind of mechanism is there to guarantee that the last segment will always cover the ESP tail (data_len >= icv_len + sizeof(*espt))?
> Is it possible that the ESP tail is split across multiple segments in the jumbo frame case?

It is possible, though right now we don't support such cases.
Right now it is up to the caller to make sure the last segment contains espt+icv (plus enough free space for AAD, ESN.hi, etc.).
We plan to add proper multi-seg support later (most likely 19.05).
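
A minimal caller-side sanity check for that constraint could look like
this (a sketch under the assumption above, not part of the library API):

#include <rte_esp.h>
#include <rte_mbuf.h>

/* check that the last segment fully holds the ESP trailer + ICV,
 * as the current single-segment tail processing expects */
static inline int
esp_tail_in_last_seg(struct rte_mbuf *mb, uint32_t icv_len)
{
	const struct rte_mbuf *ml = rte_pktmbuf_lastseg(mb);

	return ml->data_len >= icv_len + sizeof(struct esp_tail);
}
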
Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH 6/9] ipsec: implement SA data-path API
  2018-11-20  9:44     ` Ananyev, Konstantin
@ 2018-11-20 10:02       ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-11-20 10:02 UTC (permalink / raw)
  To: Ananyev, Konstantin, Zhang, Qi Z, dev; +Cc: Awal, Mohammad Abdul



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> Sent: Tuesday, November 20, 2018 9:44 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; dev@dpdk.org
> Cc: Awal, Mohammad Abdul <mohammad.abdul.awal@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 6/9] ipsec: implement SA data-path API
> 
> 
> Hi Qi,
> 
> >
> > Hi Konstantin and Awal:
> >
> > 	I have a couple of questions about this patch.
> > 	Please forgive me if they are obvious, since I don't have much insight into IPsec, but I may work on related stuff in the future :)
> >
> > > +static inline int32_t
> > > +esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> > > +	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> > > +	union sym_op_data *icv)
> > > +{
> > > +	uint32_t clen, hlen, pdlen, pdofs, tlen;
> > > +	struct rte_mbuf *ml;
> > > +	struct esp_hdr *esph;
> > > +	struct esp_tail *espt;
> > > +	char *ph, *pt;
> > > +	uint64_t *iv;
> > > +
> > > +	/* calculate extra header space required */
> > > +	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
> > > +
> > > +	/* number of bytes to encrypt */
> > > +	clen = mb->pkt_len + sizeof(*espt);
> > > +	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> > > +
> > > +	/* pad length + esp tail */
> > > +	pdlen = clen - mb->pkt_len;
> > > +	tlen = pdlen + sa->icv_len;
> > > +
> > > +	/* do append and prepend */
> > > +	ml = rte_pktmbuf_lastseg(mb);
> > > +	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
> > > +		return -ENOSPC;
> > > +
> > > +	/* prepend header */
> > > +	ph = rte_pktmbuf_prepend(mb, hlen);
> > > +	if (ph == NULL)
> > > +		return -ENOSPC;
> > > +
> > > +	/* append tail */
> > > +	pdofs = ml->data_len;
> > > +	ml->data_len += tlen;
> > > +	mb->pkt_len += tlen;
> > > +	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
> > > +
> > > +	/* update pkt l2/l3 len */
> > > +	mb->l2_len = sa->hdr_l3_off;
> > > +	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
> > > +
> > > +	/* copy tunnel pkt header */
> > > +	rte_memcpy(ph, sa->hdr, sa->hdr_len);
> >
> > I didn't get this; my understanding is:
> >
> > for tunnel mode if an original packet is
> >
> > 	Eth + IP + UDP/TCP + data,
> 
> It is assumed that the input mbuf doesn't contain an L2 header already (only L3/L4/...).
> That's why we don't shift the L2 header.
> Probably we have to put that into the public API comments.

On second thought, it is probably better to also support here the case when L2 is not stripped.
After all, we do support it for the other modes (inbound tunnel/transport, outbound transport).
Will try to add it in v2.
Konstantin

> 
> >
> > after encap, it should become
> >
> > 	Eth + encap header (IP or IP + UDP) + ESP Header + IP + UDP/TCP + Data + ESP Trailer...
> >
> > So after rte_pktmbuf_prepend shouldn't we do below
> >
> > 1) shift L2 HEAD (Eth) ahead
> > 2) copy encap header and ESP header to the hole.
> > ?
> >
> > But now we just copy sa->hdr into the prepended space directly? What is sa->hdr supposed to be?
> 
> Optional L2 header and new L3 header.
> 
> > And no matter what it is, do we encap
> > everything before the packet?
> > BTW, is UDP encapsulation also considered here?
> 
> Right now, no.
> Maybe later, if there is a request for it.
> 
> > I didn't figure out how an IP + UDP header should be configured with sa->hdr, sa->hdr_l3_off, sa->hdr_len for this case
> >
> > > +static inline int
> > > +esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf
> > > *mb,
> > > +	uint32_t *sqn)
> > > +{
> > > +	uint32_t hlen, icv_len, tlen;
> > > +	struct esp_hdr *esph;
> > > +	struct esp_tail *espt;
> > > +	struct rte_mbuf *ml;
> > > +	char *pd;
> > > +
> > > +	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
> > > +		return -EBADMSG;
> > > +
> > > +	icv_len = sa->icv_len;
> > > +
> > > +	ml = rte_pktmbuf_lastseg(mb);
> > > +	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
> > > +		ml->data_len - icv_len - sizeof(*espt));
> >
> > What kind of mechanism is there to guarantee that the last segment will always cover the ESP tail (data_len >= icv_len + sizeof(*espt))?
> > Is it possible that the ESP tail is split across multiple segments in the jumbo frame case?
> 
> It is possible, though right now we don't support such cases.
> Right now it is up to the caller to make sure the last segment contains espt+icv.
> We plan to add proper multi-seg support later (most likely 19.05).
> Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v2 0/9] ipsec: new library for IPsec data-path processing
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2018-11-16 10:23   ` Mohammad Abdul Awal
@ 2018-11-30 16:45   ` Konstantin Ananyev
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:45 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

This patch series depends on the patch:
http://patches.dpdk.org/patch/48044/
to be applied first.

v1 -> v2
 - Changes to get into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch series introduces a new library within DPDK: librte_ipsec.
The aim is to provide a DPDK-native, high performance library for IPsec
data-path processing.
The library is supposed to utilize the existing DPDK crypto-dev and
security API to provide the application with a transparent IPsec
processing API.
The library concentrates on data-path protocol processing (ESP and AH);
IKE protocol implementation is out of scope for this library.
The current patch set introduces the SA-level API.

SA (low) level API
==================

The API described below operates at the SA level.
It provides functionality that allows the user, for a given SA, to
process inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  set up related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize a given SA based on user-provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

The SA-level API is built on top of the crypto-dev/security API and
relies on them to perform the actual cipher and integrity checking.
To provide the ability to easily map crypto/security sessions to the
related IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
an asynchronous API for IPsec packets destined to be processed
by a crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

Though for packets destined for inline processing no extra overhead
is required, and the synchronous API call rte_ipsec_pkt_process()
is sufficient for that case.
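
Putting the lookaside sequence together, a condensed sketch for one
burst (ss, cdev_id and qp are assumed to be initialized elsewhere;
error handling omitted and a VLA used for brevity):

#include <rte_cryptodev.h>
#include <rte_ipsec.h>
#include <rte_ipsec_group.h>

static void
lookaside_burst(struct rte_ipsec_session *ss, uint8_t cdev_id,
	uint16_t qp, struct rte_mbuf *mb[], struct rte_crypto_op *cop[],
	uint16_t num)
{
	struct rte_ipsec_group grp[num];
	uint16_t i, k, n, ng;

	/* fill crypto ops and enqueue them for processing */
	k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
	k = rte_cryptodev_enqueue_burst(cdev_id, qp, cop, k);

	/* ... later, once the crypto device has finished ... */
	n = rte_cryptodev_dequeue_burst(cdev_id, qp, cop, num);

	/* regroup mbufs by session; grp[i].id.ptr holds whatever was
	 * stored in the session opaque_data (here the ipsec session) */
	ng = rte_ipsec_pkt_crypto_group(
		(const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, n);
	for (i = 0; i != ng; i++)
		rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
}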

The current implementation supports all four currently defined
rte_security types.
Though to accommodate future custom implementations, a function-pointer
model is used for both the *crypto_prepare* and *process*
implementations.

TODO list
---------
 - update docs

Konstantin Ananyev (9):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test

 MAINTAINERS                            |    5 +
 config/common_base                     |    5 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  156 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  166 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1381 +++++++++++++++
 lib/librte_ipsec/sa.h                  |   98 ++
 lib/librte_ipsec/ses.c                 |   45 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2209 ++++++++++++++++++++++++
 24 files changed, 4888 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2018-11-16 10:23   ` Mohammad Abdul Awal
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2018-11-30 16:45   ` Konstantin Ananyev
  2018-12-04 13:13     ` Mohammad Abdul Awal
                       ` (10 more replies)
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
                     ` (7 subsequent siblings)
  10 siblings, 11 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:45 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v2 2/9] security: add opaque userdata pointer into security session
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (2 preceding siblings ...)
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-11-30 16:45   ` Konstantin Ananyev
  2018-12-04 13:13     ` Mohammad Abdul Awal
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 3/9] net: add ESP trailer structure definition Konstantin Ananyev
                     ` (6 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:45 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, akhil.goyal, declan.doherty

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined
data with the session.
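
Later in this series (ses.c, patch 5/9) librte_ipsec itself uses that field
to keep a back-pointer from the security session to the rte_ipsec_session
that owns it:

	ss->security.ses->opaque_data = (uintptr_t)ss;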

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_security/rte_security.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 1431b4df1..07b315512 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -318,6 +318,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v2 3/9] net: add ESP trailer structure definition
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (3 preceding siblings ...)
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-11-30 16:46   ` Konstantin Ananyev
  2018-12-04 13:12     ` Mohammad Abdul Awal
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 4/9] lib: introduce ipsec library Konstantin Ananyev
                     ` (5 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:46 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, olivier.matz

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
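
An inbound consumer locates this trailer at the very end of the decrypted
ESP payload, immediately before the ICV. A minimal sketch (assuming a
single-segment mbuf 'mb' and an 'icv_len' taken from the SA parameters):

	struct esp_tail *espt;
	uint32_t plen;

	espt = rte_pktmbuf_mtod_offset(mb, struct esp_tail *,
		mb->pkt_len - icv_len - sizeof(*espt));
	/* plaintext length = total - pad bytes - trailer - ICV */
	plen = mb->pkt_len - espt->pad_len - sizeof(*espt) - icv_len;
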
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v2 4/9] lib: introduce ipsec library
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (4 preceding siblings ...)
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 3/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-11-30 16:46   ` Konstantin Ananyev
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 5/9] ipsec: add SA data-path API Konstantin Ananyev
                     ` (4 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:46 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide the application with a transparent IPsec processing API.
This initial commit provides some base API to manage
IPsec Security Association (SA) objects.
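
A minimal sketch of the intended allocation flow ('prm' is assumed to be
already filled in by the application; any properly aligned buffer of
sufficient size could be used instead of rte_zmalloc()):

	struct rte_ipsec_sa *sa;
	int32_t sz;

	sz = rte_ipsec_sa_size(&prm);
	if (sz < 0)
		return sz;
	sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	sz = rte_ipsec_sa_init(sa, &prm, sz);
	...
	rte_ipsec_sa_fini(sa);
	rte_free(sa);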

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 MAINTAINERS                            |   5 +
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 139 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 307 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  77 +++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 12 files changed, 631 insertions(+)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 19353ac89..f06aee6b6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1071,6 +1071,11 @@ F: doc/guides/prog_guide/pdump_lib.rst
 F: app/pdump/
 F: doc/guides/tools/pdump.rst
 
+IPsec - EXPERIMENTAL
+M: Konstantin Ananyev <konstantin.ananyev@intel.com>
+F: lib/librte_ipsec/
+M: Bernard Iremonger <bernard.iremonger@intel.com>
+F: test/test/test_ipsec.c
 
 Packet Framework
 ----------------
diff --git a/config/common_base b/config/common_base
index d12ae98bc..32499d772 100644
--- a/config/common_base
+++ b/config/common_base
@@ -925,6 +925,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index b7370ef97..5dc774604 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -106,6 +106,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..7758dcc6d
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..4471814f9
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * for given size, calculate required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size for the
+ * structure that holds the replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..4e36fd99b
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	uint32_t replay_win_sz;
+	/**< window size to enable sequence replay attack handling.
+	 * Replay checking is disabled if the window size is 0.
+	 */
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG_IPV,
+	RTE_SATP_LOG_PROTO,
+	RTE_SATP_LOG_DIR,
+	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
+
+/**
+ * get type of given SA
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   size of the provided buffer for SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * cleanup SA
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..c814e5384
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,307 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+
+static int
+check_crypto_xform(struct crypto_xform *xform)
+{
+	uintptr_t p;
+
+	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
+
+	/* either aead or both auth and cipher should be non-NULL */
+	if (xform->aead) {
+		if (p)
+			return -EINVAL;
+	} else if (p == (uintptr_t)xform->auth) {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+fill_crypto_xform(struct crypto_xform *xform,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf;
+
+	memset(xform, 0, sizeof(*xform));
+
+	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
+		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->auth != NULL)
+				return -EINVAL;
+			xform->auth = &xf->auth;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->cipher != NULL)
+				return -EINVAL;
+			xform->cipher = &xf->cipher;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+			if (xform->aead != NULL)
+				return -EINVAL;
+			xform->aead = &xf->aead;
+		} else
+			return -EINVAL;
+	}
+
+	return check_crypto_xform(xform);
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static uint64_t
+fill_sa_type(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	} else {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+	}
+
+	return tp;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = 4;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = 4;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_MAX_IV_SIZE;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	type = fill_sa_type(prm);
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	type = fill_sa_type(prm);
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, prm);
+	if (rc != 0)
+		return rc;
+
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0) {
+		rte_ipsec_sa_fini(sa);
+		return rc;
+	}
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..5d113891a
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index bb7f443f9..69684ef14 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'meter', 'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	# ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d979d..f4cd75252 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v2 5/9] ipsec: add SA data-path API
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (5 preceding siblings ...)
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 4/9] lib: introduce ipsec library Konstantin Ananyev
@ 2018-11-30 16:46   ` Konstantin Ananyev
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 6/9] ipsec: implement " Konstantin Ananyev
                     ` (3 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:46 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce Security Association (SA-level) data-path API
Operates at SA level, provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).
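
A minimal lookaside-none usage sketch of the API added here ('sa',
'crypto_ses', 'dev_id', 'qp_id', the mbuf/crypto-op arrays and their sizes
are assumed to be set up by the application):

	struct rte_ipsec_session ss = { 0 };
	uint16_t i, k, n;

	ss.sa = sa;
	ss.type = RTE_SECURITY_ACTION_TYPE_NONE;
	ss.crypto.ses = crypto_ses;
	rte_ipsec_session_prepare(&ss);

	/* prepare crypto ops and hand them to the crypto device */
	k = rte_ipsec_pkt_crypto_prepare(&ss, mb, cop, n);
	k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);
	...
	/* collect completed ops and finalize the packets */
	n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, num);
	for (i = 0; i != n; i++)
		mb[i] = cop[i]->sym->m_src;
	k = rte_ipsec_pkt_process(&ss, mb, n);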

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 154 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  45 ++++++++
 7 files changed, 230 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 7758dcc6d..79f187fae 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..429d4bf38
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ * IKEv2 protocol support is currently out of scope of this library,
+ * though the API is defined in such a way that it could be adopted
+ * by an IKEv2 implementation.
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them or process packets that are subject to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that inside given rte_ipsec_session crypto/security fields
+ * are filled correctly and setups function pointers based on these values.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are set up correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them or
+ * process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are set up correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.) are
+ * properly set up; if necessary, IP headers updated and ESP(AH) fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..d1c52d7ca 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,9 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_session_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index c814e5384..7f9baa602 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -305,3 +305,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 5d113891a..050a6d7ae 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -74,4 +74,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..562c1423e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v2 6/9] ipsec: implement SA data-path API
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (6 preceding siblings ...)
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 5/9] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-11-30 16:46   ` Konstantin Ananyev
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                     ` (2 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:46 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types the SQN check/update is done by SW (inside the library).
For the last two types it is the HW/PMD's responsibility.
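
As a worked example of the ESN reconstruction added below (reconstruct_esn()
in ipsec_sqn.h): with window size w = 64 and last seen ESN t = 0x1_00000010
(th = 1, tl = 0x10), the bottom of the window is bl = tl - w + 1 = 0xffffffd1
(mod 2^32). A received 32-bit sqn = 0xfffffff0 then falls under "case B"
(tl < w - 1, th != 0) with sqn >= bl, so th is decremented and the
reconstructed ESN is 0x0_fffffff0 - a late but still-in-window packet from
the previous 2^32 subspace.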

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/crypto.h    |  123 ++++
 lib/librte_ipsec/iph.h       |   84 +++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1044 +++++++++++++++++++++++++++++++++-
 5 files changed, 1480 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..61f5c1433
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+ /*
+  * AES-GCM devices have some specific requirements for IV and AAD formats.
+  * Ideally that would be done by the driver itself.
+  */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else {
+		aad->sqn.u32[0] = sqn_low32(sqn);
+		aad->sqn.u32[1] = 0;
+	}
+	aad->align0 = 0;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves ICV by 4B below, and inserts SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves ICV by 4B up, and removes SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..3fd93016d
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+/* update original ip header fields for transport case */
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+/* update original and new ip header fields for tunnel case */
+static inline void
+update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, rte_be16_t pid)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		v4h = p;
+		v4h->packet_id = pid;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else {
+		v6h = p;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	}
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 4471814f9..a33ff9cca 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * gets SQN.hi32 bits; the SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * gets SQN.low32 bits; the SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * gets SQN.low16 bits; the SQN is supposed to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * for given size, calculate required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC4303 A2.1, determine the high-order bits of the sequence
+ * number. Uses 32-bit arithmetic inside, returns uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To allow multiple readers and a single writer for the
+ * SA replay window information and sequence number (RSN),
+ * a basic RCU schema is used:
+ * the SA has 2 copies of the RSN (one for readers, another for the writer).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between the readers and the writer.
+ * The writer is responsible for making a copy of the reader RSN, updating it
+ * and marking the newly updated RSN as the readers' one.
+ * That approach is intended to minimize contention and cache sharing
+ * between the writer and the readers.
+ */
+
 /**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds the replay window and sequence number (RSN) information.
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 7f9baa602..6643a3293 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -192,6 +196,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -306,18 +311,1053 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* size of ipsec protected data */
+	l2len = mb->l2_len;
+	plen = mb->pkt_len - mb->l2_len;
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
+			sqn_low16(sqc));
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not-prepared mbufs beyond the good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update ip header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
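+	/* cipher data length has to be a non-negative multiple of pad_align */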
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
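+/*
+ * setup/update packet data and metadata for ESP inbound tunnel case.
+ */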
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* we have to allocate space for AAD somewhere,
+	 * right now - just use free trailing space at the last segment.
+	 * It would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (as is already done for the IV, whose space is reserved inside cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
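+/*
+ * strip ESP related headers and trailer (tunnel mode) from the
+ * decrypted packet and update mbuf metadata.
+ */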
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	return 0;
+}
+
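+/*
+ * strip ESP related headers and trailer (transport mode) from the
+ * decrypted packet and restore the original L3 header.
+ */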
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	return 0;
+}
+
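+/*
+ * update the replay window and SQN with the packets
+ * that passed the inbound checks.
+ */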
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing is already done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
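+/*
+ * select packet prepare/process function pointers for a given SA
+ * (RTE_SECURITY_ACTION_TYPE_NONE), based on its direction and mode.
+ */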
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
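+/*
+ * select packet process function pointers for a given SA
+ * (RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO), based on its direction and mode.
+ */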
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
-- 
2.17.1

* [dpdk-dev] [PATCH v2 7/9] ipsec: rework SA replay window/SQN for MT environment
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (7 preceding siblings ...)
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 6/9] ipsec: implement " Konstantin Ananyev
@ 2018-11-30 16:46   ` Konstantin Ananyev
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 9/9] test/ipsec: introduce functional test Konstantin Ananyev
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:46 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

With these changes the functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
 can be safely used in an MT environment, as long as the user can
 guarantee that they obey the multiple readers/single writer model for
 SQN+replay_window operations.
 To be more specific:
 for an outbound SA there are no restrictions;
 for an inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_pkt_process() for the given SA.
 Note that it is the caller's responsibility to maintain the correct
 order of packets to be processed.
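
To make that contract concrete, below is a minimal illustrative sketch
(not part of the patch; the per-SA spinlock is just one possible way for
an application to serialize process() for an inbound SA):

#include <rte_mbuf.h>
#include <rte_spinlock.h>
#include <rte_ipsec.h>

/*
 * illustrative only: several threads may run crypto_prepare() for the
 * same inbound SA concurrently, but the application has to make sure
 * that at any given moment only one of them runs pkt_process() for it.
 * The per-SA spinlock here is an assumption of this sketch, not part
 * of the librte_ipsec API.
 */
static uint16_t
inb_process_serialized(struct rte_ipsec_session *ss,
	struct rte_mbuf *mb[], uint16_t num, rte_spinlock_t *lk)
{
	uint16_t k;

	rte_spinlock_lock(lk);
	k = rte_ipsec_pkt_process(ss, mb, num);
	rte_spinlock_unlock(lk);
	return k;
}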

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  27 ++++++++
 lib/librte_ipsec/sa.c           |  23 +++++--
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 176 insertions(+), 8 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index a33ff9cca..ee5e35978 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
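+ * For an 'atomic' SA - grab the read-lock, to make sure the RSN
+ * is not switched/updated by a writer underneath.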
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
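+ * For an 'atomic' SA - switch to the spare RSN copy,
+ * write-lock it and fill it with the current RSN contents.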
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index 4e36fd99b..35a0afec1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -53,6 +53,27 @@ struct rte_ipsec_sa_prm {
 	 */
 };
 
+/**
+ * Indicates whether the SA will (or will not) need 'atomic' access
+ * to its sequence number and replay window.
+ * 'atomic' here means that the functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in an MT environment, as long as the user can
+ * guarantee that they obey the multiple readers/single writer model for
+ * SQN+replay_window operations.
+ * To be more specific:
+ * for an outbound SA there are no restrictions;
+ * for an inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_pkt_process() for the given SA.
+ * Note that it is the caller's responsibility to maintain the correct
+ * order of packets to be processed.
+ * In other words, it is the caller's responsibility to serialize
+ * process() invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is an 64-bit value that contain the following information:
  * - IP version (IPv4/IPv6)
@@ -60,6 +81,7 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
  * ...
  */
 
@@ -68,6 +90,7 @@ enum {
 	RTE_SATP_LOG_PROTO,
 	RTE_SATP_LOG_DIR,
 	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_SQN = RTE_SATP_LOG_MODE + 2,
 	RTE_SATP_LOG_NUM
 };
 
@@ -88,6 +111,10 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG_SQN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 6643a3293..2690d2619 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -90,6 +90,9 @@ ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -136,6 +139,12 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm)
 			tp |= RTE_IPSEC_SATP_IPV4;
 	}
 
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	return tp;
 }
 
@@ -159,7 +168,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -305,7 +314,10 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		sa->replay.win_sz = prm->replay_win_sz;
 		sa->replay.nb_bucket = nb;
 		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+		sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+		if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+			sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+				((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb));
 	}
 
 	return sz;
@@ -804,7 +816,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -823,6 +835,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -967,7 +981,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -977,6 +991,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 050a6d7ae..7dc9933f1 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -28,7 +30,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -66,10 +72,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case of an SA handled by multiple threads, the *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
-- 
2.17.1

* [dpdk-dev] [PATCH v2 8/9] ipsec: helper functions to group completed crypto-ops
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (8 preceding siblings ...)
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2018-11-30 16:46   ` Konstantin Ananyev
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 9/9] test/ipsec: introduce functional test Konstantin Ananyev
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:46 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.
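
For context, a minimal usage sketch of how the grouping helper fits
between dequeue and process (illustrative only; the burst size and
dev/queue ids are assumptions):

#include <rte_common.h>
#include <rte_cryptodev.h>
#include <rte_ipsec.h>

/*
 * illustrative only: dequeue completed crypto-ops, group the related
 * mbufs by the session they belong to, then process each group.
 */
static void
handle_completed_ops(uint8_t dev_id, uint16_t qp_id)
{
	struct rte_crypto_op *cop[32];
	struct rte_mbuf *mb[32];
	struct rte_ipsec_group grp[32];
	uint16_t n;
	uint32_t i, ng;

	n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, RTE_DIM(cop));

	ng = rte_ipsec_pkt_crypto_group(
		(const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, n);

	for (i = 0; i != ng; i++)
		rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
}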

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 79f187fae..98c52f388 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 429d4bf38..0df7ea907 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -147,6 +147,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..d264d7e78
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take a crypto-op as input and extract the related ipsec session pointer.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond mbufs for the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* to output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalise it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index d1c52d7ca..0f91fb134 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_session_prepare;
 	rte_ipsec_pkt_process;
@@ -8,6 +9,7 @@ EXPERIMENTAL {
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 
 	local: *;
 };
-- 
2.17.1

* [dpdk-dev] [PATCH v2 9/9] test/ipsec: introduce functional test
  2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                     ` (9 preceding siblings ...)
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-11-30 16:46   ` Konstantin Ananyev
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-11-30 16:46 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal, Bernard Iremonger

Create functional test for librte_ipsec.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2209 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2215 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 554e9945f..d4f689417 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -48,6 +48,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -115,6 +116,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -179,6 +181,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..95a447174
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - both obuf and ibuf are usually the same,
+		 * so a check whether they point at the same address is
+		 * necessary, to avoid freeing the mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
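+/*
+ * construct an ESP tunnel mode packet:
+ * outer IPv4 header + ESP header + payload + padding + ESP tail.
+ */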
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capability(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capability(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, pool, j);
+}
+
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->crypto_xforms = &ut_params->auth_xform;
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa */
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* sa_init() returns a negative value on error, memory used on success */
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc != 0)
+		return rc;
+
+	return rte_ipsec_session_prepare(&ut->ss[j]);
+}
+
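+/*
+ * Lookaside datapath for a burst that belongs to a single SA:
+ * crypto_prepare -> enqueue -> dequeue -> crypto_group -> process.
+ */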
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
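+/*
+ * Same lookaside datapath with the burst alternating between two SAs:
+ * even packets use SA 0, odd packets SA 1, so grouping must yield
+ * BURST_SIZE single-packet groups.
+ */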
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
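+/*
+ * Map packet index to SA index so that a burst forms four consecutive
+ * groups: [0, 4) -> SA 0, [4, 12) -> SA 1, [12, 21) -> SA 0,
+ * [21, BURST_SIZE) -> SA 1.
+ */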
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* group packets into 4 groups of different sizes, 2 groups per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j =  PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
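+/*
+ * Reverse the burst in blocks of eight packets so that sequence
+ * numbers arrive out of order and exercise the replay window.
+ */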
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%lx\n", test_cfg[i].flags);
+	printf("	pkt_sz %lu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
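+/*
+ * Common check for the replay-window tests below: every processed
+ * packet must come back as the original plain data.
+ */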
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
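+/*
+ * Verify a burst split across two SAs: all packets must come back as
+ * the original plain data regardless of which SA handled them.
+ */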
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		r = j % 2;
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v2 3/9] net: add ESP trailer structure definition
  2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 3/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-12-04 13:12     ` Mohammad Abdul Awal
  0 siblings, 0 replies; 194+ messages in thread
From: Mohammad Abdul Awal @ 2018-12-04 13:12 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: olivier.matz



On 30/11/2018 16:46, Konstantin Ananyev wrote:
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   lib/librte_net/rte_esp.h | 10 +++++++++-
>   1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
> index f77ec2eb2..8e1b3d2dd 100644
> --- a/lib/librte_net/rte_esp.h
> +++ b/lib/librte_net/rte_esp.h
> @@ -11,7 +11,7 @@
>    * ESP-related defines
>    */
>   
> -#include <stdint.h>
> +#include <rte_byteorder.h>
>   
>   #ifdef __cplusplus
>   extern "C" {
> @@ -25,6 +25,14 @@ struct esp_hdr {
>   	rte_be32_t seq;  /**< packet sequence number */
>   } __attribute__((__packed__));
>   
> +/**
> + * ESP Trailer
> + */
> +struct esp_tail {
> +	uint8_t pad_len;     /**< number of pad bytes (0-255) */
> +	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
> +} __attribute__((__packed__));
> +
>   #ifdef __cplusplus
>   }
>   #endif
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/9] security: add opaque userdata pointer into security session
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-12-04 13:13     ` Mohammad Abdul Awal
  0 siblings, 0 replies; 194+ messages in thread
From: Mohammad Abdul Awal @ 2018-12-04 13:13 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: akhil.goyal, declan.doherty



On 30/11/2018 16:45, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_security_session.
> That allows upper layer to easily associate some user defined
> data with the session.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   lib/librte_security/rte_security.h | 2 ++
>   1 file changed, 2 insertions(+)
>
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 1431b4df1..07b315512 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -318,6 +318,8 @@ struct rte_security_session_conf {
>   struct rte_security_session {
>   	void *sess_private_data;
>   	/**< Private session material */
> +	uint64_t opaque_data;
> +	/**< Opaque user defined data */
>   };
>   
>   /**
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-12-04 13:13     ` Mohammad Abdul Awal
  2018-12-04 15:32       ` Trahe, Fiona
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                       ` (9 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Mohammad Abdul Awal @ 2018-12-04 13:13 UTC (permalink / raw)
  To: Konstantin Ananyev, dev



On 30/11/2018 16:45, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> That allows upper layer to easily associate some user defined
> data with the session.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   lib/librte_cryptodev/rte_cryptodev.h | 2 ++
>   1 file changed, 2 insertions(+)
>
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index 4099823f1..009860e7b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>    * has a fixed algo, key, op-type, digest_len etc.
>    */
>   struct rte_cryptodev_sym_session {
> +	uint64_t opaque_data;
> +	/**< Opaque user defined data */
>   	__extension__ void *sess_private_data[0];
>   	/**< Private symmetric session material */
>   };
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-12-04 13:13     ` Mohammad Abdul Awal
@ 2018-12-04 15:32       ` Trahe, Fiona
  0 siblings, 0 replies; 194+ messages in thread
From: Trahe, Fiona @ 2018-12-04 15:32 UTC (permalink / raw)
  To: Awal, Mohammad Abdul, Ananyev, Konstantin, dev; +Cc: Trahe, Fiona



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Mohammad Abdul Awal
> Sent: Tuesday, December 4, 2018 6:14 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym
> session
> 
> 
> 
> On 30/11/2018 16:45, Konstantin Ananyev wrote:
> > Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> > That allows upper layer to easily associate some user defined
> > data with the session.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> >   lib/librte_cryptodev/rte_cryptodev.h | 2 ++
> >   1 file changed, 2 insertions(+)
> >
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> > index 4099823f1..009860e7b 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.h
> > +++ b/lib/librte_cryptodev/rte_cryptodev.h
> > @@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> >    * has a fixed algo, key, op-type, digest_len etc.
> >    */
> >   struct rte_cryptodev_sym_session {
> > +	uint64_t opaque_data;
> > +	/**< Opaque user defined data */
> >   	__extension__ void *sess_private_data[0];
> >   	/**< Private symmetric session material */
> >   };
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 0/9] ipsec: new library for IPsec data-path processing
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2018-12-04 13:13     ` Mohammad Abdul Awal
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (8 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

This patch series depends on the patch:
http://patches.dpdk.org/patch/48044/
to be applied first.

v2 -> v3
 - Several fixes for IPv6 support
 - Extra checks for input parameters in public API functions

v1 -> v2
 - Changes to take into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch series introduces a new library within DPDK: librte_ipsec.
The aim is to provide DPDK native high performance library for IPsec
data-path processing.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec
processing API.
The library is concentrated on data-path protocols processing
(ESP and AH), IKE protocol(s) implementation is out of scope
for that library.
The current series introduces the SA-level API.

SA (low) level API
==================

API described below operates on SA level.
It provides functionality that allows user for given SA to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is based on top of crypto-dev/security API and relies on
them
to perform actual cipher and integrity checking.
To allow easy mapping of crypto/security sessions to their related
IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies ABI change for both librte_crytpodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
asynchronous API for IPsec packets destined to be processed
by crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

Though for packets destined for inline processing no extra overhead
is required and synchronous API call: rte_ipsec_pkt_process()
is sufficient for that case.
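
As an illustration, a minimal sketch of the lookaside sequence wrapped
into a single helper (hypothetical code, not part of the library;
dev_id/qp_id and the mb/cop arrays are application-provided, and
handling of partial enqueue/dequeue is omitted):

  static void
  lookaside_burst(struct rte_ipsec_session *ss, uint8_t dev_id,
      uint16_t qp_id, struct rte_mbuf *mb[],
      struct rte_crypto_op *cop[], uint16_t num)
  {
      uint16_t k, n, ng;
      struct rte_ipsec_group grp[num];

      /* fill crypto ops for the burst and hand them to the device */
      k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
      k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);

      /* later: collect completed ops and finalize the packets */
      n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, k);
      ng = rte_ipsec_pkt_crypto_group(
          (const struct rte_crypto_op **)(uintptr_t)cop,
          mb, grp, n);
      /* each group holds packets that belong to one session */
      for (k = 0; k != ng; k++)
          rte_ipsec_pkt_process(grp[k].id.ptr, grp[k].m, grp[k].cnt);
  }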

Current implementation supports all four currently defined
rte_security types.
Though to accommodate future custom implementations, a function-pointer
model is used for both the *crypto_prepare* and *process*
implementations.

TODO list
---------
 - update docs

Konstantin Ananyev (9):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test

 MAINTAINERS                            |    5 +
 config/common_base                     |    5 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  156 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  166 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1401 +++++++++++++++
 lib/librte_ipsec/sa.h                  |   98 ++
 lib/librte_ipsec/ses.c                 |   45 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2209 ++++++++++++++++++++++++
 24 files changed, 4908 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2018-12-04 13:13     ` Mohammad Abdul Awal
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-11 17:24       ` Doherty, Declan
                         ` (11 more replies)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
                       ` (7 subsequent siblings)
  10 siblings, 12 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
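
For context, a sketch of the intended usage (the application-side SA
type and completion path below are illustrative assumptions, not part
of this patch):

  /* at session setup: tie the session to the SA that owns it */
  ses->opaque_data = (uintptr_t)sa;

  /* at completion: recover the SA from a finished crypto op */
  sa = (struct app_sa *)(uintptr_t)cop->sym->session->opaque_data;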
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 2/9] security: add opaque userdata pointer into security session
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (2 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-11 17:25       ` Doherty, Declan
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 3/9] net: add ESP trailer structure definition Konstantin Ananyev
                       ` (6 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, akhil.goyal, declan.doherty

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows upper layer to easily associate some user defined
data with the session.

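Later in this series (ses.c) the library itself uses this field to keep
a back-pointer from the security session to its rte_ipsec_session:

	/* set once at session prepare time */
	ss->security.ses->opaque_data = (uintptr_t)ss;

and, as a sketch, a consumer could recover it the same way:

	ss = (struct rte_ipsec_session *)(uintptr_t)ses->opaque_data;
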
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
---
 lib/librte_security/rte_security.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 718147e00..c8e438fdd 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -317,6 +317,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 3/9] net: add ESP trailer structure definition
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (3 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-11 17:25       ` Doherty, Declan
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 4/9] lib: introduce ipsec library Konstantin Ananyev
                       ` (5 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
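+
+/*
+ * Illustrative note: on inbound, after decryption the trailer is located
+ * at the very end of the ESP payload, just before the ICV, e.g.:
+ * espt = (struct esp_tail *)(p + plen - sizeof(struct esp_tail));
+ */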
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 4/9] lib: introduce ipsec library
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (4 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 3/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-11 17:25       ` Doherty, Declan
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API Konstantin Ananyev
                       ` (4 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec processing API.
This initial commit provides a base API to manage
IPsec Security Association (SA) objects.

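A minimal usage sketch (error handling trimmed, 'prm' contents are
application specific):

	struct rte_ipsec_sa_prm prm;
	struct rte_ipsec_sa *sa;
	int32_t sz;

	/* fill prm (ipsec_xform, crypto_xform, etc.) ... */

	sz = rte_ipsec_sa_size(&prm);
	if (sz < 0)
		return sz;

	sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	if (sa == NULL)
		return -ENOMEM;

	sz = rte_ipsec_sa_init(sa, &prm, sz);
	if (sz < 0) {
		rte_free(sa);
		return sz;
	}
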
Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 MAINTAINERS                            |   5 +
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 139 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 327 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  77 ++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 12 files changed, 651 insertions(+)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba31208..3cf0a84a2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1071,6 +1071,11 @@ F: doc/guides/prog_guide/pdump_lib.rst
 F: app/pdump/
 F: doc/guides/tools/pdump.rst
 
+IPsec - EXPERIMENTAL
+M: Konstantin Ananyev <konstantin.ananyev@intel.com>
+F: lib/librte_ipsec/
+M: Bernard Iremonger <bernard.iremonger@intel.com>
+F: test/test/test_ipsec.c
 
 Packet Framework
 ----------------
diff --git a/config/common_base b/config/common_base
index d12ae98bc..32499d772 100644
--- a/config/common_base
+++ b/config/common_base
@@ -925,6 +925,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index b7370ef97..5dc774604 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -106,6 +106,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..7758dcc6d
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..4471814f9
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * for a given window size, calculate the required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size for the
+ * structure that holds the replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
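+
+/*
+ * Illustrative example: for a 128-packet replay window,
+ * replay_num_bucket(128) yields 2 buckets (128/64, already a power of 2),
+ * and rsn_size(2) = 8 + 2 * 8 = 24 bytes, rounded up to one
+ * 64-byte cache line.
+ */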
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..4e36fd99b
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	uint32_t replay_win_sz;
+	/**< window size to enable sequence replay attack handling.
+	 * Replay checking is disabled if the window size is 0.
+	 */
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG_IPV,
+	RTE_SATP_LOG_PROTO,
+	RTE_SATP_LOG_DIR,
+	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
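+
+/*
+ * Example (illustrative): an inbound ESP SA in tunnel mode with an IPv4
+ * inner packet and an IPv4 outer header has the type value:
+ * RTE_IPSEC_SATP_IPV4 | RTE_IPSEC_SATP_PROTO_ESP |
+ * RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4
+ */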
+
+/**
+ * Get the type of the given SA.
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * Initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   size of the provided buffer for SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * Cleanup the given SA.
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..f927a82bf
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,327 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+
+static int
+check_crypto_xform(struct crypto_xform *xform)
+{
+	uintptr_t p;
+
+	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
+
+	/* either aead, or both auth and cipher, must be non-NULL */
+	if (xform->aead) {
+		if (p)
+			return -EINVAL;
+	} else if (p == (uintptr_t)xform->auth) {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+fill_crypto_xform(struct crypto_xform *xform,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf;
+
+	memset(xform, 0, sizeof(*xform));
+
+	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
+		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->auth != NULL)
+				return -EINVAL;
+			xform->auth = &xf->auth;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->cipher != NULL)
+				return -EINVAL;
+			xform->cipher = &xf->cipher;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+			if (xform->aead != NULL)
+				return -EINVAL;
+			xform->aead = &xf->aead;
+		} else
+			return -EINVAL;
+	}
+
+	return check_crypto_xform(xform);
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static int
+fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else if (prm->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV6)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+		else
+			return -EINVAL;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else if (prm->ipsec_xform.mode ==
+			RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else
+		return -EINVAL;
+
+	*type = tp;
+	return 0;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ once new algorithms are supported */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may differ once new algorithms are supported */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = 4;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = 4;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_MAX_IV_SIZE;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+	int32_t rc;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, prm);
+	if (rc != 0)
+		return rc;
+
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0)
+		rte_ipsec_sa_fini(sa);
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..5d113891a
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index bb7f443f9..69684ef14 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'meter', 'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	#ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d979d..f4cd75252 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (5 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 4/9] lib: introduce ipsec library Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-11 17:25       ` Doherty, Declan
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 6/9] ipsec: implement " Konstantin Ananyev
                       ` (3 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Introduce Security Association (SA-level) data-path API.
It operates at the SA level and provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).
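
A sketch of the intended lookaside (RTE_SECURITY_ACTION_TYPE_NONE)
data-path, assuming 'ss' was set up with rte_ipsec_session_prepare()
and 'cop[]' was allocated from a crypto op mempool:

	uint16_t k, n;

	k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
	k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);
	/* ... dequeue once the crypto device is done ... */
	n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, k);
	/* (mapping dequeued ops back to their mbufs is omitted here) */
	n = rte_ipsec_pkt_process(ss, mb, n);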

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 154 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  45 ++++++++
 7 files changed, 230 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 7758dcc6d..79f187fae 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..429d4bf38
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ * IKEv2 protocol support is currently out of scope of this library.
+ * Though it tries to define the related API in such a way that it could
+ * be adopted by an IKEv2 implementation.
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them or process packets that are subject to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that inside given rte_ipsec_session crypto/security fields
+ * are filled correctly and sets up function pointers based on these values.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with the given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them or
+ * process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up; if necessary, IP headers updated and ESP(AH) fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
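+
+/*
+ * Illustrative usage for inline-crypto processing (no crypto ops
+ * involved), assuming *ss* was set up via rte_ipsec_session_prepare():
+ *	k = rte_ipsec_pkt_process(ss, mb, num);
+ * mbufs at positions [k, num) were not processed successfully and
+ * rte_errno holds the failure reason.
+ */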
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..d1c52d7ca 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,9 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_session_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index f927a82bf..e4c5361e7 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -325,3 +325,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 5d113891a..050a6d7ae 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -74,4 +74,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..562c1423e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
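
A sketch of tying it together for the plain cryptodev case
('sa' and 'crypto_ses' stand for an already initialised SA and an
already configured symmetric crypto session):

	struct rte_ipsec_session ss = {
		.sa = sa,
		.type = RTE_SECURITY_ACTION_TYPE_NONE,
		.crypto = { .ses = crypto_ses },
	};

	rc = rte_ipsec_session_prepare(&ss);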
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 6/9] ipsec: implement SA data-path API
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (6 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-12 17:47       ` Doherty, Declan
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                       ` (2 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types, the SQN check/update is done by SW (inside the
library). For the last two types, it is the HW/PMD responsibility.
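
For reference, a worked example of the ESN reconstruction logic added in
ipsec_sqn.h (values picked purely for illustration): with the top of the
window t = 0x1_0000_0010 and window size w = 64, a received low 32 bits
sqn = 0x0000_0005 lies in the current subspace and reconstructs to
0x1_0000_0005, while sqn = 0xffff_fff0 falls into the lower part of the
window, still in the previous subspace, and reconstructs to
0x0_ffff_fff0.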

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/crypto.h    |  123 ++++
 lib/librte_ipsec/iph.h       |   84 +++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1044 +++++++++++++++++++++++++++++++++-
 5 files changed, 1480 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..61f5c1433
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+/*
+ * AES-GCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that should be done by the driver itself.
+ */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, section 5, AAD Construction.
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else {
+		aad->sqn.u32[0] = sqn_low32(sqn);
+		aad->sqn.u32[1] = 0;
+	}
+	aad->align0 = 0;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves the ICV 4B down and inserts SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves the ICV 4B up and removes SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..3fd93016d
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+/* update original ip header fields for transport case */
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+/* update original and new ip header fields for tunnel case */
+static inline void
+update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, rte_be16_t pid)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		v4h = p;
+		v4h->packet_id = pid;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else {
+		v6h = p;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	}
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 4471814f9..a33ff9cca 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * gets SQN.hi32 bits, SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * gets SQN.low32 bits, SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * gets SQN.low16 bits, SQN is supposed to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * for a given window size, calculate the required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC 4303 A2.1, reconstruct the high-order bits of the
+ * sequence number; uses 32-bit arithmetic inside, returns uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To achieve the ability to have multiple readers and a single writer
+ * for SA replay window information and sequence number (RSN),
+ * a basic RCU schema is used:
+ * the SA has 2 copies of the RSN (one for readers, another for the writer).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and writer.
+ * The writer is responsible for making a copy of the reader RSN,
+ * updating it and marking the newly updated RSN as the readers' one.
+ * That approach is intended to minimize contention and cache sharing
+ * between writer and readers.
+ */
+
 /**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds the replay window and sequence number (RSN) information.
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index e4c5361e7..bb56f42eb 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -207,6 +211,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -326,18 +331,1053 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* size of ipsec protected data */
+	l2len = mb->l2_len;
+	plen = mb->pkt_len - mb->l2_len;
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
+			sqn_low16(sqc));
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
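+
+/*
+ * Resulting outbound tunnel packet layout (illustrative; SQN.hi is
+ * present only for ESN, AAD only for AEAD, and both live in the mbuf
+ * tailroom without being transmitted):
+ * |tun hdr|esp hdr|IV|payload|padding|esp tail|[SQN.hi]|ICV|[AAD]|
+ */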
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM .
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy unprepared mbufs beyond the good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update ip header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
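+/*
+ * setup crypto op and crypto sym op for ESP inbound tunnel packet.
+ */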
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
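+/*
+ * setup/update packet data and metadata for ESP inbound tunnel case:
+ * check SQN against the replay window, locate the ICV and fill extra
+ * fields (SQN.hi, AAD) when required.
+ */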
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* we have to allocate space for AAD somewhere;
+	 * for now just use the free trailing space of the last segment.
+	 * It would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (space for the IV is already reserved inside the cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing has already been done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
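+/*
+ * RTE_SECURITY_ACTION_TYPE_NONE: select prepare/process callbacks
+ * based on SA direction and mode; outbound SAs that need SQN.hi in
+ * the digest (sqh_len != 0) get the extra outb_sqh_process() pass.
+ */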
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
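+/*
+ * RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO: no prepare step is needed;
+ * inbound packets reuse the lookaside-none process routines, while
+ * outbound packets use the inline variants.
+ */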
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 7/9] ipsec: rework SA replay window/SQN for MT environment
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (7 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 6/9] ipsec: implement " Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-13 12:14       ` Doherty, Declan
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 9/9] test/ipsec: introduce functional test Konstantin Ananyev
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

With these changes functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
 can be safely used in an MT environment, as long as the user can
 guarantee that they obey the multiple readers/single writer model for
 SQN+replay_window operations.
 To be more specific:
 for an outbound SA there are no restrictions.
 for an inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_pkt_process() for a given SA.
 Note that it is the caller's responsibility to maintain the correct
 order of packets to be processed.
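
A minimal sketch of how an application could request this behaviour
(the helper function is hypothetical, shown for illustration only):

	#include <rte_ipsec.h>

	/* create an SA that may be shared by several lcores */
	static int
	mt_safe_sa_init(struct rte_ipsec_sa *sa,
		struct rte_ipsec_sa_prm *prm, uint32_t sz)
	{
		/* request MT-safe SQN/replay-window handling */
		prm->flags |= RTE_IPSEC_SAFLAG_SQN_ATOM;
		return rte_ipsec_sa_init(sa, prm, sz);
	}

With the flag set, rte_ipsec_pkt_crypto_prepare() can be invoked
concurrently for an outbound SA; for an inbound SA the caller still
has to serialize rte_ipsec_pkt_process() as described above.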

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  27 ++++++++
 lib/librte_ipsec/sa.c           |  23 +++++--
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 176 insertions(+), 8 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index a33ff9cca..ee5e35978 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
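+/* true when the SA was created with RTE_IPSEC_SAFLAG_SQN_ATOM */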
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
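+/*
+ * The helpers below implement the multiple readers/single writer
+ * model for the inbound replay window: readers take a read lock on
+ * the currently active copy, while the writer updates a shadow copy
+ * and then flips the read index to it.
+ */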
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn == sa->sqn.inb.rsn[n]);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index 4e36fd99b..35a0afec1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -53,6 +53,27 @@ struct rte_ipsec_sa_prm {
 	 */
 };
 
+/**
+ * Indicates whether the SA will (or will not) need 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means:
+ * functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in an MT environment, as long as the user can
+ * guarantee that they obey the multiple readers/single writer model
+ * for SQN+replay_window operations.
+ * To be more specific:
+ * for an outbound SA there are no restrictions.
+ * for an inbound SA the caller has to guarantee that at any given
+ * moment only one thread is executing rte_ipsec_pkt_process() for a
+ * given SA.
+ * Note that it is the caller's responsibility to maintain the correct
+ * order of packets to be processed.
+ * In other words, it is the caller's responsibility to serialize
+ * process() invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
 * SA type is a 64-bit value that contains the following information:
  * - IP version (IPv4/IPv6)
@@ -60,6 +81,7 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - whether SA SQN operations are 'atomic'
  * ...
  */
 
@@ -68,6 +90,7 @@ enum {
 	RTE_SATP_LOG_PROTO,
 	RTE_SATP_LOG_DIR,
 	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_SQN = RTE_SATP_LOG_MODE + 2,
 	RTE_SATP_LOG_NUM
 };
 
@@ -88,6 +111,10 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG_SQN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index bb56f42eb..8abf3d1b1 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -90,6 +90,9 @@ ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -150,6 +153,12 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	} else
 		return -EINVAL;
 
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	*type = tp;
 	return 0;
 }
@@ -174,7 +183,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -325,7 +334,10 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		sa->replay.win_sz = prm->replay_win_sz;
 		sa->replay.nb_bucket = nb;
 		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+		sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+		if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+			sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+				((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb));
 	}
 
 	return sz;
@@ -824,7 +836,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -843,6 +855,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -987,7 +1001,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -997,6 +1011,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 050a6d7ae..7dc9933f1 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -28,7 +30,11 @@ union sym_op_data {
 	};
 };
 
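+/*
+ * two copies of the replay window are kept so that readers can keep
+ * using one copy while the writer updates the other;
+ * REPLAY_SQN_NEXT() flips between them.
+ */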
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -66,10 +72,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case of an SA handled by multiple threads, the *sqn*
+	 * cacheline could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 8/9] ipsec: helper functions to group completed crypto-ops
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (8 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-13 12:14       ` Doherty, Declan
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 9/9] test/ipsec: introduce functional test Konstantin Ananyev
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.
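
A minimal usage sketch of the new helpers (dev_id, qid and BURST are
assumptions for illustration; error handling is omitted):

	#include <rte_cryptodev.h>
	#include <rte_ipsec.h>

	struct rte_crypto_op *cop[BURST];
	struct rte_mbuf *mb[BURST];
	struct rte_ipsec_group grp[BURST];
	uint16_t i, k, n, ng;

	n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, BURST);
	ng = rte_ipsec_pkt_crypto_group(
		(const struct rte_crypto_op **)(uintptr_t)cop,
		mb, grp, n);
	for (i = 0; i != ng; i++) {
		struct rte_ipsec_session *ss = grp[i].id.ptr;

		/* the first k mbufs in grp[i].m completed successfully */
		k = rte_ipsec_pkt_process(ss, grp[i].m, grp[i].cnt);
	}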

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 79f187fae..98c52f388 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 429d4bf38..0df7ea907 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -147,6 +147,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..d264d7e78
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly;
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take a crypto-op as input and extract a pointer to the related
+ * ipsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with an undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond the mbufs of the last valid
+ * group. It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* to output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalise it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index d1c52d7ca..0f91fb134 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_session_prepare;
 	rte_ipsec_pkt_process;
@@ -8,6 +9,7 @@ EXPERIMENTAL {
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 
 	local: *;
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v3 9/9] test/ipsec: introduce functional test
  2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                       ` (9 preceding siblings ...)
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-12-06 15:38     ` Konstantin Ananyev
  2018-12-13 12:54       ` Doherty, Declan
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-06 15:38 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Mohammad Abdul Awal, Bernard Iremonger

Create functional test for librte_ipsec.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2209 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2215 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 554e9945f..d4f689417 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -48,6 +48,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -115,6 +116,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -179,6 +181,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..95a447174
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - obuf and ibuf are usually the same,
+		 * so a check whether they point at the same address is
+		 * necessary to avoid freeing the mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
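+/* default ESP self-describing padding: bytes 1, 2, 3, ... (RFC 4303) */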
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
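+/* with the NULL cipher the "encrypted" payload equals the plaintext */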
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capablity(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capablity(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, pool, j);
+}
+
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->crypto_xforms = &ut_params->auth_xform;
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
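+/*
+ * Allocate and initialise SA number j: fill the SA parameters, size and
+ * allocate the rte_ipsec_sa object, create the backing crypto/security
+ * session and bind everything together via rte_ipsec_session_prepare().
+ */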
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa */
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc != 0)
+		return rc;
+
+	return rte_ipsec_session_prepare(&ut->ss[j]);
+}
+
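+/*
+ * Full lookaside crypto path for a burst of packets on a single SA:
+ * crypto_prepare -> enqueue -> dequeue -> crypto_group -> process.
+ * Expects the whole burst to complete in one dequeue call and to land
+ * in a single group, which holds for the NULL algorithms used here.
+ */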
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
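+/*
+ * Same lookaside flow as crypto_ipsec(), but packets alternate between
+ * two SAs, so rte_ipsec_pkt_crypto_group() is expected to yield one
+ * group per packet (BURST_SIZE groups in total).
+ */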
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
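+/*
+ * Packet-index to SA-index mapping that splits a BURST_SIZE burst into
+ * four consecutive groups: [0..3] -> SA 0, [4..11] -> SA 1,
+ * [12..20] -> SA 0, [21..BURST_SIZE-1] -> SA 1.
+ */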
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* group packets into 4 different-sized groups, 2 per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
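+/*
+ * Verify that group grp_ind holds exactly the slice of obuf[] implied
+ * by the packet-to-SA mapping above.
+ */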
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j =  PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%lx\n", test_cfg[i].flags);
+	printf("	pkt_sz %lu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		r = j % 2;
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite  = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-12-11 17:24       ` Doherty, Declan
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                         ` (10 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Doherty, Declan @ 2018-12-11 17:24 UTC (permalink / raw)
  To: Konstantin Ananyev, dev

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> That allows upper layer to easily associate some user defined
> data with the session.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Fiona Trahe <fiona.trahe@intel.com>
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> ---
>   lib/librte_cryptodev/rte_cryptodev.h | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index 4099823f1..009860e7b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>    * has a fixed algo, key, op-type, digest_len etc.
>    */
>   struct rte_cryptodev_sym_session {
> +	uint64_t opaque_data;
> +	/**< Opaque user defined data */
>   	__extension__ void *sess_private_data[0];
>   	/**< Private symmetric session material */
>   };
> 

Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 2/9] security: add opaque userdata pointer into security session
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-12-11 17:25       ` Doherty, Declan
  0 siblings, 0 replies; 194+ messages in thread
From: Doherty, Declan @ 2018-12-11 17:25 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: akhil.goyal

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_security_session.
> That allows upper layer to easily associate some user defined
> data with the session.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> ---
>   lib/librte_security/rte_security.h | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index 718147e00..c8e438fdd 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -317,6 +317,8 @@ struct rte_security_session_conf {
>   struct rte_security_session {
>   	void *sess_private_data;
>   	/**< Private session material */
> +	uint64_t opaque_data;
> +	/**< Opaque user defined data */
>   };
>   
>   /**
> 

Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 3/9] net: add ESP trailer structure definition
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 3/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-12-11 17:25       ` Doherty, Declan
  0 siblings, 0 replies; 194+ messages in thread
From: Doherty, Declan @ 2018-12-11 17:25 UTC (permalink / raw)
  To: Konstantin Ananyev, dev

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> ---
>   lib/librte_net/rte_esp.h | 10 +++++++++-
>   1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
> index f77ec2eb2..8e1b3d2dd 100644
> --- a/lib/librte_net/rte_esp.h
> +++ b/lib/librte_net/rte_esp.h
> @@ -11,7 +11,7 @@
>    * ESP-related defines
>    */
>   
> -#include <stdint.h>
> +#include <rte_byteorder.h>
>   
>   #ifdef __cplusplus
>   extern "C" {
> @@ -25,6 +25,14 @@ struct esp_hdr {
>   	rte_be32_t seq;  /**< packet sequence number */
>   } __attribute__((__packed__));
>   
> +/**
> + * ESP Trailer
> + */
> +struct esp_tail {
> +	uint8_t pad_len;     /**< number of pad bytes (0-255) */
> +	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
> +} __attribute__((__packed__));
> +
>   #ifdef __cplusplus
>   }
>   #endif
> 


Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/9] lib: introduce ipsec library
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 4/9] lib: introduce ipsec library Konstantin Ananyev
@ 2018-12-11 17:25       ` Doherty, Declan
  0 siblings, 0 replies; 194+ messages in thread
From: Doherty, Declan @ 2018-12-11 17:25 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Mohammad Abdul Awal

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Introduce librte_ipsec library.
> The library is supposed to utilize existing DPDK crypto-dev and
> security API to provide application with transparent IPsec processing API.
> That initial commit provides some base API to manage
> IPsec Security Association (SA) object.
> 

Some cosmetic changes suggested, otherwise looks fine to me.

> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   MAINTAINERS                            |   5 +
...

> +
> +#ifndef _IPSEC_SQN_H_
> +#define _IPSEC_SQN_H_
> +
> +#define WINDOW_BUCKET_BITS		6 /* uint64_t */
> +#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)

1 << 6 is a really confusing way of defining a 64-bit bucket size; is it 
necessary to define it this way?

> +#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
> +
> +/* minimum number of bucket, power of 2*/
> +#define WINDOW_BUCKET_MIN		2
> +#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
> +
> +#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
> +
> +/*
> + * for given size, calculate required number of buckets.
> + */
> +static uint32_t
> +replay_num_bucket(uint32_t wsz)
> +{
> +	uint32_t nb;
> +
> +	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
> +		WINDOW_BUCKET_SIZE);
> +	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
> +
> +	return nb;
> +}
> +
> +/**
> + * Based on number of buckets calculated required size for the
> + * structure that holds replay window and sequnce number (RSN) information.

                                              ^^ typo

> + */
> +static size_t
> +rsn_size(uint32_t nb_bucket)
> +{
> +	size_t sz;
> +	struct replay_sqn *rsn;
> +
> +	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
> +	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
> +	return sz;
> +}
...

> +/**
> + * SA type is an 64-bit value that contain the following information:
> + * - IP version (IPv4/IPv6)
> + * - IPsec proto (ESP/AH)
> + * - inbound/outbound
> + * - mode (TRANSPORT/TUNNEL)
> + * - for TUNNEL outer IP version (IPv4/IPv6)
> + * ...
> + */
> +
> +enum {
> +	RTE_SATP_LOG_IPV,
> +	RTE_SATP_LOG_PROTO,
> +	RTE_SATP_LOG_DIR,
> +	RTE_SATP_LOG_MODE,
> +	RTE_SATP_LOG_NUM
> +};
> +
> +#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
> +#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
> +#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
> +
> +#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
> +#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
> +#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
> +
> +#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
> +#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
> +#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
> +
> +#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
> +#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
> +#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
> +#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
> +

for readability in the rest of the code I would suggest using either 
RTE_IPSEC_SA_TYPE_ or just RTE_IPSEC_SA_ in the definitions above. Also, 
in the enumeration it's not clear to me what the _LOG_ means; it's being 
used as the offset, so maybe _OFFSET_ would be a better name, but I 
think it might be clearer if absolute bit offsets were used.
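
For illustration only (not a concrete naming proposal), the
absolute-offset style I have in mind would read something like:

	#define RTE_IPSEC_SA_IPV_MASK		(1ULL << 0)
	#define RTE_IPSEC_SA_PROTO_MASK	(1ULL << 1)
	#define RTE_IPSEC_SA_DIR_MASK		(1ULL << 2)
	#define RTE_IPSEC_SA_MODE_MASK		(3ULL << 3)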

> +/**
> + * get type of given SA
> + * @return
> + *   SA type value.
> + */
> +uint64_t __rte_experimental
> +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
> +
> +/**
> + * Calculate requied SA size based on provided input parameters.
> + * @param prm
> + *   Parameters that wil be used to initialise SA object.
                         ^^ typo
> + * @return
> + *   - Actual size required for SA with given parameters.
> + *   - -EINVAL if the parameters are invalid.
> + */
> +int __rte_experimental
> +rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
> +
> +/**

...

>   _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
> 


Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-12-11 17:25       ` Doherty, Declan
  2018-12-12  7:37         ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Doherty, Declan @ 2018-12-11 17:25 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Mohammad Abdul Awal

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Introduce Security Association (SA-level) data-path API
> Operates at SA level, provides functions to:
>      - initialize/teardown SA object
>      - process inbound/outbound ESP/AH packets associated with the given SA
>        (decrypt/encrypt, authenticate, check integrity,
>        add/remove ESP/AH related headers and data, etc.).
> 
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---

...

> +#ifndef _RTE_IPSEC_H_
> +#define _RTE_IPSEC_H_
> +
> +/**
> + * @file rte_ipsec.h
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * RTE IPsec support.
> + * librte_ipsec provides a framework for data-path IPsec protocol
> + * processing (ESP/AH).
> + * IKEv2 protocol support right now is out of scope of that draft.
> + * Though it tries to define related API in such way, that it could be adopted
> + * by IKEv2 implementation.
> + */

I think you can drop the IKE note from the header, as key exchange is 
covered under a completely different RFC to the base IPsec one.
> +
> +#include <rte_ipsec_sa.h>
> +#include <rte_mbuf.h>
> +

...

> 


Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API
  2018-12-11 17:25       ` Doherty, Declan
@ 2018-12-12  7:37         ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-12  7:37 UTC (permalink / raw)
  To: Doherty, Declan, dev; +Cc: Awal, Mohammad Abdul



> -----Original Message-----
> From: Doherty, Declan
> Sent: Tuesday, December 11, 2018 5:26 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org
> Cc: Awal, Mohammad Abdul <mohammad.abdul.awal@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API
> 
> On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> > Introduce Security Association (SA-level) data-path API
> > Operates at SA level, provides functions to:
> >      - initialize/teardown SA object
> >      - process inbound/outbound ESP/AH packets associated with the given SA
> >        (decrypt/encrypt, authenticate, check integrity,
> >        add/remove ESP/AH related headers and data, etc.).
> >
> > Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> 
> ...
> 
> > +#ifndef _RTE_IPSEC_H_
> > +#define _RTE_IPSEC_H_
> > +
> > +/**
> > + * @file rte_ipsec.h
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * RTE IPsec support.
> > + * librte_ipsec provides a framework for data-path IPsec protocol
> > + * processing (ESP/AH).
> > + * IKEv2 protocol support right now is out of scope of that draft.
> > + * Though it tries to define related API in such way, that it could be adopted
> > + * by IKEv2 implementation.
> > + */
> 
> I think you can drop the IKE note from the header, as key exchange is
> covered under a completely different RFC to the base IPsec one.

Makes sense, will do in v4.
Konstantin

> > +
> > +#include <rte_ipsec_sa.h>
> > +#include <rte_mbuf.h>
> > +
> 
> ...
> 
> >
> 
> 
> Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 6/9] ipsec: implement SA data-path API
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 6/9] ipsec: implement " Konstantin Ananyev
@ 2018-12-12 17:47       ` Doherty, Declan
  2018-12-13 11:21         ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Doherty, Declan @ 2018-12-12 17:47 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Mohammad Abdul Awal

Only some minor cosmetic questions

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Provide implementation for rte_ipsec_pkt_crypto_prepare() and
...

> +/*
> + * Move preceding (L3) headers up to free space for ESP header and IV.
> + */
> +static inline void
> +insert_esph(char *np, char *op, uint32_t hlen)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != hlen; i++)
> +		np[i] = op[i];
> +}
> +
> +/* update original ip header fields for trasnport case */

                                            ^^typo
> +static inline int
> +update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> +		uint32_t l2len, uint32_t l3len, uint8_t proto)
> +{
> +	struct ipv4_hdr *v4h;
> +	struct ipv6_hdr *v6h;
> +	int32_t rc;
> +
> +	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
> +		v4h = p;
> +		rc = v4h->next_proto_id;
> +		v4h->next_proto_id = proto;
> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> +	} else if (l3len == sizeof(*v6h)) {

why are you using a different method of identifying ipv6 vs ipv4? would 
checking (sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6 
not be valid here also?

> 
...
> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> index 4471814f9..a33ff9cca 100644
> --- a/lib/librte_ipsec/ipsec_sqn.h
> +++ b/lib/librte_ipsec/ipsec_sqn.h
> @@ -15,6 +15,45 @@
>   
>   #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)

Would it make more sense to have ESN as an RTE_IPSEC_SATP_ property, so 
it can be retrieved through the sa type API?

>   
...>

Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 6/9] ipsec: implement SA data-path API
  2018-12-12 17:47       ` Doherty, Declan
@ 2018-12-13 11:21         ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-13 11:21 UTC (permalink / raw)
  To: Doherty, Declan, dev; +Cc: Awal, Mohammad Abdul

Hi Declan,

> 
> Only some minor cosmetic questions
> 
> On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> > Provide implementation for rte_ipsec_pkt_crypto_prepare() and
> ...
> 
> > +/*
> > + * Move preceding (L3) headers up to free space for ESP header and IV.
> > + */
> > +static inline void
> > +insert_esph(char *np, char *op, uint32_t hlen)
> > +{
> > +	uint32_t i;
> > +
> > +	for (i = 0; i != hlen; i++)
> > +		np[i] = op[i];
> > +}
> > +
> > +/* update original ip header fields for trasnport case */
> 
>                                             ^^typo
> > +static inline int
> > +update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> > +		uint32_t l2len, uint32_t l3len, uint8_t proto)
> > +{
> > +	struct ipv4_hdr *v4h;
> > +	struct ipv6_hdr *v6h;
> > +	int32_t rc;
> > +
> > +	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
> > +		v4h = p;
> > +		rc = v4h->next_proto_id;
> > +		v4h->next_proto_id = proto;
> > +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> > +	} else if (l3len == sizeof(*v6h)) {
> 
> why are you using a different method of identifying ipv6 vs ipv4, would
> checking (sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV6
> not be valid here also?

Because right now we don't have proper support for ipv6 extension headers here.
So we accept ipv4 packets and ipv6 packets without extension headers.
For ipv6 with xhdr it will return an error.
xhdr support is planned for 19.05
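
For illustration only, a hedged sketch of how the check might look once the
SATP type test can be used for the v4/v6 split (a hypothetical variant, not
part of the patch; the l3len test remains only to reject IPv6 packets with
extension headers):

static inline int
update_trs_l3hdr_v2(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
		uint32_t l2len, uint32_t l3len, uint8_t proto)
{
	struct ipv4_hdr *v4h;
	struct ipv6_hdr *v6h;
	int32_t rc;

	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
		v4h = p;
		rc = v4h->next_proto_id;
		v4h->next_proto_id = proto;
		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
	} else if (l3len != sizeof(*v6h)) {
		/* IPv6 with extension headers: not supported yet */
		rc = -ENOTSUP;
	} else {
		v6h = p;
		rc = v6h->proto;
		v6h->proto = proto;
		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
				sizeof(*v6h));
	}

	return rc;
}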

> 
> >
> ...
> > diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> > index 4471814f9..a33ff9cca 100644
> > --- a/lib/librte_ipsec/ipsec_sqn.h
> > +++ b/lib/librte_ipsec/ipsec_sqn.h
> > @@ -15,6 +15,45 @@
> >
> >   #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
> 
> Would it make more sense to have ESN as RTE_IPSEC_SATP_ property so it
> can be retrieve through the sa type API?

We need sqn_mask in our sqn calculations anyway, so internally it is more
convenient to use it.
Though it probably is a good idea to add ESN into SATP too, so external
users can access this info.
Will try to add it to v4.
Konstantin
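
For illustration, a hypothetical sketch of what such a SATP flag could look
like (names and bit positions below are assumptions, not the final API;
note that MODE occupies two bits, so the next free bit is MODE + 2):

enum {
	RTE_SATP_LOG_IPV,
	RTE_SATP_LOG_PROTO,
	RTE_SATP_LOG_DIR,
	RTE_SATP_LOG_MODE,
	RTE_SATP_LOG_ESN = RTE_SATP_LOG_MODE + 2, /* hypothetical */
	RTE_SATP_LOG_NUM
};

#define RTE_IPSEC_SATP_ESN_MASK		(1ULL << RTE_SATP_LOG_ESN)
#define RTE_IPSEC_SATP_ESN_DISABLE	(0ULL << RTE_SATP_LOG_ESN)
#define RTE_IPSEC_SATP_ESN_ENABLE	(1ULL << RTE_SATP_LOG_ESN)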

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v3 7/9] ipsec: rework SA replay window/SQN for MT environment
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2018-12-13 12:14       ` Doherty, Declan
  0 siblings, 0 replies; 194+ messages in thread
From: Doherty, Declan @ 2018-12-13 12:14 UTC (permalink / raw)
  To: Konstantin Ananyev, dev

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> With these changes functions:
>    - rte_ipsec_pkt_crypto_prepare
>    - rte_ipsec_pkt_process
> >   can be safely used in an MT environment, as long as the user can
> >   guarantee that they obey the multiple readers/single writer model
> >   for SQN+replay_window operations.
>   To be more specific:
>   for outbound SA there are no restrictions.
>   for inbound SA the caller has to guarantee that at any given moment
>   only one thread is executing rte_ipsec_pkt_process() for given SA.
> >   Note that it is the caller's responsibility to maintain the correct
> >   order of packets to be processed.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
...
> 

Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread
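
For illustration, a minimal sketch of one way an application could honour
the inbound single-writer rule described above, assuming a hypothetical
per-SA spinlock maintained by the application (not part of the library):

#include <rte_ipsec.h>
#include <rte_spinlock.h>

/* application-side wrapper: at most one thread executes
 * rte_ipsec_pkt_process() for a given inbound SA at any moment */
struct app_inb_sa {
	struct rte_ipsec_session ss;
	rte_spinlock_t lock;
};

static uint16_t
app_inb_process(struct app_inb_sa *sa, struct rte_mbuf *mb[], uint16_t num)
{
	uint16_t n;

	rte_spinlock_lock(&sa->lock);
	n = rte_ipsec_pkt_process(&sa->ss, mb, num);
	rte_spinlock_unlock(&sa->lock);
	return n;
}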

* Re: [dpdk-dev] [PATCH v3 8/9] ipsec: helper functions to group completed crypto-ops
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-12-13 12:14       ` Doherty, Declan
  0 siblings, 0 replies; 194+ messages in thread
From: Doherty, Declan @ 2018-12-13 12:14 UTC (permalink / raw)
  To: Konstantin Ananyev, dev

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Introduce helper functions to process completed crypto-ops
> and group related packets by sessions they belong to.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
...
> 

Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread
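
For context, a hedged sketch of the kind of grouping this refers to (the
library's actual helper names are not shown in this thread; this is an
application-side illustration that assumes the session pointer was stored
in the crypto session's opaque_data, as done elsewhere in this series):

#include <rte_cryptodev.h>

/* count how many leading ops in cop[] belong to the same session */
static uint16_t
front_run_len(struct rte_crypto_op * const cop[], uint16_t num)
{
	uint16_t i;
	uint64_t ses;

	ses = cop[0]->sym->session->opaque_data;
	for (i = 1; i != num; i++) {
		if (cop[i]->sym->session->opaque_data != ses)
			break;
	}
	return i;
}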

* Re: [dpdk-dev] [PATCH v3 9/9] test/ipsec: introduce functional test
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 9/9] test/ipsec: introduce functional test Konstantin Ananyev
@ 2018-12-13 12:54       ` Doherty, Declan
  0 siblings, 0 replies; 194+ messages in thread
From: Doherty, Declan @ 2018-12-13 12:54 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Mohammad Abdul Awal, Bernard Iremonger

Can you add a note to the commit message that the tests require the null
crypto PMD to pass successfully; an ERR message in the test suite
initialisation if no null PMD is found would also be good.

On 06/12/2018 3:38 PM, Konstantin Ananyev wrote:
> Create functional test for librte_ipsec.
> 
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
...
> 

Acked-by: Declan Doherty <declan.doherty@intel.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2018-12-11 17:24       ` Doherty, Declan
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19  9:26         ` Akhil Goyal
                           ` (11 more replies)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
                         ` (9 subsequent siblings)
  11 siblings, 12 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows upper layer to easily associate some user defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
-- 
2.17.1
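
For context, a minimal sketch (not part of the patch) of how an upper layer
could use the new field to attach its own per-session state; the context
structure and helper names are hypothetical:

#include <stdint.h>
#include <rte_cryptodev.h>

/* hypothetical application context associated with a session */
struct app_ses_ctx {
	uint32_t sa_idx;
};

static void
app_ctx_attach(struct rte_cryptodev_sym_session *ses,
	struct app_ses_ctx *ctx)
{
	/* stored here, recovered later on the dequeue side */
	ses->opaque_data = (uintptr_t)ctx;
}

static struct app_ses_ctx *
app_ctx_get(const struct rte_cryptodev_sym_session *ses)
{
	return (struct app_ses_ctx *)(uintptr_t)ses->opaque_data;
}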

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 02/10] security: add opaque userdata pointer into security session
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2018-12-11 17:24       ` Doherty, Declan
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19  9:26         ` Akhil Goyal
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 03/10] net: add ESP trailer structure definition Konstantin Ananyev
                         ` (8 subsequent siblings)
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows upper layer to easily associate some user defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_security/rte_security.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 718147e00..c8e438fdd 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -317,6 +317,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 03/10] net: add ESP trailer structure definition
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (2 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19  9:32         ` Akhil Goyal
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library Konstantin Ananyev
                         ` (7 subsequent siblings)
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1
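
For context, a minimal sketch (not part of the patch) of reading the new
trailer from the tail of a decrypted packet; the assumption here is that
the trailer and ICV sit in the last segment and icv_len is known from
the SA:

#include <rte_esp.h>
#include <rte_mbuf.h>

/* copy out the esp_tail that sits just before the ICV in the last
 * segment; returns -EINVAL if the segment is too short */
static int
read_esp_tail(struct rte_mbuf *mb, uint32_t icv_len, struct esp_tail *et)
{
	uint32_t ofs;
	struct rte_mbuf *ml;

	ml = rte_pktmbuf_lastseg(mb);
	if (ml->data_len < sizeof(*et) + icv_len)
		return -EINVAL;

	ofs = ml->data_len - icv_len - sizeof(*et);
	*et = *rte_pktmbuf_mtod_offset(ml, struct esp_tail *, ofs);
	return 0;
}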

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (3 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 03/10] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19 12:08         ` Akhil Goyal
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API Konstantin Ananyev
                         ` (6 subsequent siblings)
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev, Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec processing API.
That initial commit provides some base API to manage
IPsec Security Association (SA) object.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                            |   5 +
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 139 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 327 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  77 ++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 12 files changed, 651 insertions(+)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 71ba31208..3cf0a84a2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1071,6 +1071,11 @@ F: doc/guides/prog_guide/pdump_lib.rst
 F: app/pdump/
 F: doc/guides/tools/pdump.rst
 
+IPsec - EXPERIMENTAL
+M: Konstantin Ananyev <konstantin.ananyev@intel.com>
+F: lib/librte_ipsec/
+M: Bernard Iremonger <bernard.iremonger@intel.com>
+F: test/test/test_ipsec.c
 
 Packet Framework
 ----------------
diff --git a/config/common_base b/config/common_base
index d12ae98bc..32499d772 100644
--- a/config/common_base
+++ b/config/common_base
@@ -925,6 +925,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index b7370ef97..5dc774604 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -106,6 +106,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..7758dcc6d
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..1935f6e30
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * for given size, calculate required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size for the
+ * structure that holds replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..4e36fd99b
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	uint32_t replay_win_sz;
+	/**< window size to enable sequence replay attack handling.
+	 * Replay checking is disabled if the window size is 0.
+	 */
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG_IPV,
+	RTE_SATP_LOG_PROTO,
+	RTE_SATP_LOG_DIR,
+	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
+
+/**
+ * get type of given SA
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   size of the provided buffer for SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * cleanup SA
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..f927a82bf
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,327 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+
+static int
+check_crypto_xform(struct crypto_xform *xform)
+{
+	uintptr_t p;
+
+	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
+
+	/* either aead, or both auth and cipher, should be non-NULL */
+	if (xform->aead) {
+		if (p)
+			return -EINVAL;
+	} else if (p == (uintptr_t)xform->auth) {
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+fill_crypto_xform(struct crypto_xform *xform,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf;
+
+	memset(xform, 0, sizeof(*xform));
+
+	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
+		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->auth != NULL)
+				return -EINVAL;
+			xform->auth = &xf->auth;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->cipher != NULL)
+				return -EINVAL;
+			xform->cipher = &xf->cipher;
+		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+			if (xform->aead != NULL)
+				return -EINVAL;
+			xform->aead = &xf->aead;
+		} else
+			return -EINVAL;
+	}
+
+	return check_crypto_xform(xform);
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static int
+fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else if (prm->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV6)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+		else
+			return -EINVAL;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else if (prm->ipsec_xform.mode ==
+			RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else
+		return -EINVAL;
+
+	*type = tp;
+	return 0;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = 4;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = 4;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_MAX_IV_SIZE;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+	int32_t rc;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, prm);
+	if (rc != 0)
+		return rc;
+
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0)
+		rte_ipsec_sa_fini(sa);
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..5d113891a
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index bb7f443f9..69684ef14 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'meter', 'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	# ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 5699d979d..f4cd75252 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1
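
For context, a minimal sketch (not part of the patch) of the intended
allocation flow with the two new functions; the use of rte_zmalloc with
cache-line alignment is an assumption for illustration:

#include <rte_ipsec_sa.h>
#include <rte_malloc.h>

static struct rte_ipsec_sa *
app_sa_create(const struct rte_ipsec_sa_prm *prm)
{
	int32_t sz;
	struct rte_ipsec_sa *sa;

	/* query the required size for the given parameters */
	sz = rte_ipsec_sa_size(prm);
	if (sz < 0)
		return NULL;

	/* SA object is declared cache-line aligned */
	sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	if (sa == NULL)
		return NULL;

	if (rte_ipsec_sa_init(sa, prm, sz) < 0) {
		rte_free(sa);
		return NULL;
	}
	return sa;
}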

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (4 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19 13:04         ` Akhil Goyal
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 06/10] ipsec: implement " Konstantin Ananyev
                         ` (5 subsequent siblings)
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev, Mohammad Abdul Awal

Introduce Security Association (SA-level) data-path API
Operates at SA level, provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  45 ++++++++
 7 files changed, 227 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 7758dcc6d..79f187fae 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..cbcd861b5
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them or process packets that are subjects to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that inside given rte_ipsec_session crypto/security fields
+ * are filled correctly and setups function pointers based on these values.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are set up correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them or
+ * process packets that are subjects to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are set up correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up; if necessary, IP headers updated, ESP(AH) fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..d1c52d7ca 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,9 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_session_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index f927a82bf..e4c5361e7 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -325,3 +325,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 5d113891a..050a6d7ae 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -74,4 +74,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..562c1423e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
-- 
2.17.1
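
For context, a minimal sketch (not part of the patch) of the lookaside
crypto data-path these two functions are meant for; device/queue ids and
error handling are simplified, and in practice the enqueue/dequeue halves
would usually run at different points in the application's loop:

#include <rte_cryptodev.h>
#include <rte_ipsec.h>

static uint16_t
app_crypto_path(struct rte_ipsec_session *ss, uint8_t dev, uint16_t qp,
	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
{
	uint16_t i, k, n;

	/* fill crypto ops for the input packets of this session */
	k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);

	/* hand them over to the crypto device */
	n = rte_cryptodev_enqueue_burst(dev, qp, cop, k);

	/* ... later, once the device has finished with them ... */
	n = rte_cryptodev_dequeue_burst(dev, qp, cop, n);

	/* recover the mbufs and finalise IPsec processing */
	for (i = 0; i != n; i++)
		mb[i] = cop[i]->sym->m_src;
	return rte_ipsec_pkt_process(ss, mb, n);
}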

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (5 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19 15:32         ` Akhil Goyal
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                         ` (4 subsequent siblings)
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev, Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types SQN check/update is done by SW (inside the library).
For the last two types it is the HW/PMD responsibility.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/crypto.h    |  123 ++++
 lib/librte_ipsec/iph.h       |   84 +++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1044 +++++++++++++++++++++++++++++++++-
 5 files changed, 1480 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..61f5c1433
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+ /*
+  * AES-GCM devices have some specific requirements for IV and AAD formats.
+  * Ideally that would be done by the driver itself.
+  */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else {
+		aad->sqn.u32[0] = sqn_low32(sqn);
+		aad->sqn.u32[1] = 0;
+	}
+	aad->align0 = 0;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves ICV by 4B below, and inserts SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves ICV by 4B up, and removes SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..3fd93016d
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+/* update original ip header fields for transport case */
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+/* update original and new ip header fields for tunnel case */
+static inline void
+update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, rte_be16_t pid)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		v4h = p;
+		v4h->packet_id = pid;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else {
+		v6h = p;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	}
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 1935f6e30..6e18c34eb 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * gets SQN.hi32 bits, SQN supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * gets SQN.low32 bits, SQN supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * gets SQN.low16 bits, SQN supposed to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * for given size, calculate required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC4303 A2.1, determine the high-order bits of the sequence number.
+ * use 32bit arithmetic inside, return uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To achieve the ability to have multiple readers and a single writer
+ * for the SA replay window information and sequence number (RSN),
+ * a basic RCU schema is used:
+ * The SA has 2 copies of the RSN (one for readers, another for the writer).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and writer.
+ * The writer is responsible for making a copy of the reader RSN, updating
+ * it and marking the newly updated RSN as the readers' one.
+ * That approach is intended to minimize contention and cache sharing
+ * between writer and readers.
+ */
+
 /**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds replay window and sequence number (RSN) information.
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index e4c5361e7..bb56f42eb 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -207,6 +211,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -326,18 +331,1053 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* size of ipsec protected data */
+	l2len = mb->l2_len;
+	plen = mb->pkt_len - mb->l2_len;
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
+			sqn_low16(sqc));
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update ip header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* We have to allocate space for AAD somewhere;
+	 * right now just use the free trailing space at the last segment.
+	 * It would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (space for the IV is already reserved inside the cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing has already been done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 07/10] ipsec: rework SA replay window/SQN for MT environment
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (6 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 06/10] ipsec: implement " Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
                         ` (3 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev

With these changes functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
 can be safely used in MT environment, as long as the user can guarantee
 that they obey multiple readers/single writer model for SQN+replay_window
 operations.
 To be more specific:
 for outbound SA there are no restrictions.
 for inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_pkt_process() for a given SA.
 Note that it is the caller's responsibility to maintain the correct
 order of packets to be processed.
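
An illustrative sketch (not part of the patch) of how an application could
opt in to that behaviour at SA setup time; *sa* and *sa_size* are assumed
to be obtained beforehand via rte_ipsec_sa_size():

	struct rte_ipsec_sa_prm prm;
	int32_t rc;

	memset(&prm, 0, sizeof(prm));
	/* ... fill ipsec_xform, crypto_xform, replay window size, etc. ... */
	prm.flags = RTE_IPSEC_SAFLAG_SQN_ATOM;

	/*
	 * outbound SA: several lcores may now invoke
	 * rte_ipsec_pkt_crypto_prepare() concurrently for this SA;
	 * inbound SA: rte_ipsec_pkt_process() still has to be
	 * serialized per SA by the caller.
	 */
	rc = rte_ipsec_sa_init(sa, &prm, sa_size);
	if (rc < 0)
		return rc;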

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  33 ++++++++++
 lib/librte_ipsec/sa.c           |  29 ++++++--
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 188 insertions(+), 8 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 6e18c34eb..7de10bef5 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index 4e36fd99b..71a355e72 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -53,6 +53,27 @@ struct rte_ipsec_sa_prm {
 	 */
 };
 
+/**
+ * Indicates whether the SA will need 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means that the functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in MT environment, as long as the user can guarantee
+ * that they obey multiple readers/single writer model for SQN+replay_window
+ * operations.
+ * To be more specific:
+ * for outbound SA there are no restrictions.
+ * for inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_pkt_process() for a given SA.
+ * Note that it is the caller's responsibility to maintain the correct
+ * order of packets to be processed.
+ * In other words, it is the caller's responsibility to serialize
+ * process() invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is an 64-bit value that contain the following information:
  * - IP version (IPv4/IPv6)
@@ -60,6 +81,8 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
+ * - ESN enabled/disabled
  * ...
  */
 
@@ -68,6 +91,8 @@ enum {
 	RTE_SATP_LOG_PROTO,
 	RTE_SATP_LOG_DIR,
 	RTE_SATP_LOG_MODE,
+	RTE_SATP_LOG_SQN = RTE_SATP_LOG_MODE + 2,
+	RTE_SATP_LOG_ESN,
 	RTE_SATP_LOG_NUM
 };
 
@@ -88,6 +113,14 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG_SQN)
+
+#define RTE_IPSEC_SATP_ESN_MASK		(1ULL << RTE_SATP_LOG_ESN)
+#define RTE_IPSEC_SATP_ESN_ENABLE	(0ULL << RTE_SATP_LOG_ESN)
+#define RTE_IPSEC_SATP_ESN_DISABLE	(1ULL << RTE_SATP_LOG_ESN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index bb56f42eb..9823f4bbc 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -90,6 +90,9 @@ ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -150,6 +153,18 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	} else
 		return -EINVAL;
 
+	/* check for ESN flag */
+	if (prm->ipsec_xform.options.esn == 0)
+		tp |= RTE_IPSEC_SATP_ESN_DISABLE;
+	else
+		tp |= RTE_IPSEC_SATP_ESN_ENABLE;
+
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	*type = tp;
 	return 0;
 }
@@ -174,7 +189,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -325,7 +340,10 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		sa->replay.win_sz = prm->replay_win_sz;
 		sa->replay.nb_bucket = nb;
 		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+		sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+		if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+			sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+				((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb));
 	}
 
 	return sz;
@@ -824,7 +842,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -843,6 +861,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -987,7 +1007,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -997,6 +1017,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 050a6d7ae..7dc9933f1 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -28,7 +30,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -66,10 +72,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case of an SA handled by multiple threads, the *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (7 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19 15:46         ` Akhil Goyal
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test Konstantin Ananyev
                         ` (2 subsequent siblings)
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev; +Cc: 0000-cover-letter.patch, Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.
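
A hedged usage sketch (not part of the patch; BURST, dev_id and qid are
placeholders) of how a lookaside data-path could combine these helpers
with the existing process() call:

	struct rte_crypto_op *cop[BURST];
	struct rte_mbuf *mb[BURST];
	struct rte_ipsec_group grp[BURST];
	uint32_t i;
	uint16_t n, ng;

	n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, BURST);

	/* extract mbufs from completed crypto-ops, group them by session */
	ng = rte_ipsec_pkt_crypto_group(
		(const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, n);

	for (i = 0; i != ng; i++) {
		struct rte_ipsec_session *ss = grp[i].id.ptr;

		/* finalize IPsec processing for all packets in the group */
		rte_ipsec_pkt_process(ss, grp[i].m, grp[i].cnt);
	}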

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 79f187fae..98c52f388 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index cbcd861b5..cd2e3b26c 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -144,6 +144,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..d264d7e78
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take crypto-op as an input and extract pointer to related ipsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond the mbufs of the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalise it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index d1c52d7ca..0f91fb134 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,6 +1,7 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_session_prepare;
 	rte_ipsec_pkt_process;
@@ -8,6 +9,7 @@ EXPERIMENTAL {
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 
 	local: *;
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (8 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-12-14 16:23       ` Konstantin Ananyev
  2018-12-19 15:53         ` Akhil Goyal
  2018-12-14 16:27       ` [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide Konstantin Ananyev
  2018-12-14 16:29       ` [dpdk-dev] [PATCH v4 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:23 UTC (permalink / raw)
  To: dev
  Cc: 0000-cover-letter.patch, Konstantin Ananyev, Mohammad Abdul Awal,
	Bernard Iremonger

Create functional test for librte_ipsec.
Note that the test requires the null crypto PMD to pass successfully.
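
For reference, a hedged example of invoking it (the binary path depends on
the build method; the null crypto PMD is created as a vdev):

	./build/app/test --vdev="crypto_null"
	RTE>>ipsec_autotest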

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2209 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2215 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 554e9945f..d4f689417 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -48,6 +48,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -115,6 +116,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -179,6 +181,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..95a447174
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - both obuf and ibuf are usually the same,
+		 * so a check whether they point at the same address is
+		 * necessary, to avoid freeing the mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
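+/*
+ * Build a tunneled test packet:
+ * outer IPv4 header | ESP header (spi, seq) | payload | ESP padding | tail.
+ * When string == NULL, the area after the headers is simply zeroed.
+ */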
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else {
+		/* dst already points past the headers, zero only the rest */
+		memset(dst, 0, t_len - hdrlen);
+	}
+
+	return m;
+}
+
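+/*
+ * Check that the given cryptodev supports the auth and cipher algorithms,
+ * key sizes and IV/digest lengths requested by the test transforms.
+ */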
+static int
+check_cryptodev_capability(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
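+/*
+ * Create one symmetric crypto session and initialize it on every cryptodev
+ * that supports the SA transforms; roll back already initialized device
+ * sessions on failure.
+ */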
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capability(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, pool, j);
+}
+
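+/*
+ * Build the auth+cipher transform chain for the test SA using the global
+ * test key: auth transform (verify) first, chained to the cipher (decrypt).
+ */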
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+	ut_params->auth_xform.next = &ut_params->cipher_xform;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+	ut_params->cipher_xform.next = NULL;
+
+	ut_params->crypto_xforms = &ut_params->auth_xform;
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa*/
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc != 0)
+		return rc;
+
+	return rte_ipsec_session_prepare(&ut->ss[j]);
+}
+
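+/*
+ * Full lookaside crypto data-path for a single SA: prepare crypto-ops,
+ * enqueue/dequeue them on the cryptodev, group the completed ops back to
+ * their session and run the final IPsec processing.
+ */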
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
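+/*
+ * Same flow as crypto_ipsec(), but the burst alternates between two SAs
+ * (even packet index -> SA 0, odd -> SA 1), so grouping is expected to
+ * yield one group per packet.
+ */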
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%u\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
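+/*
+ * Map packet index to SA index: [0, 4) -> SA 0, [4, 12) -> SA 1,
+ * [12, 21) -> SA 0, [21, BURST_SIZE) -> SA 1, i.e. four groups per burst.
+ */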
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* group packets into 4 different-sized groups, 2 per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j =  PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
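+/*
+ * Prepare/enqueue a burst split between two SAs as per crypto_ipsec_4grp(),
+ * then verify that grouping yields exactly four groups with the expected
+ * packet counts and mbuf ordering.
+ */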
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%u\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%" PRIx64 "\n", test_cfg[i].flags);
+	printf("	pkt_sz %zu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		r = j % 2;
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite  = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (9 preceding siblings ...)
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test Konstantin Ananyev
@ 2018-12-14 16:27       ` Konstantin Ananyev
  2018-12-19  3:46         ` Thomas Monjalon
  2018-12-19 16:01         ` Akhil Goyal
  2018-12-14 16:29       ` [dpdk-dev] [PATCH v4 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
  11 siblings, 2 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:27 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev, Bernard Iremonger

Add IPsec library guide and update release notes.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/prog_guide/index.rst        |  1 +
 doc/guides/prog_guide/ipsec_lib.rst    | 74 ++++++++++++++++++++++++++
 doc/guides/rel_notes/release_19_02.rst | 10 ++++
 3 files changed, 85 insertions(+)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst

diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ba8c1f6ad..6726b1e8d 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -54,6 +54,7 @@ Programmer's Guide
     vhost_lib
     metrics_lib
     bpf_lib
+    ipsec_lib
     source_org
     dev_kit_build_system
     dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
new file mode 100644
index 000000000..f3b783c20
--- /dev/null
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -0,0 +1,74 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+IPsec Packet Processing Library
+===============================
+
+The DPDK provides a library for IPsec data-path processing.
+The library utilizes the existing DPDK crypto-dev and
+security APIs to provide the application with a transparent,
+high-performance IPsec packet processing API.
+The library concentrates on data-path protocol processing
+(ESP and AH); IKE protocol implementation is out of scope
+for this library.
+
+SA level API
+------------
+
+This API operates at the IPsec SA level.
+It provides functionality that allows a user, for a given SA, to process
+inbound and outbound IPsec packets.
+To be more specific:
+
+*  for inbound ESP/AH packets perform decryption, authentication,
+   integrity checking and removal of the ESP/AH related headers
+*  for outbound packets perform payload encryption, attach the ICV,
+   update/add IP headers, add ESP/AH headers/trailers and setup
+   related mbuf fields (ol_flags, tx_offloads, etc.)
+*  initialize/un-initialize a given SA based on user provided parameters
+
+SA-level API is based on top of crypto-dev/security API and relies on
+them to perform actual cipher and integrity checking.
+
+Due to the nature of the crypto-dev API (enqueue/dequeue model), the library
+introduces an asynchronous API for IPsec packets destined to be processed by
+a crypto device.
+
+Expected API call sequence for data-path processing would be:
+
+.. code-block:: c
+
+    /* enqueue for processing by crypto-device */
+    rte_ipsec_pkt_crypto_prepare(...);
+    rte_cryptodev_enqueue_burst(...);
+    /* dequeue from crypto-device and do final processing (if any) */
+    rte_cryptodev_dequeue_burst(...);
+    rte_ipsec_pkt_crypto_group(...); /* optional */
+    rte_ipsec_pkt_process(...);
+
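+When a single burst may carry packets that belong to different SAs, the
+optional grouping step sorts the dequeued crypto-ops back into per-session
+groups. A minimal sketch, assuming ``cop`` holds ``n`` dequeued ops and
+``BURST_SIZE`` is an application-defined constant:
+
+.. code-block:: c
+
+    struct rte_mbuf *mb[BURST_SIZE];
+    struct rte_ipsec_group grp[BURST_SIZE];
+    uint32_t i, ng;
+
+    ng = rte_ipsec_pkt_crypto_group(
+        (const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, n);
+    for (i = 0; i != ng; i++)
+        rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+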
+For packets destined for inline processing no extra overhead is required,
+and a single synchronous API call, rte_ipsec_pkt_process(),
+is sufficient for that case.
+
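+A minimal inline data-path sketch, assuming ``ss`` is an already prepared
+``struct rte_ipsec_session`` and ``mb`` holds ``n`` packets that belong
+to it:
+
+.. code-block:: c
+
+    /* single synchronous call, no crypto-ops are involved */
+    k = rte_ipsec_pkt_process(&ss, mb, n);
+    /* the first k packets in mb[] were processed successfully */
+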
+.. note::
+
+    For more details about the IPsec API, please refer to the *DPDK API Reference*.
+
+Current implementation supports all four currently defined rte_security types:
+
+*  RTE_SECURITY_ACTION_TYPE_NONE
+
+*  RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
+
+*  RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+
+*  RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+
+To accommodate future custom implementations, a function pointer model
+is used for both the *crypto_prepare* and *process*
+implementations.
+
+Supported features:
+
+*  ESP protocol tunnel mode.
+
+*  ESP protocol transport mode.
+
+*  ESN and replay window.
+
+*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
+
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index e86ef9511..e88289f73 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -60,6 +60,16 @@ New Features
   * Added the handler to get firmware version string.
   * Added support for multicast filtering.
 
+* **Added IPsec Library.**
+
+  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
+  transport support for IPv4 and IPv6 packets.
+
+  At present the library supports AES-CBC ciphering, AES-CBC with HMAC-SHA1
+  algorithm chaining, and the AES-GCM and NULL algorithms. It is planned to
+  add more algorithms in future releases.
+
+  See :doc:`../prog_guide/ipsec_lib` for more information.
 
 Removed Items
 -------------
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v4 00/10] ipsec: new library for IPsec data-path processing
  2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                         ` (10 preceding siblings ...)
  2018-12-14 16:27       ` [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide Konstantin Ananyev
@ 2018-12-14 16:29       ` Konstantin Ananyev
  2018-12-21 13:32         ` Akhil Goyal
  11 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-14 16:29 UTC (permalink / raw)
  To: dev; +Cc: Konstantin Ananyev

This patch series depends on the patch:
http://patches.dpdk.org/patch/48044/
to be applied first.

v3 -> v4
 - Changes to address Declan's comments
 - Update docs

v2 -> v3
 - Several fixes for IPv6 support
 - Extra checks for input parameters in public API functions

v1 -> v2
 - Changes to take into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch introduces a new library within DPDK: librte_ipsec.
The aim is to provide DPDK native high performance library for IPsec
data-path processing.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec
processing API.
The library is concentrated on data-path protocols processing
(ESP and AH), IKE protocol(s) implementation is out of scope
for that library.
Current patch introduces SA-level API.

SA (low) level API
==================

API described below operates on SA level.
It provides functionality that allows the user, for a given SA, to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is built on top of the crypto-dev/security API and relies
on them to perform the actual cipher and integrity checking.
To make it easy to map crypto/security sessions to their related
IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
asynchronous API for IPsec packets destined to be processed
by crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

For packets destined for inline processing, however, no extra overhead
is required and the synchronous API call rte_ipsec_pkt_process()
is sufficient for that case.
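
To illustrate, a simplified lookaside-crypto data-path loop could look
like the sketch below (error handling omitted; the grp[] field names
follow the grouping helper added in this series and are shown for
illustration only):

  k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, n);
  rte_cryptodev_enqueue_burst(dev_id, qid, cop, k);
  /* ... later, once the crypto-device has completed the ops ... */
  k = rte_cryptodev_dequeue_burst(dev_id, qid, cop, RTE_DIM(cop));
  ng = rte_ipsec_pkt_crypto_group(cop, mb, grp, k); /* group by session */
  for (i = 0; i != ng; i++)
      rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);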

Current implementation supports all four currently defined
rte_security types.
To accommodate future custom implementations, a function-pointer
model is used for both the *crypto_prepare* and *process*
implementations.

Konstantin Ananyev (10):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test
  doc: add IPsec library guide

 MAINTAINERS                            |    5 +
 config/common_base                     |    5 +
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/prog_guide/ipsec_lib.rst    |   74 +
 doc/guides/rel_notes/release_19_02.rst |   10 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  153 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  172 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1407 +++++++++++++++
 lib/librte_ipsec/sa.h                  |   98 ++
 lib/librte_ipsec/ses.c                 |   45 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2209 ++++++++++++++++++++++++
 27 files changed, 5002 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide
  2018-12-14 16:27       ` [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide Konstantin Ananyev
@ 2018-12-19  3:46         ` Thomas Monjalon
  2018-12-19 16:01         ` Akhil Goyal
  1 sibling, 0 replies; 194+ messages in thread
From: Thomas Monjalon @ 2018-12-19  3:46 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev, Bernard Iremonger, marko.kovacevic

Hi,

14/12/2018 17:27, Konstantin Ananyev:
> Add IPsec library guide and update release notes.

A quick look at the guide hints that you should check the spelling.

> --- a/doc/guides/rel_notes/release_19_02.rst
> +++ b/doc/guides/rel_notes/release_19_02.rst
> @@ -60,6 +60,16 @@ New Features
> +* **Added IPsec Library.**
> +
> +  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
> +  transport support for IPv4 and IPv6 packets.
> +
> +  The library provides support for AES-CBC ciphering and AES-CBC with HMAC-SHA1
> +  algorithm-chaining, and AES-GCM and NULL algorithms only at present. It is
> +  planned to add more algorithms in future releases.
> +
> +  See :doc:`../prog_guide/ipsec_lib` for more information.
>  
>  Removed Items
>  -------------

Please keep the spacing (2 blank lines before heading).

I've seen other minor spacing mistakes in other patches.
Please check, thanks.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
@ 2018-12-19  9:26         ` Akhil Goyal
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                           ` (10 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19  9:26 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: 0000-cover-letter.patch



On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> That allows upper layer to easily associate some user defined
> data with the session.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Fiona Trahe <fiona.trahe@intel.com>
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> ---
>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 02/10] security: add opaque userdata pointer into security session
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-12-19  9:26         ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19  9:26 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: 0000-cover-letter.patch



On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> Add 'uint64_t opaque_data' inside struct rte_security_session.
> That allows upper layer to easily associate some user defined
> data with the session.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> ---
>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 03/10] net: add ESP trailer structure definition
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 03/10] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-12-19  9:32         ` Akhil Goyal
  2018-12-27 10:13           ` Olivier Matz
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19  9:32 UTC (permalink / raw)
  To: Konstantin Ananyev, dev



On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> ---
>   lib/librte_net/rte_esp.h | 10 +++++++++-
>   1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
> index f77ec2eb2..8e1b3d2dd 100644
> --- a/lib/librte_net/rte_esp.h
> +++ b/lib/librte_net/rte_esp.h
> @@ -11,7 +11,7 @@
>    * ESP-related defines
>    */
>   
> -#include <stdint.h>
> +#include <rte_byteorder.h>
>   
>   #ifdef __cplusplus
>   extern "C" {
> @@ -25,6 +25,14 @@ struct esp_hdr {
>   	rte_be32_t seq;  /**< packet sequence number */
>   } __attribute__((__packed__));
>   
> +/**
> + * ESP Trailer
> + */
> +struct esp_tail {
> +	uint8_t pad_len;     /**< number of pad bytes (0-255) */
> +	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
> +} __attribute__((__packed__));
> +
>   #ifdef __cplusplus
>   }
>   #endif
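
For context, a small sketch of how such a trailer is typically consumed
after decryption (hypothetical variable names; a contiguous mbuf is
assumed):

  struct esp_tail *espt;

  /* the trailer sits right before the ICV at the end of the packet */
  espt = rte_pktmbuf_mtod_offset(mb, struct esp_tail *,
          mb->pkt_len - icv_len - sizeof(*espt));
  /* the actual payload excludes padding, trailer and ICV */
  plen = mb->pkt_len - icv_len - sizeof(*espt) - espt->pad_len;
  next_proto = espt->next_proto;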
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library Konstantin Ananyev
@ 2018-12-19 12:08         ` Akhil Goyal
  2018-12-19 12:39           ` Thomas Monjalon
  2018-12-20 14:06           ` Ananyev, Konstantin
  0 siblings, 2 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19 12:08 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Thomas Monjalon, Mohammad Abdul Awal



On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> Introduce librte_ipsec library.
> The library is supposed to utilize existing DPDK crypto-dev and
> security API to provide application with transparent IPsec processing API.
> That initial commit provides some base API to manage
> IPsec Security Association (SA) object.
>
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> ---
>   MAINTAINERS                            |   5 +
>   config/common_base                     |   5 +
>   lib/Makefile                           |   2 +
>   lib/librte_ipsec/Makefile              |  24 ++
>   lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
>   lib/librte_ipsec/meson.build           |  10 +
>   lib/librte_ipsec/rte_ipsec_sa.h        | 139 +++++++++++
>   lib/librte_ipsec/rte_ipsec_version.map |  10 +
>   lib/librte_ipsec/sa.c                  | 327 +++++++++++++++++++++++++
>   lib/librte_ipsec/sa.h                  |  77 ++++++
>   lib/meson.build                        |   2 +
>   mk/rte.app.mk                          |   2 +
>   12 files changed, 651 insertions(+)
>   create mode 100644 lib/librte_ipsec/Makefile
>   create mode 100644 lib/librte_ipsec/ipsec_sqn.h
>   create mode 100644 lib/librte_ipsec/meson.build
>   create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
>   create mode 100644 lib/librte_ipsec/sa.c
>   create mode 100644 lib/librte_ipsec/sa.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 71ba31208..3cf0a84a2 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1071,6 +1071,11 @@ F: doc/guides/prog_guide/pdump_lib.rst
>   F: app/pdump/
>   F: doc/guides/tools/pdump.rst
>   
> +IPsec - EXPERIMENTAL
> +M: Konstantin Ananyev <konstantin.ananyev@intel.com>
> +F: lib/librte_ipsec/
> +M: Bernard Iremonger <bernard.iremonger@intel.com>
> +F: test/test/test_ipsec.c
>   
Please add "T: git://dpdk.org/next/dpdk-next-crypto" as it would be 
maintained in crypto sub tree in future.
>   Packet Framework
>   ----------------
> diff --git a/config/common_base b/config/common_base
> index d12ae98bc..32499d772 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -925,6 +925,11 @@ CONFIG_RTE_LIBRTE_BPF=y
>   # allow load BPF from ELF files (requires libelf)
>   CONFIG_RTE_LIBRTE_BPF_ELF=n
>   
> +#
> +# Compile librte_ipsec
> +#
> +CONFIG_RTE_LIBRTE_IPSEC=y
> +
>   #
>   # Compile the test application
>   #
> diff --git a/lib/Makefile b/lib/Makefile
> index b7370ef97..5dc774604 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -106,6 +106,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
>   DEPDIRS-librte_gso += librte_mempool
>   DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
>   DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> +DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
>   DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
>   DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
>   
> diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
> new file mode 100644
> index 000000000..7758dcc6d
> --- /dev/null
> +++ b/lib/librte_ipsec/Makefile
> @@ -0,0 +1,24 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Intel Corporation
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_ipsec.a
> +
> +CFLAGS += -O3
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +LDLIBS += -lrte_eal -lrte_mbuf -lrte_cryptodev -lrte_security
> +
> +EXPORT_MAP := rte_ipsec_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
> +
> +# install header files
> +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> new file mode 100644
> index 000000000..1935f6e30
> --- /dev/null
> +++ b/lib/librte_ipsec/ipsec_sqn.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _IPSEC_SQN_H_
> +#define _IPSEC_SQN_H_
> +
> +#define WINDOW_BUCKET_BITS		6 /* uint64_t */
> +#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
> +#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
> +
> +/* minimum number of bucket, power of 2*/
> +#define WINDOW_BUCKET_MIN		2
> +#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
> +
> +#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
> +
> +/*
> + * for given size, calculate required number of buckets.
> + */
> +static uint32_t
> +replay_num_bucket(uint32_t wsz)
> +{
> +	uint32_t nb;
> +
> +	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
> +		WINDOW_BUCKET_SIZE);
> +	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
> +
> +	return nb;
> +}
> +
> +/**
> + * Based on number of buckets calculated required size for the
> + * structure that holds replay window and sequence number (RSN) information.
> + */
> +static size_t
> +rsn_size(uint32_t nb_bucket)
> +{
> +	size_t sz;
> +	struct replay_sqn *rsn;
> +
> +	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
> +	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
> +	return sz;
> +}
> +
> +#endif /* _IPSEC_SQN_H_ */
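
As a worked example of the sizing above: for a window size wsz = 128,
replay_num_bucket() yields rte_align32pow2(128 / 64) = 2 buckets, and
rsn_size(2) = sizeof(struct replay_sqn) + 2 * sizeof(uint64_t) = 24 bytes,
rounded up to a single 64B cache line.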
> diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
> new file mode 100644
> index 000000000..52c78eaeb
> --- /dev/null
> +++ b/lib/librte_ipsec/meson.build
> @@ -0,0 +1,10 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2018 Intel Corporation
> +
> +allow_experimental_apis = true
> +
> +sources=files('sa.c')
> +
> +install_headers = files('rte_ipsec_sa.h')
> +
> +deps += ['mbuf', 'net', 'cryptodev', 'security']
Do we need net in meson and not in the Makefile?
> diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
> new file mode 100644
> index 000000000..4e36fd99b
> --- /dev/null
> +++ b/lib/librte_ipsec/rte_ipsec_sa.h
> @@ -0,0 +1,139 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _RTE_IPSEC_SA_H_
> +#define _RTE_IPSEC_SA_H_
> +
> +/**
> + * @file rte_ipsec_sa.h
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Defines API to manage IPsec Security Association (SA) objects.
> + */
> +
> +#include <rte_common.h>
> +#include <rte_cryptodev.h>
> +#include <rte_security.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * An opaque structure to represent Security Association (SA).
> + */
> +struct rte_ipsec_sa;
> +
> +/**
> + * SA initialization parameters.
> + */
> +struct rte_ipsec_sa_prm {
> +
> +	uint64_t userdata; /**< provided and interpreted by user */
> +	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
> +	/** ipsec configuration */
> +	struct rte_security_ipsec_xform ipsec_xform;
> +	struct rte_crypto_sym_xform *crypto_xform;
comment missing
> +	union {
> +		struct {
> +			uint8_t hdr_len;     /**< tunnel header len */
> +			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
> +			uint8_t next_proto;  /**< next header protocol */
> +			const void *hdr;     /**< tunnel header template */
> +		} tun; /**< tunnel mode repated parameters */
spell check
> +		struct {
> +			uint8_t proto;  /**< next header protocol */
> +		} trs; /**< transport mode repated parameters */
spell check
> +	};
> +
> +	uint32_t replay_win_sz;
> +	/**< window size to enable sequence replay attack handling.
> +	 * Replay checking is disabled if the window size is 0.
> +	 */
As per discussions on ML, comments shall either be before the param or 
it can be in the same line as param and not in next line. Please check 
in rest of the patch as well.
> +};
> +
> +/**
> + * SA type is an 64-bit value that contain the following information:
> + * - IP version (IPv4/IPv6)
> + * - IPsec proto (ESP/AH)
> + * - inbound/outbound
> + * - mode (TRANSPORT/TUNNEL)
> + * - for TUNNEL outer IP version (IPv4/IPv6)
> + * ...
> + */
> +
> +enum {
> +	RTE_SATP_LOG_IPV,
> +	RTE_SATP_LOG_PROTO,
> +	RTE_SATP_LOG_DIR,
> +	RTE_SATP_LOG_MODE,
> +	RTE_SATP_LOG_NUM
> +};
What is the significance of LOG here?
> +
> +#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG_IPV)
> +#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG_IPV)
> +#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG_IPV)
> +
> +#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG_PROTO)
> +#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG_PROTO)
> +#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG_PROTO)
> +
> +#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG_DIR)
> +#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG_DIR)
> +#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG_DIR)
> +
> +#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG_MODE)
> +#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG_MODE)
> +#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG_MODE)
> +#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG_MODE)
> +
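For illustration, the type value for an inbound ESP SA carrying IPv4
traffic over an IPv4 tunnel would be composed from the flags above as:

  type = RTE_IPSEC_SATP_PROTO_ESP | RTE_IPSEC_SATP_DIR_IB |
         RTE_IPSEC_SATP_MODE_TUNLV4 | RTE_IPSEC_SATP_IPV4;
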
> +/**
> + * get type of given SA
> + * @return
> + *   SA type value.
> + */
> +uint64_t __rte_experimental
> +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
> +
> +/**
> + * Calculate requied SA size based on provided input parameters.
spell check
> + * @param prm
> + *   Parameters that wil be used to initialise SA object.
> + * @return
> + *   - Actual size required for SA with given parameters.
> + *   - -EINVAL if the parameters are invalid.
> + */
> +int __rte_experimental
> +rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
> +
> +/**
> + * initialise SA based on provided input parameters.
> + * @param sa
> + *   SA object to initialise.
> + * @param prm
> + *   Parameters used to initialise given SA object.
> + * @param size
> + *   size of the provided buffer for SA.
> + * @return
> + *   - Actual size of SA object if operation completed successfully.
> + *   - -EINVAL if the parameters are invalid.
> + *   - -ENOSPC if the size of the provided buffer is not big enough.
> + */
> +int __rte_experimental
> +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> +	uint32_t size);
> +
> +/**
> + * cleanup SA
> + * @param sa
> + *   Pointer to SA object to de-initialize.
> + */
> +void __rte_experimental
> +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_IPSEC_SA_H_ */
> diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
> new file mode 100644
> index 000000000..1a66726b8
> --- /dev/null
> +++ b/lib/librte_ipsec/rte_ipsec_version.map
> @@ -0,0 +1,10 @@
> +EXPERIMENTAL {
> +	global:
> +
> +	rte_ipsec_sa_fini;
> +	rte_ipsec_sa_init;
> +	rte_ipsec_sa_size;
> +	rte_ipsec_sa_type;
> +
> +	local: *;
> +};
> diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
> new file mode 100644
> index 000000000..f927a82bf
> --- /dev/null
> +++ b/lib/librte_ipsec/sa.c
> @@ -0,0 +1,327 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#include <rte_ipsec_sa.h>
> +#include <rte_esp.h>
> +#include <rte_ip.h>
> +#include <rte_errno.h>
> +
> +#include "sa.h"
> +#include "ipsec_sqn.h"
> +
> +/* some helper structures */
> +struct crypto_xform {
> +	struct rte_crypto_auth_xform *auth;
> +	struct rte_crypto_cipher_xform *cipher;
> +	struct rte_crypto_aead_xform *aead;
> +};
Shouldn't this be a union, as aead cannot be used together with the cipher and auth cases?

extra line
> +
> +
> +static int
> +check_crypto_xform(struct crypto_xform *xform)
> +{
> +	uintptr_t p;
> +
> +	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
what is the intent of this?
> +
> +	/* either aead or both auth and cipher should be not NULLs */
> +	if (xform->aead) {
> +		if (p)
> +			return -EINVAL;
> +	} else if (p == (uintptr_t)xform->auth) {
> +		return -EINVAL;
> +	}
This function does not look good. It will miss the cipher-only case.
> +
> +	return 0;
> +}
> +
> +static int
> +fill_crypto_xform(struct crypto_xform *xform,
> +	const struct rte_ipsec_sa_prm *prm)
> +{
> +	struct rte_crypto_sym_xform *xf;
> +
> +	memset(xform, 0, sizeof(*xform));
> +
> +	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
> +		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> +			if (xform->auth != NULL)
> +				return -EINVAL;
> +			xform->auth = &xf->auth;
> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
> +			if (xform->cipher != NULL)
> +				return -EINVAL;
> +			xform->cipher = &xf->cipher;
> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> +			if (xform->aead != NULL)
> +				return -EINVAL;
> +			xform->aead = &xf->aead;
> +		} else
> +			return -EINVAL;
> +	}
> +
> +	return check_crypto_xform(xform);
> +}
How is this function handling the inbound and outbound cases?
For inbound, the first xform is auth and then cipher;
for outbound, the first is cipher and then auth. I think this should be
checked in the lib.
The for loop should not be there, as there would be at most only 2 xforms.
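
For illustration only, a hypothetical sketch of such a direction-aware
check (the AEAD single-xform case is ignored here):

  static int
  check_xform_order(const struct rte_ipsec_sa_prm *prm)
  {
      const struct rte_crypto_sym_xform *xf = prm->crypto_xform;

      /* expect exactly two linked xforms for auth+cipher algorithms */
      if (xf == NULL || xf->next == NULL || xf->next->next != NULL)
          return -EINVAL;

      if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
          return (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
              xf->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) ? 0 : -EINVAL;

      return (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
          xf->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) ? 0 : -EINVAL;
  }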
> +
> +uint64_t __rte_experimental
> +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
> +{
> +	return sa->type;
> +}
> +
> +static int32_t
> +ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
> +{
> +	uint32_t n, sz;
> +
> +	n = 0;
> +	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
> +			RTE_IPSEC_SATP_DIR_IB)
> +		n = replay_num_bucket(wsz);
> +
> +	if (n > WINDOW_BUCKET_MAX)
> +		return -EINVAL;
> +
> +	*nb_bucket = n;
> +
> +	sz = rsn_size(n);
> +	sz += sizeof(struct rte_ipsec_sa);
> +	return sz;
> +}
> +
> +void __rte_experimental
> +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
> +{
> +	memset(sa, 0, sa->size);
> +}
Where is the memory of "sa" getting initialized?
> +
> +static int
> +fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
> +{
> +	uint64_t tp;
> +
> +	tp = 0;
> +
> +	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
> +		tp |= RTE_IPSEC_SATP_PROTO_AH;
> +	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
> +		tp |= RTE_IPSEC_SATP_PROTO_ESP;
> +	else
> +		return -EINVAL;
> +
> +	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
> +		tp |= RTE_IPSEC_SATP_DIR_OB;
> +	else if (prm->ipsec_xform.direction ==
> +			RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
> +		tp |= RTE_IPSEC_SATP_DIR_IB;
> +	else
> +		return -EINVAL;
> +
> +	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
> +		if (prm->ipsec_xform.tunnel.type ==
> +				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
> +			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
> +		else if (prm->ipsec_xform.tunnel.type ==
> +				RTE_SECURITY_IPSEC_TUNNEL_IPV6)
> +			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
> +		else
> +			return -EINVAL;
> +
> +		if (prm->tun.next_proto == IPPROTO_IPIP)
> +			tp |= RTE_IPSEC_SATP_IPV4;
> +		else if (prm->tun.next_proto == IPPROTO_IPV6)
> +			tp |= RTE_IPSEC_SATP_IPV6;
> +		else
> +			return -EINVAL;
> +	} else if (prm->ipsec_xform.mode ==
> +			RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) {
> +		tp |= RTE_IPSEC_SATP_MODE_TRANS;
> +		if (prm->trs.proto == IPPROTO_IPIP)
> +			tp |= RTE_IPSEC_SATP_IPV4;
> +		else if (prm->trs.proto == IPPROTO_IPV6)
> +			tp |= RTE_IPSEC_SATP_IPV6;
> +		else
> +			return -EINVAL;
> +	} else
> +		return -EINVAL;
> +
> +	*type = tp;
> +	return 0;
> +}
> +
> +static void
> +esp_inb_init(struct rte_ipsec_sa *sa)
> +{
> +	/* these params may differ with new algorithms support */
> +	sa->ctp.auth.offset = 0;
> +	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
> +	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
> +	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
> +}
> +
> +static void
> +esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> +{
> +	sa->proto = prm->tun.next_proto;
> +	esp_inb_init(sa);
> +}
> +
> +static void
> +esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
> +{
> +	sa->sqn.outb = 1;
> +
> +	/* these params may differ with new algorithms support */
> +	sa->ctp.auth.offset = hlen;
> +	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
> +	if (sa->aad_len != 0) {
> +		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
> +			sa->iv_len;
> +		sa->ctp.cipher.length = 0;
> +	} else {
> +		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
> +		sa->ctp.cipher.length = sa->iv_len;
> +	}
> +}
> +
> +static void
> +esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
> +{
> +	sa->proto = prm->tun.next_proto;
> +	sa->hdr_len = prm->tun.hdr_len;
> +	sa->hdr_l3_off = prm->tun.hdr_l3_off;
> +	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
> +
> +	esp_outb_init(sa, sa->hdr_len);
> +}
> +
> +static int
> +esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> +	const struct crypto_xform *cxf)
> +{
> +	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> +				RTE_IPSEC_SATP_MODE_MASK;
> +
> +	if (cxf->aead != NULL) {
> +		/* RFC 4106 */
> +		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
> +			return -EINVAL;
> +		sa->icv_len = cxf->aead->digest_length;
> +		sa->iv_ofs = cxf->aead->iv.offset;
> +		sa->iv_len = sizeof(uint64_t);
> +		sa->pad_align = 4;
Hard coding?
> +	} else {
> +		sa->icv_len = cxf->auth->digest_length;
> +		sa->iv_ofs = cxf->cipher->iv.offset;
> +		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
> +		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
> +			sa->pad_align = 4;
> +			sa->iv_len = 0;
> +		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
> +			sa->pad_align = IPSEC_MAX_IV_SIZE;
> +			sa->iv_len = IPSEC_MAX_IV_SIZE;
> +		} else
> +			return -EINVAL;
> +	}
> +
> +	sa->udata = prm->userdata;
> +	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
> +	sa->salt = prm->ipsec_xform.salt;
> +
> +	switch (sa->type & msk) {
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
> +		esp_inb_tun_init(sa, prm);
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
> +		esp_inb_init(sa);
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
> +		esp_outb_tun_init(sa, prm);
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> +		esp_outb_init(sa, 0);
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +int __rte_experimental
> +rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
> +{
> +	uint64_t type;
> +	uint32_t nb;
> +	int32_t rc;
> +
> +	if (prm == NULL)
> +		return -EINVAL;
> +
> +	/* determine SA type */
> +	rc = fill_sa_type(prm, &type);
> +	if (rc != 0)
> +		return rc;
> +
> +	/* determine required size */
> +	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
> +}
> +
> +int __rte_experimental
> +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> +	uint32_t size)
> +{
> +	int32_t rc, sz;
> +	uint32_t nb;
> +	uint64_t type;
> +	struct crypto_xform cxf;
> +
> +	if (sa == NULL || prm == NULL)
> +		return -EINVAL;
> +
> +	/* determine SA type */
> +	rc = fill_sa_type(prm, &type);
> +	if (rc != 0)
> +		return rc;
> +
> +	/* determine required size */
> +	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
> +	if (sz < 0)
> +		return sz;
> +	else if (size < (uint32_t)sz)
> +		return -ENOSPC;
> +
> +	/* only esp is supported right now */
> +	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
> +		return -EINVAL;
> +
> +	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
> +			prm->tun.hdr_len > sizeof(sa->hdr))
> +		return -EINVAL;
> +
> +	rc = fill_crypto_xform(&cxf, prm);
> +	if (rc != 0)
> +		return rc;
> +
> +	sa->type = type;
> +	sa->size = sz;
> +
> +	/* check for ESN flag */
> +	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
> +		UINT32_MAX : UINT64_MAX;
> +
> +	rc = esp_sa_init(sa, prm, &cxf);
> +	if (rc != 0)
> +		rte_ipsec_sa_fini(sa);
> +
> +	/* fill replay window related fields */
> +	if (nb != 0) {
Move this to where nb is getting updated.
> +		sa->replay.win_sz = prm->replay_win_sz;
> +		sa->replay.nb_bucket = nb;
> +		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
> +		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
> +	}
> +
> +	return sz;
> +}
> diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
> new file mode 100644
> index 000000000..5d113891a
> --- /dev/null
> +++ b/lib/librte_ipsec/sa.h
> @@ -0,0 +1,77 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _SA_H_
> +#define _SA_H_
> +
> +#define IPSEC_MAX_HDR_SIZE	64
> +#define IPSEC_MAX_IV_SIZE	16
> +#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
> +
> +/* these definitions probably has to be in rte_crypto_sym.h */
> +union sym_op_ofslen {
> +	uint64_t raw;
> +	struct {
> +		uint32_t offset;
> +		uint32_t length;
> +	};
> +};
These are already there in rte_crypto_sym_op. What is the need to
redefine them?
Offset and length can change on a per-packet basis; they cannot be set at
init time, and at runtime you would have the sym_op anyway.
> +
> +union sym_op_data {
> +#ifdef __SIZEOF_INT128__
> +	__uint128_t raw;
> +#endif
> +	struct {
> +		uint8_t *va;
> +		rte_iova_t pa;
> +	};
> +};
rte_crypto_sym_op has all this information already, I guess (in the mbuf).
> +
> +struct replay_sqn {
> +	uint64_t sqn;
> +	__extension__ uint64_t window[0];
> +};
> +
> +struct rte_ipsec_sa {
> +	uint64_t type;     /* type of given SA */
> +	uint64_t udata;    /* user defined */
> +	uint32_t size;     /* size of given sa object */
> +	uint32_t spi;
> +	/* sqn calculations related */
> +	uint64_t sqn_mask;
> +	struct {
> +		uint32_t win_sz;
> +		uint16_t nb_bucket;
> +		uint16_t bucket_index_mask;
> +	} replay;
> +	/* template for crypto op fields */
> +	struct {
> +		union sym_op_ofslen cipher;
> +		union sym_op_ofslen auth;
> +	} ctp;
> +	uint32_t salt;
> +	uint8_t proto;    /* next proto */
> +	uint8_t aad_len;
> +	uint8_t hdr_len;
> +	uint8_t hdr_l3_off;
> +	uint8_t icv_len;
> +	uint8_t sqh_len;
> +	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
> +	uint8_t iv_len;
> +	uint8_t pad_align;
> +
> +	/* template for tunnel header */
> +	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
> +
> +	/*
> +	 * sqn and replay window
> +	 */
> +	union {
> +		uint64_t outb;
> +		struct replay_sqn *inb;
> +	} sqn;
> +
> +} __rte_cache_aligned;
> +
remove extra lines
> +#endif /* _SA_H_ */
> diff --git a/lib/meson.build b/lib/meson.build
> index bb7f443f9..69684ef14 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
>   	'kni', 'latencystats', 'lpm', 'member',
>   	'meter', 'power', 'pdump', 'rawdev',
>   	'reorder', 'sched', 'security', 'vhost',
> +	#ipsec lib depends on crypto and security
> +	'ipsec',
>   	# add pkt framework libs which use other libs from above
>   	'port', 'table', 'pipeline',
>   	# flow_classify lib depends on pkt framework table lib
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index 5699d979d..f4cd75252 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
>   endif
>   
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
> +
>   _LDLIBS-y += --whole-archive
>   
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-19 12:08         ` Akhil Goyal
@ 2018-12-19 12:39           ` Thomas Monjalon
  2018-12-20 14:06           ` Ananyev, Konstantin
  1 sibling, 0 replies; 194+ messages in thread
From: Thomas Monjalon @ 2018-12-19 12:39 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: Akhil Goyal, dev, Mohammad Abdul Awal

19/12/2018 13:08, Akhil Goyal:
> On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -1071,6 +1071,11 @@ F: doc/guides/prog_guide/pdump_lib.rst
> >   F: app/pdump/
> >   F: doc/guides/tools/pdump.rst
> >   
> > +IPsec - EXPERIMENTAL
> > +M: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > +F: lib/librte_ipsec/
> > +M: Bernard Iremonger <bernard.iremonger@intel.com>
> > +F: test/test/test_ipsec.c
> >   
> Please add "T: git://dpdk.org/next/dpdk-next-crypto" as it would be 
> maintained in crypto sub tree in future.

Right

And for keeping a logical order, please move it after IP frag and GRO/GSO.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-12-19 13:04         ` Akhil Goyal
  2018-12-20 10:17           ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19 13:04 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Thomas Monjalon, Mohammad Abdul Awal

Hi Konstantin,

Sorry for a late review. I was on unplanned leave for more than 2 weeks.

On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> Introduce Security Association (SA-level) data-path API
> Operates at SA level, provides functions to:
>      - initialize/teardown SA object
>      - process inbound/outbound ESP/AH packets associated with the given SA
>        (decrypt/encrypt, authenticate, check integrity,
>        add/remove ESP/AH related headers and data, etc.).
>
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> ---
>   lib/librte_ipsec/Makefile              |   2 +
>   lib/librte_ipsec/meson.build           |   4 +-
>   lib/librte_ipsec/rte_ipsec.h           | 151 +++++++++++++++++++++++++
>   lib/librte_ipsec/rte_ipsec_version.map |   3 +
>   lib/librte_ipsec/sa.c                  |  21 +++-
>   lib/librte_ipsec/sa.h                  |   4 +
>   lib/librte_ipsec/ses.c                 |  45 ++++++++
>   7 files changed, 227 insertions(+), 3 deletions(-)
>   create mode 100644 lib/librte_ipsec/rte_ipsec.h
>   create mode 100644 lib/librte_ipsec/ses.c
>
> diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
> index 7758dcc6d..79f187fae 100644
> --- a/lib/librte_ipsec/Makefile
> +++ b/lib/librte_ipsec/Makefile
> @@ -17,8 +17,10 @@ LIBABIVER := 1
>   
>   # all source are stored in SRCS-y
>   SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
> +SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
>   
>   # install header files
> +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
>   SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
>   
>   include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
> index 52c78eaeb..6e8c6fabe 100644
> --- a/lib/librte_ipsec/meson.build
> +++ b/lib/librte_ipsec/meson.build
> @@ -3,8 +3,8 @@
>   
>   allow_experimental_apis = true
>   
> -sources=files('sa.c')
> +sources=files('sa.c', 'ses.c')
>   
> -install_headers = files('rte_ipsec_sa.h')
> +install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
>   
>   deps += ['mbuf', 'net', 'cryptodev', 'security']
> diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
> new file mode 100644
> index 000000000..cbcd861b5
> --- /dev/null
> +++ b/lib/librte_ipsec/rte_ipsec.h
> @@ -0,0 +1,151 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _RTE_IPSEC_H_
> +#define _RTE_IPSEC_H_
> +
> +/**
> + * @file rte_ipsec.h
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * RTE IPsec support.
> + * librte_ipsec provides a framework for data-path IPsec protocol
> + * processing (ESP/AH).
> + */
> +
> +#include <rte_ipsec_sa.h>
> +#include <rte_mbuf.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +struct rte_ipsec_session;
> +
> +/**
> + * IPsec session specific functions that will be used to:
> + * - prepare - for input mbufs and given IPsec session prepare crypto ops
> + *   that can be enqueued into the cryptodev associated with given session
> + *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
> + * - process - finalize processing of packets after crypto-dev finished
> + *   with them or process packets that are subjects to inline IPsec offload
> + *   (see rte_ipsec_pkt_process for more details).
> + */
> +struct rte_ipsec_sa_pkt_func {
> +	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
> +				struct rte_mbuf *mb[],
> +				struct rte_crypto_op *cop[],
> +				uint16_t num);
> +	uint16_t (*process)(const struct rte_ipsec_session *ss,
> +				struct rte_mbuf *mb[],
> +				uint16_t num);
> +};
> +
> +/**
> + * rte_ipsec_session is an aggregate structure that defines particular
> + * IPsec Security Association IPsec (SA) on given security/crypto device:
> + * - pointer to the SA object
> + * - security session action type
> + * - pointer to security/crypto session, plus other related data
> + * - session/device specific functions to prepare/process IPsec packets.
> + */
> +struct rte_ipsec_session {
> +
extra line
> +	/**
> +	 * SA that session belongs to.
> +	 * Note that multiple sessions can belong to the same SA.
> +	 */
> +	struct rte_ipsec_sa *sa;
> +	/** session action type */
> +	enum rte_security_session_action_type type;
> +	/** session and related data */
> +	union {
> +		struct {
> +			struct rte_cryptodev_sym_session *ses;
> +		} crypto;
> +		struct {
> +			struct rte_security_session *ses;
> +			struct rte_security_ctx *ctx;
> +			uint32_t ol_flags;
> +		} security;
> +	};
> +	/** functions to prepare/process IPsec packets */
> +	struct rte_ipsec_sa_pkt_func pkt_func;
> +} __rte_cache_aligned;
> +
> +/**
> + * Checks that inside given rte_ipsec_session crypto/security fields
> + * are filled correctly and setups function pointers based on these values.
It means the user need not fill rte_ipsec_sa_pkt_func; please specify this
in the comments.
> + * @param ss
> + *   Pointer to the *rte_ipsec_session* object
> + * @return
> + *   - Zero if operation completed successfully.
> + *   - -EINVAL if the parameters are invalid.
> + */
> +int __rte_experimental
> +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> +
> +/**
> + * For input mbufs and given IPsec session prepare crypto ops that can be
> + * enqueued into the cryptodev associated with given session.
> + * expects that for each input packet:
> + *      - l2_len, l3_len are setup correctly
> + * Note that erroneous mbufs are not freed by the function,
> + * but are placed beyond last valid mbuf in the *mb* array.
> + * It is a user responsibility to handle them further.
How will the user know how many mbufs were correctly processed?
> + * @param ss
> + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> + * @param mb
> + *   The address of an array of *num* pointers to *rte_mbuf* structures
> + *   which contain the input packets.
> + * @param cop
> + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> + *   structures.
> + * @param num
> + *   The maximum number of packets to process.
> + * @return
> + *   Number of successfully processed packets, with error code set in rte_errno.
> + */
> +static inline uint16_t __rte_experimental
> +rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
> +	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> +{
> +	return ss->pkt_func.prepare(ss, mb, cop, num);
> +}
> +
> +/**
> + * Finalise processing of packets after crypto-dev finished with them or
> + * process packets that are subjects to inline IPsec offload.
> + * Expects that for each input packet:
> + *      - l2_len, l3_len are setup correctly
> + * Output mbufs will be:
> + * inbound - decrypted & authenticated, ESP(AH) related headers removed,
> + * *l2_len* and *l3_len* fields are updated.
> + * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
> + * properly setup, if necessary - IP headers updated, ESP(AH) fields added,
> + * Note that erroneous mbufs are not freed by the function,
> + * but are placed beyond last valid mbuf in the *mb* array.
same question
> + * It is a user responsibility to handle them further.
> + * @param ss
> + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> + * @param mb
> + *   The address of an array of *num* pointers to *rte_mbuf* structures
> + *   which contain the input packets.
> + * @param num
> + *   The maximum number of packets to process.
> + * @return
> + *   Number of successfully processed packets, with error code set in rte_errno.
> + */
> +static inline uint16_t __rte_experimental
> +rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	uint16_t num)
> +{
> +	return ss->pkt_func.process(ss, mb, num);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_IPSEC_H_ */
> diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
> index 1a66726b8..d1c52d7ca 100644
> --- a/lib/librte_ipsec/rte_ipsec_version.map
> +++ b/lib/librte_ipsec/rte_ipsec_version.map
> @@ -1,6 +1,9 @@
>   EXPERIMENTAL {
>   	global:
>   
> +	rte_ipsec_pkt_crypto_prepare;
> +	rte_ipsec_session_prepare;
> +	rte_ipsec_pkt_process;
alphabetical order incorrect
>   	rte_ipsec_sa_fini;
>   	rte_ipsec_sa_init;
>   	rte_ipsec_sa_size;
> diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
> index f927a82bf..e4c5361e7 100644
> --- a/lib/librte_ipsec/sa.c
> +++ b/lib/librte_ipsec/sa.c
> @@ -2,7 +2,7 @@
>    * Copyright(c) 2018 Intel Corporation
>    */
>   
> -#include <rte_ipsec_sa.h>
> +#include <rte_ipsec.h>
>   #include <rte_esp.h>
>   #include <rte_ip.h>
>   #include <rte_errno.h>
> @@ -325,3 +325,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
>   
>   	return sz;
>   }
> +
> +int
> +ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
> +	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
> +{
> +	int32_t rc;
> +
> +	RTE_SET_USED(sa);
> +
> +	rc = 0;
> +	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
> +
> +	switch (ss->type) {
> +	default:
> +		rc = -ENOTSUP;
> +	}
> +
> +	return rc;
> +}
Is this a dummy function? Will it be updated later? I believe it should
have appropriate comments in that case.
> diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
> index 5d113891a..050a6d7ae 100644
> --- a/lib/librte_ipsec/sa.h
> +++ b/lib/librte_ipsec/sa.h
> @@ -74,4 +74,8 @@ struct rte_ipsec_sa {
>   
>   } __rte_cache_aligned;
>   
> +int
> +ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
> +	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
> +
>   #endif /* _SA_H_ */
> diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
> new file mode 100644
> index 000000000..562c1423e
> --- /dev/null
> +++ b/lib/librte_ipsec/ses.c
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#include <rte_ipsec.h>
> +#include "sa.h"
> +
> +static int
> +session_check(struct rte_ipsec_session *ss)
> +{
> +	if (ss == NULL || ss->sa == NULL)
> +		return -EINVAL;
> +
> +	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
> +		if (ss->crypto.ses == NULL)
> +			return -EINVAL;
> +	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +int __rte_experimental
> +rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
> +{
> +	int32_t rc;
> +	struct rte_ipsec_sa_pkt_func fp;
> +
> +	rc = session_check(ss);
> +	if (rc != 0)
> +		return rc;
> +
> +	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
> +	if (rc != 0)
> +		return rc;
> +
> +	ss->pkt_func = fp;
> +
> +	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
> +		ss->crypto.ses->opaque_data = (uintptr_t)ss;
> +	else
> +		ss->security.ses->opaque_data = (uintptr_t)ss;
> +
> +	return 0;
> +}


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 06/10] ipsec: implement " Konstantin Ananyev
@ 2018-12-19 15:32         ` Akhil Goyal
  2018-12-20 12:56           ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19 15:32 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Thomas Monjalon, Mohammad Abdul Awal



On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> Provide implementation for rte_ipsec_pkt_crypto_prepare() and
> rte_ipsec_pkt_process().
> Current implementation:
>   - supports ESP protocol tunnel mode.
>   - supports ESP protocol transport mode.
>   - supports ESN and replay window.
>   - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
>   - covers all currently defined security session types:
>          - RTE_SECURITY_ACTION_TYPE_NONE
>          - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
>          - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
>          - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>
> For first two types SQN check/update is done by SW (inside the library).
> For last two type it is HW/PMD responsibility.
>
> Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> ---
>   lib/librte_ipsec/crypto.h    |  123 ++++
>   lib/librte_ipsec/iph.h       |   84 +++
>   lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
>   lib/librte_ipsec/pad.h       |   45 ++
>   lib/librte_ipsec/sa.c        | 1044 +++++++++++++++++++++++++++++++++-
>   5 files changed, 1480 insertions(+), 2 deletions(-)
>   create mode 100644 lib/librte_ipsec/crypto.h
>   create mode 100644 lib/librte_ipsec/iph.h
>   create mode 100644 lib/librte_ipsec/pad.h
>
> diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
> new file mode 100644
> index 000000000..61f5c1433
> --- /dev/null
> +++ b/lib/librte_ipsec/crypto.h
> @@ -0,0 +1,123 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _CRYPTO_H_
> +#define _CRYPTO_H_
> +
> +/**
> + * @file crypto.h
> + * Contains crypto specific functions/structures/macros used internally
> + * by ipsec library.
> + */
> +
> + /*
> +  * AES-GCM devices have some specific requirements for IV and AAD formats.
> +  * Ideally that to be done by the driver itself.
> +  */
I believe these can be moved to rte_crypto_sym.h. All crypto related
stuff should be in the same place.
> +
> +struct aead_gcm_iv {
> +	uint32_t salt;
> +	uint64_t iv;
> +	uint32_t cnt;
> +} __attribute__((packed));
> +
> +struct aead_gcm_aad {
> +	uint32_t spi;
> +	/*
> +	 * RFC 4106, section 5:
> +	 * Two formats of the AAD are defined:
> +	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
> +	 */
> +	union {
> +		uint32_t u32[2];
> +		uint64_t u64;
> +	} sqn;
> +	uint32_t align0; /* align to 16B boundary */
> +} __attribute__((packed));
> +
> +struct gcm_esph_iv {
> +	struct esp_hdr esph;
> +	uint64_t iv;
> +} __attribute__((packed));
> +
> +
> +static inline void
> +aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
> +{
> +	gcm->salt = salt;
> +	gcm->iv = iv;
> +	gcm->cnt = rte_cpu_to_be_32(1);
> +}
> +
> +/*
> + * RFC 4106, 5 AAD Construction
> + * spi and sqn should already be converted into network byte order.
> + * Make sure that not used bytes are zeroed.
> + */
> +static inline void
> +aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
> +	int esn)
> +{
> +	aad->spi = spi;
> +	if (esn)
> +		aad->sqn.u64 = sqn;
> +	else {
> +		aad->sqn.u32[0] = sqn_low32(sqn);
> +		aad->sqn.u32[1] = 0;
> +	}
> +	aad->align0 = 0;
> +}
> +
> +static inline void
> +gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
> +{
> +	iv[0] = sqn;
> +	iv[1] = 0;
> +}
> +
> +/*
> + * from RFC 4303 3.3.2.1.4:
> + * If the ESN option is enabled for the SA, the high-order 32
> + * bits of the sequence number are appended after the Next Header field
> + * for purposes of this computation, but are not transmitted.
> + */
> +
> +/*
> + * Helper function that moves ICV by 4B below, and inserts SQN.hibits.
> + * icv parameter points to the new start of ICV.
> + */
> +static inline void
> +insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
> +{
> +	uint32_t *icv;
> +	int32_t i;
> +
> +	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
> +
> +	icv = picv;
> +	icv_len = icv_len / sizeof(uint32_t);
> +	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
> +		;
> +
> +	icv[i] = sqh;
> +}
> +
> +/*
> + * Helper function that moves ICV by 4B up, and removes SQN.hibits.
> + * icv parameter points to the new start of ICV.
> + */
> +static inline void
> +remove_sqh(void *picv, uint32_t icv_len)
> +{
> +	uint32_t i, *icv;
> +
> +	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
> +
> +	icv = picv;
> +	icv_len = icv_len / sizeof(uint32_t);
> +	for (i = 0; i != icv_len; i++)
> +		icv[i] = icv[i + 1];
> +}
> +
> +#endif /* _CRYPTO_H_ */
> diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
> new file mode 100644
> index 000000000..3fd93016d
> --- /dev/null
> +++ b/lib/librte_ipsec/iph.h
> @@ -0,0 +1,84 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _IPH_H_
> +#define _IPH_H_
> +
> +/**
> + * @file iph.h
> + * Contains functions/structures/macros to manipulate IPv/IPv6 headers
IPv4
> + * used internally by ipsec library.
> + */
> +
> +/*
> + * Move preceding (L3) headers down to remove ESP header and IV.
> + */
Why can't we use the rte_mbuf APIs to append/prepend/trim/adjust lengths?
I believe these adjustments are happening in the mbuf itself.
Moreover, these functions are not specific to ESP headers.
> +static inline void
> +remove_esph(char *np, char *op, uint32_t hlen)
> +{
> +	uint32_t i;
> +
> +	for (i = hlen; i-- != 0; np[i] = op[i])
> +		;
> +}
> +
> +/*
> + * Move preceding (L3) headers up to free space for ESP header and IV.
> + */
> +static inline void
> +insert_esph(char *np, char *op, uint32_t hlen)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != hlen; i++)
> +		np[i] = op[i];
> +}
> +
> +/* update original ip header fields for trasnport case */
spell check
> +static inline int
> +update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> +		uint32_t l2len, uint32_t l3len, uint8_t proto)
> +{
> +	struct ipv4_hdr *v4h;
> +	struct ipv6_hdr *v6h;
> +	int32_t rc;
> +
> +	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
> +		v4h = p;
> +		rc = v4h->next_proto_id;
> +		v4h->next_proto_id = proto;
> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> +	} else if (l3len == sizeof(*v6h)) {
> +		v6h = p;
> +		rc = v6h->proto;
> +		v6h->proto = proto;
> +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> +				sizeof(*v6h));
> +	/* need to add support for IPv6 with options */
> +	} else
> +		rc = -ENOTSUP;
> +
> +	return rc;
> +}
> +
> +/* update original and new ip header fields for tunnel case */
> +static inline void
> +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> +		uint32_t l2len, rte_be16_t pid)
> +{
> +	struct ipv4_hdr *v4h;
> +	struct ipv6_hdr *v6h;
> +
> +	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> +		v4h = p;
> +		v4h->packet_id = pid;
> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
Where are we updating the rest of the fields, like TTL, checksum, IP
addresses, etc.?
> +	} else {
> +		v6h = p;
> +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> +				sizeof(*v6h));
> +	}
> +}
> +
> +#endif /* _IPH_H_ */
> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> index 1935f6e30..6e18c34eb 100644
> --- a/lib/librte_ipsec/ipsec_sqn.h
> +++ b/lib/librte_ipsec/ipsec_sqn.h
> @@ -15,6 +15,45 @@
>   
>   #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
>   
> +/*
> + * gets SQN.hi32 bits, SQN supposed to be in network byte order.
> + */
> +static inline rte_be32_t
> +sqn_hi32(rte_be64_t sqn)
> +{
> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> +	return (sqn >> 32);
> +#else
> +	return sqn;
> +#endif
> +}
> +
> +/*
> + * gets SQN.low32 bits, SQN supposed to be in network byte order.
> + */
> +static inline rte_be32_t
> +sqn_low32(rte_be64_t sqn)
> +{
> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> +	return sqn;
> +#else
> +	return (sqn >> 32);
> +#endif
> +}
> +
> +/*
> + * gets SQN.low16 bits, SQN supposed to be in network byte order.
> + */
> +static inline rte_be16_t
> +sqn_low16(rte_be64_t sqn)
> +{
> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> +	return sqn;
> +#else
> +	return (sqn >> 48);
> +#endif
> +}
> +
Shouldn't we move these sequence number APIs into rte_esp.h and make them generic?
>   /*
>    * for given size, calculate required number of buckets.
>    */
> @@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
>   	return nb;
>   }
>   
> +/*
> + * According to RFC4303 A2.1, determine the high-order bit of sequence number.
> + * use 32bit arithmetic inside, return uint64_t.
> + */
> +static inline uint64_t
> +reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
> +{
> +	uint32_t th, tl, bl;
> +
> +	tl = t;
> +	th = t >> 32;
> +	bl = tl - w + 1;
> +
> +	/* case A: window is within one sequence number subspace */
> +	if (tl >= (w - 1))
> +		th += (sqn < bl);
> +	/* case B: window spans two sequence number subspaces */
> +	else if (th != 0)
> +		th -= (sqn >= bl);
> +
> +	/* return constructed sequence with proper high-order bits */
> +	return (uint64_t)th << 32 | sqn;
> +}
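
A quick worked example of the two cases above, with w = 64 and
t = 0x1_0000_0010 (th = 1, tl = 0x10, so the window spans the 32-bit
subspace boundary, i.e. case B):

  sqn = 0xFFFFFFF0 -> below the boundary -> th - 1 -> 0x0_FFFFFFF0
  sqn = 0x00000005 -> above the boundary -> th     -> 0x1_00000005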
> +
> +/**
> + * Perform the replay checking.
> + *
> + * struct rte_ipsec_sa contains the window and window related parameters,
> + * such as the window size, bitmask, and the last acknowledged sequence number.
> + *
> + * Based on RFC 6479.
> + * Blocks are 64 bits unsigned integers
> + */
> +static inline int32_t
> +esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
> +	uint64_t sqn)
> +{
> +	uint32_t bit, bucket;
> +
> +	/* replay not enabled */
> +	if (sa->replay.win_sz == 0)
> +		return 0;
> +
> +	/* seq is larger than lastseq */
> +	if (sqn > rsn->sqn)
> +		return 0;
> +
> +	/* seq is outside window */
> +	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
> +		return -EINVAL;
> +
> +	/* seq is inside the window */
> +	bit = sqn & WINDOW_BIT_LOC_MASK;
> +	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
> +
> +	/* already seen packet */
> +	if (rsn->window[bucket] & ((uint64_t)1 << bit))
> +		return -EINVAL;
> +
> +	return 0;
> +}
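A worked example of the window arithmetic may help here (bucket sizing
assumed: 64-bit buckets, i.e. WINDOW_BUCKET_BITS == 6 and
WINDOW_BIT_LOC_MASK == 0x3f):

/*
 * rsn->sqn = 1000, win_sz = 128, incoming sqn = 900:
 *   900 <= 1000 and 900 + 128 >= 1000  -> inside the window
 *   bit    = 900 & 0x3f                     = 4
 *   bucket = (900 >> 6) & bucket_index_mask = 14 & bucket_index_mask
 * the packet is a replay iff bit 4 of rsn->window[bucket] is
 * already set.
 */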
> +
> +/**
> + * For outbound SA perform the sequence number update.
> + */
> +static inline uint64_t
> +esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
> +{
> +	uint64_t n, s, sqn;
> +
> +	n = *num;
> +	sqn = sa->sqn.outb + n;
> +	sa->sqn.outb = sqn;
> +
> +	/* overflow */
> +	if (sqn > sa->sqn_mask) {
> +		s = sqn - sa->sqn_mask;
> +		*num = (s < n) ?  n - s : 0;
> +	}
> +
> +	return sqn - n;
> +}
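A worked example of the overflow clamping (hypothetical non-ESN SA, so
sqn_mask == UINT32_MAX):

/*
 * sa->sqn.outb = 0xfffffffd, *num = 5:
 *   sqn = 0xfffffffd + 5 = 0x100000002 > sqn_mask   -> overflow
 *   s   = sqn - sqn_mask = 3;  s < n  ->  *num = 5 - 3 = 2
 * only 2 of the 5 packets still fit into the SQN space; the callers
 * below set rte_errno to EOVERFLOW for the remainder.
 */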
> +
> +/**
> + * For inbound SA perform the sequence number and replay window update.
> + */
> +static inline int32_t
> +esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
> +	uint64_t sqn)
> +{
> +	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
> +
> +	/* replay not enabled */
> +	if (sa->replay.win_sz == 0)
> +		return 0;
> +
> +	/* handle ESN */
> +	if (IS_ESN(sa))
> +		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
> +
> +	/* seq is outside window*/
> +	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
> +		return -EINVAL;
> +
> +	/* update the bit */
> +	bucket = (sqn >> WINDOW_BUCKET_BITS);
> +
> +	/* check if the seq is within the range */
> +	if (sqn > rsn->sqn) {
> +		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
> +		diff = bucket - last_bucket;
> +		/* seq is way after the range of WINDOW_SIZE */
> +		if (diff > sa->replay.nb_bucket)
> +			diff = sa->replay.nb_bucket;
> +
> +		for (i = 0; i != diff; i++) {
> +			new_bucket = (i + last_bucket + 1) &
> +				sa->replay.bucket_index_mask;
> +			rsn->window[new_bucket] = 0;
> +		}
> +		rsn->sqn = sqn;
> +	}
> +
> +	bucket &= sa->replay.bucket_index_mask;
> +	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
> +
> +	/* already seen packet */
> +	if (rsn->window[bucket] & bit)
> +		return -EINVAL;
> +
> +	rsn->window[bucket] |= bit;
> +	return 0;
> +}
> +
> +/**
> + * To achieve ability to do multiple readers single writer for
> + * SA replay window information and sequence number (RSN)
> + * basic RCU schema is used:
> + * SA have 2 copies of RSN (one for readers, another for writers).
> + * Each RSN contains a rwlock that has to be grabbed (for read/write)
> + * to avoid races between readers and writer.
> + * Writer is responsible to make a copy or reader RSN, update it
> + * and mark newly updated RSN as readers one.
> + * That approach is intended to minimize contention and cache sharing
> + * between writer and readers.
> + */
> +
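A minimal sketch of the scheme just described; all structure and function
names here are hypothetical, the real layout lives in sa.h:

#include <rte_rwlock.h>

struct rsn_copy {
	rte_rwlock_t rwl;
	/* ... sequence number + window bitmap ... */
};

struct rsn_pair {
	struct rsn_copy *rd;	/* copy used by data-path readers */
	struct rsn_copy *wr;	/* writer's scratch copy */
};

static void
rsn_update(struct rsn_pair *p)
{
	struct rsn_copy *tmp;

	/* writer: clone the readers' RSN into the scratch copy and
	 * update it under that copy's own lock */
	rte_rwlock_write_lock(&p->wr->rwl);
	/* ... copy contents of p->rd, apply the new sqn/window ... */
	rte_rwlock_write_unlock(&p->wr->rwl);

	/* publish: the freshly updated RSN becomes the readers' one */
	tmp = p->rd;
	p->rd = p->wr;
	p->wr = tmp;
}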
>   /**
>    * Based on number of buckets calculated required size for the
>    * structure that holds replay window and sequence number (RSN) information.
> diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
> new file mode 100644
> index 000000000..2f5ccd00e
> --- /dev/null
> +++ b/lib/librte_ipsec/pad.h
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _PAD_H_
> +#define _PAD_H_
> +
> +#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
> +
> +static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
> +	1, 2, 3, 4, 5, 6, 7, 8,
> +	9, 10, 11, 12, 13, 14, 15, 16,
> +	17, 18, 19, 20, 21, 22, 23, 24,
> +	25, 26, 27, 28, 29, 30, 31, 32,
> +	33, 34, 35, 36, 37, 38, 39, 40,
> +	41, 42, 43, 44, 45, 46, 47, 48,
> +	49, 50, 51, 52, 53, 54, 55, 56,
> +	57, 58, 59, 60, 61, 62, 63, 64,
> +	65, 66, 67, 68, 69, 70, 71, 72,
> +	73, 74, 75, 76, 77, 78, 79, 80,
> +	81, 82, 83, 84, 85, 86, 87, 88,
> +	89, 90, 91, 92, 93, 94, 95, 96,
> +	97, 98, 99, 100, 101, 102, 103, 104,
> +	105, 106, 107, 108, 109, 110, 111, 112,
> +	113, 114, 115, 116, 117, 118, 119, 120,
> +	121, 122, 123, 124, 125, 126, 127, 128,
> +	129, 130, 131, 132, 133, 134, 135, 136,
> +	137, 138, 139, 140, 141, 142, 143, 144,
> +	145, 146, 147, 148, 149, 150, 151, 152,
> +	153, 154, 155, 156, 157, 158, 159, 160,
> +	161, 162, 163, 164, 165, 166, 167, 168,
> +	169, 170, 171, 172, 173, 174, 175, 176,
> +	177, 178, 179, 180, 181, 182, 183, 184,
> +	185, 186, 187, 188, 189, 190, 191, 192,
> +	193, 194, 195, 196, 197, 198, 199, 200,
> +	201, 202, 203, 204, 205, 206, 207, 208,
> +	209, 210, 211, 212, 213, 214, 215, 216,
> +	217, 218, 219, 220, 221, 222, 223, 224,
> +	225, 226, 227, 228, 229, 230, 231, 232,
> +	233, 234, 235, 236, 237, 238, 239, 240,
> +	241, 242, 243, 244, 245, 246, 247, 248,
> +	249, 250, 251, 252, 253, 254, 255,
> +};
> +
> +#endif /* _PAD_H_ */
> diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
> index e4c5361e7..bb56f42eb 100644
> --- a/lib/librte_ipsec/sa.c
> +++ b/lib/librte_ipsec/sa.c
> @@ -6,9 +6,13 @@
>   #include <rte_esp.h>
>   #include <rte_ip.h>
>   #include <rte_errno.h>
> +#include <rte_cryptodev.h>
>   
>   #include "sa.h"
>   #include "ipsec_sqn.h"
> +#include "crypto.h"
> +#include "iph.h"
> +#include "pad.h"
>   
>   /* some helper structures */
>   struct crypto_xform {
> @@ -207,6 +211,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
>   		/* RFC 4106 */
>   		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
>   			return -EINVAL;
> +		sa->aad_len = sizeof(struct aead_gcm_aad);
>   		sa->icv_len = cxf->aead->digest_length;
>   		sa->iv_ofs = cxf->aead->iv.offset;
>   		sa->iv_len = sizeof(uint64_t);
> @@ -326,18 +331,1053 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
>   	return sz;
>   }
>   
> +static inline void
> +mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
> +	uint32_t num)
> +{
> +	uint32_t i;
> +
> +	for (i = 0; i != num; i++)
> +		dst[i] = src[i];
> +}
> +
> +static inline void
> +lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
> +	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> +{
> +	uint32_t i;
> +	struct rte_crypto_sym_op *sop;
> +
> +	for (i = 0; i != num; i++) {
> +		sop = cop[i]->sym;
> +		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> +		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> +		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
> +		sop->m_src = mb[i];
> +		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
> +	}
> +}
> +
> +static inline void
> +esp_outb_cop_prepare(struct rte_crypto_op *cop,
> +	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
> +	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
> +{
> +	struct rte_crypto_sym_op *sop;
> +	struct aead_gcm_iv *gcm;
> +
> +	/* fill sym op fields */
> +	sop = cop->sym;
> +
> +	/* AEAD (AES_GCM) case */
> +	if (sa->aad_len != 0) {
> +		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
> +		sop->aead.data.length = sa->ctp.cipher.length + plen;
> +		sop->aead.digest.data = icv->va;
> +		sop->aead.digest.phys_addr = icv->pa;
> +		sop->aead.aad.data = icv->va + sa->icv_len;
> +		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
> +
> +		/* fill AAD IV (located inside crypto op) */
> +		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
> +			sa->iv_ofs);
> +		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> +	/* CRYPT+AUTH case */
> +	} else {
> +		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
> +		sop->cipher.data.length = sa->ctp.cipher.length + plen;
> +		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
> +		sop->auth.data.length = sa->ctp.auth.length + plen;
> +		sop->auth.digest.data = icv->va;
> +		sop->auth.digest.phys_addr = icv->pa;
Please ignore my previous comment on ctp in the previous patch.
You are populating the sym_op in this library as well. It would be better
to use sym_op instead of sop to align with the rest of the DPDK code.
> +	}
> +}
> +
> +static inline int32_t
> +esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> +	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> +	union sym_op_data *icv)
> +{
> +	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
> +	struct rte_mbuf *ml;
> +	struct esp_hdr *esph;
> +	struct esp_tail *espt;
> +	char *ph, *pt;
> +	uint64_t *iv;
> +
> +	/* calculate extra header space required */
> +	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
> +
> +	/* size of ipsec protected data */
> +	l2len = mb->l2_len;
> +	plen = mb->pkt_len - mb->l2_len;
> +
> +	/* number of bytes to encrypt */
> +	clen = plen + sizeof(*espt);
> +	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> +	/* pad length + esp tail */
> +	pdlen = clen - plen;
> +	tlen = pdlen + sa->icv_len;
> +
> +	/* do append and prepend */
> +	ml = rte_pktmbuf_lastseg(mb);
> +	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
> +		return -ENOSPC;
> +
> +	/* prepend header */
> +	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
> +	if (ph == NULL)
> +		return -ENOSPC;
> +
> +	/* append tail */
> +	pdofs = ml->data_len;
> +	ml->data_len += tlen;
> +	mb->pkt_len += tlen;
> +	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
> +
> +	/* update pkt l2/l3 len */
> +	mb->l2_len = sa->hdr_l3_off;
> +	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
> +
> +	/* copy tunnel pkt header */
> +	rte_memcpy(ph, sa->hdr, sa->hdr_len);
> +
> +	/* update original and new ip header fields */
> +	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
> +			sqn_low16(sqc));
> +
> +	/* update spi, seqn and iv */
> +	esph = (struct esp_hdr *)(ph + sa->hdr_len);
> +	iv = (uint64_t *)(esph + 1);
> +	rte_memcpy(iv, ivp, sa->iv_len);
> +
> +	esph->spi = sa->spi;
> +	esph->seq = sqn_low32(sqc);
> +
> +	/* offset for ICV */
> +	pdofs += pdlen + sa->sqh_len;
> +
> +	/* pad length */
> +	pdlen -= sizeof(*espt);
> +
> +	/* copy padding data */
> +	rte_memcpy(pt, esp_pad_bytes, pdlen);
> +
> +	/* update esp trailer */
> +	espt = (struct esp_tail *)(pt + pdlen);
> +	espt->pad_len = pdlen;
> +	espt->next_proto = sa->proto;
> +
> +	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
> +	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
> +
> +	return clen;
> +}
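To illustrate the length math above with concrete numbers - hypothetical SA
with pad_align = 16 (AES-CBC block size), icv_len = 12 (HMAC-SHA1-96) and no
ESN, so sqh_len = 0:

#include <stdio.h>
#include <stdint.h>
#include <rte_common.h>	/* RTE_ALIGN_CEIL */

int
main(void)
{
	uint32_t plen = 100;	/* payload bytes past L2 */
	uint32_t clen = plen + 2;	/* + esp_tail{pad_len, next_proto} */

	clen = RTE_ALIGN_CEIL(clen, 16);	/* 102 -> 112 bytes to encrypt */
	printf("clen=%u pdlen=%u pad bytes=%u tail growth=%u\n",
		clen, clen - plen, clen - plen - 2,
		(clen - plen) + 12);	/* pdlen + icv_len */
	return 0;
}
/* prints: clen=112 pdlen=12 pad bytes=10 tail growth=24 */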
> +
> +/*
> + * for pure cryptodev (lookaside none) depending on SA settings,
> + * we might have to write some extra data to the packet.
> + */
> +static inline void
> +outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
> +	const union sym_op_data *icv)
> +{
> +	uint32_t *psqh;
> +	struct aead_gcm_aad *aad;
> +
> +	/* insert SQN.hi between ESP trailer and ICV */
> +	if (sa->sqh_len != 0) {
> +		psqh = (uint32_t *)(icv->va - sa->sqh_len);
> +		psqh[0] = sqn_hi32(sqc);
> +	}
> +
> +	/*
> +	 * fill IV and AAD fields, if any (aad fields are placed after icv),
> +	 * right now we support only one AEAD algorithm: AES-GCM .
> +	 */
> +	if (sa->aad_len != 0) {
> +		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
> +		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
> +	}
> +}
> +
Probably a comment before every function would be better in library code.
> +static uint16_t
> +outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	struct rte_crypto_op *cop[], uint16_t num)
> +{
> +	int32_t rc;
> +	uint32_t i, k, n;
> +	uint64_t sqn;
> +	rte_be64_t sqc;
> +	struct rte_ipsec_sa *sa;
> +	union sym_op_data icv;
> +	uint64_t iv[IPSEC_MAX_IV_QWORD];
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +
> +	n = num;
> +	sqn = esn_outb_update_sqn(sa, &n);
> +	if (n != num)
> +		rte_errno = EOVERFLOW;
> +
> +	k = 0;
> +	for (i = 0; i != n; i++) {
> +
> +		sqc = rte_cpu_to_be_64(sqn + i);
> +		gen_iv(iv, sqc);
> +
> +		/* try to update the packet itself */
> +		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
> +
> +		/* success, setup crypto op */
> +		if (rc >= 0) {
> +			mb[k] = mb[i];
> +			outb_pkt_xprepare(sa, sqc, &icv);
> +			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
> +			k++;
> +		/* failure, put packet into the death-row */
> +		} else {
> +			dr[i - k] = mb[i];
> +			rte_errno = -rc;
> +		}
> +	}
> +
> +	/* update cops */
> +	lksd_none_cop_prepare(ss, mb, cop, k);
> +
> +	 /* copy not prepared mbufs beyond good ones */
> +	if (k != num && k != 0)
> +		mbuf_bulk_copy(mb + k, dr, num - k);
> +
> +	return k;
> +}
> +
> +static inline int32_t
> +esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
> +	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
> +	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
> +{
> +	uint8_t np;
> +	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
> +	struct rte_mbuf *ml;
> +	struct esp_hdr *esph;
> +	struct esp_tail *espt;
> +	char *ph, *pt;
> +	uint64_t *iv;
> +
> +	uhlen = l2len + l3len;
> +	plen = mb->pkt_len - uhlen;
> +
> +	/* calculate extra header space required */
> +	hlen = sa->iv_len + sizeof(*esph);
> +
> +	/* number of bytes to encrypt */
> +	clen = plen + sizeof(*espt);
> +	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
> +
> +	/* pad length + esp tail */
> +	pdlen = clen - plen;
> +	tlen = pdlen + sa->icv_len;
> +
> +	/* do append and insert */
> +	ml = rte_pktmbuf_lastseg(mb);
> +	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
> +		return -ENOSPC;
> +
> +	/* prepend space for ESP header */
> +	ph = rte_pktmbuf_prepend(mb, hlen);
> +	if (ph == NULL)
> +		return -ENOSPC;
> +
> +	/* append tail */
> +	pdofs = ml->data_len;
> +	ml->data_len += tlen;
> +	mb->pkt_len += tlen;
> +	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
> +
> +	/* shift L2/L3 headers */
> +	insert_esph(ph, ph + hlen, uhlen);
> +
> +	/* update ip  header fields */
> +	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
> +			IPPROTO_ESP);
> +
> +	/* update spi, seqn and iv */
> +	esph = (struct esp_hdr *)(ph + uhlen);
> +	iv = (uint64_t *)(esph + 1);
> +	rte_memcpy(iv, ivp, sa->iv_len);
> +
> +	esph->spi = sa->spi;
> +	esph->seq = sqn_low32(sqc);
> +
> +	/* offset for ICV */
> +	pdofs += pdlen + sa->sqh_len;
> +
> +	/* pad length */
> +	pdlen -= sizeof(*espt);
> +
> +	/* copy padding data */
> +	rte_memcpy(pt, esp_pad_bytes, pdlen);
> +
> +	/* update esp trailer */
> +	espt = (struct esp_tail *)(pt + pdlen);
> +	espt->pad_len = pdlen;
> +	espt->next_proto = np;
> +
> +	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
> +	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
> +
> +	return clen;
> +}
> +
> +static uint16_t
> +outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	struct rte_crypto_op *cop[], uint16_t num)
> +{
> +	int32_t rc;
> +	uint32_t i, k, n, l2, l3;
> +	uint64_t sqn;
> +	rte_be64_t sqc;
> +	struct rte_ipsec_sa *sa;
> +	union sym_op_data icv;
> +	uint64_t iv[IPSEC_MAX_IV_QWORD];
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +
> +	n = num;
> +	sqn = esn_outb_update_sqn(sa, &n);
> +	if (n != num)
> +		rte_errno = EOVERFLOW;
> +
> +	k = 0;
> +	for (i = 0; i != n; i++) {
> +
> +		l2 = mb[i]->l2_len;
> +		l3 = mb[i]->l3_len;
> +
> +		sqc = rte_cpu_to_be_64(sqn + i);
> +		gen_iv(iv, sqc);
> +
> +		/* try to update the packet itself */
> +		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
> +				l2, l3, &icv);
> +
> +		/* success, setup crypto op */
> +		if (rc >= 0) {
> +			mb[k] = mb[i];
> +			outb_pkt_xprepare(sa, sqc, &icv);
> +			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
> +			k++;
> +		/* failure, put packet into the death-row */
> +		} else {
> +			dr[i - k] = mb[i];
> +			rte_errno = -rc;
> +		}
> +	}
> +
> +	/* update cops */
> +	lksd_none_cop_prepare(ss, mb, cop, k);
> +
> +	/* copy not prepared mbufs beyond good ones */
> +	if (k != num && k != 0)
> +		mbuf_bulk_copy(mb + k, dr, num - k);
> +
> +	return k;
> +}
> +
> +static inline int32_t
> +esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
> +	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
> +	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
> +{
> +	struct rte_crypto_sym_op *sop;
> +	struct aead_gcm_iv *gcm;
> +	uint64_t *ivc, *ivp;
> +	uint32_t clen;
> +
> +	clen = plen - sa->ctp.cipher.length;
> +	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
> +		return -EINVAL;
> +
> +	/* fill sym op fields */
> +	sop = cop->sym;
> +
> +	/* AEAD (AES_GCM) case */
> +	if (sa->aad_len != 0) {
> +		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
> +		sop->aead.data.length = clen;
> +		sop->aead.digest.data = icv->va;
> +		sop->aead.digest.phys_addr = icv->pa;
> +		sop->aead.aad.data = icv->va + sa->icv_len;
> +		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
> +
> +		/* fill AAD IV (located inside crypto op) */
> +		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
> +			sa->iv_ofs);
> +		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
> +			pofs + sizeof(struct esp_hdr));
> +		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
> +	/* CRYPT+AUTH case */
> +	} else {
> +		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
> +		sop->cipher.data.length = clen;
> +		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
> +		sop->auth.data.length = plen - sa->ctp.auth.length;
> +		sop->auth.digest.data = icv->va;
> +		sop->auth.digest.phys_addr = icv->pa;
> +
> +		/* copy iv from the input packet to the cop */
> +		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
> +		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
> +			pofs + sizeof(struct esp_hdr));
> +		rte_memcpy(ivc, ivp, sa->iv_len);
> +	}
> +	return 0;
> +}
> +
> +/*
> + * for pure cryptodev (lookaside none) depending on SA settings,
> + * we might have to write some extra data to the packet.
> + */
> +static inline void
> +inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
> +	const union sym_op_data *icv)
> +{
> +	struct aead_gcm_aad *aad;
> +
> +	/* insert SQN.hi between ESP trailer and ICV */
> +	if (sa->sqh_len != 0)
> +		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
> +
> +	/*
> +	 * fill AAD fields, if any (aad fields are placed after icv),
> +	 * right now we support only one AEAD algorithm: AES-GCM.
> +	 */
> +	if (sa->aad_len != 0) {
> +		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
> +		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
> +	}
> +}
> +
> +static inline int32_t
> +esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
> +	const struct replay_sqn *rsn, struct rte_mbuf *mb,
> +	uint32_t hlen, union sym_op_data *icv)
> +{
> +	int32_t rc;
> +	uint64_t sqn;
> +	uint32_t icv_ofs, plen;
> +	struct rte_mbuf *ml;
> +	struct esp_hdr *esph;
> +
> +	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
> +
> +	/*
> +	 * retrieve and reconstruct SQN, then check it, then
> +	 * convert it back into network byte order.
> +	 */
> +	sqn = rte_be_to_cpu_32(esph->seq);
> +	if (IS_ESN(sa))
> +		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
> +
> +	rc = esn_inb_check_sqn(rsn, sa, sqn);
> +	if (rc != 0)
> +		return rc;
> +
> +	sqn = rte_cpu_to_be_64(sqn);
> +
> +	/* start packet manipulation */
> +	plen = mb->pkt_len;
> +	plen = plen - hlen;
> +
> +	ml = rte_pktmbuf_lastseg(mb);
> +	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
> +
> +	/* we have to allocate space for AAD somewhere,
> +	 * right now - just use free trailing space at the last segment.
> +	 * Would probably be more convenient to reserve space for AAD
> +	 * inside rte_crypto_op itself
> +	 * (again for IV space is already reserved inside cop).
> +	 */
> +	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
> +		return -ENOSPC;
> +
> +	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
> +	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
> +
> +	inb_pkt_xprepare(sa, sqn, icv);
> +	return plen;
> +}
> +
> +static uint16_t
> +inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	struct rte_crypto_op *cop[], uint16_t num)
> +{
> +	int32_t rc;
> +	uint32_t i, k, hl;
> +	struct rte_ipsec_sa *sa;
> +	struct replay_sqn *rsn;
> +	union sym_op_data icv;
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +	rsn = sa->sqn.inb;
> +
> +	k = 0;
> +	for (i = 0; i != num; i++) {
> +
> +		hl = mb[i]->l2_len + mb[i]->l3_len;
> +		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
> +		if (rc >= 0)
> +			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
> +				hl, rc);
> +
> +		if (rc == 0)
> +			mb[k++] = mb[i];
> +		else {
> +			dr[i - k] = mb[i];
> +			rte_errno = -rc;
> +		}
> +	}
> +
> +	/* update cops */
> +	lksd_none_cop_prepare(ss, mb, cop, k);
> +
> +	/* copy not prepared mbufs beyond good ones */
> +	if (k != num && k != 0)
> +		mbuf_bulk_copy(mb + k, dr, num - k);
> +
> +	return k;
> +}
> +
The naming convention used is cryptic. A comment would be appreciated.
> +static inline void
> +lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
> +	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> +{
> +	uint32_t i;
> +	struct rte_crypto_sym_op *sop;
> +
> +	for (i = 0; i != num; i++) {
> +		sop = cop[i]->sym;
> +		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
> +		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
> +		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
> +		sop->m_src = mb[i];
> +		__rte_security_attach_session(sop, ss->security.ses);
> +	}
> +}
> +
> +static uint16_t
> +lksd_proto_prepare(const struct rte_ipsec_session *ss,
> +	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> +{
> +	lksd_proto_cop_prepare(ss, mb, cop, num);
> +	return num;
> +}
> +
> +static inline int
> +esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
> +	uint32_t *sqn)
> +{
> +	uint32_t hlen, icv_len, tlen;
> +	struct esp_hdr *esph;
> +	struct esp_tail *espt;
> +	struct rte_mbuf *ml;
> +	char *pd;
> +
> +	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
> +		return -EBADMSG;
> +
> +	icv_len = sa->icv_len;
> +
> +	ml = rte_pktmbuf_lastseg(mb);
> +	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
> +		ml->data_len - icv_len - sizeof(*espt));
> +
> +	/*
> +	 * check padding and next proto.
> +	 * return an error if something is wrong.
> +	 */
> +	pd = (char *)espt - espt->pad_len;
> +	if (espt->next_proto != sa->proto ||
> +			memcmp(pd, esp_pad_bytes, espt->pad_len))
> +		return -EINVAL;
> +
> +	/* cut of ICV, ESP tail and padding bytes */
> +	tlen = icv_len + sizeof(*espt) + espt->pad_len;
> +	ml->data_len -= tlen;
> +	mb->pkt_len -= tlen;
> +
> +	/* cut of L2/L3 headers, ESP header and IV */
> +	hlen = mb->l2_len + mb->l3_len;
> +	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
> +	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
> +
> +	/* retrieve SQN for later check */
> +	*sqn = rte_be_to_cpu_32(esph->seq);
> +
> +	/* reset mbuf metadata: L2/L3 len, packet type */
> +	mb->packet_type = RTE_PTYPE_UNKNOWN;
> +	mb->l2_len = 0;
> +	mb->l3_len = 0;
> +
> +	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
> +	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
> +	return 0;
> +}
> +
> +static inline int
> +esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
> +	uint32_t *sqn)
> +{
> +	uint32_t hlen, icv_len, l2len, l3len, tlen;
> +	struct esp_hdr *esph;
> +	struct esp_tail *espt;
> +	struct rte_mbuf *ml;
> +	char *np, *op, *pd;
> +
> +	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
> +		return -EBADMSG;
> +
> +	icv_len = sa->icv_len;
> +
> +	ml = rte_pktmbuf_lastseg(mb);
> +	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
> +		ml->data_len - icv_len - sizeof(*espt));
> +
> +	/* check padding, return an error if something is wrong. */
> +	pd = (char *)espt - espt->pad_len;
> +	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
> +		return -EINVAL;
> +
> +	/* cut of ICV, ESP tail and padding bytes */
> +	tlen = icv_len + sizeof(*espt) + espt->pad_len;
> +	ml->data_len -= tlen;
> +	mb->pkt_len -= tlen;
> +
> +	/* retrieve SQN for later check */
> +	l2len = mb->l2_len;
> +	l3len = mb->l3_len;
> +	hlen = l2len + l3len;
> +	op = rte_pktmbuf_mtod(mb, char *);
> +	esph = (struct esp_hdr *)(op + hlen);
> +	*sqn = rte_be_to_cpu_32(esph->seq);
> +
> +	/* cut off ESP header and IV, update L3 header */
> +	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
> +	remove_esph(np, op, hlen);
> +	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
> +			espt->next_proto);
> +
> +	/* reset mbuf packet type */
> +	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
> +
> +	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
> +	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
> +	return 0;
> +}
> +
> +static inline uint16_t
> +esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
> +	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
> +{
> +	uint32_t i, k;
> +	struct replay_sqn *rsn;
> +
> +	rsn = sa->sqn.inb;
> +
> +	k = 0;
> +	for (i = 0; i != num; i++) {
> +		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
> +			mb[k++] = mb[i];
> +		else
> +			dr[i - k] = mb[i];
> +	}
> +
> +	return k;
> +}
> +
> +static uint16_t
> +inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	uint16_t num)
> +{
> +	uint32_t i, k;
> +	struct rte_ipsec_sa *sa;
> +	uint32_t sqn[num];
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +
> +	/* process packets, extract seq numbers */
> +
> +	k = 0;
> +	for (i = 0; i != num; i++) {
> +		/* good packet */
> +		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
> +			mb[k++] = mb[i];
> +		/* bad packet, will drop from further processing */
> +		else
> +			dr[i - k] = mb[i];
> +	}
> +
> +	/* update seq # and replay window */
> +	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
> +
> +	/* handle unprocessed mbufs */
> +	if (k != num) {
> +		rte_errno = EBADMSG;
> +		if (k != 0)
> +			mbuf_bulk_copy(mb + k, dr, num - k);
> +	}
> +
> +	return k;
> +}
> +
> +static uint16_t
> +inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	uint16_t num)
> +{
> +	uint32_t i, k;
> +	uint32_t sqn[num];
> +	struct rte_ipsec_sa *sa;
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +
> +	/* process packets, extract seq numbers */
> +
> +	k = 0;
> +	for (i = 0; i != num; i++) {
> +		/* good packet */
> +		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
> +			mb[k++] = mb[i];
> +		/* bad packet, will drop from further processing */
> +		else
> +			dr[i - k] = mb[i];
> +	}
> +
> +	/* update seq # and replay window */
> +	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
> +
> +	/* handle unprocessed mbufs */
> +	if (k != num) {
> +		rte_errno = EBADMSG;
> +		if (k != 0)
> +			mbuf_bulk_copy(mb + k, dr, num - k);
> +	}
> +
> +	return k;
> +}
> +
> +/*
> + * process outbound packets for SA with ESN support,
> + * for algorithms that require SQN.hibits to be implicitly included
> + * into digest computation.
> + * In that case we have to move ICV bytes back to their proper place.
> + */
> +static uint16_t
> +outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	uint16_t num)
> +{
> +	uint32_t i, k, icv_len, *icv;
> +	struct rte_mbuf *ml;
> +	struct rte_ipsec_sa *sa;
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +
> +	k = 0;
> +	icv_len = sa->icv_len;
> +
> +	for (i = 0; i != num; i++) {
> +		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
> +			ml = rte_pktmbuf_lastseg(mb[i]);
> +			icv = rte_pktmbuf_mtod_offset(ml, void *,
> +				ml->data_len - icv_len);
> +			remove_sqh(icv, icv_len);
> +			mb[k++] = mb[i];
> +		} else
> +			dr[i - k] = mb[i];
> +	}
> +
> +	/* handle unprocessed mbufs */
> +	if (k != num) {
> +		rte_errno = EBADMSG;
> +		if (k != 0)
> +			mbuf_bulk_copy(mb + k, dr, num - k);
> +	}
> +
> +	return k;
> +}
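To visualize what remove_sqh() undoes here - layout inferred from the
prepare routines above, offsets shown past the ESP trailer:

/*
 * at crypto time:   ... | esp_tail | SQN.hi32 | ICV (from crypto dev)
 * remove_sqh() slides the ICV left over the 4-byte SQN.hi32 hole:
 * wire format:      ... | esp_tail | ICV
 */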
> +
> +/*
> + * simplest pkt process routine:
> + * all actual processing is done already doneby HW/PMD,

all actual processing is already done by HW/PMD

> + * just check mbuf ol_flags.
> + * used for:
> + * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
> + * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> + * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
> + */
> +static uint16_t
> +pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> +	uint16_t num)
> +{
> +	uint32_t i, k;
> +	struct rte_mbuf *dr[num];
> +
> +	RTE_SET_USED(ss);
> +
> +	k = 0;
> +	for (i = 0; i != num; i++) {
> +		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
> +			mb[k++] = mb[i];
> +		else
> +			dr[i - k] = mb[i];
> +	}
> +
> +	/* handle unprocessed mbufs */
> +	if (k != num) {
> +		rte_errno = EBADMSG;
> +		if (k != 0)
> +			mbuf_bulk_copy(mb + k, dr, num - k);
> +	}
> +
> +	return k;
> +}
> +
> +/*
> + * prepare packets for inline ipsec processing:
> + * set ol_flags and attach metadata.
> + */
> +static inline void
> +inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
> +	struct rte_mbuf *mb[], uint16_t num)
> +{
> +	uint32_t i, ol_flags;
> +
> +	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
> +	for (i = 0; i != num; i++) {
> +
> +		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
> +		if (ol_flags != 0)
> +			rte_security_set_pkt_metadata(ss->security.ctx,
> +				ss->security.ses, mb[i], NULL);
> +	}
> +}
> +
> +static uint16_t
> +inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
> +	struct rte_mbuf *mb[], uint16_t num)
> +{
> +	int32_t rc;
> +	uint32_t i, k, n;
> +	uint64_t sqn;
> +	rte_be64_t sqc;
> +	struct rte_ipsec_sa *sa;
> +	union sym_op_data icv;
> +	uint64_t iv[IPSEC_MAX_IV_QWORD];
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +
> +	n = num;
> +	sqn = esn_outb_update_sqn(sa, &n);
> +	if (n != num)
> +		rte_errno = EOVERFLOW;
> +
> +	k = 0;
> +	for (i = 0; i != n; i++) {
> +
> +		sqc = rte_cpu_to_be_64(sqn + i);
> +		gen_iv(iv, sqc);
> +
> +		/* try to update the packet itself */
> +		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
> +
> +		/* success, update mbuf fields */
> +		if (rc >= 0)
> +			mb[k++] = mb[i];
> +		/* failure, put packet into the death-row */
> +		else {
> +			dr[i - k] = mb[i];
> +			rte_errno = -rc;
> +		}
> +	}
> +
> +	inline_outb_mbuf_prepare(ss, mb, k);
> +
> +	/* copy not processed mbufs beyond good ones */
> +	if (k != num && k != 0)
> +		mbuf_bulk_copy(mb + k, dr, num - k);
> +
> +	return k;
> +}
> +
> +static uint16_t
> +inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
> +	struct rte_mbuf *mb[], uint16_t num)
> +{
> +	int32_t rc;
> +	uint32_t i, k, n, l2, l3;
> +	uint64_t sqn;
> +	rte_be64_t sqc;
> +	struct rte_ipsec_sa *sa;
> +	union sym_op_data icv;
> +	uint64_t iv[IPSEC_MAX_IV_QWORD];
> +	struct rte_mbuf *dr[num];
> +
> +	sa = ss->sa;
> +
> +	n = num;
> +	sqn = esn_outb_update_sqn(sa, &n);
> +	if (n != num)
> +		rte_errno = EOVERFLOW;
> +
> +	k = 0;
> +	for (i = 0; i != n; i++) {
> +
> +		l2 = mb[i]->l2_len;
> +		l3 = mb[i]->l3_len;
> +
> +		sqc = rte_cpu_to_be_64(sqn + i);
> +		gen_iv(iv, sqc);
> +
> +		/* try to update the packet itself */
> +		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
> +				l2, l3, &icv);
> +
> +		/* success, update mbuf fields */
> +		if (rc >= 0)
> +			mb[k++] = mb[i];
> +		/* failure, put packet into the death-row */
> +		else {
> +			dr[i - k] = mb[i];
> +			rte_errno = -rc;
> +		}
> +	}
> +
> +	inline_outb_mbuf_prepare(ss, mb, k);
> +
> +	/* copy not processed mbufs beyond good ones */
> +	if (k != num && k != 0)
> +		mbuf_bulk_copy(mb + k, dr, num - k);
> +
> +	return k;
> +}
> +
> +/*
> + * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
> + * actual processing is done by HW/PMD, just set flags and metadata.
> + */
> +static uint16_t
> +outb_inline_proto_process(const struct rte_ipsec_session *ss,
> +		struct rte_mbuf *mb[], uint16_t num)
> +{
> +	inline_outb_mbuf_prepare(ss, mb, num);
> +	return num;
> +}
> +
> +static int
> +lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
> +		struct rte_ipsec_sa_pkt_func *pf)
> +{
> +	int32_t rc;
> +
> +	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> +			RTE_IPSEC_SATP_MODE_MASK;
> +
> +	rc = 0;
> +	switch (sa->type & msk) {
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
> +		pf->prepare = inb_pkt_prepare;
> +		pf->process = inb_tun_pkt_process;
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
> +		pf->prepare = inb_pkt_prepare;
> +		pf->process = inb_trs_pkt_process;
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
> +		pf->prepare = outb_tun_prepare;
> +		pf->process = (sa->sqh_len != 0) ?
> +			outb_sqh_process : pkt_flag_process;
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> +		pf->prepare = outb_trs_prepare;
> +		pf->process = (sa->sqh_len != 0) ?
> +			outb_sqh_process : pkt_flag_process;
> +		break;
> +	default:
> +		rc = -ENOTSUP;
> +	}
> +
> +	return rc;
> +}
> +
> +static int
> +inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
> +		struct rte_ipsec_sa_pkt_func *pf)
> +{
> +	int32_t rc;
> +
> +	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> +			RTE_IPSEC_SATP_MODE_MASK;
> +
> +	rc = 0;
> +	switch (sa->type & msk) {
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
> +		pf->process = inb_tun_pkt_process;
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
> +		pf->process = inb_trs_pkt_process;
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
> +		pf->process = inline_outb_tun_pkt_process;
> +		break;
> +	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
> +		pf->process = inline_outb_trs_pkt_process;
> +		break;
> +	default:
> +		rc = -ENOTSUP;
> +	}
> +
> +	return rc;
> +}
> +
>   int
>   ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
>   	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
>   {
>   	int32_t rc;
>   
> -	RTE_SET_USED(sa);
> -
>   	rc = 0;
>   	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
>   
>   	switch (ss->type) {
> +	case RTE_SECURITY_ACTION_TYPE_NONE:
> +		rc = lksd_none_pkt_func_select(sa, pf);
> +		break;
> +	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
> +		rc = inline_crypto_pkt_func_select(sa, pf);
> +		break;
> +	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
> +		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
> +				RTE_IPSEC_SATP_DIR_IB)
> +			pf->process = pkt_flag_process;
> +		else
> +			pf->process = outb_inline_proto_process;
> +		break;
> +	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
> +		pf->prepare = lksd_proto_prepare;
It would be better to use "lookaside" instead of "lksd".
> +		pf->process = pkt_flag_process;
> +		break;
>   	default:
>   		rc = -ENOTSUP;
>   	}


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-12-19 15:46         ` Akhil Goyal
  2018-12-20 13:00           ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19 15:46 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: 0000-cover-letter.patch



On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> Introduce helper functions to process completed crypto-ops
> and group related packets by sessions they belong to.
>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> ---
>   lib/librte_ipsec/Makefile              |   1 +
>   lib/librte_ipsec/meson.build           |   2 +-
>   lib/librte_ipsec/rte_ipsec.h           |   2 +
>   lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
>   lib/librte_ipsec/rte_ipsec_version.map |   2 +
>   5 files changed, 157 insertions(+), 1 deletion(-)
>   create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
>
> diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
> index 79f187fae..98c52f388 100644
> --- a/lib/librte_ipsec/Makefile
> +++ b/lib/librte_ipsec/Makefile
> @@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
>   
>   # install header files
>   SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
>   SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
>   
>   include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
> index 6e8c6fabe..d2427b809 100644
> --- a/lib/librte_ipsec/meson.build
> +++ b/lib/librte_ipsec/meson.build
> @@ -5,6 +5,6 @@ allow_experimental_apis = true
>   
>   sources=files('sa.c', 'ses.c')
>   
> -install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
> +install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
>   
>   deps += ['mbuf', 'net', 'cryptodev', 'security']
> diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
> index cbcd861b5..cd2e3b26c 100644
> --- a/lib/librte_ipsec/rte_ipsec.h
> +++ b/lib/librte_ipsec/rte_ipsec.h
> @@ -144,6 +144,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
>   	return ss->pkt_func.process(ss, mb, num);
>   }
>   
> +#include <rte_ipsec_group.h>
> +
>   #ifdef __cplusplus
>   }
>   #endif
> diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
> new file mode 100644
> index 000000000..d264d7e78
> --- /dev/null
> +++ b/lib/librte_ipsec/rte_ipsec_group.h
> @@ -0,0 +1,151 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2018 Intel Corporation
> + */
> +
> +#ifndef _RTE_IPSEC_GROUP_H_
> +#define _RTE_IPSEC_GROUP_H_
> +
> +/**
> + * @file rte_ipsec_group.h
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * RTE IPsec support.
> + * It is not recomended to include this file direclty,
spell check: "recomended" -> "recommended", "direclty" -> "directly"
> + * include <rte_ipsec.h> instead.
> + * Contains helper functions to process completed crypto-ops
> + * and group related packets by sessions they belong to.
> + */
> +
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * Used to group mbufs by some id.
> + * See below for particular usage.
> + */
> +struct rte_ipsec_group {
> +	union {
> +		uint64_t val;
> +		void *ptr;
> +	} id; /**< grouped by value */
> +	struct rte_mbuf **m;  /**< start of the group */
> +	uint32_t cnt;         /**< number of entries in the group */
> +	int32_t rc;           /**< status code associated with the group */
> +};
> +
> +/**
> + * Take crypto-op as an input and extract pointer to related ipsec session.
> + * @param cop
> + *   The address of an input *rte_crypto_op* structure.
> + * @return
> + *   The pointer to the related *rte_ipsec_session* structure.
> + */
> +static inline __rte_experimental struct rte_ipsec_session *
> +rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
__rte_experimental placement is not correct; it should follow the return
type, as done for rte_ipsec_pkt_crypto_group() below.
> +{
> +	const struct rte_security_session *ss;
> +	const struct rte_cryptodev_sym_session *cs;
> +
> +	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
> +		ss = cop->sym[0].sec_session;
> +		return (void *)(uintptr_t)ss->opaque_data;
> +	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> +		cs = cop->sym[0].session;
> +		return (void *)(uintptr_t)cs->opaque_data;
> +	}
> +	return NULL;
> +}
> +
> +/**
> + * Take as input completed crypto ops, extract related mbufs
> + * and group them by rte_ipsec_session they belong to.
> + * For mbuf which crypto-op wasn't completed successfully
> + * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
> + * Note that mbufs with undetermined SA (session-less) are not freed
> + * by the function, but are placed beyond mbufs for the last valid group.
> + * It is a user responsibility to handle them further.
> + * @param cop
> + *   The address of an array of *num* pointers to the input *rte_crypto_op*
> + *   structures.
> + * @param mb
> + *   The address of an array of *num* pointers to output *rte_mbuf* structures.
> + * @param grp
> + *   The address of an array of *num* to output *rte_ipsec_group* structures.
> + * @param num
> + *   The maximum number of crypto-ops to process.
> + * @return
> + *   Number of filled elements in *grp* array.
> + */
> +static inline uint16_t __rte_experimental
> +rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
> +	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
> +{
> +	uint32_t i, j, k, n;
> +	void *ns, *ps;
> +	struct rte_mbuf *m, *dr[num];
> +
> +	j = 0;
> +	k = 0;
> +	n = 0;
> +	ps = NULL;
> +
> +	for (i = 0; i != num; i++) {
> +
> +		m = cop[i]->sym[0].m_src;
> +		ns = cop[i]->sym[0].session;
> +
> +		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
> +		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
> +			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
> +
> +		/* no valid session found */
> +		if (ns == NULL) {
> +			dr[k++] = m;
> +			continue;
> +		}
> +
> +		/* different SA */
> +		if (ps != ns) {
> +
> +			/*
> +			 * we already have an open group - finilise it,
spell check: "finilise" -> "finalise"
> +			 * then open a new one.
> +			 */
> +			if (ps != NULL) {
> +				grp[n].id.ptr =
> +					rte_ipsec_ses_from_crypto(cop[i - 1]);
> +				grp[n].cnt = mb + j - grp[n].m;
> +				n++;
> +			}
> +
> +			/* start new group */
> +			grp[n].m = mb + j;
> +			ps = ns;
> +		}
> +
> +		mb[j++] = m;
> +	}
> +
> +	/* finalise last group */
> +	if (ps != NULL) {
> +		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
> +		grp[n].cnt = mb + j - grp[n].m;
> +		n++;
> +	}
> +
> +	/* copy mbufs with unknown session beyond recognised ones */
> +	if (k != 0 && k != num) {
> +		for (i = 0; i != k; i++)
> +			mb[j + i] = dr[i];
> +	}
> +
> +	return n;
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_IPSEC_GROUP_H_ */
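A usage sketch tying the two helpers above into the lookaside data path;
burst size, dev_id and qp_id are hypothetical:

	struct rte_crypto_op *cop[32];
	struct rte_mbuf *mb[32];
	struct rte_ipsec_group grp[32];
	uint16_t i, n, ng;

	n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, RTE_DIM(cop));
	ng = rte_ipsec_pkt_crypto_group((const struct rte_crypto_op **)cop,
		mb, grp, n);

	for (i = 0; i != ng; i++) {
		struct rte_ipsec_session *ss = grp[i].id.ptr;
		uint16_t k = rte_ipsec_pkt_process(ss, grp[i].m, grp[i].cnt);

		/* grp[i].m[k] ... grp[i].m[grp[i].cnt - 1] failed: drop them */
	}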
> diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
> index d1c52d7ca..0f91fb134 100644
> --- a/lib/librte_ipsec/rte_ipsec_version.map
> +++ b/lib/librte_ipsec/rte_ipsec_version.map
> @@ -1,6 +1,7 @@
>   EXPERIMENTAL {
>   	global:
>   
> +	rte_ipsec_pkt_crypto_group;
>   	rte_ipsec_pkt_crypto_prepare;
>   	rte_ipsec_session_prepare;
>   	rte_ipsec_pkt_process;
> @@ -8,6 +9,7 @@ EXPERIMENTAL {
>   	rte_ipsec_sa_init;
>   	rte_ipsec_sa_size;
>   	rte_ipsec_sa_type;
> +	rte_ipsec_ses_from_crypto;
>   
>   	local: *;
>   };


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test Konstantin Ananyev
@ 2018-12-19 15:53         ` Akhil Goyal
  2018-12-20 13:03           ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19 15:53 UTC (permalink / raw)
  To: Konstantin Ananyev, dev
  Cc: Thomas Monjalon, Mohammad Abdul Awal, Bernard Iremonger



On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> +static struct unit_test_suite ipsec_testsuite  = {
> +	.suite_name = "IPsec NULL Unit Test Suite",
> +	.setup = testsuite_setup,
> +	.teardown = testsuite_teardown,
> +	.unit_test_cases = {
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_crypto_inb_burst_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_crypto_outb_burst_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_inline_inb_burst_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_inline_outb_burst_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_replay_inb_inside_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_replay_inb_outside_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_replay_inb_repeat_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
> +		TEST_CASE_ST(ut_setup, ut_teardown,
> +			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
> +		TEST_CASES_END() /**< NULL terminate unit test array */
> +	}
> +};
> +
Test cases for the lookaside proto and inline proto cases should also be
added here.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide
  2018-12-14 16:27       ` [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide Konstantin Ananyev
  2018-12-19  3:46         ` Thomas Monjalon
@ 2018-12-19 16:01         ` Akhil Goyal
  2018-12-20 13:06           ` Ananyev, Konstantin
  1 sibling, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-19 16:01 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Bernard Iremonger



On 12/14/2018 9:57 PM, Konstantin Ananyev wrote:
> Add IPsec library guide and update release notes.
>
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>   doc/guides/prog_guide/index.rst        |  1 +
>   doc/guides/prog_guide/ipsec_lib.rst    | 74 ++++++++++++++++++++++++++
>   doc/guides/rel_notes/release_19_02.rst | 10 ++++
>   3 files changed, 85 insertions(+)
>   create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
>
> diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
> index ba8c1f6ad..6726b1e8d 100644
> --- a/doc/guides/prog_guide/index.rst
> +++ b/doc/guides/prog_guide/index.rst
> @@ -54,6 +54,7 @@ Programmer's Guide
>       vhost_lib
>       metrics_lib
>       bpf_lib
> +    ipsec_lib
>       source_org
>       dev_kit_build_system
>       dev_kit_root_make_help
> diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
> new file mode 100644
> index 000000000..f3b783c20
> --- /dev/null
> +++ b/doc/guides/prog_guide/ipsec_lib.rst
> @@ -0,0 +1,74 @@
> +..  SPDX-License-Identifier: BSD-3-Clause
> +    Copyright(c) 2018 Intel Corporation.
> +
> +IPsec Packet Processing Library
> +===============================
> +
> +The DPDK provides a library for IPsec data-path processing.
> +The library utilizes existing DPDK crypto-dev and
> +security API to provide application with transparent and
> +high performance IPsec packet processing API.
> +The library is concentrated on data-path protocols processing
> +(ESP and AH), IKE protocol(s) implementation is out of scope
> +for that library.
I do not see AH processing in the library
> +
> +SA level API
> +------------
> +
> +This API operates on IPsec SA level.
> +It provides functionality that allows user for given SA to process
> +inbound and outbound IPsec packets.
> +To be more specific:
> +*  for inbound ESP/AH packets perform decryption, authentication, integrity checking, remove ESP/AH related headers
> +*  for outbound packets perform payload encryption, attach ICV, update/add IP headers, add ESP/AH headers/trailers,
> +*  setup related mbuf fields (ol_flags, tx_offloads, etc.).
> +*  initialize/un-initialize given SA based on user provided parameters.
> +
> +SA-level API is based on top of crypto-dev/security API and relies on
> +them to perform actual cipher and integrity checking.
> +
> +Due to the nature of crypto-dev API (enqueue/deque model) library introduces
> +asynchronous API for IPsec packets destined to be processed by crypto-device.
> +
> +Expected API call sequence for data-path processing would be:
> +
> +.. code-block:: c
> +
> +    /* enqueue for processing by crypto-device */
> +    rte_ipsec_pkt_crypto_prepare(...);
> +    rte_cryptodev_enqueue_burst(...);
> +    /* dequeue from crypto-device and do final processing (if any) */
> +    rte_cryptodev_dequeue_burst(...);
> +    rte_ipsec_pkt_crypto_group(...); /* optional */
> +    rte_ipsec_pkt_process(...);
> +
> +For packets destined for inline processing no extra overhead
> +is required and synchronous API call: rte_ipsec_pkt_process()
> +is sufficient for that case.
> +
> +.. note::
> +
> +    For more details about the IPsec API, please refer to the *DPDK API Reference*.
> +
> +Current implementation supports all four currently defined rte_security types:
> +*  RTE_SECURITY_ACTION_TYPE_NONE
> +
> +*  RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
> +
> +*  RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
> +
> +*  RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> +
Probably a code flow diagram should be added and explained in detail for
each of the action types.
> +To accommodate future custom implementations, a function pointers
> +model is used for both *crypto_prepare* and *process*
> +implementations.
> +
> +Supported features:
> +*  ESP protocol tunnel mode.
> +
> +*  ESP protocol transport mode.
> +
> +*  ESN and replay window.
> +
> +*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
The supported features should be elaborated further.
> +
> diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
> index e86ef9511..e88289f73 100644
> --- a/doc/guides/rel_notes/release_19_02.rst
> +++ b/doc/guides/rel_notes/release_19_02.rst
> @@ -60,6 +60,16 @@ New Features
>     * Added the handler to get firmware version string.
>     * Added support for multicast filtering.
>   
> +* **Added IPsec Library.**
> +
> +  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
> +  transport support for IPv4 and IPv6 packets.
> +
> +  The library provides support for AES-CBC ciphering and AES-CBC with HMAC-SHA1
> +  algorithm-chaining, and AES-GCM and NULL algorithms only at present. It is
> +  planned to add more algorithms in future releases.
> +
> +  See :doc:`../prog_guide/ipsec_lib` for more information.
>   
>   Removed Items
>   -------------


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API
  2018-12-19 13:04         ` Akhil Goyal
@ 2018-12-20 10:17           ` Ananyev, Konstantin
  2018-12-21 12:14             ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 10:17 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul


Hi Akhil,

> 
> Hi Konstantin,
> 
> Sorry for a late review. I was on unplanned leave for more than 2 weeks.

No worries, thanks for the review.
Comments/answers inline.
Konstantin

> > +/**
> > + * Checks that inside given rte_ipsec_session crypto/security fields
> > + * are filled correctly and sets up function pointers based on these values.
> it means the user need not fill rte_ipsec_sa_pkt_func;

Yes.

> specify this in the comments.

Ok.

> > + * @param ss
> > + *   Pointer to the *rte_ipsec_session* object
> > + * @return
> > + *   - Zero if operation completed successfully.
> > + *   - -EINVAL if the parameters are invalid.
> > + */
> > +int __rte_experimental
> > +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
> > +
> > +/**
> > + * For input mbufs and given IPsec session prepare crypto ops that can be
> > + * enqueued into the cryptodev associated with given session.
> > + * expects that for each input packet:
> > + *      - l2_len, l3_len are setup correctly
> > + * Note that erroneous mbufs are not freed by the function,
> > + * but are placed beyond last valid mbuf in the *mb* array.
> > + * It is a user responsibility to handle them further.
> How will the user know how many mbufs are correctly processed?

The function return value contains the number of successfully processed
packets, see comments below.
As an example, let's say the input is mb[]={A, B, C, D}, num=4, and
prepare() was able to successfully process mbufs A, B and D but failed to
process C. Then the return value will be 3, and mb[]={A, B, D, C}.
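Expressed as caller-side code, that contract would look like this (sketch,
hypothetical burst of 4):

	uint16_t i, k, num = 4;

	k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
	/* mb[0 .. k-1] were prepared fine, mb[k .. num-1] are the failures */
	for (i = k; i != num; i++)
		rte_pktmbuf_free(mb[i]);
	if (k != num)
		printf("prepare failed for %u pkts: %s\n",
			num - k, rte_strerror(rte_errno));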

> > + * @param ss
> > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > + * @param mb
> > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > + *   which contain the input packets.
> > + * @param cop
> > + *   The address of an array of *num* pointers to the output *rte_crypto_op*
> > + *   structures.
> > + * @param num
> > + *   The maximum number of packets to process.
> > + * @return
> > + *   Number of successfully processed packets, with error code set in rte_errno.
> > + */
> > +static inline uint16_t __rte_experimental
> > +rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
> > +	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
> > +{
> > +	return ss->pkt_func.prepare(ss, mb, cop, num);
> > +}
> > +
> > +/**
> > + * Finalise processing of packets after crypto-dev finished with them or
> > + * process packets that are subjects to inline IPsec offload.
> > + * Expects that for each input packet:
> > + *      - l2_len, l3_len are setup correctly
> > + * Output mbufs will be:
> > + * inbound - decrypted & authenticated, ESP(AH) related headers removed,
> > + * *l2_len* and *l3_len* fields are updated.
> > + * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
> > + * properly setup, if necessary - IP headers updated, ESP(AH) fields added,
> > + * Note that erroneous mbufs are not freed by the function,
> > + * but are placed beyond last valid mbuf in the *mb* array.
> same question

Same answer as above.

> > + * It is a user responsibility to handle them further.
> > + * @param ss
> > + *   Pointer to the *rte_ipsec_session* object the packets belong to.
> > + * @param mb
> > + *   The address of an array of *num* pointers to *rte_mbuf* structures
> > + *   which contain the input packets.
> > + * @param num
> > + *   The maximum number of packets to process.
> > + * @return
> > + *   Number of successfully processed packets, with error code set in rte_errno.
> > + */
> > +static inline uint16_t __rte_experimental
> > +rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
> > +	uint16_t num)
> > +{
> > +	return ss->pkt_func.process(ss, mb, num);
> > +}
> > +
> > +#ifdef __cplusplus
> > +}
> > +#endif
> > +
> > +#endif /* _RTE_IPSEC_H_ */


> > +
> > +int
> > +ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
> > +	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
> > +{
> > +	int32_t rc;
> > +
> > +	RTE_SET_USED(sa);
> > +
> > +	rc = 0;
> > +	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
> > +
> > +	switch (ss->type) {
> > +	default:
> > +		rc = -ENOTSUP;
> > +	}
> > +
> > +	return rc;
> > +}
> Is this a dummy function? Will it be updated later?

Yes it is a dummy function in that patch, will be filled in patch #6.

> I believe it should have appropriate comments in that case.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-19 15:32         ` Akhil Goyal
@ 2018-12-20 12:56           ` Ananyev, Konstantin
  2018-12-21 12:36             ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 12:56 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul



> >
> > diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
> > new file mode 100644
> > index 000000000..61f5c1433
> > --- /dev/null
> > +++ b/lib/librte_ipsec/crypto.h
> > @@ -0,0 +1,123 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018 Intel Corporation
> > + */
> > +
> > +#ifndef _CRYPTO_H_
> > +#define _CRYPTO_H_
> > +
> > +/**
> > + * @file crypto.h
> > + * Contains crypto specific functions/structures/macros used internally
> > + * by ipsec library.
> > + */
> > +
> > + /*
> > +  * AES-GCM devices have some specific requirements for IV and AAD formats.
> > +  * Ideally that to be done by the driver itself.
> > +  */
I believe these can be moved to rte_crypto_sym.h. All crypto-related
stuff should be in the same place.

Not sure what exactly you suggest putting into rte_crypto_sym.h -
struct aead_gcm_iv? Something else?
From my perspective it would be good if the user just filled the salt
and IV fields in crypto_sym_op, and the PMD then set things up in the
needed format internally.
Again, it would be really good if crypto_sym_op had reserved space
for AAD...
But all that implies quite a big change in cryptodev and PMDs,
so I think it should be the subject of a separate patch.
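In the meantime, for anyone unfamiliar with the RFC 4106 layout, here is a
standalone dump of the 16-byte IV/counter block described by the struct
quoted below; the salt and IV values are hypothetical, and
__builtin_bswap32 stands in for rte_cpu_to_be_32 on a little-endian host:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct aead_gcm_iv {
	uint32_t salt;
	uint64_t iv;
	uint32_t cnt;
} __attribute__((packed));

int
main(void)
{
	struct aead_gcm_iv gcm = {
		.salt = 0xdeadbeef,
		.iv = 0x0102030405060708ULL,
		.cnt = __builtin_bswap32(1),	/* counter starts at BE 1 */
	};
	uint8_t b[sizeof(gcm)];
	unsigned int i;

	memcpy(b, &gcm, sizeof(b));
	for (i = 0; i != sizeof(b); i++)	/* 16 bytes: salt | iv | ctr */
		printf("%02x ", b[i]);
	putchar('\n');
	return 0;
}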

> > +
> > +struct aead_gcm_iv {
> > +	uint32_t salt;
> > +	uint64_t iv;
> > +	uint32_t cnt;
> > +} __attribute__((packed));
> > +
> > +struct aead_gcm_aad {
> > +	uint32_t spi;
> > +	/*
> > +	 * RFC 4106, section 5:
> > +	 * Two formats of the AAD are defined:
> > +	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
> > +	 */
> > +	union {
> > +		uint32_t u32[2];
> > +		uint64_t u64;
> > +	} sqn;
> > +	uint32_t align0; /* align to 16B boundary */
> > +} __attribute__((packed));
> > +
> > +struct gcm_esph_iv {
> > +	struct esp_hdr esph;
> > +	uint64_t iv;
> > +} __attribute__((packed));
> > +
> > +
> > +static inline void
> > +aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
> > +{
> > +	gcm->salt = salt;
> > +	gcm->iv = iv;
> > +	gcm->cnt = rte_cpu_to_be_32(1);
> > +}
> > +
> > +/*


> > diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
> > new file mode 100644
> > index 000000000..3fd93016d
> > --- /dev/null
> > +++ b/lib/librte_ipsec/iph.h
> > @@ -0,0 +1,84 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018 Intel Corporation
> > + */
> > +
> > +#ifndef _IPH_H_
> > +#define _IPH_H_
> > +
> > +/**
> > + * @file iph.h
> > + * Contains functions/structures/macros to manipulate IPv/IPv6 headers
> IPv4
> > + * used internally by ipsec library.
> > + */
> > +
> > +/*
> > + * Move preceding (L3) headers down to remove ESP header and IV.
> > + */
> why cant we use rte_mbuf APIs to append/prepend/trim/adjust lengths.

We do use rte_mbuf append/trim, etc. to adjust the mbuf's data_off and data_len.
But apart from that, for transport mode we have to move the actual packet headers.
Let's say for inbound we have to get rid of the ESP header (which is after the IP header)
but preserve the IP header, so we move the L2/L3 headers down, overwriting the ESP header.
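A simplified sketch of that inbound transport-mode fixup, using the
remove_esph() helper quoted below (illustration only, not the exact
library code):

	/* ESP header and IV sit between the IP header and the payload */
	char *op = rte_pktmbuf_mtod(mb, char *);	/* old hdr start */
	uint32_t hlen = mb->l2_len + mb->l3_len;	/* bytes to keep */
	uint32_t elen = sizeof(struct esp_hdr) + sa->iv_len;

	/* copy L2/L3 headers down over the ESP header (backward copy,
	 * as the regions overlap), then drop the freed front bytes */
	remove_esph(op + elen, op, hlen);
	rte_pktmbuf_adj(mb, elen);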

> I believe these adjustments are happening in the mbuf itself.
> Moreover these APIs are not specific to esp headers.

I didn't get your last sentence: that function is used to remove the ESP
header (see above) - that's why I named it that way.

> > +static inline void
> > +remove_esph(char *np, char *op, uint32_t hlen)
> > +{
> > +	uint32_t i;
> > +
> > +	for (i = hlen; i-- != 0; np[i] = op[i])
> > +		;
> > +}
> > +
> > +/*


> > +
> > +/* update original and new ip header fields for tunnel case */
> > +static inline void
> > +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> > +		uint32_t l2len, rte_be16_t pid)
> > +{
> > +	struct ipv4_hdr *v4h;
> > +	struct ipv6_hdr *v6h;
> > +
> > +	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> > +		v4h = p;
> > +		v4h->packet_id = pid;
> > +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> where are we updating the rest of the fields, like ttl, checksum, ip
> addresses, etc

TTL, IP addresses and other fields are supposed to be set up by the user
and provided via rte_ipsec_sa_init():
struct rte_ipsec_sa_prm.tun.hdr should contain a prepared template
for the L3 (and, if the user wants, L2) header.
Checksum calculation is not done inside the lib right now -
it is the user's responsibility to calculate/set it after librte_ipsec
finishes processing the packet.
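For an IPv4 tunnel that application-side step could look like this
(a user-code sketch; mb is the mbuf already processed by the library):

	struct ipv4_hdr *v4h;

	v4h = rte_pktmbuf_mtod_offset(mb, struct ipv4_hdr *, mb->l2_len);
	v4h->hdr_checksum = 0;
	v4h->hdr_checksum = rte_ipv4_cksum(v4h);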

> > +	} else {
> > +		v6h = p;
> > +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> > +				sizeof(*v6h));
> > +	}
> > +}
> > +
> > +#endif /* _IPH_H_ */
> > diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> > index 1935f6e30..6e18c34eb 100644
> > --- a/lib/librte_ipsec/ipsec_sqn.h
> > +++ b/lib/librte_ipsec/ipsec_sqn.h
> > @@ -15,6 +15,45 @@
> >
> >   #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
> >
> > +/*
> > + * gets SQN.hi32 bits, SQN supposed to be in network byte order.
> > + */
> > +static inline rte_be32_t
> > +sqn_hi32(rte_be64_t sqn)
> > +{
> > +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> > +	return (sqn >> 32);
> > +#else
> > +	return sqn;
> > +#endif
> > +}
> > +
> > +/*
> > + * gets SQN.low32 bits, SQN supposed to be in network byte order.
> > + */
> > +static inline rte_be32_t
> > +sqn_low32(rte_be64_t sqn)
> > +{
> > +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> > +	return sqn;
> > +#else
> > +	return (sqn >> 32);
> > +#endif
> > +}
> > +
> > +/*
> > + * gets SQN.low16 bits, SQN supposed to be in network byte order.
> > + */
> > +static inline rte_be16_t
> > +sqn_low16(rte_be64_t sqn)
> > +{
> > +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> > +	return sqn;
> > +#else
> > +	return (sqn >> 48);
> > +#endif
> > +}
> > +
> shouldn't we move these seq number APIs in rte_esp.h and make them generic

It could be done, but who will use them except librte_ipsec?
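For reference, this is roughly how they would be used inside the library
to fill the RFC 4106 AAD shown earlier in this thread (a sketch):

	/* sqn is expected to be in network byte order already */
	static inline void
	aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi,
		rte_be64_t sqn, int esn)
	{
		aad->spi = spi;
		if (esn)
			aad->sqn.u64 = sqn;	/* 64-bit ESN format */
		else
			aad->sqn.u32[0] = sqn_low32(sqn); /* 32-bit SN */
		aad->align0 = 0;
	}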


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops
  2018-12-19 15:46         ` Akhil Goyal
@ 2018-12-20 13:00           ` Ananyev, Konstantin
  2018-12-21 12:37             ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 13:00 UTC (permalink / raw)
  To: Akhil Goyal, dev



> > +
> > +/**
> > + * Take crypto-op as an input and extract pointer to related ipsec session.
> > + * @param cop
> > + *   The address of an input *rte_crypto_op* structure.
> > + * @return
> > + *   The pointer to the related *rte_ipsec_session* structure.
> > + */
> > +static inline __rte_experimental struct rte_ipsec_session *
> > +rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
> __rte_experimental placement not correct

You mean why not: 
static inline struct rte_ipsec_session * __rte_experimental
?
Then checkpatch will complain about the space after '*'.
BTW, why do you think the current definition is wrong?

> > +{
> > +	const struct rte_security_session *ss;
> > +	const struct rte_cryptodev_sym_session *cs;
> > +
> > +	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
> > +		ss = cop->sym[0].sec_session;
> > +		return (void *)(uintptr_t)ss->opaque_data;
> > +	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
> > +		cs = cop->sym[0].session;
> > +		return (void *)(uintptr_t)cs->opaque_data;
> > +	}
> > +	return NULL;
> > +}
> > +

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test
  2018-12-19 15:53         ` Akhil Goyal
@ 2018-12-20 13:03           ` Ananyev, Konstantin
  2018-12-21 12:41             ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 13:03 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: Thomas Monjalon, Awal, Mohammad Abdul, Iremonger, Bernard



> 
> 
> On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> > +static struct unit_test_suite ipsec_testsuite  = {
> > +	.suite_name = "IPsec NULL Unit Test Suite",
> > +	.setup = testsuite_setup,
> > +	.teardown = testsuite_teardown,
> > +	.unit_test_cases = {
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_crypto_inb_burst_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_crypto_outb_burst_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_inline_inb_burst_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_inline_outb_burst_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_replay_inb_inside_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_replay_inb_outside_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_replay_inb_repeat_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
> > +		TEST_CASE_ST(ut_setup, ut_teardown,
> > +			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
> > +		TEST_CASES_END() /**< NULL terminate unit test array */
> > +	}
> > +};
> > +
> test case for lookaside proto and inline proto case should also be added
> here.

Do you mean one with a dummy security context and session, as we did for inline-crypto here?
Konstantin


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide
  2018-12-19 16:01         ` Akhil Goyal
@ 2018-12-20 13:06           ` Ananyev, Konstantin
  2018-12-21 12:58             ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 13:06 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Iremonger, Bernard



> > --- /dev/null
> > +++ b/doc/guides/prog_guide/ipsec_lib.rst
> > @@ -0,0 +1,74 @@
> > +..  SPDX-License-Identifier: BSD-3-Clause
> > +    Copyright(c) 2018 Intel Corporation.
> > +
> > +IPsec Packet Processing Library
> > +===============================
> > +
> > +The DPDK provides a library for IPsec data-path processing.
> > +The library utilizes existing DPDK crypto-dev and
> > +security API to provide application with transparent and
> > +high performance IPsec packet processing API.
> > +The library is concentrated on data-path protocols processing
> > +(ESP and AH), IKE protocol(s) implementation is out of scope
> > +for that library.
> I do not see AH processing in the library

Right now it is not implemented,
but the whole library code structure allows it to be added (if someone decides to).

> > +
> > +SA level API
> > +------------
> > +
> > +This API operates on IPsec SA level.
> > +It provides functionality that allows user for given SA to process
> > +inbound and outbound IPsec packets.
> > +To be more specific:
> > +*  for inbound ESP/AH packets perform decryption, authentication, integrity checking, remove ESP/AH related headers
> > +*  for outbound packets perform payload encryption, attach ICV, update/add IP headers, add ESP/AH headers/trailers,
> > +*  setup related mbuf fields (ol_flags, tx_offloads, etc.).
> > +*  initialize/un-initialize given SA based on user provided parameters.
> > +
> > +SA-level API is based on top of crypto-dev/security API and relies on
> > +them to perform actual cipher and integrity checking.
> > +
> > +Due to the nature of crypto-dev API (enqueue/deque model) library introduces
> > +asynchronous API for IPsec packets destined to be processed by crypto-device.
> > +
> > +Expected API call sequence for data-path processing would be:
> > +
> > +.. code-block:: c
> > +
> > +    /* enqueue for processing by crypto-device */
> > +    rte_ipsec_pkt_crypto_prepare(...);
> > +    rte_cryptodev_enqueue_burst(...);
> > +    /* dequeue from crypto-device and do final processing (if any) */
> > +    rte_cryptodev_dequeue_burst(...);
> > +    rte_ipsec_pkt_crypto_group(...); /* optional */
> > +    rte_ipsec_pkt_process(...);
> > +
> > +For packets destined for inline processing no extra overhead
> > +is required and synchronous API call: rte_ipsec_pkt_process()
> > +is sufficient for that case.
> > +
> > +.. note::
> > +
> > +    For more details about the IPsec API, please refer to the *DPDK API Reference*.
> > +
> > +Current implementation supports all four currently defined rte_security types:
> > +*  RTE_SECURITY_ACTION_TYPE_NONE
> > +
> > +*  RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
> > +
> > +*  RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
> > +
> > +*  RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
> > +
> probably a code flow diagram should be added and explained in detail for
> each of the action types

I think it is way above my drawing capabilities :)

> > +To accommodate future custom implementations a function pointers
> > +model is used for both *crypto_prepare* and *process*
> > +implementations.
> > +
> > +Supported features:
> > +*  ESP protocol tunnel mode.
> > +
> > +*  ESP protocol transport mode.
> > +
> > +*  ESN and replay window.
> > +
> > +*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
> The supported features should be elaborated further more.

Ok, is there any specific information you think has to be added here?


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-19 12:08         ` Akhil Goyal
  2018-12-19 12:39           ` Thomas Monjalon
@ 2018-12-20 14:06           ` Ananyev, Konstantin
  2018-12-20 14:14             ` Thomas Monjalon
                               ` (2 more replies)
  1 sibling, 3 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 14:06 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul



> > diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
> > new file mode 100644
> > index 000000000..52c78eaeb
> > --- /dev/null
> > +++ b/lib/librte_ipsec/meson.build
> > @@ -0,0 +1,10 @@
> > +# SPDX-License-Identifier: BSD-3-Clause
> > +# Copyright(c) 2018 Intel Corporation
> > +
> > +allow_experimental_apis = true
> > +
> > +sources=files('sa.c')
> > +
> > +install_headers = files('rte_ipsec_sa.h')
> > +
> > +deps += ['mbuf', 'net', 'cryptodev', 'security']
> we need net in meson and not in Makefile ?

I suppose we need it in both, will update.

> > +
> > +enum {
> > +	RTE_SATP_LOG_IPV,
> > +	RTE_SATP_LOG_PROTO,
> > +	RTE_SATP_LOG_DIR,
> > +	RTE_SATP_LOG_MODE,
> > +	RTE_SATP_LOG_NUM
> > +};
> what is the significance of LOG here.

_LOG_ stands for logarithm (base 2) here.
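I.e. each enum value is a bit position, and the SA type flags are built
by shifting, e.g. (illustration only, not the exact patch code):

	#define RTE_IPSEC_SATP_DIR_MASK	(1ULL << RTE_SATP_LOG_DIR)
	#define RTE_IPSEC_SATP_DIR_IB	(0ULL << RTE_SATP_LOG_DIR)
	#define RTE_IPSEC_SATP_DIR_OB	(1ULL << RTE_SATP_LOG_DIR)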

> 
> > diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
> > new file mode 100644
> > index 000000000..f927a82bf
> > --- /dev/null
> > +++ b/lib/librte_ipsec/sa.c
> > @@ -0,0 +1,327 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2018 Intel Corporation
> > + */
> > +
> > +#include <rte_ipsec_sa.h>
> > +#include <rte_esp.h>
> > +#include <rte_ip.h>
> > +#include <rte_errno.h>
> > +
> > +#include "sa.h"
> > +#include "ipsec_sqn.h"
> > +
> > +/* some helper structures */
> > +struct crypto_xform {
> > +	struct rte_crypto_auth_xform *auth;
> > +	struct rte_crypto_cipher_xform *cipher;
> > +	struct rte_crypto_aead_xform *aead;
> > +};
> shouldn't this be union as aead cannot be with cipher and auth cases.

That's used internally to collect/analyze xforms provided by prm->crypto_xform.


> 
> extra line
> > +
> > +
> > +static int
> > +check_crypto_xform(struct crypto_xform *xform)
> > +{
> > +	uintptr_t p;
> > +
> > +	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
> what is the intent of this?

It is used below to check that if aead is present, both cipher and auth
are not.

> > +
> > +	/* either aead or both auth and cipher should be not NULLs */
> > +	if (xform->aead) {
> > +		if (p)
> > +			return -EINVAL;
> > +	} else if (p == (uintptr_t)xform->auth) {
> > +		return -EINVAL;
> > +	}
> This function does not look good. It will miss the case of cipher only

Cipher-only is not supported right now, and I am not aware of any plans
to support it in the future.
If someone would like to add cipher-only, then yes, he/she probably would
have to update this function.

> > +
> > +	return 0;
> > +}
> > +
> > +static int
> > +fill_crypto_xform(struct crypto_xform *xform,
> > +	const struct rte_ipsec_sa_prm *prm)
> > +{
> > +	struct rte_crypto_sym_xform *xf;
> > +
> > +	memset(xform, 0, sizeof(*xform));
> > +
> > +	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
> > +		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > +			if (xform->auth != NULL)
> > +				return -EINVAL;
> > +			xform->auth = &xf->auth;
> > +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
> > +			if (xform->cipher != NULL)
> > +				return -EINVAL;
> > +			xform->cipher = &xf->cipher;
> > +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> > +			if (xform->aead != NULL)
> > +				return -EINVAL;
> > +			xform->aead = &xf->aead;
> > +		} else
> > +			return -EINVAL;
> > +	}
> > +
> > +	return check_crypto_xform(xform);
> > +}
> how is this function handling the inbound and outbound cases.
> In inbound first xform is auth and then cipher.
> In outbound first is cipher and then auth. I think this should be
> checked in the lib.

Interesting, I didn't know about such a limitation.
My understanding was that any order (<auth,cipher>, <cipher,auth>)
for both inbound and outbound is acceptable.
Is that order restriction documented somewhere?

> Here for loop should not be there, as there would be at max only 2 xforms.
> > +
> > +uint64_t __rte_experimental
> > +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
> > +{
> > +	return sa->type;
> > +}
> > +
> > +static int32_t
> > +ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
> > +{
> > +	uint32_t n, sz;
> > +
> > +	n = 0;
> > +	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
> > +			RTE_IPSEC_SATP_DIR_IB)
> > +		n = replay_num_bucket(wsz);
> > +
> > +	if (n > WINDOW_BUCKET_MAX)
> > +		return -EINVAL;
> > +
> > +	*nb_bucket = n;
> > +
> > +	sz = rsn_size(n);
> > +	sz += sizeof(struct rte_ipsec_sa);
> > +	return sz;
> > +}
> > +
> > +void __rte_experimental
> > +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
> > +{
> > +	memset(sa, 0, sa->size);
> > +}
> Where is the memory of "sa" getting initialized?

Not sure I understand your question...
Do you mean we missed memset(sa, 0, size)
in rte_ipsec_sa_init()?

> > +
> > +static int
> > +esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> > +	const struct crypto_xform *cxf)
> > +{
> > +	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
> > +				RTE_IPSEC_SATP_MODE_MASK;
> > +
> > +	if (cxf->aead != NULL) {
> > +		/* RFC 4106 */
> > +		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
> > +			return -EINVAL;
> > +		sa->icv_len = cxf->aead->digest_length;
> > +		sa->iv_ofs = cxf->aead->iv.offset;
> > +		sa->iv_len = sizeof(uint64_t);
> > +		sa->pad_align = 4;
> hard coding ??

Will add some define or enum.


> > +	} else {
> > +		sa->icv_len = cxf->auth->digest_length;
> > +		sa->iv_ofs = cxf->cipher->iv.offset;
> > +		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
> > +		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
> > +			sa->pad_align = 4;
> > +			sa->iv_len = 0;
> > +		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
> > +			sa->pad_align = IPSEC_MAX_IV_SIZE;
> > +			sa->iv_len = IPSEC_MAX_IV_SIZE;
> > +		} else
> > +			return -EINVAL;
> > +	}
> > +


> > +
> > +int __rte_experimental
> > +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
> > +	uint32_t size)
> > +{
> > +	int32_t rc, sz;
> > +	uint32_t nb;
> > +	uint64_t type;
> > +	struct crypto_xform cxf;
> > +
> > +	if (sa == NULL || prm == NULL)
> > +		return -EINVAL;
> > +
> > +	/* determine SA type */
> > +	rc = fill_sa_type(prm, &type);
> > +	if (rc != 0)
> > +		return rc;
> > +
> > +	/* determine required size */
> > +	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
> > +	if (sz < 0)
> > +		return sz;
> > +	else if (size < (uint32_t)sz)
> > +		return -ENOSPC;
> > +
> > +	/* only esp is supported right now */
> > +	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
> > +		return -EINVAL;
> > +
> > +	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
> > +			prm->tun.hdr_len > sizeof(sa->hdr))
> > +		return -EINVAL;
> > +
> > +	rc = fill_crypto_xform(&cxf, prm);
> > +	if (rc != 0)
> > +		return rc;
> > +
> > +	sa->type = type;
> > +	sa->size = sz;
> > +
> > +	/* check for ESN flag */
> > +	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
> > +		UINT32_MAX : UINT64_MAX;
> > +
> > +	rc = esp_sa_init(sa, prm, &cxf);
> > +	if (rc != 0)
> > +		rte_ipsec_sa_fini(sa);
> > +
> > +	/* fill replay window related fields */
> > +	if (nb != 0) {
> move this where nb is getting updated.

I don't think it is a good idea.
We calculate nb and the required SA size first, without updating the provided memory buffer.
If the buffer is not big enough, we will return an error without updating the buffer.
Cleaner and safer to keep it as it is.

> > +		sa->replay.win_sz = prm->replay_win_sz;
> > +		sa->replay.nb_bucket = nb;
> > +		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
> > +		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
> > +	}
> > +
> > +	return sz;
> > +}

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-20 14:06           ` Ananyev, Konstantin
@ 2018-12-20 14:14             ` Thomas Monjalon
  2018-12-20 14:26               ` Ananyev, Konstantin
  2018-12-20 18:17             ` Ananyev, Konstantin
  2018-12-21 11:53             ` Akhil Goyal
  2 siblings, 1 reply; 194+ messages in thread
From: Thomas Monjalon @ 2018-12-20 14:14 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: Akhil Goyal, dev, Awal, Mohammad Abdul

20/12/2018 15:06, Ananyev, Konstantin:
> > > +enum {
> > > +	RTE_SATP_LOG_IPV,
> > > +	RTE_SATP_LOG_PROTO,
> > > +	RTE_SATP_LOG_DIR,
> > > +	RTE_SATP_LOG_MODE,
> > > +	RTE_SATP_LOG_NUM
> > > +};
> > what is the significance of LOG here.
> 
> _LOG_ is for logarithm of 2 here.

_LOG2_ ?

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-20 14:14             ` Thomas Monjalon
@ 2018-12-20 14:26               ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 14:26 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Akhil Goyal, dev, Awal, Mohammad Abdul



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, December 20, 2018 2:14 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: Akhil Goyal <akhil.goyal@nxp.com>; dev@dpdk.org; Awal, Mohammad Abdul <mohammad.abdul.awal@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
> 
> 20/12/2018 15:06, Ananyev, Konstantin:
> > > > +enum {
> > > > +	RTE_SATP_LOG_IPV,
> > > > +	RTE_SATP_LOG_PROTO,
> > > > +	RTE_SATP_LOG_DIR,
> > > > +	RTE_SATP_LOG_MODE,
> > > > +	RTE_SATP_LOG_NUM
> > > > +};
> > > what is the significance of LOG here.
> >
> > _LOG_ is for logarithm of 2 here.
> 
> _LOG2_ ?
> 

Ok, will update.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-20 14:06           ` Ananyev, Konstantin
  2018-12-20 14:14             ` Thomas Monjalon
@ 2018-12-20 18:17             ` Ananyev, Konstantin
  2018-12-21 11:57               ` Akhil Goyal
  2018-12-21 11:53             ` Akhil Goyal
  2 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-20 18:17 UTC (permalink / raw)
  To: Ananyev, Konstantin, Akhil Goyal, dev
  Cc: Thomas Monjalon, Awal, Mohammad Abdul

> > > +
> > > +static int
> > > +fill_crypto_xform(struct crypto_xform *xform,
> > > +	const struct rte_ipsec_sa_prm *prm)
> > > +{
> > > +	struct rte_crypto_sym_xform *xf;
> > > +
> > > +	memset(xform, 0, sizeof(*xform));
> > > +
> > > +	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
> > > +		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> > > +			if (xform->auth != NULL)
> > > +				return -EINVAL;
> > > +			xform->auth = &xf->auth;
> > > +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
> > > +			if (xform->cipher != NULL)
> > > +				return -EINVAL;
> > > +			xform->cipher = &xf->cipher;
> > > +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> > > +			if (xform->aead != NULL)
> > > +				return -EINVAL;
> > > +			xform->aead = &xf->aead;
> > > +		} else
> > > +			return -EINVAL;
> > > +	}
> > > +
> > > +	return check_crypto_xform(xform);
> > > +}
> > how is this function handling the inbound and outbound cases.
> > In inbound first xform is auth and then cipher.
> > In outbound first is cipher and then auth. I think this should be
> > checked in the lib.
> 
> Interesting, I didn't know about such limitation.
> My understanding was that the any order (<auth,cipher>, <cipher,auth>)
> for both inbound and outbound is acceptable.
> Is that order restriction is documented somewhere?
> 

Actually, if such a restriction really exists, and the cryptodev framework obeys it,
then crypto session creation will fail anyway.

> > Here for loop should not be there, as there would be at max only 2 xforms.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-20 14:06           ` Ananyev, Konstantin
  2018-12-20 14:14             ` Thomas Monjalon
  2018-12-20 18:17             ` Ananyev, Konstantin
@ 2018-12-21 11:53             ` Akhil Goyal
  2018-12-21 12:41               ` Ananyev, Konstantin
  2 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 11:53 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul



On 12/20/2018 7:36 PM, Ananyev, Konstantin wrote:
>
>>> diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
>>> new file mode 100644
>>> index 000000000..f927a82bf
>>> --- /dev/null
>>> +++ b/lib/librte_ipsec/sa.c
>>> @@ -0,0 +1,327 @@
>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>> + * Copyright(c) 2018 Intel Corporation
>>> + */
>>> +
>>> +#include <rte_ipsec_sa.h>
>>> +#include <rte_esp.h>
>>> +#include <rte_ip.h>
>>> +#include <rte_errno.h>
>>> +
>>> +#include "sa.h"
>>> +#include "ipsec_sqn.h"
>>> +
>>> +/* some helper structures */
>>> +struct crypto_xform {
>>> +	struct rte_crypto_auth_xform *auth;
>>> +	struct rte_crypto_cipher_xform *cipher;
>>> +	struct rte_crypto_aead_xform *aead;
>>> +};
>> shouldn't this be union as aead cannot be with cipher and auth cases.
> That's used internally to collect/analyze xforms provided by prm->crypto_xform.

>
>
>> extra line
>>> +
>>> +
>>> +static int
>>> +check_crypto_xform(struct crypto_xform *xform)
>>> +{
>>> +	uintptr_t p;
>>> +
>>> +	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
>> what is the intent of this?
> It is used below to check that if aead is present both cipher and auth
> are  not.
>
>>> +
>>> +	/* either aead or both auth and cipher should be not NULLs */
>>> +	if (xform->aead) {
>>> +		if (p)
>>> +			return -EINVAL;
>>> +	} else if (p == (uintptr_t)xform->auth) {
>>> +		return -EINVAL;
>>> +	}
>> This function does not look good. It will miss the case of cipher only
> Cipher only is not supported right now and  I am not aware about plans
> to support it in future.
> If someone would like to add cipher onl,then yes he/she probably would
> have to update this function.
I know that cipher_only is not supported and nobody will support it in
the case of ipsec.
My point is: if somebody gives only an auth or only a cipher xform, then this
function would not be able to detect that case and will not return an error.

>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int
>>> +fill_crypto_xform(struct crypto_xform *xform,
>>> +	const struct rte_ipsec_sa_prm *prm)
>>> +{
>>> +	struct rte_crypto_sym_xform *xf;
>>> +
>>> +	memset(xform, 0, sizeof(*xform));
>>> +
>>> +	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
>>> +		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
>>> +			if (xform->auth != NULL)
>>> +				return -EINVAL;
>>> +			xform->auth = &xf->auth;
>>> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
>>> +			if (xform->cipher != NULL)
>>> +				return -EINVAL;
>>> +			xform->cipher = &xf->cipher;
>>> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
>>> +			if (xform->aead != NULL)
>>> +				return -EINVAL;
>>> +			xform->aead = &xf->aead;
>>> +		} else
>>> +			return -EINVAL;
>>> +	}
>>> +
>>> +	return check_crypto_xform(xform);
>>> +}
>> how is this function handling the inbound and outbound cases.
>> In inbound first xform is auth and then cipher.
>> In outbound first is cipher and then auth. I think this should be
>> checked in the lib.
> Interesting, I didn't know about such limitation.
> My understanding was that the any order (<auth,cipher>, <cipher,auth>)
> for both inbound and outbound is acceptable.
> Is that order restriction is documented somewhere?
/**
 * Symmetric crypto transform structure.
 *
 * This is used to specify the crypto transforms required, multiple
 * transforms can be chained together to specify a chain of transforms
 * such as authentication then cipher, or cipher then authentication.
 * Each transform structure can hold a single transform, the type field
 * is used to specify which transform is contained within the union.
 */
struct rte_crypto_sym_xform {

This is not a limitation, this is how it is designed to handle 2 cases 
of crypto - auth then cipher and cipher then auth.


>> Here for loop should not be there, as there would be at max only 2 xforms.
>>> +
>>> +uint64_t __rte_experimental
>>> +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
>>> +{
>>> +	return sa->type;
>>> +}
>>> +
>>> +static int32_t
>>> +ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
>>> +{
>>> +	uint32_t n, sz;
>>> +
>>> +	n = 0;
>>> +	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
>>> +			RTE_IPSEC_SATP_DIR_IB)
>>> +		n = replay_num_bucket(wsz);
>>> +
>>> +	if (n > WINDOW_BUCKET_MAX)
>>> +		return -EINVAL;
>>> +
>>> +	*nb_bucket = n;
>>> +
>>> +	sz = rsn_size(n);
>>> +	sz += sizeof(struct rte_ipsec_sa);
>>> +	return sz;
>>> +}
>>> +
>>> +void __rte_experimental
>>> +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
>>> +{
>>> +	memset(sa, 0, sa->size);
>>> +}
>> Where is the memory of "sa" getting initialized?
> Not sure I understand your question...
> Do you mean we missed memset(sa, 0, size)
> in rte_ipsec_sa_init()?
Sorry, I did not ask the correct question. I was asking - where is it
allocated?
Is it the application's responsibility?
>
>
>>> +
>>> +int __rte_experimental
>>> +rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
>>> +	uint32_t size)
>>> +{
>>> +	int32_t rc, sz;
>>> +	uint32_t nb;
>>> +	uint64_t type;
>>> +	struct crypto_xform cxf;
>>> +
>>> +	if (sa == NULL || prm == NULL)
>>> +		return -EINVAL;
>>> +
>>> +	/* determine SA type */
>>> +	rc = fill_sa_type(prm, &type);
>>> +	if (rc != 0)
>>> +		return rc;
>>> +
>>> +	/* determine required size */
>>> +	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
>>> +	if (sz < 0)
>>> +		return sz;
>>> +	else if (size < (uint32_t)sz)
>>> +		return -ENOSPC;
>>> +
>>> +	/* only esp is supported right now */
>>> +	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
>>> +		return -EINVAL;
>>> +
>>> +	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
>>> +			prm->tun.hdr_len > sizeof(sa->hdr))
>>> +		return -EINVAL;
>>> +
>>> +	rc = fill_crypto_xform(&cxf, prm);
>>> +	if (rc != 0)
>>> +		return rc;
>>> +
>>> +	sa->type = type;
>>> +	sa->size = sz;
>>> +
>>> +	/* check for ESN flag */
>>> +	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
>>> +		UINT32_MAX : UINT64_MAX;
>>> +
>>> +	rc = esp_sa_init(sa, prm, &cxf);
>>> +	if (rc != 0)
>>> +		rte_ipsec_sa_fini(sa);
>>> +
>>> +	/* fill replay window related fields */
>>> +	if (nb != 0) {
>> move this where nb is getting updated.
> I don't think it is a good idea.
> We calulate nb first and required sa size first without updating provided memory buffer.
> If the buffer is not big enough, will return an error without updating the buffer.
> Cleaner and safer to keep it as it is.
ok
>>> +		sa->replay.win_sz = prm->replay_win_sz;
>>> +		sa->replay.nb_bucket = nb;
>>> +		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
>>> +		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
>>> +	}
>>> +
>>> +	return sz;
>>> +}


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-20 18:17             ` Ananyev, Konstantin
@ 2018-12-21 11:57               ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 11:57 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul



On 12/20/2018 11:47 PM, Ananyev, Konstantin wrote:
>>>> +
>>>> +static int
>>>> +fill_crypto_xform(struct crypto_xform *xform,
>>>> +	const struct rte_ipsec_sa_prm *prm)
>>>> +{
>>>> +	struct rte_crypto_sym_xform *xf;
>>>> +
>>>> +	memset(xform, 0, sizeof(*xform));
>>>> +
>>>> +	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
>>>> +		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
>>>> +			if (xform->auth != NULL)
>>>> +				return -EINVAL;
>>>> +			xform->auth = &xf->auth;
>>>> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
>>>> +			if (xform->cipher != NULL)
>>>> +				return -EINVAL;
>>>> +			xform->cipher = &xf->cipher;
>>>> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
>>>> +			if (xform->aead != NULL)
>>>> +				return -EINVAL;
>>>> +			xform->aead = &xf->aead;
>>>> +		} else
>>>> +			return -EINVAL;
>>>> +	}
>>>> +
>>>> +	return check_crypto_xform(xform);
>>>> +}
>>> how is this function handling the inbound and outbound cases.
>>> In inbound first xform is auth and then cipher.
>>> In outbound first is cipher and then auth. I think this should be
>>> checked in the lib.
>> Interesting, I didn't know about such limitation.
>> My understanding was that the any order (<auth,cipher>, <cipher,auth>)
>> for both inbound and outbound is acceptable.
>> Is that order restriction is documented somewhere?
>>
> Actually, if such restriction really exists, and cryptodev framework obeys it,
> then crypto session creation will fail anyway.
The ipsec library should not rely on other components to give an error;
it should handle the cases which it is expected to.
As per my understanding, IPsec is a cipher-then-authenticate protocol
for the outbound case, and it should give an error in the other case.
Similarly, auth-then-cipher for the inbound case.
>>> Here for loop should not be there, as there would be at max only 2 xforms.


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API
  2018-12-20 10:17           ` Ananyev, Konstantin
@ 2018-12-21 12:14             ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 12:14 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul



On 12/20/2018 3:47 PM, Ananyev, Konstantin wrote:
>
>>> + * @param ss
>>> + *   Pointer to the *rte_ipsec_session* object
>>> + * @return
>>> + *   - Zero if operation completed successfully.
>>> + *   - -EINVAL if the parameters are invalid.
>>> + */
>>> +int __rte_experimental
>>> +rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
>>> +
>>> +/**
>>> + * For input mbufs and given IPsec session prepare crypto ops that can be
>>> + * enqueued into the cryptodev associated with given session.
>>> + * expects that for each input packet:
>>> + *      - l2_len, l3_len are setup correctly
>>> + * Note that erroneous mbufs are not freed by the function,
>>> + * but are placed beyond last valid mbuf in the *mb* array.
>>> + * It is a user responsibility to handle them further.
>> How will the user know how many mbufs are correctly processed.
> The function return value contains the number of successfully processed packets,
> see the comments below.
> As an example, let's say at input mb[]={A, B, C, D}, num=4, and prepare()
> was able to successfully process the A, B and D mbufs, but failed to process C.
> Then the return value will be 3, and mb[]={A, B, D, C}.
Wouldn't that hit performance - the movement of mbufs?
Can we leverage the crypto_op->status field? It is just a thought.
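E.g. (just a sketch of the idea, not existing API behaviour): prepare()
could mark the failed entries via cop[i]->status and keep the mbuf order
intact, so the application would do:

	rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
	for (i = 0; i != num; i++)
		if (cop[i]->status == RTE_CRYPTO_OP_STATUS_ERROR)
			rte_pktmbuf_free(mb[i]);	/* failed to prepare */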


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-20 12:56           ` Ananyev, Konstantin
@ 2018-12-21 12:36             ` Akhil Goyal
  2018-12-21 14:27               ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 12:36 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul



On 12/20/2018 6:26 PM, Ananyev, Konstantin wrote:
>
>>> diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
>>> new file mode 100644
>>> index 000000000..61f5c1433
>>> --- /dev/null
>>> +++ b/lib/librte_ipsec/crypto.h
>>> @@ -0,0 +1,123 @@
>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>> + * Copyright(c) 2018 Intel Corporation
>>> + */
>>> +
>>> +#ifndef _CRYPTO_H_
>>> +#define _CRYPTO_H_
>>> +
>>> +/**
>>> + * @file crypto.h
>>> + * Contains crypto specific functions/structures/macros used internally
>>> + * by ipsec library.
>>> + */
>>> +
>>> + /*
>>> +  * AES-GCM devices have some specific requirements for IV and AAD formats.
>>> +  * Ideally that to be done by the driver itself.
>>> +  */
>> I believe these can be moved to rte_crypto_sym.h. All crypto related
>> stuff should be at same place.
> Not sure what exactly you suggest to put into rte_crypto_sym.h?
> struct aead_gcm_iv? Something else?
>  From my perspective it would be good if the user just filled the salt
> and IV fields in crypto_sym_op, and then the PMD set things up in the
> needed format internally.
> Again it would be really good if crypto_sym_op had reserved space
> for AAD...
> But all that implies quite a big change in cryptodev and PMDs,
> so I think it should be the subject of a separate patch.
>
>>> +
>>> +struct aead_gcm_iv {
>>> +	uint32_t salt;
>>> +	uint64_t iv;
>>> +	uint32_t cnt;
>>> +} __attribute__((packed));
>>> +
>>> +struct aead_gcm_aad {
>>> +	uint32_t spi;
>>> +	/*
>>> +	 * RFC 4106, section 5:
>>> +	 * Two formats of the AAD are defined:
>>> +	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
>>> +	 */
>>> +	union {
>>> +		uint32_t u32[2];
>>> +		uint64_t u64;
>>> +	} sqn;
>>> +	uint32_t align0; /* align to 16B boundary */
>>> +} __attribute__((packed));
>>> +
>>> +struct gcm_esph_iv {
>>> +	struct esp_hdr esph;
>>> +	uint64_t iv;
>>> +} __attribute__((packed));
>>> +
>>> +
>>> +static inline void
>>> +aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
>>> +{
>>> +	gcm->salt = salt;
>>> +	gcm->iv = iv;
>>> +	gcm->cnt = rte_cpu_to_be_32(1);
>>> +}
>>> +
>>> +/*
>
>>> diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
>>> new file mode 100644
>>> index 000000000..3fd93016d
>>> --- /dev/null
>>> +++ b/lib/librte_ipsec/iph.h
>>> @@ -0,0 +1,84 @@
>>> +/* SPDX-License-Identifier: BSD-3-Clause
>>> + * Copyright(c) 2018 Intel Corporation
>>> + */
>>> +
>>> +#ifndef _IPH_H_
>>> +#define _IPH_H_
>>> +
>>> +/**
>>> + * @file iph.h
>>> + * Contains functions/structures/macros to manipulate IPv/IPv6 headers
>> IPv4
>>> + * used internally by ipsec library.
>>> + */
>>> +
>>> +/*
>>> + * Move preceding (L3) headers down to remove ESP header and IV.
>>> + */
>> why cant we use rte_mbuf APIs to append/prepend/trim/adjust lengths.
> We do use rte_mbuf append/trim, etc. to adjust the mbuf's data_off and data_len.
> But apart from that, for transport mode we have to move the actual packet headers.
> Let's say for inbound we have to get rid of the ESP header (which is after the IP header)
> but preserve the IP header, so we move the L2/L3 headers down, overwriting the ESP header.
ok got your point
>> I believe these adjustments are happening in the mbuf itself.
>> Moreover these APIs are not specific to esp headers.
> I didn't get your last sentence: that function is used to remove esp header
> (see above) - that's why I named it that way.
These can be used to remove any header and not specifically esp. So this 
API could be generic in rte_mbuf.
>
>>> +static inline void
>>> +remove_esph(char *np, char *op, uint32_t hlen)
>>> +{
>>> +	uint32_t i;
>>> +
>>> +	for (i = hlen; i-- != 0; np[i] = op[i])
>>> +		;
>>> +}
>>> +
>>> +/*
>
>>> +
>>> +/* update original and new ip header fields for tunnel case */
>>> +static inline void
>>> +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>>> +		uint32_t l2len, rte_be16_t pid)
>>> +{
>>> +	struct ipv4_hdr *v4h;
>>> +	struct ipv6_hdr *v6h;
>>> +
>>> +	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
>>> +		v4h = p;
>>> +		v4h->packet_id = pid;
>>> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
>> where are we updating the rest of the fields, like ttl, checksum, ip
>> addresses, etc
> TTL, IP addresses and other fields are supposed to be set up by the user
> and provided via rte_ipsec_sa_init():
> struct rte_ipsec_sa_prm.tun.hdr should contain a prepared template
> for the L3 (and, if the user wants, L2) header.
> Checksum calculation is not done inside the lib right now -
> it is the user's responsibility to calculate/set it after librte_ipsec
> finishes processing the packet.
I believe static fields are updated during SA init, but some fields like
TTL and checksum, which are updated for every packet,
can be updated in the library itself
(https://tools.ietf.org/html/rfc1624).
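For the TTL case something like the classic incremental update would do
(a sketch based on RFC 1141/1624, not existing library code):

	static inline void
	ipv4_dec_ttl_cksum_update(struct ipv4_hdr *v4h)
	{
		uint32_t cksum;

		v4h->time_to_live--;
		/* TTL is the high byte of its 16-bit word, so that word
		 * decreased by 0x0100; update the checksum incrementally */
		cksum = rte_be_to_cpu_16(v4h->hdr_checksum) + 0x0100;
		cksum += cksum >> 16;	/* fold the carry */
		v4h->hdr_checksum = rte_cpu_to_be_16((uint16_t)cksum);
	}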
>
>>> +	} else {
>>> +		v6h = p;
>>> +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
>>> +				sizeof(*v6h));
>>> +	}
>>> +}
>>> +
>>> +#endif /* _IPH_H_ */
>>> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
>>> index 1935f6e30..6e18c34eb 100644
>>> --- a/lib/librte_ipsec/ipsec_sqn.h
>>> +++ b/lib/librte_ipsec/ipsec_sqn.h
>>> @@ -15,6 +15,45 @@
>>>
>>>    #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
>>>
>>> +/*
>>> + * gets SQN.hi32 bits, SQN supposed to be in network byte order.
>>> + */
>>> +static inline rte_be32_t
>>> +sqn_hi32(rte_be64_t sqn)
>>> +{
>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
>>> +	return (sqn >> 32);
>>> +#else
>>> +	return sqn;
>>> +#endif
>>> +}
>>> +
>>> +/*
>>> + * gets SQN.low32 bits, SQN supposed to be in network byte order.
>>> + */
>>> +static inline rte_be32_t
>>> +sqn_low32(rte_be64_t sqn)
>>> +{
>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
>>> +	return sqn;
>>> +#else
>>> +	return (sqn >> 32);
>>> +#endif
>>> +}
>>> +
>>> +/*
>>> + * gets SQN.low16 bits, SQN supposed to be in network byte order.
>>> + */
>>> +static inline rte_be16_t
>>> +sqn_low16(rte_be64_t sqn)
>>> +{
>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
>>> +	return sqn;
>>> +#else
>>> +	return (sqn >> 48);
>>> +#endif
>>> +}
>>> +
>> shouldn't we move these seq number APIs in rte_esp.h and make them generic
> It could be done, but who will use them except librte_ipsec?
Whoever uses rte_esp.h and does not use the ipsec lib. The intent of
rte_esp.h is just for that; otherwise we don't need rte_esp.h - we could
have the content of rte_esp.h in ipsec itself.


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops
  2018-12-20 13:00           ` Ananyev, Konstantin
@ 2018-12-21 12:37             ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 12:37 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev



On 12/20/2018 6:30 PM, Ananyev, Konstantin wrote:
>
>>> +
>>> +/**
>>> + * Take crypto-op as an input and extract pointer to related ipsec session.
>>> + * @param cop
>>> + *   The address of an input *rte_crypto_op* structure.
>>> + * @return
>>> + *   The pointer to the related *rte_ipsec_session* structure.
>>> + */
>>> +static inline __rte_experimental struct rte_ipsec_session *
>>> +rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
>> __rte_experimental placement not correct
> You mean why not:
> static inline struct rte_ipsec_session * __rte_experimental
> ?
yes
> Then checkpatch will complain about the space after '*'.
ok
> BTW why do you think current definition is wrong?
this is how it is being used in the rest of the code.
>
>>> +{
>>> +	const struct rte_security_session *ss;
>>> +	const struct rte_cryptodev_sym_session *cs;
>>> +
>>> +	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
>>> +		ss = cop->sym[0].sec_session;
>>> +		return (void *)(uintptr_t)ss->opaque_data;
>>> +	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
>>> +		cs = cop->sym[0].session;
>>> +		return (void *)(uintptr_t)cs->opaque_data;
>>> +	}
>>> +	return NULL;
>>> +}
>>> +


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test
  2018-12-20 13:03           ` Ananyev, Konstantin
@ 2018-12-21 12:41             ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 12:41 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Thomas Monjalon, Awal, Mohammad Abdul, Iremonger, Bernard



On 12/20/2018 6:33 PM, Ananyev, Konstantin wrote:
>
>>
>> On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
>>> +static struct unit_test_suite ipsec_testsuite  = {
>>> +	.suite_name = "IPsec NULL Unit Test Suite",
>>> +	.setup = testsuite_setup,
>>> +	.teardown = testsuite_teardown,
>>> +	.unit_test_cases = {
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_crypto_inb_burst_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_crypto_outb_burst_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_inline_inb_burst_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_inline_outb_burst_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_replay_inb_inside_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_replay_inb_outside_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_replay_inb_repeat_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
>>> +		TEST_CASE_ST(ut_setup, ut_teardown,
>>> +			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
>>> +		TEST_CASES_END() /**< NULL terminate unit test array */
>>> +	}
>>> +};
>>> +
>> test case for lookaside proto and inline proto case should also be added
>> here.
> Do you mean one with dummy security context and session as we done for inline-crypto here?
> Konstantin
yes.


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-21 11:53             ` Akhil Goyal
@ 2018-12-21 12:41               ` Ananyev, Konstantin
  2018-12-21 12:54                 ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-21 12:41 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, December 21, 2018 11:53 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org
> Cc: Thomas Monjalon <thomas@monjalon.net>; Awal, Mohammad Abdul <mohammad.abdul.awal@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
> 
> 
> 
> On 12/20/2018 7:36 PM, Ananyev, Konstantin wrote:
> >
> >>> diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
> >>> new file mode 100644
> >>> index 000000000..f927a82bf
> >>> --- /dev/null
> >>> +++ b/lib/librte_ipsec/sa.c
> >>> @@ -0,0 +1,327 @@
> >>> +/* SPDX-License-Identifier: BSD-3-Clause
> >>> + * Copyright(c) 2018 Intel Corporation
> >>> + */
> >>> +
> >>> +#include <rte_ipsec_sa.h>
> >>> +#include <rte_esp.h>
> >>> +#include <rte_ip.h>
> >>> +#include <rte_errno.h>
> >>> +
> >>> +#include "sa.h"
> >>> +#include "ipsec_sqn.h"
> >>> +
> >>> +/* some helper structures */
> >>> +struct crypto_xform {
> >>> +	struct rte_crypto_auth_xform *auth;
> >>> +	struct rte_crypto_cipher_xform *cipher;
> >>> +	struct rte_crypto_aead_xform *aead;
> >>> +};
> >> shouldn't this be union as aead cannot be with cipher and auth cases.
> > That's used internally to collect/analyze xforms provided by prm->crypto_xform.
> 
> >
> >
> >> extra line
> >>> +
> >>> +
> >>> +static int
> >>> +check_crypto_xform(struct crypto_xform *xform)
> >>> +{
> >>> +	uintptr_t p;
> >>> +
> >>> +	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
> >> what is the intent of this?
> > It is used below to check that if aead is present both cipher and auth
> > are  not.
> >
> >>> +
> >>> +	/* either aead or both auth and cipher should be not NULLs */
> >>> +	if (xform->aead) {
> >>> +		if (p)
> >>> +			return -EINVAL;
> >>> +	} else if (p == (uintptr_t)xform->auth) {
> >>> +		return -EINVAL;
> >>> +	}
> >> This function does not look good. It will miss the case of cipher only
> > Cipher only is not supported right now and  I am not aware about plans
> > to support it in future.
> > If someone would like to add cipher onl,then yes he/she probably would
> > have to update this function.
> I know that cipher_only is not supported and nobody will support it in
> case of ipsec.
> My point is if somebody gives only auth or only cipher xform, then this
> function would not be able to detect that case and will not return error.

fill_crypto_xform() (the function below) will detect it and return an error:
+		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->auth != NULL)
+				return -EINVAL;

> 
> >>> +
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +static int
> >>> +fill_crypto_xform(struct crypto_xform *xform,
> >>> +	const struct rte_ipsec_sa_prm *prm)
> >>> +{
> >>> +	struct rte_crypto_sym_xform *xf;
> >>> +
> >>> +	memset(xform, 0, sizeof(*xform));
> >>> +
> >>> +	for (xf = prm->crypto_xform; xf != NULL; xf = xf->next) {
> >>> +		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> >>> +			if (xform->auth != NULL)
> >>> +				return -EINVAL;
> >>> +			xform->auth = &xf->auth;
> >>> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
> >>> +			if (xform->cipher != NULL)
> >>> +				return -EINVAL;
> >>> +			xform->cipher = &xf->cipher;
> >>> +		} else if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
> >>> +			if (xform->aead != NULL)
> >>> +				return -EINVAL;
> >>> +			xform->aead = &xf->aead;
> >>> +		} else
> >>> +			return -EINVAL;
> >>> +	}
> >>> +
> >>> +	return check_crypto_xform(xform);
> >>> +}
> >> how is this function handling the inbound and outbound cases.
> >> In inbound first xform is auth and then cipher.
> >> In outbound first is cipher and then auth. I think this should be
> >> checked in the lib.
> > Interesting, I didn't know about such limitation.
> > My understanding was that the any order (<auth,cipher>, <cipher,auth>)
> > for both inbound and outbound is acceptable.
> > Is that order restriction is documented somewhere?
> /**
>  * Symmetric crypto transform structure.
>  *
>  * This is used to specify the crypto transforms required, multiple
>  * transforms can be chained together to specify a chain of transforms
>  * such as authentication then cipher, or cipher then authentication.
>  * Each transform structure can hold a single transform, the type field
>  * is used to specify which transform is contained within the union.
>  */
> struct rte_crypto_sym_xform {

Yes, I read this, but I don't see where it says that the order of xforms
implicitly defines the order of operations for that session within crypto-dev.
Or is it just me?
I don't mind adding an extra check here, I just want to be sure it is really
required for the crypto PMD to work correctly.

> 
> This is not a limitation, this is how it is designed to handle 2 cases
> of crypto - auth then cipher and cipher then auth.
> 

Ok, if you are sure it is a valid check - I'll add it.
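Something along these lines, presumably (a sketch of the check, not the
final patch code):

	static int
	check_xform_order(const struct rte_ipsec_sa_prm *prm)
	{
		const struct rte_crypto_sym_xform *xf = prm->crypto_xform;

		if (xf == NULL)
			return -EINVAL;
		if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD)
			return 0;

		/* inbound: auth then cipher; outbound: cipher then auth */
		if (prm->ipsec_xform.direction ==
				RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
			if (xf->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
					xf->next == NULL ||
					xf->next->type !=
					RTE_CRYPTO_SYM_XFORM_CIPHER)
				return -EINVAL;
		} else if (xf->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
				xf->next == NULL ||
				xf->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
			return -EINVAL;

		return 0;
	}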

> 
> >> Here for loop should not be there, as there would be at max only 2 xforms.
> >>> +
> >>> +uint64_t __rte_experimental
> >>> +rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
> >>> +{
> >>> +	return sa->type;
> >>> +}
> >>> +
> >>> +static int32_t
> >>> +ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
> >>> +{
> >>> +	uint32_t n, sz;
> >>> +
> >>> +	n = 0;
> >>> +	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
> >>> +			RTE_IPSEC_SATP_DIR_IB)
> >>> +		n = replay_num_bucket(wsz);
> >>> +
> >>> +	if (n > WINDOW_BUCKET_MAX)
> >>> +		return -EINVAL;
> >>> +
> >>> +	*nb_bucket = n;
> >>> +
> >>> +	sz = rsn_size(n);
> >>> +	sz += sizeof(struct rte_ipsec_sa);
> >>> +	return sz;
> >>> +}
> >>> +
> >>> +void __rte_experimental
> >>> +rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
> >>> +{
> >>> +	memset(sa, 0, sa->size);
> >>> +}
> >> Where is the memory of "sa" getting initialized?
> > Not sure I understand your question...
> > Do you mean we missed memset(sa, 0, size)
> > in rte_ipsec_sa_init()?
> Sorry I did not ask the correct question, I was asking  - where it is
> allocated?
> Is it application's responsibility?

Yes, it is the application's responsibility to allocate the memory buffer.
But looking at the code again - actually we did miss a memset() here,
will update.
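For clarity, the expected application-side usage is roughly (a sketch;
the initial size below is an arbitrary guess):

	struct rte_ipsec_sa *sa;
	uint32_t size = 4096;	/* app-chosen upper bound */
	int rc;

	sa = rte_zmalloc(NULL, size, RTE_CACHE_LINE_SIZE);
	if (sa == NULL)
		return -ENOMEM;

	rc = rte_ipsec_sa_init(sa, prm, size);
	if (rc < 0) {	/* -ENOSPC means the buffer was too small */
		rte_free(sa);
		return rc;
	}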


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library
  2018-12-21 12:41               ` Ananyev, Konstantin
@ 2018-12-21 12:54                 ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-21 12:54 UTC (permalink / raw)
  To: Ananyev, Konstantin, Akhil Goyal, dev
  Cc: Thomas Monjalon, Awal, Mohammad Abdul


> >
> >
> > On 12/20/2018 7:36 PM, Ananyev, Konstantin wrote:
> > >
> > >>> diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
> > >>> new file mode 100644
> > >>> index 000000000..f927a82bf
> > >>> --- /dev/null
> > >>> +++ b/lib/librte_ipsec/sa.c
> > >>> @@ -0,0 +1,327 @@
> > >>> +/* SPDX-License-Identifier: BSD-3-Clause
> > >>> + * Copyright(c) 2018 Intel Corporation
> > >>> + */
> > >>> +
> > >>> +#include <rte_ipsec_sa.h>
> > >>> +#include <rte_esp.h>
> > >>> +#include <rte_ip.h>
> > >>> +#include <rte_errno.h>
> > >>> +
> > >>> +#include "sa.h"
> > >>> +#include "ipsec_sqn.h"
> > >>> +
> > >>> +/* some helper structures */
> > >>> +struct crypto_xform {
> > >>> +	struct rte_crypto_auth_xform *auth;
> > >>> +	struct rte_crypto_cipher_xform *cipher;
> > >>> +	struct rte_crypto_aead_xform *aead;
> > >>> +};
> > >> shouldn't this be union as aead cannot be with cipher and auth cases.
> > > That's used internally to collect/analyze xforms provided by prm->crypto_xform.
> >
> > >
> > >
> > >> extra line
> > >>> +
> > >>> +
> > >>> +static int
> > >>> +check_crypto_xform(struct crypto_xform *xform)
> > >>> +{
> > >>> +	uintptr_t p;
> > >>> +
> > >>> +	p = (uintptr_t)xform->auth | (uintptr_t)xform->cipher;
> > >> what is the intent of this?
> > > It is used below to check that if aead is present both cipher and auth
> > > are  not.
> > >
> > >>> +
> > >>> +	/* either aead or both auth and cipher should be not NULLs */
> > >>> +	if (xform->aead) {
> > >>> +		if (p)
> > >>> +			return -EINVAL;
> > >>> +	} else if (p == (uintptr_t)xform->auth) {
> > >>> +		return -EINVAL;
> > >>> +	}
> > >> This function does not look good. It will miss the case of cipher only
> > > Cipher only is not supported right now and  I am not aware about plans
> > > to support it in future.
> > > If someone would like to add cipher onl,then yes he/she probably would
> > > have to update this function.
> > I know that cipher_only is not supported and nobody will support it in
> > case of ipsec.
> > My point is if somebody gives only auth or only cipher xform, then this
> > function would not be able to detect that case and will not return error.
> 
> fill_crypto_xform() (the function below) will detect it and return an error:
> +		if (xf->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
> +			if (xform->auth != NULL)
> +				return -EINVAL;


Please ignore the comment above - I was thinking about a different thing.
Yes, an extra check is needed for the case when only a cipher xform is provided.
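Something like this, probably (a sketch of the tightened check):

	static int
	check_crypto_xform(struct crypto_xform *xform)
	{
		/* either AEAD alone, or both auth and cipher, must be set */
		if (xform->aead != NULL) {
			if (xform->auth != NULL || xform->cipher != NULL)
				return -EINVAL;
		} else if (xform->auth == NULL || xform->cipher == NULL)
			return -EINVAL;

		return 0;
	}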


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide
  2018-12-20 13:06           ` Ananyev, Konstantin
@ 2018-12-21 12:58             ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 12:58 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: Iremonger, Bernard



On 12/20/2018 6:36 PM, Ananyev, Konstantin wrote:
>
>>> --- /dev/null
>>> +++ b/doc/guides/prog_guide/ipsec_lib.rst
>>> @@ -0,0 +1,74 @@
>>> +..  SPDX-License-Identifier: BSD-3-Clause
>>> +    Copyright(c) 2018 Intel Corporation.
>>> +
>>> +IPsec Packet Processing Library
>>> +===============================
>>> +
>>> +The DPDK provides a library for IPsec data-path processing.
>>> +The library utilizes existing DPDK crypto-dev and
>>> +security API to provide application with transparent and
>>> +high performance IPsec packet processing API.
>>> +The library is concentrated on data-path protocols processing
>>> +(ESP and AH), IKE protocol(s) implementation is out of scope
>>> +for that library.
>> I do not see AH processing in the library
> Right now it is not implemented.
> But the whole library code structure allows it to be added (if someone would decide to).
specify this here.
>>> +
>>> +SA level API
>>> +------------
>>> +
>>> +This API operates on IPsec SA level.
>>> +It provides functionality that allows user for given SA to process
>>> +inbound and outbound IPsec packets.
>>> +To be more specific:
>>> +*  for inbound ESP/AH packets perform decryption, authentication, integrity checking, remove ESP/AH related headers
>>> +*  for outbound packets perform payload encryption, attach ICV, update/add IP headers, add ESP/AH headers/trailers,
>>> +*  setup related mbuf fields (ol_flags, tx_offloads, etc.).
>>> +*  initialize/un-initialize given SA based on user provided parameters.
>>> +
>>> +SA-level API is based on top of crypto-dev/security API and relies on
>>> +them to perform actual cipher and integrity checking.
>>> +
>>> +Due to the nature of the crypto-dev API (enqueue/dequeue model) the library introduces
>>> +asynchronous API for IPsec packets destined to be processed by crypto-device.
>>> +
>>> +Expected API call sequence for data-path processing would be:
>>> +
>>> +.. code-block:: c
>>> +
>>> +    /* enqueue for processing by crypto-device */
>>> +    rte_ipsec_pkt_crypto_prepare(...);
>>> +    rte_cryptodev_enqueue_burst(...);
>>> +    /* dequeue from crypto-device and do final processing (if any) */
>>> +    rte_cryptodev_dequeue_burst(...);
>>> +    rte_ipsec_pkt_crypto_group(...); /* optional */
>>> +    rte_ipsec_pkt_process(...);
>>> +
>>> +For packets destined for inline processing no extra overhead
>>> +is required and a synchronous API call: rte_ipsec_pkt_process()
>>> +is sufficient for that case.
>>> +
>>> +.. note::
>>> +
>>> +    For more details about the IPsec API, please refer to the *DPDK API Reference*.
>>> +
>>> +Current implementation supports all four currently defined rte_security types:
>>> +*  RTE_SECURITY_ACTION_TYPE_NONE
>>> +
>>> +*  RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
>>> +
>>> +*  RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
>>> +
>>> +*  RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
>>> +
>> probably a code flow diagram should be added and explained in detail for
>> each of the action types
> I think it is way above my drawing capabilities :)

I think you can refer to
http://doc.dpdk.org/guides/prog_guide/rte_security.html -
something similar to that would explain it in a better way.
>
>>> +To accommodate future custom implementations a function pointer
>>> +model is used for both the *crypto_prepare* and *process*
>>> +implementations.
>>> +
>>> +Supported features:
>>> +*  ESP protocol tunnel mode.
>>> +
>>> +*  ESP protocol transport mode.
>>> +
>>> +*  ESN and replay window.
>>> +
>>> +*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
>> The supported features should be elaborated further.
> Ok, is there any specific information you think has to be added here?
Probably a few lines to explain each feature (very briefly), how it is
implemented in the ipsec lib, and the limitations if any
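
For instance, the doc could expand the call sequence above into a small
worked example, something like the following (a rough sketch only - it
glosses over the fact that in a real application the ops dequeued in a
given iteration are not necessarily the ones that were just enqueued):

	#include <rte_ipsec.h>
	#include <rte_cryptodev.h>

	/* one lookaside-crypto iteration for a burst of packets that
	 * all belong to the same SA (same rte_ipsec_session) */
	static uint16_t
	ipsec_lookaside_burst(uint8_t dev_id, uint16_t qp_id,
		struct rte_ipsec_session *ss,
		struct rte_mbuf *mb[], struct rte_crypto_op *cop[],
		uint16_t num)
	{
		uint16_t k, n;

		/* fill crypto ops for the given SA and packets */
		k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);

		/* hand the prepared ops over to the crypto PMD */
		k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);

		/* collect completed ops; with several SAs sharing one
		 * queue, rte_ipsec_pkt_crypto_group() would be used
		 * here to regroup mbufs per session */
		n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, k);

		/* final IPsec processing: ICV check, header updates */
		return rte_ipsec_pkt_process(ss, mb, n);
	}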


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/10] ipsec: new library for IPsec data-path processing
  2018-12-14 16:29       ` [dpdk-dev] [PATCH v4 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2018-12-21 13:32         ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 13:32 UTC (permalink / raw)
  To: Konstantin Ananyev, dev; +Cc: Thomas Monjalon

Hi Konstantin,

I am done with the review, and will be running the code early next week
after I finish the review of the changes in the ipsec application.
Key points from the review were:
  - some code may be generic and can be moved into appropriate files
  - documentation update
  - spell checks, spacing, etc.
  - some cases, like cipher-only, need to be handled appropriately
  - test cases for lookaside and inline proto
  - checksum/ttl update

With these comments we cannot make it into RC1, but RC2 can be considered.

Thanks,
Akhil

On 12/14/2018 9:59 PM, Konstantin Ananyev wrote:
> This patch series depends on the patch:
> http://patches.dpdk.org/patch/48044/
> to be applied first.
>
> v3 -> v4
>   - Changes to address Declan's comments
>   - Update docs
>
> v2 -> v3
>   - Several fixes for IPv6 support
>   - Extra checks for input parameters in public API functions
>
> v1 -> v2
>   - Changes to take into account l2_len for outbound transport packets
>     (Qi comments)
>   - Several bug fixes
>   - Some code restructured
>   - Update MAINTAINERS file
>
> RFCv2 -> v1
>   - Changes per Jerin comments
>   - Implement transport mode
>   - Several bug fixes
>   - UT largely reworked and extended
>
> This patch introduces a new library within DPDK: librte_ipsec.
> The aim is to provide DPDK native high performance library for IPsec
> data-path processing.
> The library is supposed to utilize the existing DPDK crypto-dev and
> security API to provide the application with a transparent IPsec
> processing API.
> The library is concentrated on data-path protocols processing
> (ESP and AH), IKE protocol(s) implementation is out of scope
> for that library.
> Current patch introduces SA-level API.
>
> SA (low) level API
> ==================
>
> The API described below operates at the SA level.
> It provides functionality that allows the user, for a given SA, to process
> inbound and outbound IPsec packets.
> To be more specific:
> - for inbound ESP/AH packets perform decryption, authentication,
>    integrity checking, remove ESP/AH related headers
> - for outbound packets perform payload encryption, attach ICV,
>    update/add IP headers, add ESP/AH headers/trailers,
>    setup related mbuf fields (ol_flags, tx_offloads, etc.).
> - initialize/un-initialize given SA based on user provided parameters.
>
> The following functionality:
>    - match inbound/outbound packets to particular SA
>    - manage crypto/security devices
>    - provide SAD/SPD related functionality
>    - determine what crypto/security device has to be used
>      for given packet(s)
> is out of scope for SA-level API.
>
> SA-level API is based on top of crypto-dev/security API and relies on them
> to perform actual cipher and integrity checking.
> To provide the ability to easily map crypto/security sessions to the related
> IPsec SA, an opaque userdata field was added into the
> rte_cryptodev_sym_session and rte_security_session structures.
> That implies an ABI change for both librte_cryptodev and librte_security.
>
> Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
> asynchronous API for IPsec packets destined to be processed
> by crypto-device.
> Expected API call sequence would be:
>    /* enqueue for processing by crypto-device */
>    rte_ipsec_pkt_crypto_prepare(...);
>    rte_cryptodev_enqueue_burst(...);
>    /* dequeue from crypto-device and do final processing (if any) */
>    rte_cryptodev_dequeue_burst(...);
>    rte_ipsec_pkt_crypto_group(...); /* optional */
>    rte_ipsec_pkt_process(...);
>
> Though for packets destined for inline processing no extra overhead
> is required and a synchronous API call: rte_ipsec_pkt_process()
> is sufficient for that case.
>
> Current implementation supports all four currently defined
> rte_security types.
> Though to accommodate future custom implementations a function pointer
> model is used for both the *crypto_prepare* and *process*
> implementations.
>
> Konstantin Ananyev (10):
>    cryptodev: add opaque userdata pointer into crypto sym session
>    security: add opaque userdata pointer into security session
>    net: add ESP trailer structure definition
>    lib: introduce ipsec library
>    ipsec: add SA data-path API
>    ipsec: implement SA data-path API
>    ipsec: rework SA replay window/SQN for MT environment
>    ipsec: helper functions to group completed crypto-ops
>    test/ipsec: introduce functional test
>    doc: add IPsec library guide
>
>   MAINTAINERS                            |    5 +
>   config/common_base                     |    5 +
>   doc/guides/prog_guide/index.rst        |    1 +
>   doc/guides/prog_guide/ipsec_lib.rst    |   74 +
>   doc/guides/rel_notes/release_19_02.rst |   10 +
>   lib/Makefile                           |    2 +
>   lib/librte_cryptodev/rte_cryptodev.h   |    2 +
>   lib/librte_ipsec/Makefile              |   27 +
>   lib/librte_ipsec/crypto.h              |  123 ++
>   lib/librte_ipsec/iph.h                 |   84 +
>   lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
>   lib/librte_ipsec/meson.build           |   10 +
>   lib/librte_ipsec/pad.h                 |   45 +
>   lib/librte_ipsec/rte_ipsec.h           |  153 ++
>   lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
>   lib/librte_ipsec/rte_ipsec_sa.h        |  172 ++
>   lib/librte_ipsec/rte_ipsec_version.map |   15 +
>   lib/librte_ipsec/sa.c                  | 1407 +++++++++++++++
>   lib/librte_ipsec/sa.h                  |   98 ++
>   lib/librte_ipsec/ses.c                 |   45 +
>   lib/librte_net/rte_esp.h               |   10 +-
>   lib/librte_security/rte_security.h     |    2 +
>   lib/meson.build                        |    2 +
>   mk/rte.app.mk                          |    2 +
>   test/test/Makefile                     |    3 +
>   test/test/meson.build                  |    3 +
>   test/test/test_ipsec.c                 | 2209 ++++++++++++++++++++++++
>   27 files changed, 5002 insertions(+), 1 deletion(-)
>   create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
>   create mode 100644 lib/librte_ipsec/Makefile
>   create mode 100644 lib/librte_ipsec/crypto.h
>   create mode 100644 lib/librte_ipsec/iph.h
>   create mode 100644 lib/librte_ipsec/ipsec_sqn.h
>   create mode 100644 lib/librte_ipsec/meson.build
>   create mode 100644 lib/librte_ipsec/pad.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
>   create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
>   create mode 100644 lib/librte_ipsec/sa.c
>   create mode 100644 lib/librte_ipsec/sa.h
>   create mode 100644 lib/librte_ipsec/ses.c
>   create mode 100644 test/test/test_ipsec.c
>


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-21 12:36             ` Akhil Goyal
@ 2018-12-21 14:27               ` Ananyev, Konstantin
  2018-12-21 14:39                 ` Thomas Monjalon
  2018-12-21 14:51                 ` Akhil Goyal
  0 siblings, 2 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-21 14:27 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul


> >>> + */
> >>> +
> >>> +/*
> >>> + * Move preceding (L3) headers down to remove ESP header and IV.
> >>> + */
> >> why can't we use rte_mbuf APIs to append/prepend/trim/adjust lengths?
> > We do use rte_mbuf append/trim, etc. to adjust the mbuf's data_off and data_len.
> > But apart from that, for transport mode we have to move the actual packet headers.
> > Let's say for inbound we have to get rid of the ESP header (which is after the IP header)
> > but preserve the IP header, so we move the L2/L3 headers down, overwriting the ESP header.
> ok got your point
> >> I believe these adjustments are happening in the mbuf itself.
> >> Moreover these APIs are not specific to esp headers.
> > I didn't get your last sentence: that function is used to remove esp header
> > (see above) - that's why I named it that way.
> These can be used to remove any header and not specifically esp. So this
> API could be generic in rte_mbuf.

That function has nothing to do with mbuf in general.
It just copies bytes between buffers that overlap in a certain way
(src.start < dst.start < src.end < dst.end).
Right now it is very primitive - it copies one byte at a time in
descending order.
I wrote it just to avoid using memmove().
I don't think there is any point in having such a dummy function in lib/eal.
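
To illustrate: for the inbound transport case the destination overlaps
the source from above, so a tail-first copy (or plain memmove()) is safe
where memcpy() would not be. A toy example with made-up sizes:

	char buf[64];
	/* say bytes 0..33 hold the L2/L3 headers and bytes 34..41 the
	 * ESP header + IV, with the payload right after; to drop ESP
	 * we shift the headers up by those 8 bytes */
	char *op = buf;		/* old start of L2/L3 headers */
	char *np = buf + 8;	/* new start, overwriting ESP hdr + IV */

	remove_esph(np, op, 34);	/* same result as memmove(np, op, 34) */
	/* afterwards the first 8 bytes are junk and get trimmed via
	 * rte_pktmbuf_adj() */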

> >
> >>> +static inline void
> >>> +remove_esph(char *np, char *op, uint32_t hlen)
> >>> +{
> >>> +	uint32_t i;
> >>> +
> >>> +	for (i = hlen; i-- != 0; np[i] = op[i])
> >>> +		;
> >>> +}
> >>> +
> >>> +/*
> >
> >>> +
> >>> +/* update original and new ip header fields for tunnel case */
> >>> +static inline void
> >>> +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> >>> +		uint32_t l2len, rte_be16_t pid)
> >>> +{
> >>> +	struct ipv4_hdr *v4h;
> >>> +	struct ipv6_hdr *v6h;
> >>> +
> >>> +	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> >>> +		v4h = p;
> >>> +		v4h->packet_id = pid;
> >>> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> >> where are we updating the rest of the fields, like ttl, checksum, ip
> >> addresses, etc
> > TTL, IP addresses and other fields are supposed to be set up by the user
> > and provided via rte_ipsec_sa_init():
> > struct rte_ipsec_sa_prm.tun.hdr should contain a prepared template
> > for the L3 (and L2, if the user wants to) header.
> > Checksum calculation is not done inside the lib right now -
> > it is the user's responsibility to calculate/set it after librte_ipsec
> > finishes processing the packet.
> I believe static fields are updated during sa init, but some fields, like
> ttl and checksum,
> can be updated in the library itself, since they change for every packet.
> (https://tools.ietf.org/html/rfc1624)

About checksum - there is no point in calculating the cksum in the lib,
as the user may choose to use HW cksum offload.
All other libraries (ip_frag, GSO, etc.) leave it to the user;
I don't see why ipsec should be different here.
About TTL and other fields - I suppose you refer to:
https://tools.ietf.org/html/rfc4301#section-5.1.2
Header Construction for Tunnel Mode
right?
Surely that has to be supported, one way or the other,
but we don't plan to implement it in 19.02.
The current plan is to add it in 19.05, if time permits.

> >
> >>> +	} else {
> >>> +		v6h = p;
> >>> +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> >>> +				sizeof(*v6h));
> >>> +	}
> >>> +}
> >>> +
> >>> +#endif /* _IPH_H_ */
> >>> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> >>> index 1935f6e30..6e18c34eb 100644
> >>> --- a/lib/librte_ipsec/ipsec_sqn.h
> >>> +++ b/lib/librte_ipsec/ipsec_sqn.h
> >>> @@ -15,6 +15,45 @@
> >>>
> >>>    #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
> >>>
> >>> +/*
> >>> + * gets SQN.hi32 bits, SQN supposed to be in network byte order.
> >>> + */
> >>> +static inline rte_be32_t
> >>> +sqn_hi32(rte_be64_t sqn)
> >>> +{
> >>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> >>> +	return (sqn >> 32);
> >>> +#else
> >>> +	return sqn;
> >>> +#endif
> >>> +}
> >>> +
> >>> +/*
> >>> + * gets SQN.low32 bits, SQN supposed to be in network byte order.
> >>> + */
> >>> +static inline rte_be32_t
> >>> +sqn_low32(rte_be64_t sqn)
> >>> +{
> >>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> >>> +	return sqn;
> >>> +#else
> >>> +	return (sqn >> 32);
> >>> +#endif
> >>> +}
> >>> +
> >>> +/*
> >>> + * gets SQN.low16 bits, SQN supposed to be in network byte order.
> >>> + */
> >>> +static inline rte_be16_t
> >>> +sqn_low16(rte_be64_t sqn)
> >>> +{
> >>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> >>> +	return sqn;
> >>> +#else
> >>> +	return (sqn >> 48);
> >>> +#endif
> >>> +}
> >>> +
> >> shouldn't we move these seq number APIs to rte_esp.h and make them generic?
> > It could be done, but who will use them except librte_ipsec?
> Whoever uses rte_esp.h and does not use the ipsec lib. The intent of rte_esp.h is
> just for that only, otherwise we don't need rte_esp.h, we can have the
> content of rte_esp.h in ipsec itself.

Again these functions are used just inside the lib to help avoid
extra byteswapping during crypto-data/packet header constructions.
I don't see how they will be useful in general. 
Sure, if there will be demand from users in future - we can move them,
but right now I don't think that would happen. 
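
Just to illustrate what they save (assuming a little-endian host;
values picked for demonstration only):

	/*
	 * SQN 0x0000000100000002 (ESN high=1, low=2) in network byte
	 * order occupies memory bytes 00 00 00 01 00 00 00 02.
	 * Read back as a LE uint64_t that is 0x0200000001000000, so:
	 *   sqn_hi32(sqn)  == 0x01000000 - big-endian 1, no bswap
	 *   sqn_low32(sqn) == 0x02000000 - big-endian 2, no bswap
	 * i.e. both halves come out already in network byte order with
	 * just a shift/truncation.
	 */
	rte_be64_t sqn = rte_cpu_to_be_64(UINT64_C(0x0000000100000002));
	rte_be32_t hi = sqn_hi32(sqn);
	rte_be32_t lo = sqn_low32(sqn);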
Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-21 14:27               ` Ananyev, Konstantin
@ 2018-12-21 14:39                 ` Thomas Monjalon
  2018-12-21 14:51                 ` Akhil Goyal
  1 sibling, 0 replies; 194+ messages in thread
From: Thomas Monjalon @ 2018-12-21 14:39 UTC (permalink / raw)
  To: Ananyev, Konstantin, Akhil Goyal; +Cc: dev, Awal, Mohammad Abdul

21/12/2018 15:27, Ananyev, Konstantin:
> 
> > >>> + */
> > >>> +
> > >>> +/*
> > >>> + * Move preceding (L3) headers down to remove ESP header and IV.
> > >>> + */
> > >> why can't we use rte_mbuf APIs to append/prepend/trim/adjust lengths?
> > > We do use rte_mbuf append/trim, etc. to adjust the mbuf's data_off and data_len.
> > > But apart from that, for transport mode we have to move the actual packet headers.
> > > Let's say for inbound we have to get rid of the ESP header (which is after the IP header)
> > > but preserve the IP header, so we move the L2/L3 headers down, overwriting the ESP header.
> > ok got your point
> > >> I believe these adjustments are happening in the mbuf itself.
> > >> Moreover these APIs are not specific to esp headers.
> > > I didn't get your last sentence: that function is used to remove esp header
> > > (see above) - that's why I named it that way.
> > These can be used to remove any header and not specifically esp. So this
> > API could be generic in rte_mbuf.
> 
> That function has nothing to do with mbuf in general.
> It just copies bytes between buffers that overlap in a certain way
> (src.start < dst.start < src.end < dst.end).
> Right now it is very primitive - it copies one byte at a time in
> descending order.
> I wrote it just to avoid using memmove().
> I don't think there is any point in having such a dummy function in lib/eal.
> 
> > >
> > >>> +static inline void
> > >>> +remove_esph(char *np, char *op, uint32_t hlen)
> > >>> +{
> > >>> +	uint32_t i;
> > >>> +
> > >>> +	for (i = hlen; i-- != 0; np[i] = op[i])
> > >>> +		;
> > >>> +}
> > >>> +
> > >>> +/*
> > >
> > >>> +
> > >>> +/* update original and new ip header fields for tunnel case */
> > >>> +static inline void
> > >>> +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> > >>> +		uint32_t l2len, rte_be16_t pid)
> > >>> +{
> > >>> +	struct ipv4_hdr *v4h;
> > >>> +	struct ipv6_hdr *v6h;
> > >>> +
> > >>> +	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> > >>> +		v4h = p;
> > >>> +		v4h->packet_id = pid;
> > >>> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> > >> where are we updating the rest of the fields, like ttl, checksum, ip
> > >> addresses, etc
> > > TTL, IP addresses and other fields are supposed to be set up by the user
> > > and provided via rte_ipsec_sa_init():
> > > struct rte_ipsec_sa_prm.tun.hdr should contain a prepared template
> > > for the L3 (and L2, if the user wants to) header.
> > > Checksum calculation is not done inside the lib right now -
> > > it is the user's responsibility to calculate/set it after librte_ipsec
> > > finishes processing the packet.
> > I believe static fields are updated during sa init, but some fields, like
> > ttl and checksum,
> > can be updated in the library itself, since they change for every packet.
> > (https://tools.ietf.org/html/rfc1624)
> 
> About checksum - there is no point in calculating the cksum in the lib,
> as the user may choose to use HW cksum offload.
> All other libraries (ip_frag, GSO, etc.) leave it to the user;
> I don't see why ipsec should be different here.
> About TTL and other fields - I suppose you refer to:
> https://tools.ietf.org/html/rfc4301#section-5.1.2
> Header Construction for Tunnel Mode
> right?
> Surely that has to be supported, one way or the other,
> but we don't plan to implement it in 19.02.
> The current plan is to add it in 19.05, if time permits.
> 
> > >
> > >>> +	} else {
> > >>> +		v6h = p;
> > >>> +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> > >>> +				sizeof(*v6h));
> > >>> +	}
> > >>> +}
> > >>> +
> > >>> +#endif /* _IPH_H_ */
> > >>> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> > >>> index 1935f6e30..6e18c34eb 100644
> > >>> --- a/lib/librte_ipsec/ipsec_sqn.h
> > >>> +++ b/lib/librte_ipsec/ipsec_sqn.h
> > >>> @@ -15,6 +15,45 @@
> > >>>
> > >>>    #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
> > >>>
> > >>> +/*
> > >>> + * gets SQN.hi32 bits, SQN supposed to be in network byte order.
> > >>> + */
> > >>> +static inline rte_be32_t
> > >>> +sqn_hi32(rte_be64_t sqn)
> > >>> +{
> > >>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> > >>> +	return (sqn >> 32);
> > >>> +#else
> > >>> +	return sqn;
> > >>> +#endif
> > >>> +}
> > >>> +
> > >>> +/*
> > >>> + * gets SQN.low32 bits, SQN supposed to be in network byte order.
> > >>> + */
> > >>> +static inline rte_be32_t
> > >>> +sqn_low32(rte_be64_t sqn)
> > >>> +{
> > >>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> > >>> +	return sqn;
> > >>> +#else
> > >>> +	return (sqn >> 32);
> > >>> +#endif
> > >>> +}
> > >>> +
> > >>> +/*
> > >>> + * gets SQN.low16 bits, SQN supposed to be in network byte order.
> > >>> + */
> > >>> +static inline rte_be16_t
> > >>> +sqn_low16(rte_be64_t sqn)
> > >>> +{
> > >>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> > >>> +	return sqn;
> > >>> +#else
> > >>> +	return (sqn >> 48);
> > >>> +#endif
> > >>> +}
> > >>> +
> > >> shouldn't we move these seq number APIs to rte_esp.h and make them generic?
> > > It could be done, but who will use them except librte_ipsec?
> > Whoever uses rte_esp.h and does not use the ipsec lib. The intent of rte_esp.h is
> > just for that only, otherwise we don't need rte_esp.h, we can have the
> > content of rte_esp.h in ipsec itself.
> 
> Again these functions are used just inside the lib to help avoid
> extra byteswapping during crypto-data/packet header constructions.
> I don't see how they will be useful in general. 
> Sure, if there will be demand from users in future - we can move them,
> but right now I don't think that would happen. 

I am not an expert in IPsec, but in general it is better to offer modular
code, so we can use very basic code and allow implementing an alternative
at a higher level.
That's why I would be in favor of keeping protocol definitions and checksum
in rte_net, as is done for TCP.
How much modularity we want is a difficult question,
a matter of tradeoff.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-21 14:27               ` Ananyev, Konstantin
  2018-12-21 14:39                 ` Thomas Monjalon
@ 2018-12-21 14:51                 ` Akhil Goyal
  2018-12-21 15:16                   ` Ananyev, Konstantin
  1 sibling, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2018-12-21 14:51 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev
  Cc: Thomas Monjalon, Awal, Mohammad Abdul, olivier.matz



On 12/21/2018 7:57 PM, Ananyev, Konstantin wrote:
>>>>> + */
>>>>> +
>>>>> +/*
>>>>> + * Move preceding (L3) headers down to remove ESP header and IV.
>>>>> + */
>>>> why can't we use rte_mbuf APIs to append/prepend/trim/adjust lengths?
>>> We do use rte_mbuf append/trim, etc. to adjust the mbuf's data_off and data_len.
>>> But apart from that, for transport mode we have to move the actual packet headers.
>>> Let's say for inbound we have to get rid of the ESP header (which is after the IP header)
>>> but preserve the IP header, so we move the L2/L3 headers down, overwriting the ESP header.
>> ok got your point
>>>> I believe these adjustments are happening in the mbuf itself.
>>>> Moreover these APIs are not specific to esp headers.
>>> I didn't get your last sentence: that function is used to remove esp header
>>> (see above) - that's why I named it that way.
>> These can be used to remove any header and not specifically esp. So this
>> API could be generic in rte_mbuf.
> That function has nothing to do with mbuf in general.
> It just copies bytes between buffers that overlap in a certain way
> (src.start < dst.start < src.end < dst.end).
> Right now it is very primitive - it copies one byte at a time in
> descending order.
> I wrote it just to avoid using memmove().
> I don't think there is any point in having such a dummy function in lib/eal.
If this is better than memmove, then probably it is a candidate for a
function in the lib.
I think Thomas / Olivier can comment on this better
>
>>>>> +static inline void
>>>>> +remove_esph(char *np, char *op, uint32_t hlen)
>>>>> +{
>>>>> +	uint32_t i;
>>>>> +
>>>>> +	for (i = hlen; i-- != 0; np[i] = op[i])
>>>>> +		;
>>>>> +}
>>>>> +
>>>>> +/*
>>>>> +
>>>>> +/* update original and new ip header fields for tunnel case */
>>>>> +static inline void
>>>>> +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
>>>>> +		uint32_t l2len, rte_be16_t pid)
>>>>> +{
>>>>> +	struct ipv4_hdr *v4h;
>>>>> +	struct ipv6_hdr *v6h;
>>>>> +
>>>>> +	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
>>>>> +		v4h = p;
>>>>> +		v4h->packet_id = pid;
>>>>> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
>>>> where are we updating the rest of the fields, like ttl, checksum, ip
>>>> addresses, etc
>>> TTL, IP addresses and other fields are supposed to be set up by the user
>>> and provided via rte_ipsec_sa_init():
>>> struct rte_ipsec_sa_prm.tun.hdr should contain a prepared template
>>> for the L3 (and L2, if the user wants to) header.
>>> Checksum calculation is not done inside the lib right now -
>>> it is the user's responsibility to calculate/set it after librte_ipsec
>>> finishes processing the packet.
>> I believe static fields are updated during sa init, but some fields, like
>> ttl and checksum,
>> can be updated in the library itself, since they change for every packet.
>> (https://tools.ietf.org/html/rfc1624)
> About checksum - there is no point in calculating the cksum in the lib,
> as the user may choose to use HW cksum offload.
> All other libraries (ip_frag, GSO, etc.) leave it to the user;
> I don't see why ipsec should be different here.
> About TTL and other fields - I suppose you refer to:
> https://tools.ietf.org/html/rfc4301#section-5.1.2
> Header Construction for Tunnel Mode
> right?
> Surely that has to be supported, one way or the other,
> but we don't plan to implement it in 19.02.
> The current plan is to add it in 19.05, if time permits.
I am not talking about the outer ip checksum. Sorry, the placement of the
comment was not quite right. But I do not see that happening.
My question is: will the function ipip_outbound in ipsec-secgw be called
from the application, or will it be moved inside the library?
I believe this should be inside the lib.
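
For reference, the per-packet cost of that would be tiny - RFC 1624
Eqn. 3 updates the checksum in a handful of ops. A sketch, with a
hypothetical helper name, not a proposed API:

	/* HC' = ~(~HC + ~m + m') - RFC 1624, Eqn. 3;
	 * old/new are the 16-bit header word before/after the change
	 * (e.g. the word holding ttl and proto for a ttl decrement) */
	static inline uint16_t
	cksum_update16(uint16_t cksum, uint16_t old, uint16_t new)
	{
		uint32_t sum;

		sum = (uint16_t)~cksum + (uint16_t)~old + new;
		/* fold the carries back into 16 bits */
		sum = (sum & 0xffff) + (sum >> 16);
		sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)~sum;
	}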


>>>>> +	} else {
>>>>> +		v6h = p;
>>>>> +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
>>>>> +				sizeof(*v6h));
>>>>> +	}
>>>>> +}
>>>>> +
>>>>> +#endif /* _IPH_H_ */
>>>>> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
>>>>> index 1935f6e30..6e18c34eb 100644
>>>>> --- a/lib/librte_ipsec/ipsec_sqn.h
>>>>> +++ b/lib/librte_ipsec/ipsec_sqn.h
>>>>> @@ -15,6 +15,45 @@
>>>>>
>>>>>     #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
>>>>>
>>>>> +/*
>>>>> + * gets SQN.hi32 bits, SQN supposed to be in network byte order.
>>>>> + */
>>>>> +static inline rte_be32_t
>>>>> +sqn_hi32(rte_be64_t sqn)
>>>>> +{
>>>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
>>>>> +	return (sqn >> 32);
>>>>> +#else
>>>>> +	return sqn;
>>>>> +#endif
>>>>> +}
>>>>> +
>>>>> +/*
>>>>> + * gets SQN.low32 bits, SQN supposed to be in network byte order.
>>>>> + */
>>>>> +static inline rte_be32_t
>>>>> +sqn_low32(rte_be64_t sqn)
>>>>> +{
>>>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
>>>>> +	return sqn;
>>>>> +#else
>>>>> +	return (sqn >> 32);
>>>>> +#endif
>>>>> +}
>>>>> +
>>>>> +/*
>>>>> + * gets SQN.low16 bits, SQN supposed to be in network byte order.
>>>>> + */
>>>>> +static inline rte_be16_t
>>>>> +sqn_low16(rte_be64_t sqn)
>>>>> +{
>>>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
>>>>> +	return sqn;
>>>>> +#else
>>>>> +	return (sqn >> 48);
>>>>> +#endif
>>>>> +}
>>>>> +
>>>> shouldn't we move these seq number APIs to rte_esp.h and make them generic?
>>> It could be done, but who will use them except librte_ipsec?
>> Whoever uses rte_esp.h and does not use the ipsec lib. The intent of rte_esp.h is
>> just for that only, otherwise we don't need rte_esp.h, we can have the
>> content of rte_esp.h in ipsec itself.
> Again these functions are used just inside the lib to help avoid
> extra byteswapping during crypto-data/packet header constructions.
Agreed, my point is: why add a new file for managing seq numbering in
esp headers, when this can easily be moved to rte_esp.h?

> I don't see how they will be useful in general.
> Sure, if there will be demand from users in future - we can move them,
> but right now I don't think that would happen.
In that case we can do away with esp.h as well and move that into this
new file; if users need it separately, then we move it.
> Konstantin


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
  2018-12-21 14:51                 ` Akhil Goyal
@ 2018-12-21 15:16                   ` Ananyev, Konstantin
  0 siblings, 0 replies; 194+ messages in thread
From: Ananyev, Konstantin @ 2018-12-21 15:16 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: Thomas Monjalon, Awal, Mohammad Abdul, olivier.matz



> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, December 21, 2018 2:51 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; dev@dpdk.org
> Cc: Thomas Monjalon <thomas@monjalon.net>; Awal, Mohammad Abdul <mohammad.abdul.awal@intel.com>; olivier.matz@6wind.com
> Subject: Re: [dpdk-dev] [PATCH v4 06/10] ipsec: implement SA data-path API
> 
> 
> 
> On 12/21/2018 7:57 PM, Ananyev, Konstantin wrote:
> >>>>> + */
> >>>>> +
> >>>>> +/*
> >>>>> + * Move preceding (L3) headers down to remove ESP header and IV.
> >>>>> + */
> >>>> why can't we use rte_mbuf APIs to append/prepend/trim/adjust lengths?
> >>> We do use rte_mbuf append/trim, etc. to adjust the mbuf's data_off and data_len.
> >>> But apart from that, for transport mode we have to move the actual packet headers.
> >>> Let's say for inbound we have to get rid of the ESP header (which is after the IP header)
> >>> but preserve the IP header, so we move the L2/L3 headers down, overwriting the ESP header.
> >> ok got your point
> >>>> I believe these adjustments are happening in the mbuf itself.
> >>>> Moreover these APIs are not specific to esp headers.
> >>> I didn't get your last sentence: that function is used to remove esp header
> >>> (see above) - that's why I named it that way.
> >> These can be used to remove any header and not specifically esp. So this
> >> API could be generic in rte_mbuf.
> > That function has nothing to do with mbuf in general.
> > It just copies bytes between buffers that overlap in a certain way
> > (src.start < dst.start < src.end < dst.end).
> > Right now it is very primitive - it copies one byte at a time in
> > descending order.
> > I wrote it just to avoid using memmove().
> > I don't think there is any point in having such a dummy function in lib/eal.
> If this is better than memmove, then probably it is a candidate for a
> function in the lib.

If it were something really smart, I would try to push it into the EAL myself.
But it is a dumb for() loop, nothing more.

> I think Thomas / Olivier can comment on this better
> >
> >>>>> +static inline void
> >>>>> +remove_esph(char *np, char *op, uint32_t hlen)
> >>>>> +{
> >>>>> +	uint32_t i;
> >>>>> +
> >>>>> +	for (i = hlen; i-- != 0; np[i] = op[i])
> >>>>> +		;
> >>>>> +}
> >>>>> +
> >>>>> +/*
> >>>>> +
> >>>>> +/* update original and new ip header fields for tunnel case */
> >>>>> +static inline void
> >>>>> +update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
> >>>>> +		uint32_t l2len, rte_be16_t pid)
> >>>>> +{
> >>>>> +	struct ipv4_hdr *v4h;
> >>>>> +	struct ipv6_hdr *v6h;
> >>>>> +
> >>>>> +	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
> >>>>> +		v4h = p;
> >>>>> +		v4h->packet_id = pid;
> >>>>> +		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
> >>>> where are we updating the rest of the fields, like ttl, checksum, ip
> >>>> addresses, etc
> >>> TTL, IP addresses and other fields are supposed to be set up by the user
> >>> and provided via rte_ipsec_sa_init():
> >>> struct rte_ipsec_sa_prm.tun.hdr should contain a prepared template
> >>> for the L3 (and L2, if the user wants to) header.
> >>> Checksum calculation is not done inside the lib right now -
> >>> it is the user's responsibility to calculate/set it after librte_ipsec
> >>> finishes processing the packet.
> >> I believe static fields are updated during sa init, but some fields, like
> >> ttl and checksum,
> >> can be updated in the library itself, since they change for every packet.
> >> (https://tools.ietf.org/html/rfc1624)
> > About checksum - there is no point in calculating the cksum in the lib,
> > as the user may choose to use HW cksum offload.
> > All other libraries (ip_frag, GSO, etc.) leave it to the user;
> > I don't see why ipsec should be different here.
> > About TTL and other fields - I suppose you refer to:
> > https://tools.ietf.org/html/rfc4301#section-5.1.2
> > Header Construction for Tunnel Mode
> > right?
> > Surely that has to be supported, one way or the other,
> > but we don't plan to implement it in 19.02.
> > The current plan is to add it in 19.05, if time permits.
> I am not talking about the outer ip checksum.
> Sorry, the placement of the
> comment was not quite right. But I do not see that happening.
> My question is: will the function ipip_outbound in ipsec-secgw be called
> from the application, or will it be moved inside the library?
> I believe this should be inside the lib.

I think the same - we probably need to support all the header updates
described in the RFC inside the process/prepare lib functions,
or at least provide a separate function for the user to perform them.
Though as I said above it is definitely not in 19.02 scope.

> 
> 
> >>>>> +	} else {
> >>>>> +		v6h = p;
> >>>>> +		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
> >>>>> +				sizeof(*v6h));
> >>>>> +	}
> >>>>> +}
> >>>>> +
> >>>>> +#endif /* _IPH_H_ */
> >>>>> diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
> >>>>> index 1935f6e30..6e18c34eb 100644
> >>>>> --- a/lib/librte_ipsec/ipsec_sqn.h
> >>>>> +++ b/lib/librte_ipsec/ipsec_sqn.h
> >>>>> @@ -15,6 +15,45 @@
> >>>>>
> >>>>>     #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
> >>>>>
> >>>>> +/*
> >>>>> + * gets SQN.hi32 bits, SQN supposed to be in network byte order.
> >>>>> + */
> >>>>> +static inline rte_be32_t
> >>>>> +sqn_hi32(rte_be64_t sqn)
> >>>>> +{
> >>>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> >>>>> +	return (sqn >> 32);
> >>>>> +#else
> >>>>> +	return sqn;
> >>>>> +#endif
> >>>>> +}
> >>>>> +
> >>>>> +/*
> >>>>> + * gets SQN.low32 bits, SQN supposed to be in network byte order.
> >>>>> + */
> >>>>> +static inline rte_be32_t
> >>>>> +sqn_low32(rte_be64_t sqn)
> >>>>> +{
> >>>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> >>>>> +	return sqn;
> >>>>> +#else
> >>>>> +	return (sqn >> 32);
> >>>>> +#endif
> >>>>> +}
> >>>>> +
> >>>>> +/*
> >>>>> + * gets SQN.low16 bits, SQN supposed to be in network byte order.
> >>>>> + */
> >>>>> +static inline rte_be16_t
> >>>>> +sqn_low16(rte_be64_t sqn)
> >>>>> +{
> >>>>> +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
> >>>>> +	return sqn;
> >>>>> +#else
> >>>>> +	return (sqn >> 48);
> >>>>> +#endif
> >>>>> +}
> >>>>> +
> >>>> shouldn't we move these seq number APIs to rte_esp.h and make them generic?
> >>> It could be done, but who will use them except librte_ipsec?
> >> Whoever uses rte_esp.h and does not use the ipsec lib. The intent of rte_esp.h is
> >> just for that only, otherwise we don't need rte_esp.h, we can have the
> >> content of rte_esp.h in ipsec itself.
> > Again these functions are used just inside the lib to help avoid
> > extra byteswapping during crypto-data/packet header constructions.
> Agreed, my point is: why add a new file for managing seq numbering in
> esp headers, when this can easily be moved to rte_esp.h?
> 
> > I don't see how they will be useful in general.
> > Sure, if there will be demand from users in future - we can move them,
> > but right now I don't think that would happen.
> In that case we can do away with esp.h as well and move that into this
> new file; if users need it separately, then we move it.

esp.h already exists and is used in several other places: 
find lib drivers -type f | xargs grep '<rte_esp.h>' | grep include
lib/librte_pipeline/rte_table_action.c:#include <rte_esp.h>
lib/librte_ethdev/rte_flow.h:#include <rte_esp.h>

Konstantin


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v4 03/10] net: add ESP trailer structure definition
  2018-12-19  9:32         ` Akhil Goyal
@ 2018-12-27 10:13           ` Olivier Matz
  0 siblings, 0 replies; 194+ messages in thread
From: Olivier Matz @ 2018-12-27 10:13 UTC (permalink / raw)
  To: Akhil Goyal; +Cc: Konstantin Ananyev, dev

Hi,

On Wed, Dec 19, 2018 at 09:32:09AM +0000, Akhil Goyal wrote:
> 
> 
> On 12/14/2018 9:53 PM, Konstantin Ananyev wrote:
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> > Acked-by: Declan Doherty <declan.doherty@intel.com>
> > ---
> >   lib/librte_net/rte_esp.h | 10 +++++++++-
> >   1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
> > index f77ec2eb2..8e1b3d2dd 100644
> > --- a/lib/librte_net/rte_esp.h
> > +++ b/lib/librte_net/rte_esp.h
> > @@ -11,7 +11,7 @@
> >    * ESP-related defines
> >    */
> >   
> > -#include <stdint.h>
> > +#include <rte_byteorder.h>
> >   
> >   #ifdef __cplusplus
> >   extern "C" {
> > @@ -25,6 +25,14 @@ struct esp_hdr {
> >   	rte_be32_t seq;  /**< packet sequence number */
> >   } __attribute__((__packed__));
> >   
> > +/**
> > + * ESP Trailer
> > + */
> > +struct esp_tail {
> > +	uint8_t pad_len;     /**< number of pad bytes (0-255) */
> > +	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
> > +} __attribute__((__packed__));
> > +
> >   #ifdef __cplusplus
> >   }
> >   #endif
> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

Is there a reason to pack the structure? I think it has no impact since
it is only composed of uint8_t, so it can be removed.
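
Quick check of that: both members are single bytes, so the compiler
cannot insert any padding even without the attribute, e.g.:

	#include <assert.h>
	#include <stdint.h>

	struct esp_tail_unpacked {	/* same fields, no packed attr */
		uint8_t pad_len;
		uint8_t next_proto;
	};

	/* two u8 members leave no room for padding: size and offsets
	 * match the packed variant */
	static_assert(sizeof(struct esp_tail_unpacked) == 2,
		"unexpected padding");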

Thanks,
Olivier

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 00/10] ipsec: new library for IPsec data-path processing
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
  2018-12-19  9:26         ` Akhil Goyal
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                           ` (9 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

v4 -> v5
 - Fix issue with SQN overflows
 - Address Akhil comments:
     documentation update
     spell checks spacing etc.
     fix input crypto_xform check/process
     test cases for lookaside and inline proto

v3 -> v4
 - Changes to address Declan's comments
 - Update docs

v2 -> v3
 - Several fixes for IPv6 support
 - Extra checks for input parameters in public API functions

v1 -> v2
 - Changes to take into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch introduces a new library within DPDK: librte_ipsec.
The aim is to provide DPDK native high performance library for IPsec
data-path processing.
The library is supposed to utilize the existing DPDK crypto-dev and
security API to provide the application with a transparent IPsec
processing API.
The library is concentrated on data-path protocols processing
(ESP and AH), IKE protocol(s) implementation is out of scope
for that library.
Current patch introduces SA-level API.

SA (low) level API
==================

The API described below operates at the SA level.
It provides functionality that allows the user, for a given SA, to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is based on top of crypto-dev/security API and relies on them
to perform actual cipher and integrity checking.
To provide the ability to easily map crypto/security sessions to the related
IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
asynchronous API for IPsec packets destined to be processed by crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

Though for packets destined for inline processing no extra overhead
is required and a synchronous API call: rte_ipsec_pkt_process()
is sufficient for that case.

Current implementation supports all four currently defined
rte_security types.
Though to accommodate future custom implementations a function pointer
model is used for both the *crypto_prepare* and *process* implementations.

Konstantin Ananyev (10):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test
  doc: add IPsec library guide

 MAINTAINERS                            |    8 +-
 config/common_base                     |    5 +
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/prog_guide/ipsec_lib.rst    |  168 ++
 doc/guides/rel_notes/release_19_02.rst |   11 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  154 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  174 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1527 ++++++++++++++
 lib/librte_ipsec/sa.h                  |  106 +
 lib/librte_ipsec/ses.c                 |   45 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2555 ++++++++++++++++++++++++
 27 files changed, 5576 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
  2018-12-19  9:26         ` Akhil Goyal
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                             ` (10 more replies)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
                           ` (8 subsequent siblings)
  11 siblings, 11 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined
data with the session.
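
For example (illustrative only - librte_ipsec uses it to map a session
back to its SA; the helper names here are made up):

	#include <stdint.h>
	#include <rte_cryptodev.h>

	/* stash an application object (e.g. an SA pointer) in a session */
	static inline void
	session_set_userdata(struct rte_cryptodev_sym_session *ss, void *ud)
	{
		ss->opaque_data = (uintptr_t)ud;
	}

	static inline void *
	session_get_userdata(const struct rte_cryptodev_sym_session *ss)
	{
		return (void *)(uintptr_t)ss->opaque_data;
	}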

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 02/10] security: add opaque userdata pointer into security session
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (2 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 03/10] net: add ESP trailer structure definition Konstantin Ananyev
                           ` (7 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_security/rte_security.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 718147e00..c8e438fdd 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -317,6 +317,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 03/10] net: add ESP trailer structure definition
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (3 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 04/10] lib: introduce ipsec library Konstantin Ananyev
                           ` (6 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1
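
For context, per RFC 4303 the trailer sits at the very end of the
decrypted payload, immediately before the ICV; locating it looks roughly
like this (an illustrative sketch - assumes a single-segment mbuf, and
esp_tail_ptr() is a made-up helper name):

	#include <rte_esp.h>
	#include <rte_mbuf.h>

	/* last two bytes of the decrypted ESP payload before the ICV */
	static inline const struct esp_tail *
	esp_tail_ptr(const struct rte_mbuf *mb, uint32_t icv_len)
	{
		uint32_t ofs;

		ofs = mb->pkt_len - icv_len - sizeof(struct esp_tail);
		return rte_pktmbuf_mtod_offset(mb, const struct esp_tail *,
			ofs);
	}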

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 04/10] lib: introduce ipsec library
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (4 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 03/10] net: add ESP trailer structure definition Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 05/10] ipsec: add SA data-path API Konstantin Ananyev
                           ` (5 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize the existing DPDK crypto-dev and
security API to provide the application with a transparent IPsec processing API.
This initial commit provides some base API to manage the
IPsec Security Association (SA) object.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                            |   8 +-
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 141 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 335 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  85 +++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 12 files changed, 671 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 470f36b9c..9ce636be6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1036,6 +1036,13 @@ M: Jiayu Hu <jiayu.hu@intel.com>
 F: lib/librte_gso/
 F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
 
+IPsec - EXPERIMENTAL
+M: Konstantin Ananyev <konstantin.ananyev@intel.com>
+T: git://dpdk.org/next/dpdk-next-crypto
+F: lib/librte_ipsec/
+M: Bernard Iremonger <bernard.iremonger@intel.com>
+F: test/test/test_ipsec.c
+
 Flow Classify - EXPERIMENTAL
 M: Bernard Iremonger <bernard.iremonger@intel.com>
 F: lib/librte_flow_classify/
@@ -1077,7 +1084,6 @@ F: doc/guides/prog_guide/pdump_lib.rst
 F: app/pdump/
 F: doc/guides/tools/pdump.rst
 
-
 Packet Framework
 ----------------
 M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
diff --git a/config/common_base b/config/common_base
index 0e3f900c5..14ad0b7bf 100644
--- a/config/common_base
+++ b/config/common_base
@@ -934,6 +934,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index 8dbdc9bca..d6239d27c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -107,6 +107,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..0e2868d26
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_net -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..1935f6e30
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * for a given window size, calculate the required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size of the
+ * structure that holds replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..d99028c2c
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	/** crypto session configuration */
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	/**
+	 * Window size to enable sequence replay attack handling.
+	 * Replay checking is disabled if the window size is 0.
+	 */
+	uint32_t replay_win_sz;
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG2_IPV,
+	RTE_SATP_LOG2_PROTO,
+	RTE_SATP_LOG2_DIR,
+	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG2_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG2_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG2_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
+
+/**
+ * Get the type of a given SA.
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise the SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * Initialise the SA based on the provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   Size of the provided buffer for the SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * Clean up the given SA.
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
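
As a usage illustration, here is a minimal sketch of how an application
might allocate and initialise an SA object with the API above. The
sa_create() helper name and the use of rte_zmalloc() are assumptions for
the example, not part of the library; 'prm' is assumed to be fully
populated by the caller (ipsec_xform, crypto_xform, tunnel/transport
fields, replay_win_sz):

#include <rte_malloc.h>
#include <rte_ipsec_sa.h>

/* hypothetical helper: create an SA object from user-filled parameters */
static struct rte_ipsec_sa *
sa_create(const struct rte_ipsec_sa_prm *prm)
{
	int32_t sz;
	struct rte_ipsec_sa *sa;

	/* query the required size for the given parameters */
	sz = rte_ipsec_sa_size(prm);
	if (sz < 0)
		return NULL;

	/* the SA object is expected to be cache-line aligned */
	sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
	if (sa == NULL)
		return NULL;

	/* on success rte_ipsec_sa_init() returns the actual SA size */
	if (rte_ipsec_sa_init(sa, prm, sz) < 0) {
		rte_free(sa);
		return NULL;
	}

	return sa;
}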
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..f5c893875
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+/*
+ * helper routine, fills internal crypto_xform structure.
+ */
+static int
+fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf, *xfn;
+
+	memset(xform, 0, sizeof(*xform));
+
+	xf = prm->crypto_xform;
+	if (xf == NULL)
+		return -EINVAL;
+
+	xfn = xf->next;
+
+	/* for AEAD just one xform required */
+	if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		if (xfn != NULL)
+			return -EINVAL;
+		xform->aead = &xf->aead;
+	/*
+	 * CIPHER+AUTH xforms are expected in strict order,
+	 * depending on SA direction:
+	 * inbound: AUTH+CIPHER
+	 * outbound: CIPHER+AUTH
+	 */
+	} else if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		/* wrong order or no cipher */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return -EINVAL;
+
+		xform->auth = &xf->auth;
+		xform->cipher = &xfn->cipher;
+
+	} else {
+
+		/* wrong order or no auth */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+			return -EINVAL;
+
+		xform->cipher = &xf->cipher;
+		xform->auth = &xfn->auth;
+	}
+
+	return 0;
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static int
+fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else if (prm->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV6)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+		else
+			return -EINVAL;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else if (prm->ipsec_xform.mode ==
+			RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else
+		return -EINVAL;
+
+	*type = tp;
+	return 0;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ when new algorithms are supported */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may differ when new algorithms are supported */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = IPSEC_PAD_AES_GCM;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = IPSEC_PAD_NULL;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_PAD_AES_CBC;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+	int32_t rc;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, type, prm);
+	if (rc != 0)
+		return rc;
+
+	/* initialize SA */
+
+	memset(sa, 0, sz);
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0) {
+		rte_ipsec_sa_fini(sa);
+		return rc;
+	}
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..492521930
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* padding alignment for different algorithms */
+enum {
+	IPSEC_PAD_DEFAULT = 4,
+	IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
+	IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+	IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+};
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index a2dd52e17..179c2ef37 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	#ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 02e8b6f05..3fcfa58f7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 05/10] ipsec: add SA data-path API
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (5 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 04/10] lib: introduce ipsec library Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 06/10] ipsec: implement " Konstantin Ananyev
                           ` (4 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal

Introduce Security Association (SA-level) data-path API.
It operates at the SA level and provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).
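
For illustration, a minimal sketch of how an application might bind an
initialised SA to a crypto session with this API, for the
RTE_SECURITY_ACTION_TYPE_NONE (lookaside crypto) case. The
setup_session() helper name is an assumption for the example:

#include <string.h>
#include <rte_ipsec.h>

static int
setup_session(struct rte_ipsec_session *ss, struct rte_ipsec_sa *sa,
	struct rte_cryptodev_sym_session *cses)
{
	memset(ss, 0, sizeof(*ss));

	/* lookaside-none: a plain crypto-dev symmetric session is used */
	ss->sa = sa;
	ss->type = RTE_SECURITY_ACTION_TYPE_NONE;
	ss->crypto.ses = cses;

	/* validates the fields above and fills ss->pkt_func */
	return rte_ipsec_session_prepare(ss);
}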

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 152 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  45 ++++++++
 7 files changed, 228 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 0e2868d26..71e39df0b 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..93e4df1bd
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them or process packets that are subject to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that the crypto/security fields inside the given rte_ipsec_session
+ * are filled correctly and sets up function pointers based on these values.
+ * Expects that all fields except the IPsec processing function pointers
+ * (*pkt_func*) will be filled correctly by the caller.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len and l3_len are set up correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond last valid mbuf in the *mb* array.
+ * It is a user responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them, or
+ * process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len and l3_len are set up correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up; if necessary, IP headers updated and ESP(AH) fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond last valid mbuf in the *mb* array.
+ * It is a user responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..4d4f46e4f 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,10 +1,13 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_session_prepare;
 
 	local: *;
 };
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index f5c893875..5465198ac 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -333,3 +333,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 492521930..616cf1b9f 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -82,4 +82,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..562c1423e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else if (ss->security.ses == NULL || ss->security.ctx == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 06/10] ipsec: implement SA data-path API
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (6 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 05/10] ipsec: add SA data-path API Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                           ` (3 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types SQN check/update is done by SW (inside the library).
For the last two types it is the HW/PMD's responsibility.
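
For the lookaside-none case, a minimal sketch of the resulting data-path
loop follows. Queue ids, burst size and the single-dequeue assumption are
simplifications for the example; a real application has to retry dequeue
and regroup mbufs from the dequeued crypto ops:

#include <rte_cryptodev.h>
#include <rte_ipsec.h>

static uint16_t
ipsec_crypto_burst(const struct rte_ipsec_session *ss, uint8_t dev_id,
	uint16_t qp_id, struct rte_mbuf *mb[], struct rte_crypto_op *cop[],
	uint16_t num)
{
	uint16_t k, n;

	/* add ESP headers/trailers and fill crypto ops */
	k = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);

	/* hand the prepared ops to the crypto PMD */
	n = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);

	/* collect completed ops (assumes all complete in one call) */
	n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, n);

	/* finalise packets: SQN update, header removal, error checks */
	return rte_ipsec_pkt_process(ss, mb, n);
}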

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/crypto.h    |  123 ++++
 lib/librte_ipsec/iph.h       |   84 +++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1133 +++++++++++++++++++++++++++++++++-
 5 files changed, 1569 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..61f5c1433
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+/*
+ * AES-GCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that would be handled by the driver itself.
+ */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, section 5, AAD construction:
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else {
+		aad->sqn.u32[0] = sqn_low32(sqn);
+		aad->sqn.u32[1] = 0;
+	}
+	aad->align0 = 0;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves the ICV down by 4B and inserts the SQN high bits.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves the ICV up by 4B and removes the SQN high bits.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..58930cf18
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+/* update original ip header fields for transport case */
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+/* update original and new ip header fields for tunnel case */
+static inline void
+update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, rte_be16_t pid)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		v4h = p;
+		v4h->packet_id = pid;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else {
+		v6h = p;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	}
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 1935f6e30..6e18c34eb 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * gets the SQN.hi32 bits; the SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * gets the SQN.low32 bits; the SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * gets the SQN.low16 bits; the SQN is supposed to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * for a given window size, calculate the required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC 4303 A2.1, determine the high-order bits of the sequence
+ * number. Uses 32-bit arithmetic inside, returns uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64-bit unsigned integers.
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To allow multiple readers and a single writer for the SA replay window
+ * and sequence number (RSN) information, a basic RCU schema is used:
+ * the SA has 2 copies of the RSN (one for readers, another for the writer).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and the writer.
+ * The writer is responsible for making a copy of the reader RSN,
+ * updating it and marking the newly updated RSN as the readers' one.
+ * That approach is intended to minimize contention and cache sharing
+ * between the writer and readers.
+ */
+
 /**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds replay window and sequence number (RSN) information.
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 5465198ac..d263e7bcf 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -101,6 +105,9 @@ rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
 	memset(sa, 0, sa->size);
 }
 
+/*
+ * Determine expected SA type based on input parameters.
+ */
 static int
 fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 {
@@ -155,6 +162,9 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	return 0;
 }
 
+/*
+ * Init ESP inbound specific things.
+ */
 static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
@@ -165,6 +175,9 @@ esp_inb_init(struct rte_ipsec_sa *sa)
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
 }
 
+/*
+ * Init ESP inbound tunnel specific things.
+ */
 static void
 esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -172,6 +185,9 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_inb_init(sa);
 }
 
+/*
+ * Init ESP outbound specific things.
+ */
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
@@ -190,6 +206,9 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 	}
 }
 
+/*
+ * Init ESP outbound tunnel specific things.
+ */
 static void
 esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -201,6 +220,9 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_outb_init(sa, sa->hdr_len);
 }
 
+/*
+ * helper function, init SA structure.
+ */
 static int
 esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	const struct crypto_xform *cxf)
@@ -212,6 +234,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -334,18 +357,1124 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+/*
+ * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices.
+ */
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP outbound packet.
+ */
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound tunnel case.
+ */
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* size of ipsec protected data */
+	l2len = mb->l2_len;
+	plen = mb->pkt_len - mb->l2_len;
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
+			sqn_low16(sqc));
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound tunnel case.
+ */
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound transport case.
+ */
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update IP header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound transport case.
+ */
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP inbound tunnel packet.
+ */
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none), depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP inbound tunnel case.
+ */
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* we have to allocate space for AAD somewhere,
+	 * right now - just use free trailing space at the last segment.
+	 * It would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (as is already done for the IV inside the cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP inbound case.
+ */
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ *  setup crypto ops for LOOKASIDE_PROTO type of devices.
+ */
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+/*
+ *  setup packets and crypto ops for LOOKASIDE_PROTO type of devices.
+ *  Note that for LOOKASIDE_PROTO all packet modifications will be
+ *  performed by PMD/HW.
+ *  SW has only to prepare crypto op.
+ */
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+/*
+ * process ESP inbound tunnel packet.
+ */
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	return 0;
+}
+
+/*
+ * process ESP inbound transport packet.
+ */
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~PKT_RX_SEC_OFFLOAD;
+	return 0;
+}
+
+/*
+ * for group of ESP inbound packets perform SQN check and update.
+ */
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound tunnel packets.
+ */
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
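+	/* at this point i == num, so new drops are appended after those above */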
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound transport packets.
+ */
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing is already done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
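+	/* attach per-packet metadata only when the device requires it */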
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+/*
+ * process group of ESP outbound tunnel packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * process group of ESP outbound transport packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
+/*
+ * Select packet processing function for session on LOOKASIDE_NONE
+ * type of device.
+ */
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for session on INLINE_CRYPTO
+ * type of device.
+ */
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for the given session, based on SA
+ * parameters and the type of device associated with the session.
+ */
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 07/10] ipsec: rework SA replay window/SQN for MT environment
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (7 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 06/10] ipsec: implement " Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
                           ` (2 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

With these changes functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
 can be safely used in an MT environment, as long as the user can
 guarantee that they obey the multiple readers/single writer model
 for SQN+replay_window operations.
 To be more specific:
 for an outbound SA there are no restrictions.
 for an inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_pkt_process() for a given SA.
 Note that it is the caller's responsibility to maintain the correct
 order of packets to be processed.
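
To illustrate the contract, a minimal usage sketch for an inbound SA
shared by several worker cores (the per-SA application lock below is
hypothetical - any serialization mechanism will do; assumes
<rte_spinlock.h> and <rte_ipsec.h> are included):

	/* app-provided lock, one per inbound SA (hypothetical name) */
	static rte_spinlock_t sa_lock = RTE_SPINLOCK_INITIALIZER;

	static uint16_t
	inb_process_serialized(struct rte_ipsec_session *ss,
		struct rte_mbuf *mb[], uint16_t num)
	{
		uint16_t k;

		/* only one thread at a time may run process() for this SA */
		rte_spinlock_lock(&sa_lock);
		k = rte_ipsec_pkt_process(ss, mb, num);
		rte_spinlock_unlock(&sa_lock);
		return k;
	}

crypto_prepare() takes only the reader side of the RSN, so it needs no
such serialization.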

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  33 ++++++++++
 lib/librte_ipsec/sa.c           |  80 +++++++++++++++++-----
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 225 insertions(+), 22 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 6e18c34eb..7de10bef5 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index d99028c2c..7802da3b1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -55,6 +55,27 @@ struct rte_ipsec_sa_prm {
 	uint32_t replay_win_sz;
 };
 
+/**
+ * Indicates whether the SA needs 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means:
+ * functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in an MT environment, as long as the user can
+ * guarantee that they obey the multiple readers/single writer model
+ * for SQN+replay_window operations.
+ * To be more specific:
+ * for an outbound SA there are no restrictions.
+ * for an inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_pkt_process() for a given SA.
+ * Note that it is the caller's responsibility to maintain the correct
+ * order of packets to be processed.
+ * In other words - it is the caller's responsibility to serialize
+ * process() invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is an 64-bit value that contain the following information:
  * - IP version (IPv4/IPv6)
@@ -62,6 +83,8 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
+ * - ESN enabled/disabled
  * ...
  */
 
@@ -70,6 +93,8 @@ enum {
 	RTE_SATP_LOG2_PROTO,
 	RTE_SATP_LOG2_DIR,
 	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+	RTE_SATP_LOG2_ESN,
 	RTE_SATP_LOG2_NUM
 };
 
@@ -90,6 +115,14 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG2_SQN)
+
+#define RTE_IPSEC_SATP_ESN_MASK		(1ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_DISABLE	(0ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_ENABLE	(1ULL << RTE_SATP_LOG2_ESN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index d263e7bcf..8d4ce1ac6 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -80,21 +80,37 @@ rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
 }
 
 static int32_t
-ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+ipsec_sa_size(uint64_t type, uint32_t *wnd_sz, uint32_t *nb_bucket)
 {
-	uint32_t n, sz;
+	uint32_t n, sz, wsz;
 
+	wsz = *wnd_sz;
 	n = 0;
-	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
-			RTE_IPSEC_SATP_DIR_IB)
-		n = replay_num_bucket(wsz);
+
+	if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		/*
+		 * RFC 4303 recommends 64 as the minimum window size.
+		 * There is no point in using ESN mode without an SQN window,
+		 * so make sure the window is at least 64 when ESN is enabled.
+		 */
+		wsz = ((type & RTE_IPSEC_SATP_ESN_MASK) ==
+			RTE_IPSEC_SATP_ESN_DISABLE) ?
+			wsz : RTE_MAX(wsz, (uint32_t)WINDOW_BUCKET_SIZE);
+		if (wsz != 0)
+			n = replay_num_bucket(wsz);
+	}
 
 	if (n > WINDOW_BUCKET_MAX)
 		return -EINVAL;
 
+	*wnd_sz = wsz;
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -158,6 +174,18 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	} else
 		return -EINVAL;
 
+	/* check for ESN flag */
+	if (prm->ipsec_xform.options.esn == 0)
+		tp |= RTE_IPSEC_SATP_ESN_DISABLE;
+	else
+		tp |= RTE_IPSEC_SATP_ESN_ENABLE;
+
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	*type = tp;
 	return 0;
 }
@@ -191,7 +219,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -277,11 +305,26 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return 0;
 }
 
+/*
+ * helper function, init SA replay structure.
+ */
+static void
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+{
+	sa->replay.win_sz = wnd_sz;
+	sa->replay.nb_bucket = nb_bucket;
+	sa->replay.bucket_index_mask = nb_bucket - 1;
+	sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+	if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+			((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+}
+
 int __rte_experimental
 rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 {
 	uint64_t type;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	int32_t rc;
 
 	if (prm == NULL)
@@ -293,7 +336,8 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 		return rc;
 
 	/* determine required size */
-	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	return ipsec_sa_size(type, &wsz, &nb);
 }
 
 int __rte_experimental
@@ -301,7 +345,7 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	uint32_t size)
 {
 	int32_t rc, sz;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	uint64_t type;
 	struct crypto_xform cxf;
 
@@ -314,7 +358,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		return rc;
 
 	/* determine required size */
-	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	sz = ipsec_sa_size(type, &wsz, &nb);
 	if (sz < 0)
 		return sz;
 	else if (size < (uint32_t)sz)
@@ -347,12 +392,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		rte_ipsec_sa_fini(sa);
 
 	/* fill replay window related fields */
-	if (nb != 0) {
-		sa->replay.win_sz = prm->replay_win_sz;
-		sa->replay.nb_bucket = nb;
-		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
-	}
+	if (nb != 0)
+		fill_sa_replay(sa, wsz, nb);
 
 	return sz;
 }
@@ -877,7 +918,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -896,6 +937,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -1058,7 +1101,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -1068,6 +1111,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 616cf1b9f..392e8fd7b 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -36,7 +38,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
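+/*
+ * In SQN_ATOM mode two RSN copies are used: readers work with rsn[rdidx],
+ * while the writer updates rsn[wridx] under the write lock and then
+ * makes it the new read copy (see rsn_update_start/finish in ipsec_sqn.h).
+ */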
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -74,10 +80,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case of an SA handled by multiple threads, the *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 08/10] ipsec: helper functions to group completed crypto-ops
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (8 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 09/10] test/ipsec: introduce functional test Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 10/10] doc: add IPsec library guide Konstantin Ananyev
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.
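
For context, a rough sketch of the intended usage on the crypto-op
completion path (the function name, dev_id/qid and BURST below are
illustrative, not part of the patch):

	static void
	ipsec_cqp_drain(uint8_t dev_id, uint16_t qid)
	{
		struct rte_crypto_op *cop[BURST];
		struct rte_mbuf *mb[BURST];
		struct rte_ipsec_group grp[BURST];
		uint16_t n;
		uint32_t i, ng;

		n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, BURST);

		/* sort mbufs into per-session groups */
		ng = rte_ipsec_pkt_crypto_group(
			(const struct rte_crypto_op **)(uintptr_t)cop,
			mb, grp, n);

		/* process each group with the session it belongs to */
		for (i = 0; i != ng; i++)
			rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m,
				grp[i].cnt);

		/* session-less mbufs (if any) remain beyond the last group */
	}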

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 71e39df0b..77506d6ad 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 93e4df1bd..ff1ec801e 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -145,6 +145,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..696ed277a
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take crypto-op as an input and extract pointer to related ipsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond the mbufs of the last valid
+ * group. It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalize it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 4d4f46e4f..ee9f1961b 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,12 +1,14 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 	rte_ipsec_session_prepare;
 
 	local: *;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 09/10] test/ipsec: introduce functional test
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (9 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 10/10] doc: add IPsec library guide Konstantin Ananyev
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev
  Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal, Bernard Iremonger

Create functional test for librte_ipsec.
Note that the test requires null crypto pmd to pass successfully.
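
For reference, one way to run it (assuming the null crypto PMD is built;
the exact test binary path depends on your build setup):

	./build/app/test --vdev=crypto_null
	RTE>>ipsec_autotest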

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2555 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2561 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 5a4816fed..9e45baf7a 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -50,6 +50,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -117,6 +118,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -182,6 +184,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..d1625af1f
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2555 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - obuf and ibuf usually point at the same mbuf,
+		 * so a check whether they reference the same address is
+		 * necessary to avoid freeing the mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capablity(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capablity(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, pool, j);
+}
+
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+
+	if (ut_params->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+		ut_params->crypto_xforms = &ut_params->auth_xform;
+		ut_params->auth_xform.next = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = NULL;
+	} else {
+		ut_params->crypto_xforms = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = &ut_params->auth_xform;
+		ut_params->auth_xform.next = NULL;
+	}
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa*/
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc == 0)
+		rc = rte_ipsec_session_prepare(&ut->ss[j]);
+
+	return rc;
+}
+
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+lksd_proto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+
+	/* check crypto ops */
+	for (i = 0; i != num_pkts; i++) {
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->type,
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			"%s: invalid crypto op type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->status,
+			RTE_CRYPTO_OP_STATUS_NOT_PROCESSED,
+			"%s: invalid crypto op status for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sess_type,
+			RTE_CRYPTO_OP_SECURITY_SESSION,
+			"%s: invalid crypto op sess_type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sym->m_src,
+			ut_params->ibuf[i],
+			"%s: invalid crypto op m_src for %u-th packet\n",
+			__func__, i);
+	}
+
+	/* update crypto ops, pretend all finished ok */
+	for (i = 0; i != num_pkts; i++)
+		ut_params->cop[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* group packets into 4 different-size groups, 2 per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j = PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%" PRIx64 "\n", test_cfg[i].flags);
+	printf("	pkt_sz %zu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+
+		/* check mbuf ol_flags */
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
+			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_encrypted_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = lksd_proto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "%s failed, cfg %d\n",
+				__func__, i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
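+				/* expected failure: seq number outside the replay window */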
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
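+				/* expected failure: duplicate seq number within the replay window */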
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		r = j % 2;
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite  = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v5 10/10] doc: add IPsec library guide
  2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
                           ` (10 preceding siblings ...)
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 09/10] test/ipsec: introduce functional test Konstantin Ananyev
@ 2018-12-28 15:17         ` Konstantin Ananyev
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2018-12-28 15:17 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev, Bernard Iremonger

Add IPsec library guide and update release notes.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/prog_guide/index.rst        |   1 +
 doc/guides/prog_guide/ipsec_lib.rst    | 168 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_19_02.rst |  11 ++
 3 files changed, 180 insertions(+)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst

diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ba8c1f6ad..6726b1e8d 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -54,6 +54,7 @@ Programmer's Guide
     vhost_lib
     metrics_lib
     bpf_lib
+    ipsec_lib
     source_org
     dev_kit_build_system
     dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
new file mode 100644
index 000000000..e50d357c8
--- /dev/null
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -0,0 +1,168 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+IPsec Packet Processing Library
+===============================
+
+DPDK provides a library for IPsec data-path processing.
+The library utilizes the existing DPDK crypto-dev and
+security API to provide the application with a transparent,
+high-performance IPsec packet processing API.
+The library concentrates on data-path protocol processing
+(ESP and AH); IKE protocol implementation is out of scope
+for this library.
+
+SA level API
+------------
+
+This API operates on the IPsec Security Association (SA) level.
+It provides functionality that allows the user, for a given SA, to
+process inbound and outbound IPsec packets.
+
+To be more specific:
+
+*  for inbound ESP/AH packets perform decryption, authentication, integrity checking, remove ESP/AH related headers
+*  for outbound packets perform payload encryption, attach ICV, update/add IP headers, add ESP/AH headers/trailers,
+   and set up related mbuf fields (ol_flags, tx_offloads, etc.)
+*  initialize/un-initialize a given SA based on user provided parameters.
+
+The SA level API is built on top of the crypto-dev/security API and
+relies on them to perform the actual cipher and integrity checking.
+
+Due to the nature of the crypto-dev API (enqueue/dequeue model) the library
+introduces an asynchronous API for IPsec packets destined to be processed by
+the crypto-device.
+
+The expected API call sequence for data-path processing would be:
+
+.. code-block:: c
+
+    /* enqueue for processing by crypto-device */
+    rte_ipsec_pkt_crypto_prepare(...);
+    rte_cryptodev_enqueue_burst(...);
+    /* dequeue from crypto-device and do final processing (if any) */
+    rte_cryptodev_dequeue_burst(...);
+    rte_ipsec_pkt_crypto_group(...); /* optional */
+    rte_ipsec_pkt_process(...);
+
+For packets destined for inline processing no extra overhead
+is required and the synchronous API call ``rte_ipsec_pkt_process()``
+is sufficient for that case.
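+
+As a minimal sketch (assuming ``ss`` is an already prepared
+``struct rte_ipsec_session`` for an inline-capable SA and ``mb`` holds a
+burst of ``num`` packets), the inline data-path reduces to:
+
+.. code-block:: c
+
+    /* synchronous processing for inline crypto/protocol sessions */
+    uint16_t n = rte_ipsec_pkt_process(&ss, mb, num);
+    if (n != num) {
+        /* handle the (num - n) packets that failed IPsec processing */
+    }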
+
+.. note::
+
+    For more details about the IPsec API, please refer to the *DPDK API Reference*.
+
+The current implementation supports all four currently defined
+rte_security types:
+
+RTE_SECURITY_ACTION_TYPE_NONE
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - check SQN
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that integrity check and decryption performed by crypto device
+    completed successfully
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
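+
+A minimal sketch of that lookaside flow (error handling and the usual
+dequeue polling loop omitted; ``BURST_SZ``, ``ss``, ``mb``, ``cop``,
+``dev_id`` and ``qp_id`` are placeholder names for objects set up by the
+application) could look like:
+
+.. code-block:: c
+
+    struct rte_ipsec_group grp[BURST_SZ];
+    uint16_t k, ng, i;
+
+    /* fill crypto ops for a burst of 'num' packets */
+    k = rte_ipsec_pkt_crypto_prepare(&ss, mb, cop, num);
+
+    /* pass the crypto ops through the crypto device */
+    k = rte_cryptodev_enqueue_burst(dev_id, qp_id, cop, k);
+    k = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, k);
+
+    /* regroup mbufs into per-session bursts and finalize processing */
+    ng = rte_ipsec_pkt_crypto_group(
+        (const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, k);
+    for (i = 0; i != ng; i++)
+        rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);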
+
+RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - verify that integrity check and decryption performed by the
+    *rte_security* device completed successfully
+  - check SQN
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline-crypto processing has to be performed by HW on this packet
+  - invoke *rte_security* device specific *set_pkt_metadata()* to associate
+    security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - verify that integrity check and decryption performed by the
+    *rte_security* device completed successfully
+
+* for outbound packets:
+
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline-crypto processing has to be performed by HW on this packet
+  - invoke *rte_security* device specific *set_pkt_metadata()* to associate
+    security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that integity check and decryption performed by crypto device
+    completed successfully
+
+* for outbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
+
+To accommodate future custom implementations, a function pointer
+model is used for both the *crypto_prepare* and *process*
+implementations.
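+
+For illustration, a minimal sketch of SA and session setup (error
+checks omitted; ``prm`` is assumed to be an already filled
+``struct rte_ipsec_sa_prm``, and the session type plus the underlying
+crypto/security session are assumed to be set by the application before
+the prepare step):
+
+.. code-block:: c
+
+    struct rte_ipsec_session ss = { 0 };
+    struct rte_ipsec_sa *sa;
+
+    /* query the required size and allocate the SA object */
+    int sz = rte_ipsec_sa_size(&prm);
+    sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+
+    /* initialize the SA and bind it to the session */
+    int rc = rte_ipsec_sa_init(sa, &prm, sz);
+    ss.sa = sa;
+
+    /* ss.type and the crypto/security session are assumed set here */
+
+    /* fills in the function pointers used by the data-path routines */
+    rc = rte_ipsec_session_prepare(&ss);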
+
+
+Supported features
+------------------
+
+*  ESP protocol tunnel mode both IPv4/IPv6.
+
+*  ESP protocol transport mode both IPv4/IPv6.
+
+*  ESN and replay window.
+
+*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
+
+
+Limitations
+-----------
+
+The following features are not properly supported in the current version:
+
+*  ESP transport mode for IPv6 packets with extension headers.
+*  Multi-segment packets.
+*  Updates of the fields in inner IP header for tunnel mode
+   (as described in RFC 4301, section 5.1.2).
+*  Hard/soft limit for SA lifetime (time interval/byte count).
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index 22c2dff4e..1a9885c44 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -105,6 +105,17 @@ New Features
   Added a new performance test tool to test the compressdev PMD. The tool tests
   compression ratio and compression throughput.
 
+* **Added IPsec Library.**
+
+  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
+  transport support for IPv4 and IPv6 packets.
+
+  At present the library supports only the AES-CBC, AES-CBC with HMAC-SHA1
+  algorithm-chaining, AES-GCM, and NULL algorithms. It is planned to add
+  more algorithms in future releases.
+
+  See :doc:`../prog_guide/ipsec_lib` for more information.
+
 
 Removed Items
 -------------
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 00/10] ipsec: new library for IPsec data-path processing
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-11  1:09             ` Xu, Yanjie
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (9 subsequent siblings)
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

v5 -> v6
 - Fix issues reported by Akhil:
     rte_ipsec_session_prepare() fails for lookaside-proto

v4 -> v5
 - Fix issue with SQN overflows
 - Address Akhil comments:
     documentation update
     spell checks, spacing, etc.
     fix input crypto_xform check/process
     test cases for lookaside and inline proto

v3 -> v4
 - Changes to address Declan's comments
 - Update docs

v2 -> v3
 - Several fixes for IPv6 support
 - Extra checks for input parameters in public API functions

v1 -> v2
 - Changes to take into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch introduces a new library within DPDK: librte_ipsec.
The aim is to provide a DPDK-native, high-performance library for IPsec
data-path processing.
The library is supposed to utilize the existing DPDK crypto-dev and
security API to provide applications with a transparent IPsec
processing API.
The library concentrates on data-path protocol processing
(ESP and AH); IKE protocol(s) implementation is out of scope
for this library.
The current patch introduces the SA-level API.

SA (low) level API
==================

The API described below operates at the SA level.
It provides functionality that allows the user, for a given SA, to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  set up related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize a given SA based on user-provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for the SA-level API.

The SA-level API is built on top of the crypto-dev/security API and relies
on them to perform the actual cipher and integrity checking.
To make it possible to easily map crypto/security sessions to the related
IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
asynchronous API for IPsec packets destined to be processed by
crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);
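
On the dequeue side rte_ipsec_pkt_crypto_group() can be used to sort
completed crypto-ops back into per-session groups of packets, e.g.
(a rough sketch; the cop/mb/grp arrays are provided by the application):

  n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, num);
  ng = rte_ipsec_pkt_crypto_group(cop, mb, grp, n);
  for (i = 0; i != ng; i++) {
      /* grp[i].id.ptr holds the rte_ipsec_session of the group */
      k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
      /* the first k mbufs in grp[i].m are valid; handle the rest */
  }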

For packets destined for inline processing, however, no extra overhead
is required and a synchronous API call, rte_ipsec_pkt_process(),
is sufficient for that case.

The current implementation supports all four currently defined
rte_security types.
To accommodate future custom implementations, a function-pointer
model is used for both *crypto_prepare* and *process*
implementations.

Konstantin Ananyev (10):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test
  doc: add IPsec library guide

 MAINTAINERS                            |    8 +-
 config/common_base                     |    5 +
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/prog_guide/ipsec_lib.rst    |  168 ++
 doc/guides/rel_notes/release_19_02.rst |   11 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  154 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  174 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1527 ++++++++++++++
 lib/librte_ipsec/sa.h                  |  106 +
 lib/librte_ipsec/ses.c                 |   52 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2555 ++++++++++++++++++++++++
 27 files changed, 5583 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-04  0:25             ` Stephen Hemminger
                               ` (11 more replies)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
                             ` (8 subsequent siblings)
  10 siblings, 12 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined
data with the session.
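
librte_ipsec, for example, uses it to map a session back to its
rte_ipsec_session object. A generic sketch (struct my_ctx is a
hypothetical application structure):

  ses->opaque_data = (uintptr_t)ctx;   /* ctx is a struct my_ctx * */
  ...
  ctx = (struct my_ctx *)(uintptr_t)ses->opaque_data;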

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_cryptodev/rte_cryptodev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 02/10] security: add opaque userdata pointer into security session
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 03/10] net: add ESP trailer structure definition Konstantin Ananyev
                             ` (7 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_security/rte_security.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 718147e00..c8e438fdd 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -317,6 +317,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 03/10] net: add ESP trailer structure definition
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (2 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 04/10] lib: introduce ipsec library Konstantin Ananyev
                             ` (6 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Define the esp_tail structure.
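
For example, after decryption the trailer can be located at the end of
the ESP payload (a sketch; plen is assumed to be the payload length
without the ICV):

  struct esp_tail *espt;

  espt = rte_pktmbuf_mtod_offset(mb, struct esp_tail *,
          plen - sizeof(*espt));
  /* espt->pad_len padding bytes precede the trailer;
   * espt->next_proto identifies the encapsulated protocol */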

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 04/10] lib: introduce ipsec library
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (3 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 03/10] net: add ESP trailer structure definition Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 05/10] ipsec: add SA data-path API Konstantin Ananyev
                             ` (5 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize the existing DPDK crypto-dev and
security API to provide applications with a transparent IPsec processing API.
This initial commit provides a base API to manage
IPsec Security Association (SA) objects.
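
Expected usage could look like the sketch below (a minimal example;
prm is assumed to be filled in by the caller, the allocation method is
up to the application, error handling omitted):

  struct rte_ipsec_sa_prm prm;
  struct rte_ipsec_sa *sa;
  int32_t sz;

  /* fill prm: ipsec_xform, crypto_xform, tunnel/transport params, ... */
  sz = rte_ipsec_sa_size(&prm);
  sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
  sz = rte_ipsec_sa_init(sa, &prm, sz);
  ...
  rte_ipsec_sa_fini(sa);
  rte_free(sa);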

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                            |   8 +-
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 141 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 335 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  85 +++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 12 files changed, 671 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 470f36b9c..9ce636be6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1036,6 +1036,13 @@ M: Jiayu Hu <jiayu.hu@intel.com>
 F: lib/librte_gso/
 F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
 
+IPsec - EXPERIMENTAL
+M: Konstantin Ananyev <konstantin.ananyev@intel.com>
+T: git://dpdk.org/next/dpdk-next-crypto
+F: lib/librte_ipsec/
+M: Bernard Iremonger <bernard.iremonger@intel.com>
+F: test/test/test_ipsec.c
+
 Flow Classify - EXPERIMENTAL
 M: Bernard Iremonger <bernard.iremonger@intel.com>
 F: lib/librte_flow_classify/
@@ -1077,7 +1084,6 @@ F: doc/guides/prog_guide/pdump_lib.rst
 F: app/pdump/
 F: doc/guides/tools/pdump.rst
 
-
 Packet Framework
 ----------------
 M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
diff --git a/config/common_base b/config/common_base
index 0e3f900c5..14ad0b7bf 100644
--- a/config/common_base
+++ b/config/common_base
@@ -934,6 +934,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index 8dbdc9bca..d6239d27c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -107,6 +107,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..0e2868d26
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_net -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..1935f6e30
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * For a given window size, calculate the required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size for the
+ * structure that holds replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..d99028c2c
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	/** crypto session configuration */
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	/**
+	 * Window size to enable sequence replay attack handling.
+	 * Replay checking is disabled if the window size is 0.
+	 */
+	uint32_t replay_win_sz;
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG2_IPV,
+	RTE_SATP_LOG2_PROTO,
+	RTE_SATP_LOG2_DIR,
+	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG2_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG2_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG2_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
+
+/**
+ * get type of given SA
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise the SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   size of the provided buffer for SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * cleanup SA
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..f5c893875
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+/*
+ * helper routine, fills internal crypto_xform structure.
+ */
+static int
+fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf, *xfn;
+
+	memset(xform, 0, sizeof(*xform));
+
+	xf = prm->crypto_xform;
+	if (xf == NULL)
+		return -EINVAL;
+
+	xfn = xf->next;
+
+	/* for AEAD just one xform required */
+	if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		if (xfn != NULL)
+			return -EINVAL;
+		xform->aead = &xf->aead;
+	/*
+	 * CIPHER+AUTH xforms are expected in strict order,
+	 * depending on SA direction:
+	 * inbound: AUTH+CIPHER
+	 * outbound: CIPHER+AUTH
+	 */
+	} else if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		/* wrong order or no cipher */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return -EINVAL;
+
+		xform->auth = &xf->auth;
+		xform->cipher = &xfn->cipher;
+
+	} else {
+
+		/* wrong order or no auth */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+			return -EINVAL;
+
+		xform->cipher = &xf->cipher;
+		xform->auth = &xfn->auth;
+	}
+
+	return 0;
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static int
+fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else if (prm->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV6)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+		else
+			return -EINVAL;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else if (prm->ipsec_xform.mode ==
+			RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else
+		return -EINVAL;
+
+	*type = tp;
+	return 0;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = IPSEC_PAD_AES_GCM;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = IPSEC_PAD_NULL;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_PAD_AES_CBC;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+	int32_t rc;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, type, prm);
+	if (rc != 0)
+		return rc;
+
+	/* initialize SA */
+
+	memset(sa, 0, sz);
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0) {
+		rte_ipsec_sa_fini(sa);
+		return rc;
+	}
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..492521930
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* padding alignment for different algorithms */
+enum {
+	IPSEC_PAD_DEFAULT = 4,
+	IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
+	IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+	IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+};
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index a2dd52e17..179c2ef37 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	#ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 02e8b6f05..3fcfa58f7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 05/10] ipsec: add SA data-path API
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (4 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 04/10] lib: introduce ipsec library Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 06/10] ipsec: implement " Konstantin Ananyev
                             ` (4 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal

Introduce the Security Association (SA-level) data-path API.
It operates at the SA level and provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).
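
Expected usage could look like the sketch below (the crypto session is
assumed to be created and initialised by the application beforehand;
error handling omitted):

  struct rte_ipsec_session ss = { .sa = sa };

  ss.type = RTE_SECURITY_ACTION_TYPE_NONE;
  ss.crypto.ses = crypto_ses;
  rte_ipsec_session_prepare(&ss);
  /* ss.pkt_func is now set up; ss can be used with
   * rte_ipsec_pkt_crypto_prepare()/rte_ipsec_pkt_process() */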

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 152 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  52 +++++++++
 7 files changed, 235 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 0e2868d26..71e39df0b 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..93e4df1bd
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them or process packets that are subjects to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that inside given rte_ipsec_session crypto/security fields
+ * are filled correctly and sets up function pointers based on these values.
+ * Expects that all fields except IPsec processing function pointers
+ * (*pkt_func*) will be filled correctly by caller.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and a given IPsec session, prepare crypto ops that can be
+ * enqueued into the cryptodev associated with the given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them or
+ * process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up; if necessary, IP headers are updated and ESP(AH)
+ * fields are added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..4d4f46e4f 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,10 +1,13 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_session_prepare;
 
 	local: *;
 };
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index f5c893875..5465198ac 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -333,3 +333,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 492521930..616cf1b9f 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -82,4 +82,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..11580970e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else {
+		if (ss->security.ses == NULL)
+			return -EINVAL;
+		if ((ss->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
+				ss->type ==
+				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) &&
+				ss->security.ctx == NULL)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 06/10] ipsec: implement SA data-path API
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (5 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 05/10] ipsec: add SA data-path API Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                             ` (3 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types the SQN check/update is done by SW (inside the
library). For the last two types it is the HW/PMD's responsibility.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/crypto.h    |  123 ++++
 lib/librte_ipsec/iph.h       |   84 +++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1133 +++++++++++++++++++++++++++++++++-
 5 files changed, 1569 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..61f5c1433
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+ /*
+  * AES-GCM devices have some specific requirements for IV and AAD formats.
+  * Ideally that would be done by the driver itself.
+  */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else {
+		aad->sqn.u32[0] = sqn_low32(sqn);
+		aad->sqn.u32[1] = 0;
+	}
+	aad->align0 = 0;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves the ICV 4B towards the packet end and
+ * inserts the SQN high bits just before it.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves the ICV 4B towards the packet start and
+ * removes the SQN high bits.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..58930cf18
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+/* update original ip header fields for transport case */
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+/* update original and new ip header fields for tunnel case */
+static inline void
+update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, rte_be16_t pid)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		v4h = p;
+		v4h->packet_id = pid;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else {
+		v6h = p;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	}
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 1935f6e30..6e18c34eb 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * Gets SQN.hi32 bits; SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * Gets SQN.low32 bits; SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * Gets SQN.low16 bits; SQN is supposed to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * For a given window size, calculate the required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC 4303 A2.1, determine the high-order bits of the
+ * sequence number. Uses 32-bit arithmetic inside, returns uint64_t.
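+ * E.g. with t == 0x1_0000_0010 and w == 64 the window spans two
+ * subspaces (bl == 0xFFFFFFD1 in 32-bit arithmetic): a received
+ * sqn == 0xFFFFFFF0 belongs to the previous subspace and is
+ * reconstructed as 0x0_FFFFFFF0, while sqn == 0x5 yields 0x1_0000_0005.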
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To achieve multiple readers/single writer semantics for the
+ * SA replay window information and sequence number (RSN),
+ * a basic RCU schema is used:
+ * the SA has 2 copies of the RSN (one for readers, another for the writer).
+ * Each RSN contains an rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and the writer.
+ * The writer is responsible for making a copy of the reader RSN,
+ * updating it and marking the newly updated RSN as the reader one.
+ * That approach is intended to minimize contention and cache sharing
+ * between the writer and readers.
+ */
+
 /**
  * Based on the number of buckets, calculate the required size for the
  * structure that holds replay window and sequence number (RSN) information.
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 5465198ac..d263e7bcf 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -101,6 +105,9 @@ rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
 	memset(sa, 0, sa->size);
 }
 
+/*
+ * Determine expected SA type based on input parameters.
+ */
 static int
 fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 {
@@ -155,6 +162,9 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	return 0;
 }
 
+/*
+ * Init ESP inbound specific things.
+ */
 static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
@@ -165,6 +175,9 @@ esp_inb_init(struct rte_ipsec_sa *sa)
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
 }
 
+/*
+ * Init ESP inbound tunnel specific things.
+ */
 static void
 esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -172,6 +185,9 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_inb_init(sa);
 }
 
+/*
+ * Init ESP outbound specific things.
+ */
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
@@ -190,6 +206,9 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 	}
 }
 
+/*
+ * Init ESP outbound tunnel specific things.
+ */
 static void
 esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -201,6 +220,9 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_outb_init(sa, sa->hdr_len);
 }
 
+/*
+ * helper function, init SA structure.
+ */
 static int
 esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	const struct crypto_xform *cxf)
@@ -212,6 +234,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -334,18 +357,1124 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+/*
+ * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices.
+ */
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP outbound packet.
+ */
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound tunnel case.
+ */
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* size of ipsec protected data */
+	l2len = mb->l2_len;
+	plen = mb->pkt_len - mb->l2_len;
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
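+	/*
+	 * e.g. (hypothetical numbers): plen = 100, pad_align = 4,
+	 * icv_len = 16 gives clen = 104, pdlen = 4 (2 pad bytes plus
+	 * the 2-byte esp tail) and tlen = 20 appended bytes in total.
+	 */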
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
+			sqn_low16(sqc));
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound tunnel case.
+ */
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound transport case.
+ */
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update ip header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound transport case.
+ */
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP inbound tunnel packet.
+ */
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP inbound tunnel case.
+ */
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* we have to allocate space for AAD somewhere,
+	 * right now - just use free trailing space at the last segment.
+	 * Would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (as space for the IV is already reserved inside the cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP inbound case.
+ */
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ *  setup crypto ops for LOOKASIDE_PROTO type of devices.
+ */
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+/*
+ *  setup packets and crypto ops for LOOKASIDE_PROTO type of devices.
+ *  Note that for LOOKASIDE_PROTO all packet modifications will be
+ *  performed by PMD/HW.
+ *  SW has only to prepare crypto op.
+ */
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+/*
+ * process ESP inbound tunnel packet.
+ */
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+/*
+ * process ESP inbound transport packet.
+ */
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+/*
+ * for group of ESP inbound packets perform SQN check and update.
+ */
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound tunnel packets.
+ */
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound transport packets.
+ */
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing is already done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+/*
+ * process group of ESP outbound tunnel packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * process group of ESP outbound transport packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
+/*
+ * Select packet processing function for session on LOOKASIDE_NONE
+ * type of device.
+ */
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for session on INLINE_CRYPTO
+ * type of device.
+ */
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for given session based on SA parameters
+ * and the type of device associated with the session.
+ */
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 07/10] ipsec: rework SA replay window/SQN for MT environment
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (6 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 06/10] ipsec: implement " Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
                             ` (2 subsequent siblings)
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

With these changes functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
 can be safely used in an MT environment, as long as the user can
 guarantee that they obey the multiple readers/single writer model for
 SQN+replay_window operations.
 To be more specific:
 for outbound SA there are no restrictions.
 for inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_pkt_process() for a given SA.
 Note that it is the caller's responsibility to maintain the correct
 order of packets to be processed.
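
For illustration, a minimal sketch (not part of this patch; app_sa, its
lock field and embedded rte_ipsec_session are hypothetical
application-side names) of one way to serialize inbound processing:

	static uint16_t
	app_inb_process(struct app_sa *psa, struct rte_mbuf *mb[],
		uint16_t num)
	{
		uint16_t k;

		/* assumed per-SA lock: guarantees that only one thread
		 * executes process() for this inbound SA at any moment */
		rte_spinlock_lock(&psa->lock);
		k = rte_ipsec_pkt_process(&psa->ss, mb, num);
		rte_spinlock_unlock(&psa->lock);
		return k;
	}

For an outbound SA created with RTE_IPSEC_SAFLAG_SQN_ATOM no such lock
is needed, as the SQN is advanced atomically.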

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  33 ++++++++++
 lib/librte_ipsec/sa.c           |  80 +++++++++++++++++-----
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 225 insertions(+), 22 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 6e18c34eb..7de10bef5 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits; SQN is expected to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index d99028c2c..7802da3b1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -55,6 +55,27 @@ struct rte_ipsec_sa_prm {
 	uint32_t replay_win_sz;
 };
 
+/**
+ * Indicates whether the SA will need 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means:
+ * functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in an MT environment, as long as the user can
+ * guarantee that they obey the multiple readers/single writer model for
+ * SQN+replay_window operations.
+ * To be more specific:
+ * for outbound SA there are no restrictions.
+ * for inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_pkt_process() for a given SA.
+ * Note that it is the caller's responsibility to maintain the correct
+ * order of packets to be processed.
+ * In other words, it is the caller's responsibility to serialize
+ * process() invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is an 64-bit value that contain the following information:
  * - IP version (IPv4/IPv6)
@@ -62,6 +83,8 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
+ * - ESN enabled/disabled
  * ...
  */
 
@@ -70,6 +93,8 @@ enum {
 	RTE_SATP_LOG2_PROTO,
 	RTE_SATP_LOG2_DIR,
 	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+	RTE_SATP_LOG2_ESN,
 	RTE_SATP_LOG2_NUM
 };
 
@@ -90,6 +115,14 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG2_SQN)
+
+#define RTE_IPSEC_SATP_ESN_MASK		(1ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_DISABLE	(0ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_ENABLE	(1ULL << RTE_SATP_LOG2_ESN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index d263e7bcf..8d4ce1ac6 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -80,21 +80,37 @@ rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
 }
 
 static int32_t
-ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+ipsec_sa_size(uint64_t type, uint32_t *wnd_sz, uint32_t *nb_bucket)
 {
-	uint32_t n, sz;
+	uint32_t n, sz, wsz;
 
+	wsz = *wnd_sz;
 	n = 0;
-	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
-			RTE_IPSEC_SATP_DIR_IB)
-		n = replay_num_bucket(wsz);
+
+	if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		/*
+		 * RFC 4303 recommends 64 as minimum window size.
+		 * there is no point in using ESN mode without an SQN window,
+		 * so make sure the window is at least 64 when ESN is enabled.
+		 */
+		wsz = ((type & RTE_IPSEC_SATP_ESN_MASK) ==
+			RTE_IPSEC_SATP_ESN_DISABLE) ?
+			wsz : RTE_MAX(wsz, (uint32_t)WINDOW_BUCKET_SIZE);
+		if (wsz != 0)
+			n = replay_num_bucket(wsz);
+	}
 
 	if (n > WINDOW_BUCKET_MAX)
 		return -EINVAL;
 
+	*wnd_sz = wsz;
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -158,6 +174,18 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	} else
 		return -EINVAL;
 
+	/* check for ESN flag */
+	if (prm->ipsec_xform.options.esn == 0)
+		tp |= RTE_IPSEC_SATP_ESN_DISABLE;
+	else
+		tp |= RTE_IPSEC_SATP_ESN_ENABLE;
+
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	*type = tp;
 	return 0;
 }
@@ -191,7 +219,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -277,11 +305,26 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return 0;
 }
 
+/*
+ * helper function, init SA replay structure.
+ */
+static void
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+{
+	sa->replay.win_sz = wnd_sz;
+	sa->replay.nb_bucket = nb_bucket;
+	sa->replay.bucket_index_mask = nb_bucket - 1;
+	sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+	if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+			((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+}
+
 int __rte_experimental
 rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 {
 	uint64_t type;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	int32_t rc;
 
 	if (prm == NULL)
@@ -293,7 +336,8 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 		return rc;
 
 	/* determine required size */
-	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	return ipsec_sa_size(type, &wsz, &nb);
 }
 
 int __rte_experimental
@@ -301,7 +345,7 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	uint32_t size)
 {
 	int32_t rc, sz;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	uint64_t type;
 	struct crypto_xform cxf;
 
@@ -314,7 +358,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		return rc;
 
 	/* determine required size */
-	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	sz = ipsec_sa_size(type, &wsz, &nb);
 	if (sz < 0)
 		return sz;
 	else if (size < (uint32_t)sz)
@@ -347,12 +392,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		rte_ipsec_sa_fini(sa);
 
 	/* fill replay window related fields */
-	if (nb != 0) {
-		sa->replay.win_sz = prm->replay_win_sz;
-		sa->replay.nb_bucket = nb;
-		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
-	}
+	if (nb != 0)
+		fill_sa_replay(sa, wsz, nb);
 
 	return sz;
 }
@@ -877,7 +918,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -896,6 +937,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -1058,7 +1101,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -1068,6 +1111,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 616cf1b9f..392e8fd7b 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -36,7 +38,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -74,10 +80,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case the SA is handled by multiple threads, the *sqn*
+	 * cacheline could be shared by multiple cores.
+	 * To minimise the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 08/10] ipsec: helper functions to group completed crypto-ops
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (7 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 09/10] test/ipsec: introduce functional test Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 10/10] doc: add IPsec library guide Konstantin Ananyev
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev; +Cc: akhil.goyal, Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.
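
For illustration, a minimal usage sketch (not part of this patch;
dev_id, qid and BURST_SZ are assumed application-side values, and
freeing of failed packets is left out):

	struct rte_crypto_op *cop[BURST_SZ];
	struct rte_mbuf *mb[BURST_SZ];
	struct rte_ipsec_group grp[BURST_SZ];
	uint16_t i, n, ng;

	n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, BURST_SZ);
	ng = rte_ipsec_pkt_crypto_group((const struct rte_crypto_op **)cop,
		mb, grp, n);

	for (i = 0; i != ng; i++) {
		/* grp[i].m points at grp[i].cnt mbufs for one session */
		struct rte_ipsec_session *ss = grp[i].id.ptr;

		grp[i].cnt = rte_ipsec_pkt_process(ss, grp[i].m, grp[i].cnt);
		/* mbufs beyond the returned count failed processing */
	}

Grouping amortizes the per-session process() call over all packets of
a dequeued burst that share the same session.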

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 71e39df0b..77506d6ad 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 93e4df1bd..ff1ec801e 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -145,6 +145,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..696ed277a
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take crypto-op as an input and extract pointer to related ipsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with an undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond the mbufs for the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalize it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalise last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 4d4f46e4f..ee9f1961b 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,12 +1,14 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 	rte_ipsec_session_prepare;
 
 	local: *;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 09/10] test/ipsec: introduce functional test
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (8 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 10/10] doc: add IPsec library guide Konstantin Ananyev
  10 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev, dev
  Cc: akhil.goyal, Konstantin Ananyev, Mohammad Abdul Awal, Bernard Iremonger

Create functional test for librte_ipsec.
Note that the test requires null crypto pmd to pass successfully.
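
For reference, a hedged sketch (an assumption about setup, not
necessarily how this test wires it up) of making the null crypto PMD
available programmatically; rte_bus_vdev.h and rte_cryptodev.h, which
the test already includes, provide the calls used:

	static int
	ensure_null_crypto_pmd(void)
	{
		/* instantiate the null crypto PMD unless already present */
		if (rte_cryptodev_get_dev_id("crypto_null") >= 0)
			return 0;
		return rte_vdev_init("crypto_null", NULL);
	}

Alternatively, the device can be supplied on the EAL command line.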

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2555 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2561 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 5a4816fed..9e45baf7a 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -50,6 +50,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -117,6 +118,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -182,6 +184,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..d1625af1f
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2555 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
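+/*
+ * Dummy security context and ops: let rte_security_session_create()
+ * and rte_security_session_destroy() succeed without a real
+ * security-capable device, so SAs with action types other than
+ * RTE_SECURITY_ACTION_TYPE_NONE can still be created by the tests.
+ */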
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Use the first of the valid devices found; set up queue pair 0 */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - obuf and ibuf usually point at the same
+		 * mbuf, so checking whether they share an address is
+		 * necessary to avoid freeing the same mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
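+/*
+ * Build a pre-formed ESP tunnel packet:
+ * outer IPv4 | ESP header (spi, seq) | payload | padding | ESP tail.
+ * Serves as input for the inbound tests and as the reference buffer
+ * for the outbound ones.
+ */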
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capability(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
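+/*
+ * Create a symmetric crypto session and initialize it on every
+ * cryptodev that supports the requested auth/cipher transforms.
+ * On failure, sessions already initialized on other devices are
+ * cleared again before the session itself is freed.
+ */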
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capability(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize the SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, pool, j);
+}
+
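+/*
+ * Build the SA crypto transform chain: for ingress the chain is
+ * auth (verify) -> cipher (decrypt), for egress it is cipher -> auth.
+ */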
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+
+	if (ut_params->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+		ut_params->crypto_xforms = &ut_params->auth_xform;
+		ut_params->auth_xform.next = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = NULL;
+	} else {
+		ut_params->crypto_xforms = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = &ut_params->auth_xform;
+		ut_params->auth_xform.next = NULL;
+	}
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa*/
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc == 0)
+		rc = rte_ipsec_session_prepare(&ut->ss[j]);
+
+	return rc;
+}
+
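+/*
+ * Run a burst through the full lookaside crypto path for one SA:
+ * crypto_prepare -> cryptodev enqueue -> cryptodev dequeue ->
+ * crypto_group -> process. With a single SA all completed ops are
+ * expected to fold back into exactly one group that belongs to ss[0].
+ */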
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
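+/*
+ * Lookaside-protocol path without a real device: after crypto_prepare
+ * the ops are only sanity-checked and then marked as successfully
+ * processed by hand, so grouping and processing can be verified
+ * independently of any PMD.
+ */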
+static int
+lksd_proto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+
+	/* check crypto ops */
+	for (i = 0; i != num_pkts; i++) {
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->type,
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			"%s: invalid crypto op type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->status,
+			RTE_CRYPTO_OP_STATUS_NOT_PROCESSED,
+			"%s: invalid crypto op status for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sess_type,
+			RTE_CRYPTO_OP_SECURITY_SESSION,
+			"%s: invalid crypto op sess_type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sym->m_src,
+			ut_params->ibuf[i],
+			"%s: invalid crypto op m_src for %u-th packet\n",
+			__func__, i);
+	}
+
+	/* update crypto ops, pretend all finished ok */
+	for (i = 0; i != num_pkts; i++)
+		ut_params->cop[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
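+/*
+ * Same flow as crypto_ipsec(), but packets alternate between two SAs.
+ * As neighbouring ops always belong to different sessions,
+ * rte_ipsec_pkt_crypto_group() is expected to return BURST_SIZE
+ * groups of one packet each.
+ */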
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
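+/*
+ * Map packet index to SA index so the burst splits into four
+ * contiguous groups, two per SA:
+ * [0..3] -> SA0, [4..11] -> SA1, [12..20] -> SA0, [21..31] -> SA1.
+ */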
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* split packets into 4 groups of different sizes, 2 groups per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j =  PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%" PRIx64 "\n", test_cfg[i].flags);
+	printf("	pkt_sz %zu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+
+		/* check mbuf ol_flags */
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
+			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate expected plain mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* plain payload; no ESP encapsulation at this stage */
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_encrypted_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = lksd_proto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "%s failed, cfg %d\n",
+				__func__, i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
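+	/*
+	 * Reuse the inbound test body: the lksd_proto flow marks ops
+	 * complete without touching packet data, so the same checks
+	 * apply in the egress direction.
+	 */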
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
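+/*
+ * Replay window, "inside" case: accept a packet with seq 1, then a
+ * packet with seq == replay_win_sz; both sequence numbers are valid
+ * for the window and must be accepted.
+ */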
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
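+/*
+ * Replay window, "outside" case: a packet with
+ * seq == replay_win_sz + 2 moves the window forward, after which
+ * seq 1 lies below the window and must be rejected. With ESN
+ * enabled the low sequence number may legitimately be taken as a
+ * wrap-around, which the checks below tolerate.
+ */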
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
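+/*
+ * Replay window, "repeat" case: the same sequence number (1) is
+ * sent twice; the second packet is a replay and must be rejected.
+ */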
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"replayed packet was not rejected, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"replayed packet rejected as expected, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data do not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data size");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
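+	/* this test only makes sense for a full-size burst; skip otherwise */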
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
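+		/* alternate packets between the two SAs (SPI, SPI + 1) */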
+		r = j % 2;
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
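+	/* this test only makes sense for a full-size burst; skip otherwise */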
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite  = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v6 10/10] doc: add IPsec library guide
  2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                             ` (9 preceding siblings ...)
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 09/10] test/ipsec: introduce functional test Konstantin Ananyev
@ 2019-01-03 20:16           ` Konstantin Ananyev
  2019-01-10  8:35             ` Thomas Monjalon
  10 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-03 20:16 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, Konstantin Ananyev, Bernard Iremonger

Add IPsec library guide and update release notes.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/prog_guide/index.rst        |   1 +
 doc/guides/prog_guide/ipsec_lib.rst    | 168 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_19_02.rst |  11 ++
 3 files changed, 180 insertions(+)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst

diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ba8c1f6ad..6726b1e8d 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -54,6 +54,7 @@ Programmer's Guide
     vhost_lib
     metrics_lib
     bpf_lib
+    ipsec_lib
     source_org
     dev_kit_build_system
     dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
new file mode 100644
index 000000000..e50d357c8
--- /dev/null
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -0,0 +1,168 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+IPsec Packet Processing Library
+===============================
+
+DPDK provides a library for IPsec data-path processing.
+The library utilizes the existing DPDK crypto-dev and
+security API to provide the application with a transparent,
+high-performance IPsec packet processing API.
+The library concentrates on data-path protocol processing
+(ESP and AH); IKE protocol implementation is out of scope
+for this library.
+
+SA level API
+------------
+
+This API operates on the IPsec Security Association (SA) level.
+It provides functionality that allows the user, for a given SA, to process
+inbound and outbound IPsec packets.
+
+To be more specific:
+
+*  for inbound ESP/AH packets perform decryption, authentication, integrity checking, remove ESP/AH related headers
+*  for outbound packets perform payload encryption, attach ICV, update/add IP headers, add ESP/AH headers/trailers,
+   setup related mbuf fields (ol_flags, tx_offloads, etc.).
+*  initialize/un-initialize given SA based on user provided parameters.
+
+The SA level API is built on top of the crypto-dev/security API and relies
+on it to perform the actual cipher and integrity checking.
+
+Due to the nature of the crypto-dev API (enqueue/dequeue model) the library
+introduces an asynchronous API for IPsec packets destined to be processed by
+the crypto-device.
+
+The expected API call sequence for data-path processing would be:
+
+.. code-block:: c
+
+    /* enqueue for processing by crypto-device */
+    rte_ipsec_pkt_crypto_prepare(...);
+    rte_cryptodev_enqueue_burst(...);
+    /* dequeue from crypto-device and do final processing (if any) */
+    rte_cryptodev_dequeue_burst(...);
+    rte_ipsec_pkt_crypto_group(...); /* optional */
+    rte_ipsec_pkt_process(...);
+
+For packets destined for inline processing no extra overhead
+is required, and the synchronous API call rte_ipsec_pkt_process()
+is sufficient for that case.
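+
+A minimal sketch of that synchronous path (illustrative only: ``ss`` is
+assumed to be an already initialized ``rte_ipsec_session`` bound to an
+inline-capable device, and ``mb[]`` to hold ``n`` packets for that SA):
+
+.. code-block:: c
+
+    uint16_t k;
+
+    /* process n packets for the given SA in one synchronous call */
+    k = rte_ipsec_pkt_process(ss, mb, n);
+
+    /* first k packets in mb[] were processed successfully;
+     * failed packets, if any, are grouped at the end of the array.
+     */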
+
+.. note::
+
+    For more details about the IPsec API, please refer to the *DPDK API Reference*.
+
+The current implementation supports all four currently defined
+rte_security action types:
+
+RTE_SECURITY_ACTION_TYPE_NONE
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform the following operations:
+
+* for inbound packets:
+
+  - check SQN
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that integrity check and decryption performed by crypto device
+    completed successfully
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
+
+RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform the following operations:
+
+* for inbound packets:
+
+  - verify that integrity check and decryption performed by *rte_security*
+    device completed successfully
+  - check SQN
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline-crypto processing has to be performed by HW on this packet
+  - invoke *rte_security* device specific *set_pkt_metadata()* to associate
+    security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform the following operations:
+
+* for inbound packets:
+
+  - verify that integrity check and decryption performed by *rte_security*
+    device completed successfully
+
+* for outbound packets:
+
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline processing has to be performed by HW on this packet
+  - invoke *rte_security* device specific *set_pkt_metadata()* to associate
+    security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform the following operations:
+
+* for inbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that integrity check and decryption performed by crypto device
+    completed successfully
+
+* for outbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
+
+To accommodate future custom implementations, a function pointer model
+is used for both the *crypto_prepare* and *process* implementations.
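+
+A sketch of how such a dispatch table may look (names follow
+``struct rte_ipsec_sa_pkt_func`` in ``rte_ipsec.h``; see that header
+for the authoritative definition):
+
+.. code-block:: c
+
+    struct rte_ipsec_sa_pkt_func {
+        uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+                struct rte_mbuf *mb[],
+                struct rte_crypto_op *cop[],
+                uint16_t num);
+        uint16_t (*process)(const struct rte_ipsec_session *ss,
+                struct rte_mbuf *mb[],
+                uint16_t num);
+    };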
+
+
+Supported features
+------------------
+
+*  ESP protocol tunnel mode both IPv4/IPv6.
+
+*  ESP protocol transport mode both IPv4/IPv6.
+
+*  ESN and replay window.
+
+*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
+
+
+Limitations
+-----------
+
+The following features are not properly supported in the current version:
+
+*  ESP transport mode for IPv6 packets with extension headers.
+*  Multi-segment packets.
+*  Updates of the fields in inner IP header for tunnel mode
+   (as described in RFC 4301, section 5.1.2).
+*  Hard/soft limit for SA lifetime (time interval/byte count).
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index 22c2dff4e..1a9885c44 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -105,6 +105,17 @@ New Features
   Added a new performance test tool to test the compressdev PMD. The tool tests
   compression ratio and compression throughput.
 
+* **Added IPsec Library.**
+
+  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
+  transport support for IPv4 and IPv6 packets.
+
+  At present the library supports only AES-CBC ciphering, AES-CBC with
+  HMAC-SHA1 algorithm chaining, AES-GCM and the NULL algorithm. It is
+  planned to add more algorithms in future releases.
+
+  See :doc:`../prog_guide/ipsec_lib` for more information.
+
 
 Removed Items
 -------------
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2019-01-04  0:25             ` Stephen Hemminger
  2019-01-04  9:29               ` Ananyev, Konstantin
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                               ` (10 subsequent siblings)
  11 siblings, 1 reply; 194+ messages in thread
From: Stephen Hemminger @ 2019-01-04  0:25 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev, akhil.goyal

On Thu,  3 Jan 2019 20:16:17 +0000
Konstantin Ananyev <konstantin.ananyev@intel.com> wrote:

> Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> That allows upper layer to easily associate some user defined
> data with the session.
> 
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Fiona Trahe <fiona.trahe@intel.com>
> Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> Acked-by: Declan Doherty <declan.doherty@intel.com>
> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
> ---
>  lib/librte_cryptodev/rte_cryptodev.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> index 4099823f1..009860e7b 100644
> --- a/lib/librte_cryptodev/rte_cryptodev.h
> +++ b/lib/librte_cryptodev/rte_cryptodev.h
> @@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
>   * has a fixed algo, key, op-type, digest_len etc.
>   */
>  struct rte_cryptodev_sym_session {
> +	uint64_t opaque_data;
> +	/**< Opaque user defined data */
>  	__extension__ void *sess_private_data[0];
>  	/**< Private symmetric session material */
>  };

This will cause ABI breakage.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2019-01-04  0:25             ` Stephen Hemminger
@ 2019-01-04  9:29               ` Ananyev, Konstantin
  2019-01-09 23:41                 ` Thomas Monjalon
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2019-01-04  9:29 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, akhil.goyal



> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Friday, January 4, 2019 12:26 AM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; akhil.goyal@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session
> 
> On Thu,  3 Jan 2019 20:16:17 +0000
> Konstantin Ananyev <konstantin.ananyev@intel.com> wrote:
> 
> > Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> > That allows upper layer to easily associate some user defined
> > data with the session.
> >
> > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > Acked-by: Fiona Trahe <fiona.trahe@intel.com>
> > Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> > Acked-by: Declan Doherty <declan.doherty@intel.com>
> > Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
> > ---
> >  lib/librte_cryptodev/rte_cryptodev.h | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> > index 4099823f1..009860e7b 100644
> > --- a/lib/librte_cryptodev/rte_cryptodev.h
> > +++ b/lib/librte_cryptodev/rte_cryptodev.h
> > @@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> >   * has a fixed algo, key, op-type, digest_len etc.
> >   */
> >  struct rte_cryptodev_sym_session {
> > +	uint64_t opaque_data;
> > +	/**< Opaque user defined data */
> >  	__extension__ void *sess_private_data[0];
> >  	/**< Private symmetric session material */
> >  };
> 
> This will cause ABI breakage.

Yes, it surely would.
That's why we submitted deprecation notice in 18.11 and got 3 acks for it.
Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2019-01-04  9:29               ` Ananyev, Konstantin
@ 2019-01-09 23:41                 ` Thomas Monjalon
  0 siblings, 0 replies; 194+ messages in thread
From: Thomas Monjalon @ 2019-01-09 23:41 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: dev, Stephen Hemminger, akhil.goyal

04/01/2019 10:29, Ananyev, Konstantin:
> 
> > -----Original Message-----
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Friday, January 4, 2019 12:26 AM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> > Cc: dev@dpdk.org; akhil.goyal@nxp.com
> > Subject: Re: [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session
> > 
> > On Thu,  3 Jan 2019 20:16:17 +0000
> > Konstantin Ananyev <konstantin.ananyev@intel.com> wrote:
> > 
> > > Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
> > > That allows upper layer to easily associate some user defined
> > > data with the session.
> > >
> > > Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > Acked-by: Fiona Trahe <fiona.trahe@intel.com>
> > > Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
> > > Acked-by: Declan Doherty <declan.doherty@intel.com>
> > > Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
> > > ---
> > >  lib/librte_cryptodev/rte_cryptodev.h | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> > > diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
> > > index 4099823f1..009860e7b 100644
> > > --- a/lib/librte_cryptodev/rte_cryptodev.h
> > > +++ b/lib/librte_cryptodev/rte_cryptodev.h
> > > @@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
> > >   * has a fixed algo, key, op-type, digest_len etc.
> > >   */
> > >  struct rte_cryptodev_sym_session {
> > > +	uint64_t opaque_data;
> > > +	/**< Opaque user defined data */
> > >  	__extension__ void *sess_private_data[0];
> > >  	/**< Private symmetric session material */
> > >  };
> > 
> > This will cause ABI breakage.
> 
> Yes, it surely would.
> That's why we submitted deprecation notice in 18.11 and got 3 acks for it.

So you should remove the deprecation notice in this patch,
and bump the ABI version,
and update the release notes for ABI + version changes.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v6 10/10] doc: add IPsec library guide
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 10/10] doc: add IPsec library guide Konstantin Ananyev
@ 2019-01-10  8:35             ` Thomas Monjalon
  0 siblings, 0 replies; 194+ messages in thread
From: Thomas Monjalon @ 2019-01-10  8:35 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev, akhil.goyal, Bernard Iremonger

03/01/2019 21:16, Konstantin Ananyev:
> Add IPsec library guide and update release notes.
> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  doc/guides/prog_guide/index.rst        |   1 +
>  doc/guides/prog_guide/ipsec_lib.rst    | 168 +++++++++++++++++++++++++
>  doc/guides/rel_notes/release_19_02.rst |  11 ++
>  3 files changed, 180 insertions(+)
>  create mode 100644 doc/guides/prog_guide/ipsec_lib.rst

There are some warnings:

doc/guides/prog_guide/ipsec_lib.rst:91: WARNING:
	Inline emphasis start-string without end-string.
doc/guides/prog_guide/ipsec_lib.rst:116: WARNING:
	Inline emphasis start-string without end-string.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2019-01-04  0:25             ` Stephen Hemminger
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:25               ` Thomas Monjalon
  2019-01-10 14:51               ` Akhil Goyal
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (9 subsequent siblings)
  11 siblings, 2 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

v6 -> v7
- Changes to address Thomas comments:
    bump ABI version
    remove related deprecation notice
    update release notes, ABI changes section

v5 -> v6
 - Fix issues reported by Akhil:
     rte_ipsec_session_prepare() fails for lookaside-proto

v4 -> v5
 - Fix issue with SQN overflows
 - Address Akhil comments:
     documentation update
      spell checks, spacing, etc.
      fix input crypto_xform check/processing
     test cases for lookaside and inline proto

v3 -> v4
 - Changes to address Declan comments
 - Update docs

v2 -> v3
 - Several fixes for IPv6 support
 - Extra checks for input parameters in public APi functions

v1 -> v2
 - Changes to get into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch introduces a new library within DPDK: librte_ipsec.
The aim is to provide DPDK native high performance library for IPsec
data-path processing.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec
processing API.
The library is concentrated on data-path protocols processing
(ESP and AH), IKE protocol(s) implementation is out of scope
for that library.
Current patch introduces SA-level API.

SA level API
============

API described below operates on SA level.
It provides functionality that allows the user, for a given SA, to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is based on top of crypto-dev/security API and relies on them
to perform actual cipher and integrity checking.
To provide the ability to easily map crypto/security sessions to the
related IPsec SA, an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.
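
A minimal illustration of that mapping (illustrative only; 'sa' below
is assumed to be a pointer to the application's own SA representation):
  /* at session creation time: associate the SA with the session */
  ses->opaque_data = (uintptr_t)sa;
  /* at dequeue time: recover the SA from a completed crypto op */
  sa = (void *)(uintptr_t)cop->sym->session->opaque_data;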

Due to the nature of crypto-dev API (enqueue/dequeue model) we use
asynchronous API for IPsec packets destined to be processed by
crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

Though for packets destined for inline processing no extra overhead
is required and synchronous API call: rte_ipsec_pkt_process()
is sufficient for that case.

Current implementation supports all four currently defined
rte_security types.
Though to accommodate future custom implementations a function pointer
model is used for both the *crypto_prepare* and *process* implementations.

Konstantin Ananyev (10):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test
  doc: add IPsec library guide

 MAINTAINERS                            |    8 +-
 config/common_base                     |    5 +
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/prog_guide/ipsec_lib.rst    |  168 ++
 doc/guides/rel_notes/deprecation.rst   |    4 -
 doc/guides/rel_notes/release_19_02.rst |   20 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/Makefile          |    4 +-
 lib/librte_cryptodev/meson.build       |    4 +-
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  154 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  174 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1527 ++++++++++++++
 lib/librte_ipsec/sa.h                  |  106 +
 lib/librte_ipsec/ses.c                 |   52 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/Makefile           |    4 +-
 lib/librte_security/meson.build        |    3 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2555 ++++++++++++++++++++++++
 32 files changed, 5600 insertions(+), 13 deletions(-)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2019-01-04  0:25             ` Stephen Hemminger
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
                                 ` (9 more replies)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
                               ` (8 subsequent siblings)
  11 siblings, 10 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_cryptodev_sym_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 doc/guides/rel_notes/release_19_02.rst | 5 +++++
 lib/librte_cryptodev/Makefile          | 4 ++--
 lib/librte_cryptodev/meson.build       | 4 ++--
 lib/librte_cryptodev/rte_cryptodev.h   | 2 ++
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index 22c2dff4e..9aa482a84 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -178,6 +178,11 @@ ABI Changes
 * mbuf: The format of the sched field of ``rte_mbuf`` has been changed
   to include the following fields: ``queue ID``, ``traffic class``, ``color``.
 
+* cryptodev: New field ``uint64_t opaque_data`` is added into
+  ``rte_cryptodev_sym_session`` structure. That would allow upper layer to
+  easily associate/de-associate some user defined data with the
+  cryptodev session.
+
 
 Shared Library Versions
 -----------------------
diff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile
index a8f94c097..e38018183 100644
--- a/lib/librte_cryptodev/Makefile
+++ b/lib/librte_cryptodev/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015 Intel Corporation
+# Copyright(c) 2015-2019 Intel Corporation
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
@@ -7,7 +7,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_cryptodev.a
 
 # library version
-LIBABIVER := 5
+LIBABIVER := 6
 
 # build flags
 CFLAGS += -O3
diff --git a/lib/librte_cryptodev/meson.build b/lib/librte_cryptodev/meson.build
index 990dd3d44..44bd83212 100644
--- a/lib/librte_cryptodev/meson.build
+++ b/lib/librte_cryptodev/meson.build
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
 
-version = 5
+version = 6
 sources = files('rte_cryptodev.c', 'rte_cryptodev_pmd.c')
 headers = files('rte_cryptodev.h',
 	'rte_cryptodev_pmd.h',
diff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h
index 4099823f1..009860e7b 100644
--- a/lib/librte_cryptodev/rte_cryptodev.h
+++ b/lib/librte_cryptodev/rte_cryptodev.h
@@ -954,6 +954,8 @@ rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
  * has a fixed algo, key, op-type, digest_len etc.
  */
 struct rte_cryptodev_sym_session {
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 	__extension__ void *sess_private_data[0];
 	/**< Private symmetric session material */
 };
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 02/10] security: add opaque userdata pointer into security session
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (2 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 03/10] net: add ESP trailer structure definition Konstantin Ananyev
                               ` (7 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 doc/guides/rel_notes/deprecation.rst   | 4 ----
 doc/guides/rel_notes/release_19_02.rst | 4 ++++
 lib/librte_security/Makefile           | 4 ++--
 lib/librte_security/meson.build        | 3 ++-
 lib/librte_security/rte_security.h     | 2 ++
 5 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d4aea4b46..2ea1b86bc 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -71,10 +71,6 @@ Deprecation Notices
   - Member ``uint16_t min_mtu`` the minimum MTU allowed.
   - Member ``uint16_t max_mtu`` the maximum MTU allowed.
 
-* security: New field ``uint64_t opaque_data`` is planned to be added into
-  ``rte_security_session`` structure. That would allow upper layer to easily
-  associate/de-associate some user defined data with the security session.
-
 * cryptodev: several API and ABI changes are planned for rte_cryptodev
   in v19.02:
 
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index 9aa482a84..fafed0416 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -183,6 +183,10 @@ ABI Changes
   easily associate/de-associate some user defined data with the
   cryptodev session.
 
+* security: New field ``uint64_t opaque_data`` is added into
+  ``rte_security_session`` structure. That would allow upper layer to easily
+  associate/de-associate some user defined data with the security session.
+
 
 Shared Library Versions
 -----------------------
diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile
index bd92343bd..6708effdb 100644
--- a/lib/librte_security/Makefile
+++ b/lib/librte_security/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
@@ -7,7 +7,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_security.a
 
 # library version
-LIBABIVER := 1
+LIBABIVER := 2
 
 # build flags
 CFLAGS += -O3
diff --git a/lib/librte_security/meson.build b/lib/librte_security/meson.build
index 532953fcc..a5130d2f6 100644
--- a/lib/librte_security/meson.build
+++ b/lib/librte_security/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
 
+version = 2
 sources = files('rte_security.c')
 headers = files('rte_security.h', 'rte_security_driver.h')
 deps += ['mempool', 'cryptodev']
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 718147e00..c8e438fdd 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -317,6 +317,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 03/10] net: add ESP trailer structure definition
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (3 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 04/10] lib: introduce ipsec library Konstantin Ananyev
                               ` (6 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

Define the esp_tail structure.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 04/10] lib: introduce ipsec library
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (4 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 03/10] net: add ESP trailer structure definition Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 05/10] ipsec: add SA data-path API Konstantin Ananyev
                               ` (5 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize existing DPDK crypto-dev and
security API to provide application with transparent IPsec processing API.
That initial commit provides some base API to manage
IPsec Security Association (SA) object.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 MAINTAINERS                            |   8 +-
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 141 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 335 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  85 +++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 12 files changed, 671 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 470f36b9c..9ce636be6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1036,6 +1036,13 @@ M: Jiayu Hu <jiayu.hu@intel.com>
 F: lib/librte_gso/
 F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
 
+IPsec - EXPERIMENTAL
+M: Konstantin Ananyev <konstantin.ananyev@intel.com>
+T: git://dpdk.org/next/dpdk-next-crypto
+F: lib/librte_ipsec/
+M: Bernard Iremonger <bernard.iremonger@intel.com>
+F: test/test/test_ipsec.c
+
 Flow Classify - EXPERIMENTAL
 M: Bernard Iremonger <bernard.iremonger@intel.com>
 F: lib/librte_flow_classify/
@@ -1077,7 +1084,6 @@ F: doc/guides/prog_guide/pdump_lib.rst
 F: app/pdump/
 F: doc/guides/tools/pdump.rst
 
-
 Packet Framework
 ----------------
 M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
diff --git a/config/common_base b/config/common_base
index 964a6956e..6fbd1d86b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -934,6 +934,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index 8dbdc9bca..d6239d27c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -107,6 +107,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..0e2868d26
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_net -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..1935f6e30
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * For a given window size, calculate the required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
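+
+/*
+ * Worked example for replay_num_bucket() above (illustrative):
+ * for a 128-bit window, RTE_ALIGN_MUL_CEIL(128, 64) / 64 = 2 and
+ * rte_align32pow2(2) = 2, so two 64-bit buckets are used, which
+ * also satisfies WINDOW_BUCKET_MIN.
+ */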
+
+/**
+ * Based on the number of buckets, calculate the required size of the
+ * structure that holds the replay window and sequence number (RSN)
+ * information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..d99028c2c
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	/** crypto session configuration */
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	/**
+	 * window size to enable sequence replay attack handling.
+	 * replay checking is disabled if the window size is 0.
+	 */
+	uint32_t replay_win_sz;
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG2_IPV,
+	RTE_SATP_LOG2_PROTO,
+	RTE_SATP_LOG2_DIR,
+	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG2_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG2_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG2_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
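+
+/*
+ * Example (illustrative): an outbound ESP SA in IPv4 tunnel mode would
+ * have type (RTE_IPSEC_SATP_IPV4 | RTE_IPSEC_SATP_PROTO_ESP |
+ * RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4).
+ */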
+
+/**
+ * get type of given SA
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise the SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   size of the provided buffer for SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * cleanup SA
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
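+
+/*
+ * Typical SA object lifecycle (illustrative sketch; assumes *prm* was
+ * filled in by the caller, error handling omitted):
+ *
+ *	int32_t sz = rte_ipsec_sa_size(&prm);
+ *	struct rte_ipsec_sa *sa = rte_zmalloc(NULL, sz,
+ *		RTE_CACHE_LINE_SIZE);
+ *	rte_ipsec_sa_init(sa, &prm, sz);
+ *	...
+ *	rte_ipsec_sa_fini(sa);
+ *	rte_free(sa);
+ */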
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..f5c893875
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+/*
+ * helper routine, fills internal crypto_xform structure.
+ */
+static int
+fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf, *xfn;
+
+	memset(xform, 0, sizeof(*xform));
+
+	xf = prm->crypto_xform;
+	if (xf == NULL)
+		return -EINVAL;
+
+	xfn = xf->next;
+
+	/* for AEAD just one xform required */
+	if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		if (xfn != NULL)
+			return -EINVAL;
+		xform->aead = &xf->aead;
+	/*
+	 * CIPHER+AUTH xforms are expected in strict order,
+	 * depending on SA direction:
+	 * inbound: AUTH+CIPHER
+	 * outbound: CIPHER+AUTH
+	 */
+	} else if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		/* wrong order or no cipher */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return -EINVAL;
+
+		xform->auth = &xf->auth;
+		xform->cipher = &xfn->cipher;
+
+	} else {
+
+		/* wrong order or no auth */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+			return -EINVAL;
+
+		xform->cipher = &xf->cipher;
+		xform->auth = &xfn->auth;
+	}
+
+	return 0;
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static int
+fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else if (prm->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV6)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+		else
+			return -EINVAL;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else if (prm->ipsec_xform.mode ==
+			RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else
+		return -EINVAL;
+
+	*type = tp;
+	return 0;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = IPSEC_PAD_AES_GCM;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = IPSEC_PAD_NULL;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_PAD_AES_CBC;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+	int32_t rc;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, type, prm);
+	if (rc != 0)
+		return rc;
+
+	/* initialize SA */
+
+	memset(sa, 0, sz);
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0)
+		rte_ipsec_sa_fini(sa);
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..492521930
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* padding alignment for different algorithms */
+enum {
+	IPSEC_PAD_DEFAULT = 4,
+	IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
+	IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+	IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+};
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index a2dd52e17..179c2ef37 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	# ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 02e8b6f05..3fcfa58f7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1


* [dpdk-dev] [PATCH v7 05/10] ipsec: add SA data-path API
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (5 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 04/10] lib: introduce ipsec library Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 06/10] ipsec: implement " Konstantin Ananyev
                               ` (4 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal

Introduce Security Association (SA-level) data-path API.
It operates at SA level and provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).
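
A minimal usage sketch for the RTE_SECURITY_ACTION_TYPE_NONE case
(illustrative only; assumes SA object *sa* and cryptodev symmetric
session *cses* were created beforehand, error handling omitted):

	struct rte_ipsec_session ss = { 0 };

	ss.sa = sa;
	ss.type = RTE_SECURITY_ACTION_TYPE_NONE;
	ss.crypto.ses = cses;
	rte_ipsec_session_prepare(&ss);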

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 152 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  52 +++++++++
 7 files changed, 235 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 0e2868d26..71e39df0b 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..93e4df1bd
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after crypto-dev finished
+ *   with them, or process packets that are subject to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines a particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that inside the given rte_ipsec_session crypto/security fields
+ * are filled correctly and sets up function pointers based on these values.
+ * Expects that all fields except IPsec processing function pointers
+ * (*pkt_func*) will be filled correctly by caller.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are set up correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets; on failure the error
+ *   code is set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after crypto-dev finished with them or
+ * process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are set up correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields updated;
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up and, if necessary, IP headers updated and ESP(AH)
+ * fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets; on failure the error
+ *   code is set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
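+
+/*
+ * Expected call sequence for RTE_SECURITY_ACTION_TYPE_NONE sessions
+ * (illustrative sketch; *dev_id*/*qid* selection, polling and handling
+ * of failed packets are application specific). Mbufs for the process
+ * stage can be taken from the dequeued ops via cop[i]->sym->m_src:
+ *
+ *	n = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, num);
+ *	n = rte_cryptodev_enqueue_burst(dev_id, qid, cop, n);
+ *	...
+ *	n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, n);
+ *	...
+ *	n = rte_ipsec_pkt_process(ss, mb, n);
+ */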
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..4d4f46e4f 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,10 +1,13 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_session_prepare;
 
 	local: *;
 };
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index f5c893875..5465198ac 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -333,3 +333,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 492521930..616cf1b9f 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -82,4 +82,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..11580970e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else {
+		if (ss->security.ses == NULL)
+			return -EINVAL;
+		if ((ss->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
+				ss->type ==
+				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) &&
+				ss->security.ctx == NULL)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
-- 
2.17.1


* [dpdk-dev] [PATCH v7 06/10] ipsec: implement SA data-path API
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (6 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 05/10] ipsec: add SA data-path API Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                               ` (3 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types the SQN check/update is done by SW (inside the
library). For the last two types it is the HW/PMD responsibility.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/crypto.h    |  123 ++++
 lib/librte_ipsec/iph.h       |   84 +++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1133 +++++++++++++++++++++++++++++++++-
 5 files changed, 1569 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..61f5c1433
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+/*
+ * AES-GCM devices have some specific requirements for IV and AAD formats.
+ * Ideally that would be done by the driver itself.
+ */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that unused bytes are zeroed.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else {
+		aad->sqn.u32[0] = sqn_low32(sqn);
+		aad->sqn.u32[1] = 0;
+	}
+	aad->align0 = 0;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves the ICV down by 4B and inserts SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves ICV by 4B up, and removes SQN.hibits.
+ * icv parameter points to the new start of ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..58930cf18
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+/* update original ip header fields for transport case */
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+/* update original and new ip header fields for tunnel case */
+static inline void
+update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, rte_be16_t pid)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		v4h = p;
+		v4h->packet_id = pid;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else {
+		v6h = p;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	}
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 1935f6e30..6e18c34eb 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * gets SQN.hi32 bits; SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * gets SQN.low32 bits; SQN is supposed to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * gets SQN.low16 bits; SQN is supposed to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * for given size, calculate required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC 4303 A2.1, determine the high-order bits of the
+ * sequence number; uses 32-bit arithmetic internally, returns uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
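+
+/*
+ * Worked example (illustrative): t == 0x100000005 (th == 1, tl == 5)
+ * and w == 64, so the window spans two subspaces and bl == 0xffffffc6.
+ * For sqn == 0xfffffff0 (sqn >= bl) the high-order bits are
+ * decremented and the result is 0xfffffff0; for sqn == 2 they are
+ * kept and the result is 0x100000002.
+ */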
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
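+
+/*
+ * Worked example (illustrative): with sqn_mask == UINT32_MAX,
+ * sqn.outb == 0xfffffffd and *num == 4 the updated sqn equals
+ * 0x100000001 and overshoots the mask by 2, so *num is trimmed to 2:
+ * only sequence numbers 0xfffffffd and 0xfffffffe (starting from the
+ * returned value) are handed out.
+ */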
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To achieve the ability to have multiple readers and a single writer
+ * for the SA replay window information and sequence number (RSN),
+ * a basic RCU schema is used:
+ * SA has 2 copies of RSN (one for readers, another for the writer).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and writer.
+ * The writer is responsible for making a copy of the reader RSN,
+ * updating it and marking the newly updated RSN as the readers' one.
+ * That approach is intended to minimize contention and cache sharing
+ * between writer and readers.
+ */
+
 /**
  * Based on number of buckets calculated required size for the
  * structure that holds replay window and sequence number (RSN) information.
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 5465198ac..d263e7bcf 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -101,6 +105,9 @@ rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
 	memset(sa, 0, sa->size);
 }
 
+/*
+ * Determine expected SA type based on input parameters.
+ */
 static int
 fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 {
@@ -155,6 +162,9 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	return 0;
 }
 
+/*
+ * Init ESP inbound specific things.
+ */
 static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
@@ -165,6 +175,9 @@ esp_inb_init(struct rte_ipsec_sa *sa)
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
 }
 
+/*
+ * Init ESP inbound tunnel specific things.
+ */
 static void
 esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -172,6 +185,9 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_inb_init(sa);
 }
 
+/*
+ * Init ESP outbound specific things.
+ */
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
@@ -190,6 +206,9 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 	}
 }
 
+/*
+ * Init ESP outbound tunnel specific things.
+ */
 static void
 esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -201,6 +220,9 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_outb_init(sa, sa->hdr_len);
 }
 
+/*
+ * helper function, init SA structure.
+ */
 static int
 esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	const struct crypto_xform *cxf)
@@ -212,6 +234,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -334,18 +357,1124 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+/*
+ * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices.
+ */
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP outbound packet.
+ */
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound tunnel case.
+ */
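+/*
+ * Resulting packet layout (sketch of what is constructed below):
+ *
+ * |tun hdr|ESP hdr|IV|payload|padding|ESP tail|[SQN.hi]|ICV|[AAD]|
+ *
+ * SQN.hi (ESN) and AAD (AES-GCM) space is only used for the lookaside
+ * crypto computation and does not appear in the final packet.
+ */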
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* size of ipsec protected data */
+	l2len = mb->l2_len;
+	plen = mb->pkt_len - mb->l2_len;
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
+			sqn_low16(sqc));
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound tunnel case.
+ */
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound transport case.
+ */
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update IP header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound transport case.
+ */
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP inbound tunnel packet.
+ */
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP inbound tunnel case.
+ */
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* we have to allocate space for AAD somewhere;
+	 * right now - just use free trailing space at the last segment.
+	 * It would probably be more convenient to reserve space for AAD
+	 * inside rte_crypto_op itself
+	 * (as is already done for the IV, whose space is reserved in the cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP inbound case.
+ */
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ *  setup crypto ops for LOOKASIDE_PROTO type of devices.
+ */
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+/*
+ *  setup packets and crypto ops for LOOKASIDE_PROTO type of devices.
+ *  Note that for LOOKASIDE_PROTO all packet modifications will be
+ *  performed by PMD/HW.
+ *  SW has only to prepare crypto op.
+ */
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+/*
+ * process ESP inbound tunnel packet.
+ */
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+/*
+ * process ESP inbound transport packet.
+ */
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+/*
+ * for group of ESP inbound packets perform SQN check and update.
+ */
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound tunnel packets.
+ */
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound transport packets.
+ */
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing is already done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+/*
+ * process group of ESP outbound tunnel packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy unprocessed mbufs beyond the good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * process group of ESP outbound transport packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy unprocessed mbufs beyond the good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
+/*
+ * Select packet processing function for session on LOOKASIDE_NONE
+ * type of device.
+ */
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for session on INLINE_CRYPTO
+ * type of device.
+ */
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for the given session, based on the SA
+ * parameters and the type of device associated with the session.
+ */
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 07/10] ipsec: rework SA replay window/SQN for MT environment
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (7 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 06/10] ipsec: implement " Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
                               ` (2 subsequent siblings)
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

With these changes the functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
 can be safely used in an MT environment, as long as the user can
 guarantee that they obey the multiple readers/single writer model for
 SQN+replay_window operations.
 To be more specific:
 for an outbound SA there are no restrictions;
 for an inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_pkt_process() for the given SA.
 Note that it is the caller's responsibility to maintain the correct order
 of packets to be processed.
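
As an illustration, a minimal sketch of creating such an SA (the helper
name below is hypothetical, and filling in the xform/tunnel fields of
*prm* is left to the caller, as the functional test later in this series
does):

  #include <rte_ipsec.h>
  #include <rte_malloc.h>
  #include <rte_memory.h>

  static struct rte_ipsec_sa *
  mt_safe_sa_create(struct rte_ipsec_sa_prm *prm)
  {
          int32_t sz;
          struct rte_ipsec_sa *sa;

          /* request 'atomic' SQN/replay-window behaviour */
          prm->flags |= RTE_IPSEC_SAFLAG_SQN_ATOM;

          sz = rte_ipsec_sa_size(prm);
          if (sz < 0)
                  return NULL;

          sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
          if (sa == NULL)
                  return NULL;

          if (rte_ipsec_sa_init(sa, prm, sz) < 0) {
                  rte_free(sa);
                  return NULL;
          }
          return sa;
  }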

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  33 ++++++++++
 lib/librte_ipsec/sa.c           |  80 +++++++++++++++++-----
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 225 insertions(+), 22 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 6e18c34eb..7de10bef5 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index d99028c2c..7802da3b1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -55,6 +55,27 @@ struct rte_ipsec_sa_prm {
 	uint32_t replay_win_sz;
 };
 
+/**
+ * Indicates whether the SA will need 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means that the functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in an MT environment, as long as the user can
+ * guarantee that they obey the multiple readers/single writer model
+ * for SQN+replay_window operations.
+ * To be more specific:
+ * for an outbound SA there are no restrictions;
+ * for an inbound SA the caller has to guarantee that at any given
+ * moment only one thread is executing rte_ipsec_pkt_process() for
+ * the given SA.
+ * Note that it is the caller's responsibility to maintain the correct
+ * order of packets to be processed.
+ * In other words - it is the caller's responsibility to serialize
+ * process() invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is a 64-bit value that contains the following information:
  * - IP version (IPv4/IPv6)
@@ -62,6 +83,8 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
+ * - ESN enabled/disabled
  * ...
  */
 
@@ -70,6 +93,8 @@ enum {
 	RTE_SATP_LOG2_PROTO,
 	RTE_SATP_LOG2_DIR,
 	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+	RTE_SATP_LOG2_ESN,
 	RTE_SATP_LOG2_NUM
 };
 
@@ -90,6 +115,14 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG2_SQN)
+
+#define RTE_IPSEC_SATP_ESN_MASK		(1ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_DISABLE	(0ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_ENABLE	(1ULL << RTE_SATP_LOG2_ESN)
+
 /**
  * get type of given SA
  * @return
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index d263e7bcf..8d4ce1ac6 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -80,21 +80,37 @@ rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
 }
 
 static int32_t
-ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+ipsec_sa_size(uint64_t type, uint32_t *wnd_sz, uint32_t *nb_bucket)
 {
-	uint32_t n, sz;
+	uint32_t n, sz, wsz;
 
+	wsz = *wnd_sz;
 	n = 0;
-	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
-			RTE_IPSEC_SATP_DIR_IB)
-		n = replay_num_bucket(wsz);
+
+	if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		 * RFC 4303 recommends 64 as the minimum window size.
+		 * There is no point in using ESN mode without an SQN window,
+		 * so make sure the window is at least 64 when ESN is enabled.
+		 * so make sure we have at least 64 window when ESN is enalbed.
+		 */
+		wsz = ((type & RTE_IPSEC_SATP_ESN_MASK) ==
+			RTE_IPSEC_SATP_ESN_DISABLE) ?
+			wsz : RTE_MAX(wsz, (uint32_t)WINDOW_BUCKET_SIZE);
+		if (wsz != 0)
+			n = replay_num_bucket(wsz);
+	}
 
 	if (n > WINDOW_BUCKET_MAX)
 		return -EINVAL;
 
+	*wnd_sz = wsz;
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -158,6 +174,18 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	} else
 		return -EINVAL;
 
+	/* check for ESN flag */
+	if (prm->ipsec_xform.options.esn == 0)
+		tp |= RTE_IPSEC_SATP_ESN_DISABLE;
+	else
+		tp |= RTE_IPSEC_SATP_ESN_ENABLE;
+
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	*type = tp;
 	return 0;
 }
@@ -191,7 +219,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -277,11 +305,26 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return 0;
 }
 
+/*
+ * helper function, init SA replay structure.
+ */
+static void
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+{
+	sa->replay.win_sz = wnd_sz;
+	sa->replay.nb_bucket = nb_bucket;
+	sa->replay.bucket_index_mask = nb_bucket - 1;
+	sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+	if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+			((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+}
+
 int __rte_experimental
 rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 {
 	uint64_t type;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	int32_t rc;
 
 	if (prm == NULL)
@@ -293,7 +336,8 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 		return rc;
 
 	/* determine required size */
-	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	return ipsec_sa_size(type, &wsz, &nb);
 }
 
 int __rte_experimental
@@ -301,7 +345,7 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	uint32_t size)
 {
 	int32_t rc, sz;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	uint64_t type;
 	struct crypto_xform cxf;
 
@@ -314,7 +358,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		return rc;
 
 	/* determine required size */
-	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	sz = ipsec_sa_size(type, &wsz, &nb);
 	if (sz < 0)
 		return sz;
 	else if (size < (uint32_t)sz)
@@ -347,12 +392,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		rte_ipsec_sa_fini(sa);
 
 	/* fill replay window related fields */
-	if (nb != 0) {
-		sa->replay.win_sz = prm->replay_win_sz;
-		sa->replay.nb_bucket = nb;
-		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
-	}
+	if (nb != 0)
+		fill_sa_replay(sa, wsz, nb);
 
 	return sz;
 }
@@ -877,7 +918,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -896,6 +937,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -1058,7 +1101,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -1068,6 +1111,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 616cf1b9f..392e8fd7b 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -36,7 +38,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -74,10 +80,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * In case of SA handled by multiple threads *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimise perfomance impact, we try to locate in a separate
+	 * place from other frequently accesed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 08/10] ipsec: helper functions to group completed crypto-ops
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (8 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 09/10] test/ipsec: introduce functional test Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 10/10] doc: add IPsec library guide Konstantin Ananyev
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group the related packets by the sessions they belong to.
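
As an illustration, a minimal dequeue-side sketch (the function name and
the dev_id/qp values are hypothetical; how failed and session-less mbufs
are dropped is the application's choice):

  #include <rte_cryptodev.h>
  #include <rte_ipsec.h>
  #include <rte_mbuf.h>

  #define BURST_SZ 32

  static void
  dequeue_group_process(uint8_t dev_id, uint16_t qp)
  {
          struct rte_crypto_op *cop[BURST_SZ];
          struct rte_mbuf *mb[BURST_SZ];
          struct rte_ipsec_group grp[BURST_SZ];
          uint32_t i, j, k, n, ng;

          n = rte_cryptodev_dequeue_burst(dev_id, qp, cop, BURST_SZ);

          /* sort completed ops into per-session mbuf groups */
          ng = rte_ipsec_pkt_crypto_group(
                  (const struct rte_crypto_op **)(uintptr_t)cop,
                  mb, grp, n);

          for (i = 0, j = 0; i != ng; i++) {
                  k = rte_ipsec_pkt_process(grp[i].id.ptr,
                          grp[i].m, grp[i].cnt);
                  /* grp[i].m[k] ... grp[i].m[cnt - 1] failed, drop them */
                  while (k != grp[i].cnt)
                          rte_pktmbuf_free(grp[i].m[k++]);
                  j += grp[i].cnt;
          }

          /* session-less mbufs were placed beyond the grouped ones */
          while (j != n)
                  rte_pktmbuf_free(mb[j++]);
  }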

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 71e39df0b..77506d6ad 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 93e4df1bd..ff1ec801e 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -145,6 +145,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..696ed277a
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take a crypto-op as input and extract the related ipsec session pointer.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take completed crypto ops as input, extract the related mbufs
+ * and group them by the rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with an undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond the mbufs for the last valid
+ * group. It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalize it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalize the last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond recognised ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 4d4f46e4f..ee9f1961b 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,12 +1,14 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 	rte_ipsec_session_prepare;
 
 	local: *;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 09/10] test/ipsec: introduce functional test
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (9 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 10/10] doc: add IPsec library guide Konstantin Ananyev
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal, Bernard Iremonger

Create a functional test for librte_ipsec.
Note that the test requires the null crypto PMD to pass successfully.
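
For reference, a minimal sketch of instantiating that PMD from within
the application (this assumes the 'crypto_null' vdev name; passing
'--vdev=crypto_null' on the EAL command line is the usual alternative):

  #include <rte_bus_vdev.h>

  /* create a null crypto PMD instance before running ipsec_autotest */
  static int
  ensure_null_crypto_pmd(void)
  {
          return rte_vdev_init("crypto_null", NULL);
  }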

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2555 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2561 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 5a4816fed..9e45baf7a 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -50,6 +50,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -117,6 +118,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -182,6 +184,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..d1625af1f
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2555 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_mempool *session_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	unsigned int session_size =
+		rte_cryptodev_sym_get_private_session_size(dev_id);
+
+	/*
+	 * Create mempool with maximum number of sessions * 2,
+	 * to include the session headers
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->session_mpool = rte_mempool_create(
+				"test_sess_mp",
+				MAX_NB_SESSIONS * 2,
+				session_size,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id),
+		ts_params->session_mpool),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->session_mpool != NULL) {
+		rte_mempool_free(ts_params->session_mpool);
+		ts_params->session_mpool = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - obuf and ibuf usually point at the same mbuf,
+		 * so checking whether they hold the same address is
+		 * necessary to avoid freeing the mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capablity(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, pool);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capablity(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(pool);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, pool);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_mempool *pool, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, pool, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, pool, j);
+}
+
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+
+	if (ut_params->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+		ut_params->crypto_xforms = &ut_params->auth_xform;
+		ut_params->auth_xform.next = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = NULL;
+	} else {
+		ut_params->crypto_xforms = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = &ut_params->auth_xform;
+		ut_params->auth_xform.next = NULL;
+	}
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa */
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, ts->session_mpool, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc == 0)
+		rc = rte_ipsec_session_prepare(&ut->ss[j]);
+
+	return rc;
+}
+
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+lksd_proto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+
+	/* check crypto ops */
+	for (i = 0; i != num_pkts; i++) {
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->type,
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			"%s: invalid crypto op type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->status,
+			RTE_CRYPTO_OP_STATUS_NOT_PROCESSED,
+			"%s: invalid crypto op status for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sess_type,
+			RTE_CRYPTO_OP_SECURITY_SESSION,
+			"%s: invalid crypto op sess_type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sym->m_src,
+			ut_params->ibuf[i],
+			"%s: invalid crypto op m_src for %u-th packet\n",
+			__func__, i);
+	}
+
+	/* update crypto ops, pretend all finished ok */
+	for (i = 0; i != num_pkts; i++)
+		ut_params->cop[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%u\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* group packets into 4 groups of different sizes, 2 per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j = PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%u\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%" PRIx64 "\n", test_cfg[i].flags);
+	printf("	pkt_sz %zu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+
+		/* check mbuf ol_flags */
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
+			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate plain mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_encrypted_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = lksd_proto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "%s failed, cfg %d\n",
+				__func__, i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = 0;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		r = j % 2;
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite  = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v7 10/10] doc: add IPsec library guide
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                               ` (10 preceding siblings ...)
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 09/10] test/ipsec: introduce functional test Konstantin Ananyev
@ 2019-01-10 14:20             ` Konstantin Ananyev
  11 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 14:20 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Bernard Iremonger

Add IPsec library guide and update release notes.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/prog_guide/index.rst        |   1 +
 doc/guides/prog_guide/ipsec_lib.rst    | 239 ++++++++++++++++++++++++++
 doc/guides/rel_notes/release_19_02.rst |  11 ++
 3 files changed, 251 insertions(+)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst

diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ba8c1f6ad..6726b1e8d 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -54,6 +54,7 @@ Programmer's Guide
     vhost_lib
     metrics_lib
     bpf_lib
+    ipsec_lib
     source_org
     dev_kit_build_system
     dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
new file mode 100644
index 000000000..992fdf46b
--- /dev/null
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -0,0 +1,239 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+IPsec Packet Processing Library
+===============================
+
+DPDK provides a library for IPsec data-path processing.
+The library utilizes the existing DPDK crypto-dev and
+security API to provide the application with a transparent and
+high performance IPsec packet processing API.
+The library is concentrated on data-path protocol processing
+(ESP and AH); IKE protocol(s) implementation is out of scope
+for this library.
+
+SA level API
+------------
+
+This API operates on the IPsec Security Association (SA) level.
+It provides functionality that allows the user to process
+inbound and outbound IPsec packets for a given SA.
+
+To be more specific:
+
+*  for inbound ESP/AH packets perform decryption, authentication, integrity checking, remove ESP/AH related headers
+*  for outbound packets perform payload encryption, attach ICV, update/add IP headers,
+   add ESP/AH headers/trailers, setup related mbuf fields (ol_flags, tx_offloads, etc.)
+*  initialize/un-initialize a given SA based on user-provided parameters
+   (a setup sketch is shown below)
+
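+A minimal SA/session setup sketch, assuming the ``rte_ipsec_sa_size()``
+and ``rte_ipsec_session_prepare()`` helpers (``prm`` stands for a
+filled-in ``rte_ipsec_sa_prm`` structure and ``crypto_ses`` for an
+existing crypto-dev session; error handling is omitted):
+
+.. code-block:: c
+
+    struct rte_ipsec_session ss = { 0 };
+    struct rte_ipsec_sa *sa;
+    int32_t sz;
+
+    /* query the required size, then allocate and init the SA object */
+    sz = rte_ipsec_sa_size(&prm);
+    sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+    rte_ipsec_sa_init(sa, &prm, sz);
+
+    /* bind the SA to a crypto session and select entry points */
+    ss.sa = sa;
+    ss.type = RTE_SECURITY_ACTION_TYPE_NONE;
+    ss.crypto.ses = crypto_ses;
+    rte_ipsec_session_prepare(&ss);
+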
+The SA level API is based on top of crypto-dev/security API and relies on
+them to perform actual cipher and integrity checking.
+
+Due to the nature of the crypto-dev API (enqueue/dequeue model) the library
+introduces an asynchronous API for IPsec packets destined to be processed by
+the crypto-device.
+
+The expected API call sequence for data-path processing would be:
+
+.. code-block:: c
+
+    /* enqueue for processing by crypto-device */
+    rte_ipsec_pkt_crypto_prepare(...);
+    rte_cryptodev_enqueue_burst(...);
+    /* dequeue from crypto-device and do final processing (if any) */
+    rte_cryptodev_dequeue_burst(...);
+    rte_ipsec_pkt_crypto_group(...); /* optional */
+    rte_ipsec_pkt_process(...);
+
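+When crypto-ops belonging to several SAs are dequeued in a single burst,
+``rte_ipsec_pkt_crypto_group()`` sorts the mbufs back into per-session
+groups before the final processing step. A minimal sketch, modelled on
+the functional tests in this series (``cop``, ``mb`` and ``n`` are assumed
+to be the application's crypto-op array, mbuf array and burst size, and
+``BURST_SIZE`` an application-defined constant):
+
+.. code-block:: c
+
+    struct rte_ipsec_group grp[BURST_SIZE];
+    uint32_t i, ng;
+
+    /* split the dequeued burst into per-SA groups */
+    ng = rte_ipsec_pkt_crypto_group((const struct rte_crypto_op **)cop,
+        mb, grp, n);
+
+    /* finish IPsec processing separately for each group */
+    for (i = 0; i != ng; i++)
+        rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+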
+For packets destined for inline processing no extra overhead
+is required: the synchronous API call ``rte_ipsec_pkt_process()``
+is sufficient for that case.
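+
+A minimal sketch of the inline case (``ss``, ``mb`` and ``n`` are assumed
+to be the application's session pointer, mbuf array and burst size):
+
+.. code-block:: c
+
+    uint16_t k;
+
+    /* single synchronous call, no crypto-device round-trip;
+     * mbufs that failed IPsec processing are moved past the
+     * first k entries of mb[] and should be dropped by the caller
+     */
+    k = rte_ipsec_pkt_process(ss, mb, n);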
+
+.. note::
+
+    For more details about the IPsec API, please refer to the *DPDK API Reference*.
+
+The current implementation supports all four of the defined
+*rte_security* action types:
+
+RTE_SECURITY_ACTION_TYPE_NONE
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode, the library functions perform:
+
+* for inbound packets:
+
+  - check SQN
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that the integrity check and decryption performed by the
+    crypto device completed successfully
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
+
+RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode, the library functions perform:
+
+* for inbound packets:
+
+  - verify that the integrity check and decryption performed by the
+    *rte_security* device completed successfully
+  - check SQN
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline-crypto processing has to be performed by HW on this packet
+  - invoke the *rte_security* device specific *set_pkt_metadata()* to
+    associate security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode, the library functions perform:
+
+* for inbound packets:
+
+  - verify that the integrity check and decryption performed by the
+    *rte_security* device completed successfully
+
+* for outbound packets:
+
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline-protocol processing has to be performed by HW on this packet
+  - invoke the *rte_security* device specific *set_pkt_metadata()* to
+    associate security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode, the library functions perform:
+
+* for inbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that the integrity check and decryption performed by the
+    crypto device completed successfully
+
+* for outbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
+
+To accommodate future custom implementations, a function-pointer
+model is used for both the *crypto_prepare* and *process* implementations.
+
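+A sketch of the per-session dispatch this implies (the structure below
+illustrates the idea and is not a verbatim copy of the library headers):
+
+.. code-block:: c
+
+    /* per-session entry points, selected when the session is prepared,
+     * according to the SA parameters and the rte_security action type
+     */
+    struct ipsec_pkt_func {
+        uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+            struct rte_mbuf *mb[], struct rte_crypto_op *cop[],
+            uint16_t num);
+        uint16_t (*process)(const struct rte_ipsec_session *ss,
+            struct rte_mbuf *mb[], uint16_t num);
+    };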
+
+Supported features
+------------------
+
+*  ESP protocol tunnel mode both IPv4/IPv6.
+
+*  ESP protocol transport mode both IPv4/IPv6.
+
+*  ESN and replay window.
+
+*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
+
+
+Limitations
+-----------
+
+The following features are not properly supported in the current version:
+
+*  ESP transport mode for IPv6 packets with extension headers.
+*  Multi-segment packets.
+*  Updates of the fields in inner IP header for tunnel mode
+   (as described in RFC 4301, section 5.1.2).
+*  Hard/soft limit for SA lifetime (time interval/byte count).
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index fafed0416..43346123b 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -105,6 +105,17 @@ New Features
   Added a new performance test tool to test the compressdev PMD. The tool tests
   compression ratio and compression throughput.
 
+* **Added IPsec Library.**
+
+  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
+  transport support for IPv4 and IPv6 packets.
+
+  At present the library supports only the AES-CBC, AES-CBC with HMAC-SHA1
+  (algorithm chaining), AES-GCM and NULL algorithms. It is
+  planned to add more algorithms in future releases.
+
+  See :doc:`../prog_guide/ipsec_lib` for more information.
+
 
 Removed Items
 -------------
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2019-01-10 14:25               ` Thomas Monjalon
  2019-01-10 14:40                 ` De Lara Guarch, Pablo
  2019-01-10 14:52                 ` Ananyev, Konstantin
  2019-01-10 14:51               ` Akhil Goyal
  1 sibling, 2 replies; 194+ messages in thread
From: Thomas Monjalon @ 2019-01-10 14:25 UTC (permalink / raw)
  To: Konstantin Ananyev; +Cc: dev, akhil.goyal, pablo.de.lara.guarch

10/01/2019 15:20, Konstantin Ananyev:
> v6 -> v7
> - Changes to address Thomas comments:
>     bump ABI version
>     remove related deprecation notice
>     update release notes, ABI changes section

You did not update the lib versions in the release notes.
I think you missed a deprecation notice removal in patch 1.
Have you checked the doxygen warnings in last patch?

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 14:25               ` Thomas Monjalon
@ 2019-01-10 14:40                 ` De Lara Guarch, Pablo
  2019-01-10 14:52                 ` Ananyev, Konstantin
  1 sibling, 0 replies; 194+ messages in thread
From: De Lara Guarch, Pablo @ 2019-01-10 14:40 UTC (permalink / raw)
  To: Thomas Monjalon, Ananyev, Konstantin; +Cc: dev, akhil.goyal

Hi Thomas,

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, January 10, 2019 2:25 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; akhil.goyal@nxp.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH v7 00/10] ipsec: new library for IPsec data-path
> processing
> 
> 10/01/2019 15:20, Konstantin Ananyev:
> > v6 -> v7
> > - Changes to address Thomas comments:
> >     bump ABI version
> >     remove related deprecation notice
> >     update release notes, ABI changes section
> 
> You did not update the lib versions in the release notes.
> I think you missed a deprecation notice removal in patch 1.
> Have you checked the doxygen warnings in last patch?
> 
> 

It doesn't matter. Fan is modifying that structure too and he is making these changes, so I will drop this patch after applying his patchset. Therefore, this won't be relevant anymore.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
  2019-01-10 14:25               ` Thomas Monjalon
@ 2019-01-10 14:51               ` Akhil Goyal
  1 sibling, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2019-01-10 14:51 UTC (permalink / raw)
  To: Konstantin Ananyev, dev, pablo.de.lara.guarch; +Cc: thomas



On 1/10/2019 7:50 PM, Konstantin Ananyev wrote:
> v6 -> v7
> - Changes to address Thomas comments:
>      bump ABI version
>      remove related deprecation notice
>      update release notes, ABI changes section
>
As Pablo suggested, patch 1 can be dropped as Fan's patchset already 
takes care of that.
Apart from that
Series Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 14:25               ` Thomas Monjalon
  2019-01-10 14:40                 ` De Lara Guarch, Pablo
@ 2019-01-10 14:52                 ` Ananyev, Konstantin
  2019-01-10 14:54                   ` Thomas Monjalon
  1 sibling, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2019-01-10 14:52 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, akhil.goyal, De Lara Guarch, Pablo



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, January 10, 2019 2:25 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; akhil.goyal@nxp.com; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
> 
> 10/01/2019 15:20, Konstantin Ananyev:
> > v6 -> v7
> > - Changes to address Thomas comments:
> >     bump ABI version
> >     remove related deprecation notice
> >     update release notes, ABI changes section
> 
> You did not update the lib versions in the release notes.

For 'security: add opaque userdata pointer into security session':
1) removed deprecation notice
2) add ABI change into release note:
+* security: New field ``uint64_t opaque_data`` is added into
+  ``rte_security_session`` structure. That would allow the upper layer to easily
+  associate/de-associate some user defined data with the security session.
+
3) Bumped version in Makefile and meson.build

What else needs to be done here?

> I think you missed a deprecation notice removal in patch 1.
As Pablo noticed, that would happen in "cryptodev: update symmetric session",
this patch will be just dropped from the series.

> Have you checked the doxygen warnings in last patch?
I think I fixed that in v7, are you still seeing them?

Konstantin

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 14:52                 ` Ananyev, Konstantin
@ 2019-01-10 14:54                   ` Thomas Monjalon
  2019-01-10 14:58                     ` Ananyev, Konstantin
  0 siblings, 1 reply; 194+ messages in thread
From: Thomas Monjalon @ 2019-01-10 14:54 UTC (permalink / raw)
  To: Ananyev, Konstantin; +Cc: dev, akhil.goyal, De Lara Guarch, Pablo

10/01/2019 15:52, Ananyev, Konstantin:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 10/01/2019 15:20, Konstantin Ananyev:
> > > v6 -> v7
> > > - Changes to address Thomas comments:
> > >     bump ABI version
> > >     remove related deprecation notice
> > >     update release notes, ABI changes section
> > 
> > You did not update the lib versions in the release notes.
> 
> For 'security: add opaque userdata pointer into security session':
> 1) removed deprecation notice
> 2) add ABI change into release note:
> +* security: New field ``uint64_t opaque_data`` is added into
> +  ``rte_security_session`` structure. That would allow the upper layer to easily
> +  associate/de-associate some user defined data with the security session.
> +
> 3) Bumped version in Makefile and meson.build
> 
> What else needs to be done here?

Like I said, "update the lib versions in the release notes".
Please check at the bottom of the page, there is a list of libraries.

> > I think you missed a deprecation notice removal in patch 1.
> As Pablo noticed, that would happen in "cryptodev: update symmetric session",
> this patch will be just dropped from the series.

OK

> > Have you checked the doxygen warnings in last patch?
> I think I fixed that in v7, are you still seeing them?

OK
I did not check. It was just a question because it is not in the changelog.

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 14:54                   ` Thomas Monjalon
@ 2019-01-10 14:58                     ` Ananyev, Konstantin
  2019-01-10 15:00                       ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Ananyev, Konstantin @ 2019-01-10 14:58 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, akhil.goyal, De Lara Guarch, Pablo



> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, January 10, 2019 2:55 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Cc: dev@dpdk.org; akhil.goyal@nxp.com; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Subject: Re: [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
> 
> 10/01/2019 15:52, Ananyev, Konstantin:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > 10/01/2019 15:20, Konstantin Ananyev:
> > > > v6 -> v7
> > > > - Changes to address Thomas comments:
> > > >     bump ABI version
> > > >     remove related deprecation notice
> > > >     update release notes, ABI changes section
> > >
> > > You did not update the lib versions in the release notes.
> >
> > For 'security: add opaque userdata pointer into security session':
> > 1) removed deprecation notice
> > 2) add ABI change into release note:
> > +* security: New field ``uint64_t opaque_data`` is added into
> > +  ``rte_security_session`` structure. That would allow the upper layer to easily
> > +  associate/de-associate some user defined data with the security session.
> > +
> > 3) Bumped version in Makefile and meson.build
> >
> > What else needs to be done here?
> 
> Like I said, "update the lib versions in the release notes".
> Please check at the bottom of the page, there is a list of libraries.

Ah, ok.
Will submit v8 then.

> 
> > > I think you missed a deprecation notice removal in patch 1.
> > As Pablo noticed, that would happen in "cryptodev: update symmetric session",
> > this patch will be just dropped from the series.
> 
> OK
> 
> > > Have you checked the doxygen warnings in last patch?
> > I think I fixed that in v7, are you still seeing them?
> 
> OK
> I did not check. It was just a question because it is not in the changelog.
> 
> 
> 

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 14:58                     ` Ananyev, Konstantin
@ 2019-01-10 15:00                       ` Akhil Goyal
  2019-01-10 15:09                         ` Akhil Goyal
  0 siblings, 1 reply; 194+ messages in thread
From: Akhil Goyal @ 2019-01-10 15:00 UTC (permalink / raw)
  To: Ananyev, Konstantin, Thomas Monjalon; +Cc: dev, De Lara Guarch, Pablo



On 1/10/2019 8:28 PM, Ananyev, Konstantin wrote:
>
>> -----Original Message-----
>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>> Sent: Thursday, January 10, 2019 2:55 PM
>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
>> Cc: dev@dpdk.org; akhil.goyal@nxp.com; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
>> Subject: Re: [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
>>
>> 10/01/2019 15:52, Ananyev, Konstantin:
>>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>>>> 10/01/2019 15:20, Konstantin Ananyev:
>>>>> v6 -> v7
>>>>> - Changes to address Thomas comments:
>>>>>      bump ABI version
>>>>>      remove related deprecation notice
>>>>>      update release notes, ABI changes section
>>>> You did not update the lib versions in the release notes.
>>> For 'security: add opaque userdata pointer into security session':
>>> 1) removed deprecation notice
>>> 2) add ABI change into release note:
>>> +* security: New field ``uint64_t opaque_data`` is added into
>>> +  ``rte_security_session`` structure. That would allow the upper layer to easily
>>> +  associate/de-associate some user defined data with the security session.
>>> +
>>> 3) Bumped version in Makefile and meson.build
>>>
>>> What else needs to be done here?
>> Like I said, "update the lib versions in the release notes".
>> Please check at the bottom of the page, there is a list of libraries.
> Ah, ok.
> Will submit v8 then.
Wait for Pablo to apply Fan's patchset. I think he will be 
applying it soon.
There will be conflicts in 2-3 patches. You can rebase and then send it


^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
  2019-01-10 15:00                       ` Akhil Goyal
@ 2019-01-10 15:09                         ` Akhil Goyal
  0 siblings, 0 replies; 194+ messages in thread
From: Akhil Goyal @ 2019-01-10 15:09 UTC (permalink / raw)
  To: Ananyev, Konstantin, Thomas Monjalon; +Cc: dev, De Lara Guarch, Pablo



On 1/10/2019 8:30 PM, Akhil Goyal wrote:
>
> On 1/10/2019 8:28 PM, Ananyev, Konstantin wrote:
>>> -----Original Message-----
>>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>>> Sent: Thursday, January 10, 2019 2:55 PM
>>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>
>>> Cc: dev@dpdk.org; akhil.goyal@nxp.com; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>
>>> Subject: Re: [PATCH v7 00/10] ipsec: new library for IPsec data-path processing
>>>
>>> 10/01/2019 15:52, Ananyev, Konstantin:
>>>> From: Thomas Monjalon [mailto:thomas@monjalon.net]
>>>>> 10/01/2019 15:20, Konstantin Ananyev:
>>>>>> v6 -> v7
>>>>>> - Changes to address Thomas comments:
>>>>>>       bump ABI version
>>>>>>       remove related deprecation notice
>>>>>>       update release notes, ABI changes section
>>>>> You did not update the lib versions in the release notes.
>>>> For 'security: add opaque userdata pointer into security session':
>>>> 1) removed deprecation notice
>>>> 2) add ABI change into release note:
>>>> +* security: New field ``uint64_t opaque_data`` is added into
>>>> +  ``rte_security_session`` structure. That would allow the upper layer to easily
>>>> +  associate/de-associate some user defined data with the security session.
>>>> +
>>>> 3) Bumped version in Makefile and meson.build
>>>>
>>>> What else needs to be done here?
>>> Like I said, "update the lib versions in the release notes".
>>> Please check at the bottom of the page, there is a list of libraries.
>> Ah, ok.
>> Will submit v8 then.
> Wait for Pablo to apply Fan's patchset. I think he will be
> applying it soon.
> There will be conflicts in 2-3 patches. You can rebase and then send it
You can also fix the check-git-log warnings as well, since you are 
sending another version of the app and lib patches



^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v8 0/9] ipsec: new library for IPsec data-path processing
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 23:59                 ` De Lara Guarch, Pablo
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 1/9] security: add opaque userdata pointer into security session Konstantin Ananyev
                                 ` (8 subsequent siblings)
  9 siblings, 1 reply; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

v7 -> v8
- update release notes with new version for librte_security
- rebase on top of crypto-next

v6 -> v7
- Changes to address Thomas comments:
    bump ABI version
    remove related deprecation notice
    update release notes, ABI changes section

v5 -> v6
 - Fix issues reported by Akhil:
     rte_ipsec_session_prepare() fails for lookaside-proto

v4 -> v5
 - Fix issue with SQN overflows
 - Address Akhil comments:
     documentation update
     spell checks, spacing, etc.
     fix input crypto_xform check/process
     test cases for lookaside and inline proto

v3 -> v4
 - Changes to address Declan comments
 - Update docs

v2 -> v3
 - Several fixes for IPv6 support
 - Extra checks for input parameters in public API functions

v1 -> v2
 - Changes to take into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch introduces a new library within DPDK: librte_ipsec.
The aim is to provide DPDK native high performance library for IPsec
data-path processing.
The library is supposed to utilize the existing DPDK crypto-dev and
security API to provide the application with a transparent IPsec
processing API.
The library concentrates on data-path protocol processing
(ESP and AH); IKE protocol implementation is out of scope
for this library.
Current patch introduces SA-level API.

SA level API
============

API described below operates on SA level.
It provides functionality that allows the user, for a given SA, to process
inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is built on top of the crypto-dev/security API and relies
on them to perform the actual cipher and integrity checking.
To make it easy to map crypto/security sessions to the related IPsec SA,
an opaque userdata field was added into the
rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use
asynchronous API for IPsec packets destined to be processed by
crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

Though for packets destined for inline processing no extra overhead
is required and synchronous API call: rte_ipsec_pkt_process()
is sufficient for that case.
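
As a rough illustration of that sequence for a single SA (a sketch only:
the burst size, device/queue ids and crypto-op mempool handling are
placeholders, and error handling is trimmed):

  #include <rte_cryptodev.h>
  #include <rte_ipsec.h>

  #define BURST 32 /* assumed burst size */

  static void
  lookaside_burst(struct rte_ipsec_session *ss, struct rte_mempool *cop_pool,
  	uint8_t dev, uint16_t qid, struct rte_mbuf *mb[], uint16_t num)
  {
  	struct rte_crypto_op *cop[BURST];
  	uint16_t i, n;

  	/* allocate crypto ops, attach them to the packets and enqueue */
  	n = rte_crypto_op_bulk_alloc(cop_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC,
  		cop, num);
  	n = rte_ipsec_pkt_crypto_prepare(ss, mb, cop, n);
  	n = rte_cryptodev_enqueue_burst(dev, qid, cop, n);

  	/* later: dequeue finished ops and finalize IPsec processing */
  	n = rte_cryptodev_dequeue_burst(dev, qid, cop, BURST);
  	for (i = 0; i != n; i++)
  		mb[i] = cop[i]->sym->m_src;
  	n = rte_ipsec_pkt_process(ss, mb, n);
  }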

The current implementation supports all four currently defined
rte_security session types.
To accommodate future custom implementations, a function pointer model
is used for both the *crypto_prepare* and *process*
implementations.

Konstantin Ananyev (9):
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test
  doc: add IPsec library guide

 MAINTAINERS                            |    8 +-
 config/common_base                     |    5 +
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/prog_guide/ipsec_lib.rst    |  168 ++
 doc/guides/rel_notes/deprecation.rst   |    4 -
 doc/guides/rel_notes/release_19_02.rst |   17 +-
 lib/Makefile                           |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  154 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  174 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1527 ++++++++++++++
 lib/librte_ipsec/sa.h                  |  106 +
 lib/librte_ipsec/ses.c                 |   52 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/Makefile           |    4 +-
 lib/librte_security/meson.build        |    3 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2565 ++++++++++++++++++++++++
 29 files changed, 5600 insertions(+), 10 deletions(-)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v8 1/9] security: add opaque userdata pointer into security session
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 2/9] net: add ESP trailer structure definition Konstantin Ananyev
                                 ` (7 subsequent siblings)
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

Add 'uint64_t opaque_data' inside struct rte_security_session.
That allows the upper layer to easily associate some user-defined
data with the session.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 doc/guides/rel_notes/deprecation.rst   | 4 ----
 doc/guides/rel_notes/release_19_02.rst | 6 +++++-
 lib/librte_security/Makefile           | 4 ++--
 lib/librte_security/meson.build        | 3 ++-
 lib/librte_security/rte_security.h     | 2 ++
 5 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 07a5b4cea..bab82865f 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -71,9 +71,5 @@ Deprecation Notices
   - Member ``uint16_t min_mtu`` the minimum MTU allowed.
   - Member ``uint16_t max_mtu`` the maximum MTU allowed.
 
-* security: New field ``uint64_t opaque_data`` is planned to be added into
-  ``rte_security_session`` structure. That would allow upper layer to easily
-  associate/de-associate some user defined data with the security session.
-
 * crypto/aesni_mb: the minimum supported intel-ipsec-mb library version will be
   changed from 0.49.0 to 0.52.0.
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index 86880275a..1aebd27c7 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -212,6 +212,10 @@ ABI Changes
   ``rte_cryptodev_sym_session`` has been updated to contain more information
   to ensure safely accessing the session and session private data.
 
+* security: New field ``uint64_t opaque_data`` is added into
+  ``rte_security_session`` structure. That would allow the upper layer to easily
+  associate/de-associate some user defined data with the security session.
+
 
 Shared Library Versions
 -----------------------
@@ -282,7 +286,7 @@ The libraries prepended with a plus sign were incremented in this version.
      librte_reorder.so.1
      librte_ring.so.2
    + librte_sched.so.2
-     librte_security.so.1
+   + librte_security.so.2
      librte_table.so.3
      librte_timer.so.1
      librte_vhost.so.4
diff --git a/lib/librte_security/Makefile b/lib/librte_security/Makefile
index bd92343bd..6708effdb 100644
--- a/lib/librte_security/Makefile
+++ b/lib/librte_security/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
 
 include $(RTE_SDK)/mk/rte.vars.mk
 
@@ -7,7 +7,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
 LIB = librte_security.a
 
 # library version
-LIBABIVER := 1
+LIBABIVER := 2
 
 # build flags
 CFLAGS += -O3
diff --git a/lib/librte_security/meson.build b/lib/librte_security/meson.build
index 532953fcc..a5130d2f6 100644
--- a/lib/librte_security/meson.build
+++ b/lib/librte_security/meson.build
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
 
+version = 2
 sources = files('rte_security.c')
 headers = files('rte_security.h', 'rte_security_driver.h')
 deps += ['mempool', 'cryptodev']
diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
index 718147e00..c8e438fdd 100644
--- a/lib/librte_security/rte_security.h
+++ b/lib/librte_security/rte_security.h
@@ -317,6 +317,8 @@ struct rte_security_session_conf {
 struct rte_security_session {
 	void *sess_private_data;
 	/**< Private session material */
+	uint64_t opaque_data;
+	/**< Opaque user defined data */
 };
 
 /**
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread
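
A sketch of the intended use of the new field: once
rte_ipsec_session_prepare() (added later in this series) has stored a
back-pointer in opaque_data, the owning IPsec session can be recovered
from the security session, e.g. when demultiplexing completed packets:

  #include <rte_ipsec.h>
  #include <rte_security.h>

  /* recover the IPsec session registered in the security session */
  static struct rte_ipsec_session *
  ipsec_ses_from_sec(const struct rte_security_session *ses)
  {
  	return (struct rte_ipsec_session *)(uintptr_t)ses->opaque_data;
  }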

* [dpdk-dev] [PATCH v8 2/9] net: add ESP trailer structure definition
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 1/9] security: add opaque userdata pointer into security session Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 3/9] lib: introduce ipsec library Konstantin Ananyev
                                 ` (6 subsequent siblings)
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

define esp_tail structure.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_net/rte_esp.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lib/librte_net/rte_esp.h b/lib/librte_net/rte_esp.h
index f77ec2eb2..8e1b3d2dd 100644
--- a/lib/librte_net/rte_esp.h
+++ b/lib/librte_net/rte_esp.h
@@ -11,7 +11,7 @@
  * ESP-related defines
  */
 
-#include <stdint.h>
+#include <rte_byteorder.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -25,6 +25,14 @@ struct esp_hdr {
 	rte_be32_t seq;  /**< packet sequence number */
 } __attribute__((__packed__));
 
+/**
+ * ESP Trailer
+ */
+struct esp_tail {
+	uint8_t pad_len;     /**< number of pad bytes (0-255) */
+	uint8_t next_proto;  /**< IPv4 or IPv6 or next layer header */
+} __attribute__((__packed__));
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread
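
For illustration, in a decrypted ESP packet the trailer sits between the
padding and the ICV; a sketch of reading it from a single-segment mbuf
(icv_len is the ICV size negotiated for the SA; multi-segment packets
are ignored here):

  #include <rte_esp.h>
  #include <rte_mbuf.h>

  static struct esp_tail
  read_esp_tail(const struct rte_mbuf *mb, uint32_t icv_len)
  {
  	/* the trailer is located immediately before the ICV */
  	return *rte_pktmbuf_mtod_offset(mb, const struct esp_tail *,
  		mb->data_len - icv_len - sizeof(struct esp_tail));
  }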

* [dpdk-dev] [PATCH v8 3/9] lib: introduce ipsec library
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                                 ` (2 preceding siblings ...)
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 2/9] net: add ESP trailer structure definition Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 4/9] ipsec: add SA data-path API Konstantin Ananyev
                                 ` (5 subsequent siblings)
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal

Introduce librte_ipsec library.
The library is supposed to utilize the existing DPDK crypto-dev and
security API to provide the application with a transparent IPsec
processing API.
This initial commit provides some base API to manage
the IPsec Security Association (SA) object.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 MAINTAINERS                            |   8 +-
 config/common_base                     |   5 +
 lib/Makefile                           |   2 +
 lib/librte_ipsec/Makefile              |  24 ++
 lib/librte_ipsec/ipsec_sqn.h           |  48 ++++
 lib/librte_ipsec/meson.build           |  10 +
 lib/librte_ipsec/rte_ipsec_sa.h        | 141 +++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |  10 +
 lib/librte_ipsec/sa.c                  | 335 +++++++++++++++++++++++++
 lib/librte_ipsec/sa.h                  |  85 +++++++
 lib/meson.build                        |   2 +
 mk/rte.app.mk                          |   2 +
 12 files changed, 671 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 470f36b9c..9ce636be6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1036,6 +1036,13 @@ M: Jiayu Hu <jiayu.hu@intel.com>
 F: lib/librte_gso/
 F: doc/guides/prog_guide/generic_segmentation_offload_lib.rst
 
+IPsec - EXPERIMENTAL
+M: Konstantin Ananyev <konstantin.ananyev@intel.com>
+T: git://dpdk.org/next/dpdk-next-crypto
+F: lib/librte_ipsec/
+M: Bernard Iremonger <bernard.iremonger@intel.com>
+F: test/test/test_ipsec.c
+
 Flow Classify - EXPERIMENTAL
 M: Bernard Iremonger <bernard.iremonger@intel.com>
 F: lib/librte_flow_classify/
@@ -1077,7 +1084,6 @@ F: doc/guides/prog_guide/pdump_lib.rst
 F: app/pdump/
 F: doc/guides/tools/pdump.rst
 
-
 Packet Framework
 ----------------
 M: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
diff --git a/config/common_base b/config/common_base
index 964a6956e..6fbd1d86b 100644
--- a/config/common_base
+++ b/config/common_base
@@ -934,6 +934,11 @@ CONFIG_RTE_LIBRTE_BPF=y
 # allow load BPF from ELF files (requires libelf)
 CONFIG_RTE_LIBRTE_BPF_ELF=n
 
+#
+# Compile librte_ipsec
+#
+CONFIG_RTE_LIBRTE_IPSEC=y
+
 #
 # Compile the test application
 #
diff --git a/lib/Makefile b/lib/Makefile
index 8dbdc9bca..d6239d27c 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -107,6 +107,8 @@ DEPDIRS-librte_gso := librte_eal librte_mbuf librte_ethdev librte_net
 DEPDIRS-librte_gso += librte_mempool
 DIRS-$(CONFIG_RTE_LIBRTE_BPF) += librte_bpf
 DEPDIRS-librte_bpf := librte_eal librte_mempool librte_mbuf librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
+DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
 DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
 DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
 
diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
new file mode 100644
index 000000000..0e2868d26
--- /dev/null
+++ b/lib/librte_ipsec/Makefile
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_ipsec.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_net -lrte_cryptodev -lrte_security
+
+EXPORT_MAP := rte_ipsec_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+
+# install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
new file mode 100644
index 000000000..1935f6e30
--- /dev/null
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPSEC_SQN_H_
+#define _IPSEC_SQN_H_
+
+#define WINDOW_BUCKET_BITS		6 /* uint64_t */
+#define WINDOW_BUCKET_SIZE		(1 << WINDOW_BUCKET_BITS)
+#define WINDOW_BIT_LOC_MASK		(WINDOW_BUCKET_SIZE - 1)
+
+/* minimum number of buckets, power of 2 */
+#define WINDOW_BUCKET_MIN		2
+#define WINDOW_BUCKET_MAX		(INT16_MAX + 1)
+
+#define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
+
+/*
+ * for a given window size, calculate the required number of buckets.
+ */
+static uint32_t
+replay_num_bucket(uint32_t wsz)
+{
+	uint32_t nb;
+
+	nb = rte_align32pow2(RTE_ALIGN_MUL_CEIL(wsz, WINDOW_BUCKET_SIZE) /
+		WINDOW_BUCKET_SIZE);
+	nb = RTE_MAX(nb, (uint32_t)WINDOW_BUCKET_MIN);
+
+	return nb;
+}
+
+/**
+ * Based on the number of buckets, calculate the required size of the
+ * structure that holds the replay window and sequence number (RSN) information.
+ */
+static size_t
+rsn_size(uint32_t nb_bucket)
+{
+	size_t sz;
+	struct replay_sqn *rsn;
+
+	sz = sizeof(*rsn) + nb_bucket * sizeof(rsn->window[0]);
+	sz = RTE_ALIGN_CEIL(sz, RTE_CACHE_LINE_SIZE);
+	return sz;
+}
+
+#endif /* _IPSEC_SQN_H_ */
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
new file mode 100644
index 000000000..52c78eaeb
--- /dev/null
+++ b/lib/librte_ipsec/meson.build
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2018 Intel Corporation
+
+allow_experimental_apis = true
+
+sources=files('sa.c')
+
+install_headers = files('rte_ipsec_sa.h')
+
+deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
new file mode 100644
index 000000000..d99028c2c
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -0,0 +1,141 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_SA_H_
+#define _RTE_IPSEC_SA_H_
+
+/**
+ * @file rte_ipsec_sa.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Defines API to manage IPsec Security Association (SA) objects.
+ */
+
+#include <rte_common.h>
+#include <rte_cryptodev.h>
+#include <rte_security.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * An opaque structure to represent Security Association (SA).
+ */
+struct rte_ipsec_sa;
+
+/**
+ * SA initialization parameters.
+ */
+struct rte_ipsec_sa_prm {
+
+	uint64_t userdata; /**< provided and interpreted by user */
+	uint64_t flags;  /**< see RTE_IPSEC_SAFLAG_* below */
+	/** ipsec configuration */
+	struct rte_security_ipsec_xform ipsec_xform;
+	/** crypto session configuration */
+	struct rte_crypto_sym_xform *crypto_xform;
+	union {
+		struct {
+			uint8_t hdr_len;     /**< tunnel header len */
+			uint8_t hdr_l3_off;  /**< offset for IPv4/IPv6 header */
+			uint8_t next_proto;  /**< next header protocol */
+			const void *hdr;     /**< tunnel header template */
+		} tun; /**< tunnel mode related parameters */
+		struct {
+			uint8_t proto;  /**< next header protocol */
+		} trs; /**< transport mode related parameters */
+	};
+
+	/**
+	 * window size to enable sequence replay attack handling.
+	 * replay checking is disabled if the window size is 0.
+	 */
+	uint32_t replay_win_sz;
+};
+
+/**
+ * SA type is a 64-bit value that contains the following information:
+ * - IP version (IPv4/IPv6)
+ * - IPsec proto (ESP/AH)
+ * - inbound/outbound
+ * - mode (TRANSPORT/TUNNEL)
+ * - for TUNNEL outer IP version (IPv4/IPv6)
+ * ...
+ */
+
+enum {
+	RTE_SATP_LOG2_IPV,
+	RTE_SATP_LOG2_PROTO,
+	RTE_SATP_LOG2_DIR,
+	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_NUM
+};
+
+#define RTE_IPSEC_SATP_IPV_MASK		(1ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV4		(0ULL << RTE_SATP_LOG2_IPV)
+#define RTE_IPSEC_SATP_IPV6		(1ULL << RTE_SATP_LOG2_IPV)
+
+#define RTE_IPSEC_SATP_PROTO_MASK	(1ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_AH		(0ULL << RTE_SATP_LOG2_PROTO)
+#define RTE_IPSEC_SATP_PROTO_ESP	(1ULL << RTE_SATP_LOG2_PROTO)
+
+#define RTE_IPSEC_SATP_DIR_MASK		(1ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_IB		(0ULL << RTE_SATP_LOG2_DIR)
+#define RTE_IPSEC_SATP_DIR_OB		(1ULL << RTE_SATP_LOG2_DIR)
+
+#define RTE_IPSEC_SATP_MODE_MASK	(3ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TRANS	(0ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
+#define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
+
+/**
+ * get type of given SA
+ * @return
+ *   SA type value.
+ */
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa);
+
+/**
+ * Calculate required SA size based on provided input parameters.
+ * @param prm
+ *   Parameters that will be used to initialise the SA object.
+ * @return
+ *   - Actual size required for SA with given parameters.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm);
+
+/**
+ * initialise SA based on provided input parameters.
+ * @param sa
+ *   SA object to initialise.
+ * @param prm
+ *   Parameters used to initialise given SA object.
+ * @param size
+ *   size of the provided buffer for SA.
+ * @return
+ *   - Actual size of SA object if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ *   - -ENOSPC if the size of the provided buffer is not big enough.
+ */
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size);
+
+/**
+ * cleanup SA
+ * @param sa
+ *   Pointer to SA object to de-initialize.
+ */
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_SA_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
new file mode 100644
index 000000000..1a66726b8
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -0,0 +1,10 @@
+EXPERIMENTAL {
+	global:
+
+	rte_ipsec_sa_fini;
+	rte_ipsec_sa_init;
+	rte_ipsec_sa_size;
+	rte_ipsec_sa_type;
+
+	local: *;
+};
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
new file mode 100644
index 000000000..f5c893875
--- /dev/null
+++ b/lib/librte_ipsec/sa.c
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_esp.h>
+#include <rte_ip.h>
+#include <rte_errno.h>
+
+#include "sa.h"
+#include "ipsec_sqn.h"
+
+/* some helper structures */
+struct crypto_xform {
+	struct rte_crypto_auth_xform *auth;
+	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_aead_xform *aead;
+};
+
+/*
+ * helper routine, fills internal crypto_xform structure.
+ */
+static int
+fill_crypto_xform(struct crypto_xform *xform, uint64_t type,
+	const struct rte_ipsec_sa_prm *prm)
+{
+	struct rte_crypto_sym_xform *xf, *xfn;
+
+	memset(xform, 0, sizeof(*xform));
+
+	xf = prm->crypto_xform;
+	if (xf == NULL)
+		return -EINVAL;
+
+	xfn = xf->next;
+
+	/* for AEAD just one xform required */
+	if (xf->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		if (xfn != NULL)
+			return -EINVAL;
+		xform->aead = &xf->aead;
+	/*
+	 * CIPHER+AUTH xforms are expected in strict order,
+	 * depending on SA direction:
+	 * inbound: AUTH+CIPHER
+	 * outbound: CIPHER+AUTH
+	 */
+	} else if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		/* wrong order or no cipher */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return -EINVAL;
+
+		xform->auth = &xf->auth;
+		xform->cipher = &xfn->cipher;
+
+	} else {
+
+		/* wrong order or no auth */
+		if (xfn == NULL || xf->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
+				xfn->type != RTE_CRYPTO_SYM_XFORM_AUTH)
+			return -EINVAL;
+
+		xform->cipher = &xf->cipher;
+		xform->auth = &xfn->auth;
+	}
+
+	return 0;
+}
+
+uint64_t __rte_experimental
+rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
+{
+	return sa->type;
+}
+
+static int32_t
+ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+{
+	uint32_t n, sz;
+
+	n = 0;
+	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
+			RTE_IPSEC_SATP_DIR_IB)
+		n = replay_num_bucket(wsz);
+
+	if (n > WINDOW_BUCKET_MAX)
+		return -EINVAL;
+
+	*nb_bucket = n;
+
+	sz = rsn_size(n);
+	sz += sizeof(struct rte_ipsec_sa);
+	return sz;
+}
+
+void __rte_experimental
+rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
+{
+	memset(sa, 0, sa->size);
+}
+
+static int
+fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
+{
+	uint64_t tp;
+
+	tp = 0;
+
+	if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
+		tp |= RTE_IPSEC_SATP_PROTO_AH;
+	else if (prm->ipsec_xform.proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		tp |= RTE_IPSEC_SATP_PROTO_ESP;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_OB;
+	else if (prm->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+		tp |= RTE_IPSEC_SATP_DIR_IB;
+	else
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+		if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV4)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV4;
+		else if (prm->ipsec_xform.tunnel.type ==
+				RTE_SECURITY_IPSEC_TUNNEL_IPV6)
+			tp |= RTE_IPSEC_SATP_MODE_TUNLV6;
+		else
+			return -EINVAL;
+
+		if (prm->tun.next_proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->tun.next_proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else if (prm->ipsec_xform.mode ==
+			RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT) {
+		tp |= RTE_IPSEC_SATP_MODE_TRANS;
+		if (prm->trs.proto == IPPROTO_IPIP)
+			tp |= RTE_IPSEC_SATP_IPV4;
+		else if (prm->trs.proto == IPPROTO_IPV6)
+			tp |= RTE_IPSEC_SATP_IPV6;
+		else
+			return -EINVAL;
+	} else
+		return -EINVAL;
+
+	*type = tp;
+	return 0;
+}
+
+static void
+esp_inb_init(struct rte_ipsec_sa *sa)
+{
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = 0;
+	sa->ctp.auth.length = sa->icv_len - sa->sqh_len;
+	sa->ctp.cipher.offset = sizeof(struct esp_hdr) + sa->iv_len;
+	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
+}
+
+static void
+esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	esp_inb_init(sa);
+}
+
+static void
+esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
+{
+	sa->sqn.outb = 1;
+
+	/* these params may differ with new algorithms support */
+	sa->ctp.auth.offset = hlen;
+	sa->ctp.auth.length = sizeof(struct esp_hdr) + sa->iv_len + sa->sqh_len;
+	if (sa->aad_len != 0) {
+		sa->ctp.cipher.offset = hlen + sizeof(struct esp_hdr) +
+			sa->iv_len;
+		sa->ctp.cipher.length = 0;
+	} else {
+		sa->ctp.cipher.offset = sa->hdr_len + sizeof(struct esp_hdr);
+		sa->ctp.cipher.length = sa->iv_len;
+	}
+}
+
+static void
+esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
+{
+	sa->proto = prm->tun.next_proto;
+	sa->hdr_len = prm->tun.hdr_len;
+	sa->hdr_l3_off = prm->tun.hdr_l3_off;
+	memcpy(sa->hdr, prm->tun.hdr, sa->hdr_len);
+
+	esp_outb_init(sa, sa->hdr_len);
+}
+
+static int
+esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	const struct crypto_xform *cxf)
+{
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+				RTE_IPSEC_SATP_MODE_MASK;
+
+	if (cxf->aead != NULL) {
+		/* RFC 4106 */
+		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
+			return -EINVAL;
+		sa->icv_len = cxf->aead->digest_length;
+		sa->iv_ofs = cxf->aead->iv.offset;
+		sa->iv_len = sizeof(uint64_t);
+		sa->pad_align = IPSEC_PAD_AES_GCM;
+	} else {
+		sa->icv_len = cxf->auth->digest_length;
+		sa->iv_ofs = cxf->cipher->iv.offset;
+		sa->sqh_len = IS_ESN(sa) ? sizeof(uint32_t) : 0;
+		if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_NULL) {
+			sa->pad_align = IPSEC_PAD_NULL;
+			sa->iv_len = 0;
+		} else if (cxf->cipher->algo == RTE_CRYPTO_CIPHER_AES_CBC) {
+			sa->pad_align = IPSEC_PAD_AES_CBC;
+			sa->iv_len = IPSEC_MAX_IV_SIZE;
+		} else
+			return -EINVAL;
+	}
+
+	sa->udata = prm->userdata;
+	sa->spi = rte_cpu_to_be_32(prm->ipsec_xform.spi);
+	sa->salt = prm->ipsec_xform.salt;
+
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_inb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_inb_init(sa);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		esp_outb_tun_init(sa, prm);
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		esp_outb_init(sa, 0);
+		break;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
+{
+	uint64_t type;
+	uint32_t nb;
+	int32_t rc;
+
+	if (prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+}
+
+int __rte_experimental
+rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
+	uint32_t size)
+{
+	int32_t rc, sz;
+	uint32_t nb;
+	uint64_t type;
+	struct crypto_xform cxf;
+
+	if (sa == NULL || prm == NULL)
+		return -EINVAL;
+
+	/* determine SA type */
+	rc = fill_sa_type(prm, &type);
+	if (rc != 0)
+		return rc;
+
+	/* determine required size */
+	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	if (sz < 0)
+		return sz;
+	else if (size < (uint32_t)sz)
+		return -ENOSPC;
+
+	/* only esp is supported right now */
+	if (prm->ipsec_xform.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP)
+		return -EINVAL;
+
+	if (prm->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL &&
+			prm->tun.hdr_len > sizeof(sa->hdr))
+		return -EINVAL;
+
+	rc = fill_crypto_xform(&cxf, type, prm);
+	if (rc != 0)
+		return rc;
+
+	/* initialize SA */
+
+	memset(sa, 0, sz);
+	sa->type = type;
+	sa->size = sz;
+
+	/* check for ESN flag */
+	sa->sqn_mask = (prm->ipsec_xform.options.esn == 0) ?
+		UINT32_MAX : UINT64_MAX;
+
+	rc = esp_sa_init(sa, prm, &cxf);
+	if (rc != 0)
+		rte_ipsec_sa_fini(sa);
+
+	/* fill replay window related fields */
+	if (nb != 0) {
+		sa->replay.win_sz = prm->replay_win_sz;
+		sa->replay.nb_bucket = nb;
+		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
+		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
+	}
+
+	return sz;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
new file mode 100644
index 000000000..492521930
--- /dev/null
+++ b/lib/librte_ipsec/sa.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _SA_H_
+#define _SA_H_
+
+#define IPSEC_MAX_HDR_SIZE	64
+#define IPSEC_MAX_IV_SIZE	16
+#define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
+
+/* padding alignment for different algorithms */
+enum {
+	IPSEC_PAD_DEFAULT = 4,
+	IPSEC_PAD_AES_CBC = IPSEC_MAX_IV_SIZE,
+	IPSEC_PAD_AES_GCM = IPSEC_PAD_DEFAULT,
+	IPSEC_PAD_NULL = IPSEC_PAD_DEFAULT,
+};
+
+/* these definitions probably have to be in rte_crypto_sym.h */
+union sym_op_ofslen {
+	uint64_t raw;
+	struct {
+		uint32_t offset;
+		uint32_t length;
+	};
+};
+
+union sym_op_data {
+#ifdef __SIZEOF_INT128__
+	__uint128_t raw;
+#endif
+	struct {
+		uint8_t *va;
+		rte_iova_t pa;
+	};
+};
+
+struct replay_sqn {
+	uint64_t sqn;
+	__extension__ uint64_t window[0];
+};
+
+struct rte_ipsec_sa {
+	uint64_t type;     /* type of given SA */
+	uint64_t udata;    /* user defined */
+	uint32_t size;     /* size of given sa object */
+	uint32_t spi;
+	/* sqn calculations related */
+	uint64_t sqn_mask;
+	struct {
+		uint32_t win_sz;
+		uint16_t nb_bucket;
+		uint16_t bucket_index_mask;
+	} replay;
+	/* template for crypto op fields */
+	struct {
+		union sym_op_ofslen cipher;
+		union sym_op_ofslen auth;
+	} ctp;
+	uint32_t salt;
+	uint8_t proto;    /* next proto */
+	uint8_t aad_len;
+	uint8_t hdr_len;
+	uint8_t hdr_l3_off;
+	uint8_t icv_len;
+	uint8_t sqh_len;
+	uint8_t iv_ofs; /* offset for algo-specific IV inside crypto op */
+	uint8_t iv_len;
+	uint8_t pad_align;
+
+	/* template for tunnel header */
+	uint8_t hdr[IPSEC_MAX_HDR_SIZE];
+
+	/*
+	 * sqn and replay window
+	 */
+	union {
+		uint64_t outb;
+		struct replay_sqn *inb;
+	} sqn;
+
+} __rte_cache_aligned;
+
+#endif /* _SA_H_ */
diff --git a/lib/meson.build b/lib/meson.build
index a2dd52e17..179c2ef37 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,6 +22,8 @@ libraries = [ 'compat', # just a header, used for versioning
 	'kni', 'latencystats', 'lpm', 'member',
 	'power', 'pdump', 'rawdev',
 	'reorder', 'sched', 'security', 'vhost',
+	#ipsec lib depends on crypto and security
+	'ipsec',
 	# add pkt framework libs which use other libs from above
 	'port', 'table', 'pipeline',
 	# flow_classify lib depends on pkt framework table lib
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 02e8b6f05..3fcfa58f7 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -67,6 +67,8 @@ ifeq ($(CONFIG_RTE_LIBRTE_BPF_ELF),y)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_BPF)            += -lelf
 endif
 
+_LDLIBS-$(CONFIG_RTE_LIBRTE_IPSEC)            += -lrte_ipsec
+
 _LDLIBS-y += --whole-archive
 
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CFGFILE)        += -lrte_cfgfile
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread
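
A usage sketch for the SA init API above; all values are placeholders
and the xform chain follows the inbound AUTH+CIPHER ordering that
fill_crypto_xform() expects. As a worked example of the sizing math,
replay_win_sz = 128 needs RTE_ALIGN_MUL_CEIL(128, 64)/64 = 2 window
buckets, and rsn_size(2) rounds 8 + 2 * 8 = 24 bytes up to one cache line.

  #include <string.h>
  #include <netinet/in.h>
  #include <rte_ipsec_sa.h>
  #include <rte_malloc.h>

  static struct rte_ipsec_sa *
  create_inb_tun_sa(struct rte_crypto_sym_xform *auth_cipher,
  	const struct rte_security_ipsec_xform *ipsec)
  {
  	struct rte_ipsec_sa_prm prm;
  	struct rte_ipsec_sa *sa;
  	int32_t sz;

  	memset(&prm, 0, sizeof(prm));
  	prm.ipsec_xform = *ipsec;          /* ESP, ingress, tunnel mode */
  	prm.crypto_xform = auth_cipher;    /* AUTH first for inbound */
  	prm.tun.next_proto = IPPROTO_IPIP; /* IPv4 inner packets */
  	prm.replay_win_sz = 128;

  	sz = rte_ipsec_sa_size(&prm);
  	if (sz < 0)
  		return NULL;

  	sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
  	if (sa != NULL && rte_ipsec_sa_init(sa, &prm, sz) < 0) {
  		rte_free(sa);
  		sa = NULL;
  	}
  	return sa;
  }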

* [dpdk-dev] [PATCH v8 4/9] ipsec: add SA data-path API
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                                 ` (3 preceding siblings ...)
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 3/9] lib: introduce ipsec library Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 5/9] ipsec: implement " Konstantin Ananyev
                                 ` (4 subsequent siblings)
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal

Introduce Security Association (SA-level) data-path API
Operates at SA level, provides functions to:
    - initialize/teardown SA object
    - process inbound/outbound ESP/AH packets associated with the given SA
      (decrypt/encrypt, authenticate, check integrity,
      add/remove ESP/AH related headers and data, etc.).

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_ipsec/Makefile              |   2 +
 lib/librte_ipsec/meson.build           |   4 +-
 lib/librte_ipsec/rte_ipsec.h           | 152 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   3 +
 lib/librte_ipsec/sa.c                  |  21 +++-
 lib/librte_ipsec/sa.h                  |   4 +
 lib/librte_ipsec/ses.c                 |  52 +++++++++
 7 files changed, 235 insertions(+), 3 deletions(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/ses.c

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 0e2868d26..71e39df0b 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 1
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += sa.c
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 52c78eaeb..6e8c6fabe 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -3,8 +3,8 @@
 
 allow_experimental_apis = true
 
-sources=files('sa.c')
+sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
new file mode 100644
index 000000000..93e4df1bd
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_H_
+#define _RTE_IPSEC_H_
+
+/**
+ * @file rte_ipsec.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * librte_ipsec provides a framework for data-path IPsec protocol
+ * processing (ESP/AH).
+ */
+
+#include <rte_ipsec_sa.h>
+#include <rte_mbuf.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+struct rte_ipsec_session;
+
+/**
+ * IPsec session specific functions that will be used to:
+ * - prepare - for input mbufs and given IPsec session prepare crypto ops
+ *   that can be enqueued into the cryptodev associated with given session
+ *   (see *rte_ipsec_pkt_crypto_prepare* below for more details).
+ * - process - finalize processing of packets after the crypto-dev has finished
+ *   with them, or process packets that are subject to inline IPsec offload
+ *   (see rte_ipsec_pkt_process for more details).
+ */
+struct rte_ipsec_sa_pkt_func {
+	uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				struct rte_crypto_op *cop[],
+				uint16_t num);
+	uint16_t (*process)(const struct rte_ipsec_session *ss,
+				struct rte_mbuf *mb[],
+				uint16_t num);
+};
+
+/**
+ * rte_ipsec_session is an aggregate structure that defines particular
+ * IPsec Security Association (SA) on a given security/crypto device:
+ * - pointer to the SA object
+ * - security session action type
+ * - pointer to security/crypto session, plus other related data
+ * - session/device specific functions to prepare/process IPsec packets.
+ */
+struct rte_ipsec_session {
+	/**
+	 * SA that session belongs to.
+	 * Note that multiple sessions can belong to the same SA.
+	 */
+	struct rte_ipsec_sa *sa;
+	/** session action type */
+	enum rte_security_session_action_type type;
+	/** session and related data */
+	union {
+		struct {
+			struct rte_cryptodev_sym_session *ses;
+		} crypto;
+		struct {
+			struct rte_security_session *ses;
+			struct rte_security_ctx *ctx;
+			uint32_t ol_flags;
+		} security;
+	};
+	/** functions to prepare/process IPsec packets */
+	struct rte_ipsec_sa_pkt_func pkt_func;
+} __rte_cache_aligned;
+
+/**
+ * Checks that inside the given rte_ipsec_session the crypto/security fields
+ * are filled correctly and sets up the function pointers based on these values.
+ * Expects that all fields except the IPsec processing function pointers
+ * (*pkt_func*) will be filled correctly by the caller.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object
+ * @return
+ *   - Zero if operation completed successfully.
+ *   - -EINVAL if the parameters are invalid.
+ */
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss);
+
+/**
+ * For input mbufs and given IPsec session prepare crypto ops that can be
+ * enqueued into the cryptodev associated with given session.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param cop
+ *   The address of an array of *num* pointers to the output *rte_crypto_op*
+ *   structures.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	return ss->pkt_func.prepare(ss, mb, cop, num);
+}
+
+/**
+ * Finalise processing of packets after the crypto-dev has finished with them,
+ * or process packets that are subject to inline IPsec offload.
+ * Expects that for each input packet:
+ *      - l2_len, l3_len are setup correctly
+ * Output mbufs will be:
+ * inbound - decrypted & authenticated, ESP(AH) related headers removed,
+ * *l2_len* and *l3_len* fields are updated.
+ * outbound - appropriate mbuf fields (ol_flags, tx_offloads, etc.)
+ * properly set up; if necessary, IP headers updated and ESP(AH) fields added.
+ * Note that erroneous mbufs are not freed by the function,
+ * but are placed beyond the last valid mbuf in the *mb* array.
+ * It is the user's responsibility to handle them further.
+ * @param ss
+ *   Pointer to the *rte_ipsec_session* object the packets belong to.
+ * @param mb
+ *   The address of an array of *num* pointers to *rte_mbuf* structures
+ *   which contain the input packets.
+ * @param num
+ *   The maximum number of packets to process.
+ * @return
+ *   Number of successfully processed packets, with error code set in rte_errno.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	return ss->pkt_func.process(ss, mb, num);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 1a66726b8..4d4f46e4f 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,10 +1,13 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_prepare;
+	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_session_prepare;
 
 	local: *;
 };
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index f5c893875..5465198ac 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -2,7 +2,7 @@
  * Copyright(c) 2018 Intel Corporation
  */
 
-#include <rte_ipsec_sa.h>
+#include <rte_ipsec.h>
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
@@ -333,3 +333,22 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 
 	return sz;
 }
+
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	RTE_SET_USED(sa);
+
+	rc = 0;
+	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
+
+	switch (ss->type) {
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 492521930..616cf1b9f 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -82,4 +82,8 @@ struct rte_ipsec_sa {
 
 } __rte_cache_aligned;
 
+int
+ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
+	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf);
+
 #endif /* _SA_H_ */
diff --git a/lib/librte_ipsec/ses.c b/lib/librte_ipsec/ses.c
new file mode 100644
index 000000000..11580970e
--- /dev/null
+++ b/lib/librte_ipsec/ses.c
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <rte_ipsec.h>
+#include "sa.h"
+
+static int
+session_check(struct rte_ipsec_session *ss)
+{
+	if (ss == NULL || ss->sa == NULL)
+		return -EINVAL;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE) {
+		if (ss->crypto.ses == NULL)
+			return -EINVAL;
+	} else {
+		if (ss->security.ses == NULL)
+			return -EINVAL;
+		if ((ss->type == RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||
+				ss->type ==
+				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) &&
+				ss->security.ctx == NULL)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+int __rte_experimental
+rte_ipsec_session_prepare(struct rte_ipsec_session *ss)
+{
+	int32_t rc;
+	struct rte_ipsec_sa_pkt_func fp;
+
+	rc = session_check(ss);
+	if (rc != 0)
+		return rc;
+
+	rc = ipsec_sa_pkt_func_select(ss, ss->sa, &fp);
+	if (rc != 0)
+		return rc;
+
+	ss->pkt_func = fp;
+
+	if (ss->type == RTE_SECURITY_ACTION_TYPE_NONE)
+		ss->crypto.ses->opaque_data = (uintptr_t)ss;
+	else
+		ss->security.ses->opaque_data = (uintptr_t)ss;
+
+	return 0;
+}
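
A possible application-side wrapper, sketched under the assumption that the SA
and the raw cryptodev session have already been created elsewhere:

#include <rte_ipsec.h>

/* Sketch: bind an SA to a crypto session and resolve the pkt_func pointers. */
static int
setup_lksd_none_session(struct rte_ipsec_session *ss, struct rte_ipsec_sa *sa,
	struct rte_cryptodev_sym_session *cses)
{
	ss->sa = sa;
	ss->type = RTE_SECURITY_ACTION_TYPE_NONE;
	ss->crypto.ses = cses;

	/* validates the fields above and fills ss->pkt_func */
	return rte_ipsec_session_prepare(ss);
}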
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v8 5/9] ipsec: implement SA data-path API
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                                 ` (4 preceding siblings ...)
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 4/9] ipsec: add SA data-path API Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 6/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
                                 ` (3 subsequent siblings)
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal

Provide implementation for rte_ipsec_pkt_crypto_prepare() and
rte_ipsec_pkt_process().
Current implementation:
 - supports ESP protocol tunnel mode.
 - supports ESP protocol transport mode.
 - supports ESN and replay window.
 - supports algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
 - covers all currently defined security session types:
        - RTE_SECURITY_ACTION_TYPE_NONE
        - RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
        - RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
        - RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL

For the first two types the SQN check/update is done in SW (inside the library).
For the last two types it is the HW/PMD responsibility.

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_ipsec/crypto.h    |  123 ++++
 lib/librte_ipsec/iph.h       |   84 +++
 lib/librte_ipsec/ipsec_sqn.h |  186 ++++++
 lib/librte_ipsec/pad.h       |   45 ++
 lib/librte_ipsec/sa.c        | 1133 +++++++++++++++++++++++++++++++++-
 5 files changed, 1569 insertions(+), 2 deletions(-)
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/pad.h

diff --git a/lib/librte_ipsec/crypto.h b/lib/librte_ipsec/crypto.h
new file mode 100644
index 000000000..61f5c1433
--- /dev/null
+++ b/lib/librte_ipsec/crypto.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _CRYPTO_H_
+#define _CRYPTO_H_
+
+/**
+ * @file crypto.h
+ * Contains crypto specific functions/structures/macros used internally
+ * by ipsec library.
+ */
+
+ /*
+  * AES-GCM devices have some specific requirements for IV and AAD formats.
+  * Ideally that would be handled by the driver itself.
+  */
+
+struct aead_gcm_iv {
+	uint32_t salt;
+	uint64_t iv;
+	uint32_t cnt;
+} __attribute__((packed));
+
+struct aead_gcm_aad {
+	uint32_t spi;
+	/*
+	 * RFC 4106, section 5:
+	 * Two formats of the AAD are defined:
+	 * one for 32-bit sequence numbers, and one for 64-bit ESN.
+	 */
+	union {
+		uint32_t u32[2];
+		uint64_t u64;
+	} sqn;
+	uint32_t align0; /* align to 16B boundary */
+} __attribute__((packed));
+
+struct gcm_esph_iv {
+	struct esp_hdr esph;
+	uint64_t iv;
+} __attribute__((packed));
+
+
+static inline void
+aead_gcm_iv_fill(struct aead_gcm_iv *gcm, uint64_t iv, uint32_t salt)
+{
+	gcm->salt = salt;
+	gcm->iv = iv;
+	gcm->cnt = rte_cpu_to_be_32(1);
+}
+
+/*
+ * RFC 4106, 5 AAD Construction
+ * spi and sqn should already be converted into network byte order.
+ * Make sure that not used bytes are zeroed.
+ */
+static inline void
+aead_gcm_aad_fill(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn,
+	int esn)
+{
+	aad->spi = spi;
+	if (esn)
+		aad->sqn.u64 = sqn;
+	else {
+		aad->sqn.u32[0] = sqn_low32(sqn);
+		aad->sqn.u32[1] = 0;
+	}
+	aad->align0 = 0;
+}
+
+static inline void
+gen_iv(uint64_t iv[IPSEC_MAX_IV_QWORD], rte_be64_t sqn)
+{
+	iv[0] = sqn;
+	iv[1] = 0;
+}
+
+/*
+ * from RFC 4303 3.3.2.1.4:
+ * If the ESN option is enabled for the SA, the high-order 32
+ * bits of the sequence number are appended after the Next Header field
+ * for purposes of this computation, but are not transmitted.
+ */
+
+/*
+ * Helper function that moves the ICV 4B further into the packet
+ * and inserts SQN.hibits into the freed 4B gap.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+insert_sqh(uint32_t sqh, void *picv, uint32_t icv_len)
+{
+	uint32_t *icv;
+	int32_t i;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = icv_len; i-- != 0; icv[i] = icv[i - 1])
+		;
+
+	icv[i] = sqh;
+}
+
+/*
+ * Helper function that moves the ICV 4B back towards the packet start
+ * and removes SQN.hibits.
+ * The icv parameter points to the new start of the ICV.
+ */
+static inline void
+remove_sqh(void *picv, uint32_t icv_len)
+{
+	uint32_t i, *icv;
+
+	RTE_ASSERT(icv_len % sizeof(uint32_t) == 0);
+
+	icv = picv;
+	icv_len = icv_len / sizeof(uint32_t);
+	for (i = 0; i != icv_len; i++)
+		icv[i] = icv[i + 1];
+}
+
+#endif /* _CRYPTO_H_ */
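
To make the two RFC 4106 AAD layouts concrete, a small sketch using the helper
above (the SPI and SQN values are arbitrary):

/* Sketch: AAD construction with and without ESN. */
static void
aad_fill_example(struct aead_gcm_aad *aad, rte_be32_t spi, rte_be64_t sqn)
{
	/* ESN disabled: AAD = SPI + SQN.low32, remaining bytes zeroed */
	aead_gcm_aad_fill(aad, spi, sqn, 0);

	/* ESN enabled: AAD = SPI + full 64-bit ESN */
	aead_gcm_aad_fill(aad, spi, sqn, 1);
}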
diff --git a/lib/librte_ipsec/iph.h b/lib/librte_ipsec/iph.h
new file mode 100644
index 000000000..58930cf18
--- /dev/null
+++ b/lib/librte_ipsec/iph.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _IPH_H_
+#define _IPH_H_
+
+/**
+ * @file iph.h
+ * Contains functions/structures/macros to manipulate IPv4/IPv6 headers
+ * used internally by ipsec library.
+ */
+
+/*
+ * Move preceding (L3) headers down to remove ESP header and IV.
+ */
+static inline void
+remove_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = hlen; i-- != 0; np[i] = op[i])
+		;
+}
+
+/*
+ * Move preceding (L3) headers up to free space for ESP header and IV.
+ */
+static inline void
+insert_esph(char *np, char *op, uint32_t hlen)
+{
+	uint32_t i;
+
+	for (i = 0; i != hlen; i++)
+		np[i] = op[i];
+}
+
+/* update original ip header fields for transport case */
+static inline int
+update_trs_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, uint32_t l3len, uint8_t proto)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+	int32_t rc;
+
+	if ((sa->type & RTE_IPSEC_SATP_IPV_MASK) == RTE_IPSEC_SATP_IPV4) {
+		v4h = p;
+		rc = v4h->next_proto_id;
+		v4h->next_proto_id = proto;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else if (l3len == sizeof(*v6h)) {
+		v6h = p;
+		rc = v6h->proto;
+		v6h->proto = proto;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	/* need to add support for IPv6 with options */
+	} else
+		rc = -ENOTSUP;
+
+	return rc;
+}
+
+/* update original and new ip header fields for tunnel case */
+static inline void
+update_tun_l3hdr(const struct rte_ipsec_sa *sa, void *p, uint32_t plen,
+		uint32_t l2len, rte_be16_t pid)
+{
+	struct ipv4_hdr *v4h;
+	struct ipv6_hdr *v6h;
+
+	if (sa->type & RTE_IPSEC_SATP_MODE_TUNLV4) {
+		v4h = p;
+		v4h->packet_id = pid;
+		v4h->total_length = rte_cpu_to_be_16(plen - l2len);
+	} else {
+		v6h = p;
+		v6h->payload_len = rte_cpu_to_be_16(plen - l2len -
+				sizeof(*v6h));
+	}
+}
+
+#endif /* _IPH_H_ */
diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 1935f6e30..6e18c34eb 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,45 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+/*
+ * Get SQN.hi32 bits; SQN is expected to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_hi32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return (sqn >> 32);
+#else
+	return sqn;
+#endif
+}
+
+/*
+ * Get SQN.low32 bits; SQN is expected to be in network byte order.
+ */
+static inline rte_be32_t
+sqn_low32(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 32);
+#endif
+}
+
+/*
+ * Get SQN.low16 bits; SQN is expected to be in network byte order.
+ */
+static inline rte_be16_t
+sqn_low16(rte_be64_t sqn)
+{
+#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN
+	return sqn;
+#else
+	return (sqn >> 48);
+#endif
+}
+
 /*
  * for given size, calculate required number of buckets.
  */
@@ -30,6 +69,153 @@ replay_num_bucket(uint32_t wsz)
 	return nb;
 }
 
+/*
+ * According to RFC 4303 A2.1, determine the high-order bits of the sequence
+ * number. Uses 32-bit arithmetic inside, returns uint64_t.
+ */
+static inline uint64_t
+reconstruct_esn(uint64_t t, uint32_t sqn, uint32_t w)
+{
+	uint32_t th, tl, bl;
+
+	tl = t;
+	th = t >> 32;
+	bl = tl - w + 1;
+
+	/* case A: window is within one sequence number subspace */
+	if (tl >= (w - 1))
+		th += (sqn < bl);
+	/* case B: window spans two sequence number subspaces */
+	else if (th != 0)
+		th -= (sqn >= bl);
+
+	/* return constructed sequence with proper high-order bits */
+	return (uint64_t)th << 32 | sqn;
+}
+
+/**
+ * Perform the replay checking.
+ *
+ * struct rte_ipsec_sa contains the window and window related parameters,
+ * such as the window size, bitmask, and the last acknowledged sequence number.
+ *
+ * Based on RFC 6479.
+ * Blocks are 64 bits unsigned integers
+ */
+static inline int32_t
+esn_inb_check_sqn(const struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* seq is larger than lastseq */
+	if (sqn > rsn->sqn)
+		return 0;
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* seq is inside the window */
+	bit = sqn & WINDOW_BIT_LOC_MASK;
+	bucket = (sqn >> WINDOW_BUCKET_BITS) & sa->replay.bucket_index_mask;
+
+	/* already seen packet */
+	if (rsn->window[bucket] & ((uint64_t)1 << bit))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * For outbound SA perform the sequence number update.
+ */
+static inline uint64_t
+esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
+{
+	uint64_t n, s, sqn;
+
+	n = *num;
+	sqn = sa->sqn.outb + n;
+	sa->sqn.outb = sqn;
+
+	/* overflow */
+	if (sqn > sa->sqn_mask) {
+		s = sqn - sa->sqn_mask;
+		*num = (s < n) ?  n - s : 0;
+	}
+
+	return sqn - n;
+}
+
+/**
+ * For inbound SA perform the sequence number and replay window update.
+ */
+static inline int32_t
+esn_inb_update_sqn(struct replay_sqn *rsn, const struct rte_ipsec_sa *sa,
+	uint64_t sqn)
+{
+	uint32_t bit, bucket, last_bucket, new_bucket, diff, i;
+
+	/* replay not enabled */
+	if (sa->replay.win_sz == 0)
+		return 0;
+
+	/* handle ESN */
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	/* seq is outside window */
+	if (sqn == 0 || sqn + sa->replay.win_sz < rsn->sqn)
+		return -EINVAL;
+
+	/* update the bit */
+	bucket = (sqn >> WINDOW_BUCKET_BITS);
+
+	/* check if the seq is within the range */
+	if (sqn > rsn->sqn) {
+		last_bucket = rsn->sqn >> WINDOW_BUCKET_BITS;
+		diff = bucket - last_bucket;
+		/* seq is way after the range of WINDOW_SIZE */
+		if (diff > sa->replay.nb_bucket)
+			diff = sa->replay.nb_bucket;
+
+		for (i = 0; i != diff; i++) {
+			new_bucket = (i + last_bucket + 1) &
+				sa->replay.bucket_index_mask;
+			rsn->window[new_bucket] = 0;
+		}
+		rsn->sqn = sqn;
+	}
+
+	bucket &= sa->replay.bucket_index_mask;
+	bit = (uint64_t)1 << (sqn & WINDOW_BIT_LOC_MASK);
+
+	/* already seen packet */
+	if (rsn->window[bucket] & bit)
+		return -EINVAL;
+
+	rsn->window[bucket] |= bit;
+	return 0;
+}
+
+/**
+ * To allow multiple readers and a single writer for the
+ * SA replay window information and sequence number (RSN),
+ * a basic RCU schema is used:
+ * the SA has 2 copies of the RSN (one for readers, another for the writer).
+ * Each RSN contains a rwlock that has to be grabbed (for read/write)
+ * to avoid races between readers and the writer.
+ * The writer is responsible for making a copy of the reader RSN,
+ * updating it and marking the newly updated RSN as the reader one.
+ * This approach is intended to minimize contention and cache sharing
+ * between the writer and the readers.
+ */
+
 /**
  * Based on number of buckets calculated required size for the
  * structure that holds replay window and sequence number (RSN) information.
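
A worked example for reconstruct_esn() above may help; the numbers are made up,
but the results follow from the arithmetic in the function:

/*
 * Suppose the last seen number is t = 0x100000010 (th = 1, tl = 0x10) and
 * the window size is w = 64, so the window bottom wraps into the previous
 * 2^32 subspace (case B above).
 */
static void
reconstruct_esn_example(void)
{
	uint64_t t = 0x100000010ULL;

	/* sqn lands in the current-subspace part of the window: th stays 1 */
	RTE_ASSERT(reconstruct_esn(t, 0x5, 64) == 0x100000005ULL);

	/* sqn lands in the wrapped (previous-subspace) part: th - 1 = 0 */
	RTE_ASSERT(reconstruct_esn(t, 0xffffffff, 64) == 0xffffffffULL);
}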
diff --git a/lib/librte_ipsec/pad.h b/lib/librte_ipsec/pad.h
new file mode 100644
index 000000000..2f5ccd00e
--- /dev/null
+++ b/lib/librte_ipsec/pad.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _PAD_H_
+#define _PAD_H_
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+#endif /* _PAD_H_ */
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index 5465198ac..d263e7bcf 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -6,9 +6,13 @@
 #include <rte_esp.h>
 #include <rte_ip.h>
 #include <rte_errno.h>
+#include <rte_cryptodev.h>
 
 #include "sa.h"
 #include "ipsec_sqn.h"
+#include "crypto.h"
+#include "iph.h"
+#include "pad.h"
 
 /* some helper structures */
 struct crypto_xform {
@@ -101,6 +105,9 @@ rte_ipsec_sa_fini(struct rte_ipsec_sa *sa)
 	memset(sa, 0, sa->size);
 }
 
+/*
+ * Determine expected SA type based on input parameters.
+ */
 static int
 fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 {
@@ -155,6 +162,9 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	return 0;
 }
 
+/*
+ * Init ESP inbound specific things.
+ */
 static void
 esp_inb_init(struct rte_ipsec_sa *sa)
 {
@@ -165,6 +175,9 @@ esp_inb_init(struct rte_ipsec_sa *sa)
 	sa->ctp.cipher.length = sa->icv_len + sa->ctp.cipher.offset;
 }
 
+/*
+ * Init ESP inbound tunnel specific things.
+ */
 static void
 esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -172,6 +185,9 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_inb_init(sa);
 }
 
+/*
+ * Init ESP outbound specific things.
+ */
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
@@ -190,6 +206,9 @@ esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 	}
 }
 
+/*
+ * Init ESP outbound tunnel specific things.
+ */
 static void
 esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 {
@@ -201,6 +220,9 @@ esp_outb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 	esp_outb_init(sa, sa->hdr_len);
 }
 
+/*
+ * helper function, init SA structure.
+ */
 static int
 esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	const struct crypto_xform *cxf)
@@ -212,6 +234,7 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		/* RFC 4106 */
 		if (cxf->aead->algo != RTE_CRYPTO_AEAD_AES_GCM)
 			return -EINVAL;
+		sa->aad_len = sizeof(struct aead_gcm_aad);
 		sa->icv_len = cxf->aead->digest_length;
 		sa->iv_ofs = cxf->aead->iv.offset;
 		sa->iv_len = sizeof(uint64_t);
@@ -334,18 +357,1124 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return sz;
 }
 
+static inline void
+mbuf_bulk_copy(struct rte_mbuf *dst[], struct rte_mbuf * const src[],
+	uint32_t num)
+{
+	uint32_t i;
+
+	for (i = 0; i != num; i++)
+		dst[i] = src[i];
+}
+
+/*
+ * setup crypto ops for LOOKASIDE_NONE (pure crypto) type of devices.
+ */
+static inline void
+lksd_none_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_WITH_SESSION;
+		sop->m_src = mb[i];
+		__rte_crypto_sym_op_attach_sym_session(sop, ss->crypto.ses);
+	}
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP outbound packet.
+ */
+static inline void
+esp_outb_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, const uint64_t ivp[IPSEC_MAX_IV_QWORD],
+	const union sym_op_data *icv, uint32_t hlen, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->aead.data.length = sa->ctp.cipher.length + plen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = sa->ctp.cipher.offset + hlen;
+		sop->cipher.data.length = sa->ctp.cipher.length + plen;
+		sop->auth.data.offset = sa->ctp.auth.offset + hlen;
+		sop->auth.data.length = sa->ctp.auth.length + plen;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound tunnel case.
+ */
+static inline int32_t
+esp_outb_tun_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	union sym_op_data *icv)
+{
+	uint32_t clen, hlen, l2len, pdlen, pdofs, plen, tlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	/* calculate extra header space required */
+	hlen = sa->hdr_len + sa->iv_len + sizeof(*esph);
+
+	/* size of ipsec protected data */
+	l2len = mb->l2_len;
+	plen = mb->pkt_len - mb->l2_len;
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and prepend */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend header */
+	ph = rte_pktmbuf_prepend(mb, hlen - l2len);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* update pkt l2/l3 len */
+	mb->l2_len = sa->hdr_l3_off;
+	mb->l3_len = sa->hdr_len - sa->hdr_l3_off;
+
+	/* copy tunnel pkt header */
+	rte_memcpy(ph, sa->hdr, sa->hdr_len);
+
+	/* update original and new ip header fields */
+	update_tun_l3hdr(sa, ph + sa->hdr_l3_off, mb->pkt_len, sa->hdr_l3_off,
+			sqn_low16(sqc));
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + sa->hdr_len);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = sa->proto;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+outb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	uint32_t *psqh;
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0) {
+		psqh = (uint32_t *)(icv->va - sa->sqh_len);
+		psqh[0] = sqn_hi32(sqc);
+	}
+
+	/*
+	 * fill IV and AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM .
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound tunnel case.
+ */
+static uint16_t
+outb_tun_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, 0, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup/update packet data and metadata for ESP outbound transport case.
+ */
+static inline int32_t
+esp_outb_trs_pkt_prepare(struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const uint64_t ivp[IPSEC_MAX_IV_QWORD], struct rte_mbuf *mb,
+	uint32_t l2len, uint32_t l3len, union sym_op_data *icv)
+{
+	uint8_t np;
+	uint32_t clen, hlen, pdlen, pdofs, plen, tlen, uhlen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	char *ph, *pt;
+	uint64_t *iv;
+
+	uhlen = l2len + l3len;
+	plen = mb->pkt_len - uhlen;
+
+	/* calculate extra header space required */
+	hlen = sa->iv_len + sizeof(*esph);
+
+	/* number of bytes to encrypt */
+	clen = plen + sizeof(*espt);
+	clen = RTE_ALIGN_CEIL(clen, sa->pad_align);
+
+	/* pad length + esp tail */
+	pdlen = clen - plen;
+	tlen = pdlen + sa->icv_len;
+
+	/* do append and insert */
+	ml = rte_pktmbuf_lastseg(mb);
+	if (tlen + sa->sqh_len + sa->aad_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	/* prepend space for ESP header */
+	ph = rte_pktmbuf_prepend(mb, hlen);
+	if (ph == NULL)
+		return -ENOSPC;
+
+	/* append tail */
+	pdofs = ml->data_len;
+	ml->data_len += tlen;
+	mb->pkt_len += tlen;
+	pt = rte_pktmbuf_mtod_offset(ml, typeof(pt), pdofs);
+
+	/* shift L2/L3 headers */
+	insert_esph(ph, ph + hlen, uhlen);
+
+	/* update ip header fields */
+	np = update_trs_l3hdr(sa, ph + l2len, mb->pkt_len, l2len, l3len,
+			IPPROTO_ESP);
+
+	/* update spi, seqn and iv */
+	esph = (struct esp_hdr *)(ph + uhlen);
+	iv = (uint64_t *)(esph + 1);
+	rte_memcpy(iv, ivp, sa->iv_len);
+
+	esph->spi = sa->spi;
+	esph->seq = sqn_low32(sqc);
+
+	/* offset for ICV */
+	pdofs += pdlen + sa->sqh_len;
+
+	/* pad length */
+	pdlen -= sizeof(*espt);
+
+	/* copy padding data */
+	rte_memcpy(pt, esp_pad_bytes, pdlen);
+
+	/* update esp trailer */
+	espt = (struct esp_tail *)(pt + pdlen);
+	espt->pad_len = pdlen;
+	espt->next_proto = np;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, pdofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, pdofs);
+
+	return clen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP outbound transport case.
+ */
+static uint16_t
+outb_trs_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, setup crypto op */
+		if (rc >= 0) {
+			mb[k] = mb[i];
+			outb_pkt_xprepare(sa, sqc, &icv);
+			esp_outb_cop_prepare(cop[k], sa, iv, &icv, l2 + l3, rc);
+			k++;
+		/* failure, put packet into the death-row */
+		} else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * setup crypto op and crypto sym op for ESP inbound tunnel packet.
+ */
+static inline int32_t
+esp_inb_tun_cop_prepare(struct rte_crypto_op *cop,
+	const struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	const union sym_op_data *icv, uint32_t pofs, uint32_t plen)
+{
+	struct rte_crypto_sym_op *sop;
+	struct aead_gcm_iv *gcm;
+	uint64_t *ivc, *ivp;
+	uint32_t clen;
+
+	clen = plen - sa->ctp.cipher.length;
+	if ((int32_t)clen < 0 || (clen & (sa->pad_align - 1)) != 0)
+		return -EINVAL;
+
+	/* fill sym op fields */
+	sop = cop->sym;
+
+	/* AEAD (AES_GCM) case */
+	if (sa->aad_len != 0) {
+		sop->aead.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->aead.data.length = clen;
+		sop->aead.digest.data = icv->va;
+		sop->aead.digest.phys_addr = icv->pa;
+		sop->aead.aad.data = icv->va + sa->icv_len;
+		sop->aead.aad.phys_addr = icv->pa + sa->icv_len;
+
+		/* fill AAD IV (located inside crypto op) */
+		gcm = rte_crypto_op_ctod_offset(cop, struct aead_gcm_iv *,
+			sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		aead_gcm_iv_fill(gcm, ivp[0], sa->salt);
+	/* CRYPT+AUTH case */
+	} else {
+		sop->cipher.data.offset = pofs + sa->ctp.cipher.offset;
+		sop->cipher.data.length = clen;
+		sop->auth.data.offset = pofs + sa->ctp.auth.offset;
+		sop->auth.data.length = plen - sa->ctp.auth.length;
+		sop->auth.digest.data = icv->va;
+		sop->auth.digest.phys_addr = icv->pa;
+
+		/* copy iv from the input packet to the cop */
+		ivc = rte_crypto_op_ctod_offset(cop, uint64_t *, sa->iv_ofs);
+		ivp = rte_pktmbuf_mtod_offset(mb, uint64_t *,
+			pofs + sizeof(struct esp_hdr));
+		rte_memcpy(ivc, ivp, sa->iv_len);
+	}
+	return 0;
+}
+
+/*
+ * for pure cryptodev (lookaside none) depending on SA settings,
+ * we might have to write some extra data to the packet.
+ */
+static inline void
+inb_pkt_xprepare(const struct rte_ipsec_sa *sa, rte_be64_t sqc,
+	const union sym_op_data *icv)
+{
+	struct aead_gcm_aad *aad;
+
+	/* insert SQN.hi between ESP trailer and ICV */
+	if (sa->sqh_len != 0)
+		insert_sqh(sqn_hi32(sqc), icv->va, sa->icv_len);
+
+	/*
+	 * fill AAD fields, if any (aad fields are placed after icv),
+	 * right now we support only one AEAD algorithm: AES-GCM.
+	 */
+	if (sa->aad_len != 0) {
+		aad = (struct aead_gcm_aad *)(icv->va + sa->icv_len);
+		aead_gcm_aad_fill(aad, sa->spi, sqc, IS_ESN(sa));
+	}
+}
+
+/*
+ * setup/update packet data and metadata for ESP inbound tunnel case.
+ */
+static inline int32_t
+esp_inb_tun_pkt_prepare(const struct rte_ipsec_sa *sa,
+	const struct replay_sqn *rsn, struct rte_mbuf *mb,
+	uint32_t hlen, union sym_op_data *icv)
+{
+	int32_t rc;
+	uint64_t sqn;
+	uint32_t icv_ofs, plen;
+	struct rte_mbuf *ml;
+	struct esp_hdr *esph;
+
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+
+	/*
+	 * retrieve and reconstruct SQN, then check it, then
+	 * convert it back into network byte order.
+	 */
+	sqn = rte_be_to_cpu_32(esph->seq);
+	if (IS_ESN(sa))
+		sqn = reconstruct_esn(rsn->sqn, sqn, sa->replay.win_sz);
+
+	rc = esn_inb_check_sqn(rsn, sa, sqn);
+	if (rc != 0)
+		return rc;
+
+	sqn = rte_cpu_to_be_64(sqn);
+
+	/* start packet manipulation */
+	plen = mb->pkt_len;
+	plen = plen - hlen;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	icv_ofs = ml->data_len - sa->icv_len + sa->sqh_len;
+
+	/* we have to allocate space for AAD somewhere;
+	 * right now just use the free trailing space of the last segment.
+	 * It would probably be more convenient to reserve space for the AAD
+	 * inside rte_crypto_op itself
+	 * (space for the IV is already reserved inside the cop).
+	 */
+	if (sa->aad_len + sa->sqh_len > rte_pktmbuf_tailroom(ml))
+		return -ENOSPC;
+
+	icv->va = rte_pktmbuf_mtod_offset(ml, void *, icv_ofs);
+	icv->pa = rte_pktmbuf_iova_offset(ml, icv_ofs);
+
+	inb_pkt_xprepare(sa, sqn, icv);
+	return plen;
+}
+
+/*
+ * setup/update packets and crypto ops for ESP inbound case.
+ */
+static uint16_t
+inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	struct rte_crypto_op *cop[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, hl;
+	struct rte_ipsec_sa *sa;
+	struct replay_sqn *rsn;
+	union sym_op_data icv;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+
+		hl = mb[i]->l2_len + mb[i]->l3_len;
+		rc = esp_inb_tun_pkt_prepare(sa, rsn, mb[i], hl, &icv);
+		if (rc >= 0)
+			rc = esp_inb_tun_cop_prepare(cop[k], sa, mb[i], &icv,
+				hl, rc);
+
+		if (rc == 0)
+			mb[k++] = mb[i];
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	/* update cops */
+	lksd_none_cop_prepare(ss, mb, cop, k);
+
+	/* copy not prepared mbufs beyond good ones */
+	if (k != num && k != 0)
+		mbuf_bulk_copy(mb + k, dr, num - k);
+
+	return k;
+}
+
+/*
+ *  setup crypto ops for LOOKASIDE_PROTO type of devices.
+ */
+static inline void
+lksd_proto_cop_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	uint32_t i;
+	struct rte_crypto_sym_op *sop;
+
+	for (i = 0; i != num; i++) {
+		sop = cop[i]->sym;
+		cop[i]->type = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+		cop[i]->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		cop[i]->sess_type = RTE_CRYPTO_OP_SECURITY_SESSION;
+		sop->m_src = mb[i];
+		__rte_security_attach_session(sop, ss->security.ses);
+	}
+}
+
+/*
+ *  setup packets and crypto ops for LOOKASIDE_PROTO type of devices.
+ *  Note that for LOOKASIDE_PROTO all packet modifications will be
+ *  performed by PMD/HW.
+ *  SW has only to prepare crypto op.
+ */
+static uint16_t
+lksd_proto_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], struct rte_crypto_op *cop[], uint16_t num)
+{
+	lksd_proto_cop_prepare(ss, mb, cop, num);
+	return num;
+}
+
+/*
+ * process ESP inbound tunnel packet.
+ */
+static inline int
+esp_inb_tun_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/*
+	 * check padding and next proto.
+	 * return an error if something is wrong.
+	 */
+	pd = (char *)espt - espt->pad_len;
+	if (espt->next_proto != sa->proto ||
+			memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* cut off L2/L3 headers, ESP header and IV */
+	hlen = mb->l2_len + mb->l3_len;
+	esph = rte_pktmbuf_mtod_offset(mb, struct esp_hdr *, hlen);
+	rte_pktmbuf_adj(mb, hlen + sa->ctp.cipher.offset);
+
+	/* retrieve SQN for later check */
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* reset mbuf metadata: L2/L3 len, packet type */
+	mb->packet_type = RTE_PTYPE_UNKNOWN;
+	mb->l2_len = 0;
+	mb->l3_len = 0;
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+/*
+ * process ESP inbound transport packet.
+ */
+static inline int
+esp_inb_trs_single_pkt_process(struct rte_ipsec_sa *sa, struct rte_mbuf *mb,
+	uint32_t *sqn)
+{
+	uint32_t hlen, icv_len, l2len, l3len, tlen;
+	struct esp_hdr *esph;
+	struct esp_tail *espt;
+	struct rte_mbuf *ml;
+	char *np, *op, *pd;
+
+	if (mb->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED)
+		return -EBADMSG;
+
+	icv_len = sa->icv_len;
+
+	ml = rte_pktmbuf_lastseg(mb);
+	espt = rte_pktmbuf_mtod_offset(ml, struct esp_tail *,
+		ml->data_len - icv_len - sizeof(*espt));
+
+	/* check padding, return an error if something is wrong. */
+	pd = (char *)espt - espt->pad_len;
+	if (memcmp(pd, esp_pad_bytes, espt->pad_len))
+		return -EINVAL;
+
+	/* cut off ICV, ESP tail and padding bytes */
+	tlen = icv_len + sizeof(*espt) + espt->pad_len;
+	ml->data_len -= tlen;
+	mb->pkt_len -= tlen;
+
+	/* retrieve SQN for later check */
+	l2len = mb->l2_len;
+	l3len = mb->l3_len;
+	hlen = l2len + l3len;
+	op = rte_pktmbuf_mtod(mb, char *);
+	esph = (struct esp_hdr *)(op + hlen);
+	*sqn = rte_be_to_cpu_32(esph->seq);
+
+	/* cut off ESP header and IV, update L3 header */
+	np = rte_pktmbuf_adj(mb, sa->ctp.cipher.offset);
+	remove_esph(np, op, hlen);
+	update_trs_l3hdr(sa, np + l2len, mb->pkt_len, l2len, l3len,
+			espt->next_proto);
+
+	/* reset mbuf packet type */
+	mb->packet_type &= (RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK);
+
+	/* clear the PKT_RX_SEC_OFFLOAD flag if set */
+	mb->ol_flags &= ~(mb->ol_flags & PKT_RX_SEC_OFFLOAD);
+	return 0;
+}
+
+/*
+ * for group of ESP inbound packets perform SQN check and update.
+ */
+static inline uint16_t
+esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
+	struct rte_mbuf *mb[], struct rte_mbuf *dr[], uint16_t num)
+{
+	uint32_t i, k;
+	struct replay_sqn *rsn;
+
+	rsn = sa->sqn.inb;
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if (esn_inb_update_sqn(rsn, sa, sqn[i]) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound tunnel packets.
+ */
+static uint16_t
+inb_tun_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_ipsec_sa *sa;
+	uint32_t sqn[num];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_tun_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process group of ESP inbound transport packets.
+ */
+static uint16_t
+inb_trs_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	uint32_t sqn[num];
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	/* process packets, extract seq numbers */
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		/* good packet */
+		if (esp_inb_trs_single_pkt_process(sa, mb[i], sqn + k) == 0)
+			mb[k++] = mb[i];
+		/* bad packet, will be dropped from further processing */
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* update seq # and replay window */
+	k = esp_inb_rsn_update(sa, sqn, mb, dr + i - k, k);
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * process outbound packets for SA with ESN support,
+ * for algorithms that require SQN.hibits to be implicitly included
+ * into digest computation.
+ * In that case we have to move ICV bytes back to their proper place.
+ */
+static uint16_t
+outb_sqh_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k, icv_len, *icv;
+	struct rte_mbuf *ml;
+	struct rte_ipsec_sa *sa;
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	k = 0;
+	icv_len = sa->icv_len;
+
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0) {
+			ml = rte_pktmbuf_lastseg(mb[i]);
+			icv = rte_pktmbuf_mtod_offset(ml, void *,
+				ml->data_len - icv_len);
+			remove_sqh(icv, icv_len);
+			mb[k++] = mb[i];
+		} else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * simplest pkt process routine:
+ * all actual processing is already done by HW/PMD,
+ * just check mbuf ol_flags.
+ * used for:
+ * - inbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+ * - inbound/outbound for RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+ * - outbound for RTE_SECURITY_ACTION_TYPE_NONE when ESN is disabled
+ */
+static uint16_t
+pkt_flag_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
+	uint16_t num)
+{
+	uint32_t i, k;
+	struct rte_mbuf *dr[num];
+
+	RTE_SET_USED(ss);
+
+	k = 0;
+	for (i = 0; i != num; i++) {
+		if ((mb[i]->ol_flags & PKT_RX_SEC_OFFLOAD_FAILED) == 0)
+			mb[k++] = mb[i];
+		else
+			dr[i - k] = mb[i];
+	}
+
+	/* handle unprocessed mbufs */
+	if (k != num) {
+		rte_errno = EBADMSG;
+		if (k != 0)
+			mbuf_bulk_copy(mb + k, dr, num - k);
+	}
+
+	return k;
+}
+
+/*
+ * prepare packets for inline ipsec processing:
+ * set ol_flags and attach metadata.
+ */
+static inline void
+inline_outb_mbuf_prepare(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	uint32_t i, ol_flags;
+
+	ol_flags = ss->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA;
+	for (i = 0; i != num; i++) {
+
+		mb[i]->ol_flags |= PKT_TX_SEC_OFFLOAD;
+		if (ol_flags != 0)
+			rte_security_set_pkt_metadata(ss->security.ctx,
+				ss->security.ses, mb[i], NULL);
+	}
+}
+
+/*
+ * process group of ESP outbound tunnel packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_tun_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_tun_pkt_prepare(sa, sqc, iv, mb[i], &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * process group of ESP outbound transport packets destined for
+ * INLINE_CRYPTO type of device.
+ */
+static uint16_t
+inline_outb_trs_pkt_process(const struct rte_ipsec_session *ss,
+	struct rte_mbuf *mb[], uint16_t num)
+{
+	int32_t rc;
+	uint32_t i, k, n, l2, l3;
+	uint64_t sqn;
+	rte_be64_t sqc;
+	struct rte_ipsec_sa *sa;
+	union sym_op_data icv;
+	uint64_t iv[IPSEC_MAX_IV_QWORD];
+	struct rte_mbuf *dr[num];
+
+	sa = ss->sa;
+
+	n = num;
+	sqn = esn_outb_update_sqn(sa, &n);
+	if (n != num)
+		rte_errno = EOVERFLOW;
+
+	k = 0;
+	for (i = 0; i != n; i++) {
+
+		l2 = mb[i]->l2_len;
+		l3 = mb[i]->l3_len;
+
+		sqc = rte_cpu_to_be_64(sqn + i);
+		gen_iv(iv, sqc);
+
+		/* try to update the packet itself */
+		rc = esp_outb_trs_pkt_prepare(sa, sqc, iv, mb[i],
+				l2, l3, &icv);
+
+		/* success, update mbuf fields */
+		if (rc >= 0)
+			mb[k++] = mb[i];
+		/* failure, put packet into the death-row */
+		else {
+			dr[i - k] = mb[i];
+			rte_errno = -rc;
+		}
+	}
+
+	inline_outb_mbuf_prepare(ss, mb, k);
+
+	/* copy not processed mbufs beyond good ones */
+	if (k != n && k != 0)
+		mbuf_bulk_copy(mb + k, dr, n - k);
+
+	return k;
+}
+
+/*
+ * outbound for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+ * actual processing is done by HW/PMD, just set flags and metadata.
+ */
+static uint16_t
+outb_inline_proto_process(const struct rte_ipsec_session *ss,
+		struct rte_mbuf *mb[], uint16_t num)
+{
+	inline_outb_mbuf_prepare(ss, mb, num);
+	return num;
+}
+
+/*
+ * Select packet processing function for session on LOOKASIDE_NONE
+ * type of device.
+ */
+static int
+lksd_none_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = inb_pkt_prepare;
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->prepare = outb_tun_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->prepare = outb_trs_prepare;
+		pf->process = (sa->sqh_len != 0) ?
+			outb_sqh_process : pkt_flag_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for session on INLINE_CRYPTO
+ * type of device.
+ */
+static int
+inline_crypto_pkt_func_select(const struct rte_ipsec_sa *sa,
+		struct rte_ipsec_sa_pkt_func *pf)
+{
+	int32_t rc;
+
+	static const uint64_t msk = RTE_IPSEC_SATP_DIR_MASK |
+			RTE_IPSEC_SATP_MODE_MASK;
+
+	rc = 0;
+	switch (sa->type & msk) {
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_IB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inb_trs_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV4):
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TUNLV6):
+		pf->process = inline_outb_tun_pkt_process;
+		break;
+	case (RTE_IPSEC_SATP_DIR_OB | RTE_IPSEC_SATP_MODE_TRANS):
+		pf->process = inline_outb_trs_pkt_process;
+		break;
+	default:
+		rc = -ENOTSUP;
+	}
+
+	return rc;
+}
+
+/*
+ * Select packet processing function for the given session based on the SA
+ * parameters and the type of device associated with the session.
+ */
 int
 ipsec_sa_pkt_func_select(const struct rte_ipsec_session *ss,
 	const struct rte_ipsec_sa *sa, struct rte_ipsec_sa_pkt_func *pf)
 {
 	int32_t rc;
 
-	RTE_SET_USED(sa);
-
 	rc = 0;
 	pf[0] = (struct rte_ipsec_sa_pkt_func) { 0 };
 
 	switch (ss->type) {
+	case RTE_SECURITY_ACTION_TYPE_NONE:
+		rc = lksd_none_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO:
+		rc = inline_crypto_pkt_func_select(sa, pf);
+		break;
+	case RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL:
+		if ((sa->type & RTE_IPSEC_SATP_DIR_MASK) ==
+				RTE_IPSEC_SATP_DIR_IB)
+			pf->process = pkt_flag_process;
+		else
+			pf->process = outb_inline_proto_process;
+		break;
+	case RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL:
+		pf->prepare = lksd_proto_prepare;
+		pf->process = pkt_flag_process;
+		break;
 	default:
 		rc = -ENOTSUP;
 	}
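
All data-path routines above share the same error convention: failed mbufs are
compacted past the K successfully handled ones and rte_errno holds the last
error code. A caller-side sketch (the drop policy is an application choice):

static void
process_and_drop(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
	uint16_t num)
{
	uint16_t i, k;

	k = rte_ipsec_pkt_process(ss, mb, num);

	/* mb[0..k-1] succeeded; mb[k..num-1] failed (see rte_errno) */
	for (i = k; i != num; i++)
		rte_pktmbuf_free(mb[i]);
}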
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v8 6/9] ipsec: rework SA replay window/SQN for MT environment
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                                 ` (5 preceding siblings ...)
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 5/9] ipsec: implement " Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 7/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
                                 ` (2 subsequent siblings)
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

With these changes the functions:
  - rte_ipsec_pkt_crypto_prepare
  - rte_ipsec_pkt_process
 can be safely used in an MT environment, as long as the user can guarantee
 that they obey the multiple readers/single writer model for SQN+replay_window
 operations.
 To be more specific:
 for an outbound SA there are no restrictions;
 for an inbound SA the caller has to guarantee that at any given moment
 only one thread is executing rte_ipsec_pkt_process() for the given SA.
 Note that it is the caller's responsibility to maintain the correct order
 of packets to be processed.
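
One way an application could serialize inbound process() invocations, sketched
with a hypothetical per-SA spinlock (the lock is an application object, not
part of the library):

#include <rte_spinlock.h>

struct app_inb_sa {
	struct rte_ipsec_session ss;
	rte_spinlock_t lock;
};

static uint16_t
inb_process_mt(struct app_inb_sa *isa, struct rte_mbuf *mb[], uint16_t num)
{
	uint16_t k;

	/* only one thread at a time may run process() for this inbound SA */
	rte_spinlock_lock(&isa->lock);
	k = rte_ipsec_pkt_process(&isa->ss, mb, num);
	rte_spinlock_unlock(&isa->lock);

	return k;
}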

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_ipsec/ipsec_sqn.h    | 113 +++++++++++++++++++++++++++++++-
 lib/librte_ipsec/rte_ipsec_sa.h |  33 ++++++++++
 lib/librte_ipsec/sa.c           |  80 +++++++++++++++++-----
 lib/librte_ipsec/sa.h           |  21 +++++-
 4 files changed, 225 insertions(+), 22 deletions(-)

diff --git a/lib/librte_ipsec/ipsec_sqn.h b/lib/librte_ipsec/ipsec_sqn.h
index 6e18c34eb..7de10bef5 100644
--- a/lib/librte_ipsec/ipsec_sqn.h
+++ b/lib/librte_ipsec/ipsec_sqn.h
@@ -15,6 +15,8 @@
 
 #define IS_ESN(sa)	((sa)->sqn_mask == UINT64_MAX)
 
+#define	SQN_ATOMIC(sa)	((sa)->type & RTE_IPSEC_SATP_SQN_ATOM)
+
 /*
  * gets SQN.hi32 bits, SQN supposed to be in network byte order.
  */
@@ -140,8 +142,12 @@ esn_outb_update_sqn(struct rte_ipsec_sa *sa, uint32_t *num)
 	uint64_t n, s, sqn;
 
 	n = *num;
-	sqn = sa->sqn.outb + n;
-	sa->sqn.outb = sqn;
+	if (SQN_ATOMIC(sa))
+		sqn = (uint64_t)rte_atomic64_add_return(&sa->sqn.outb.atom, n);
+	else {
+		sqn = sa->sqn.outb.raw + n;
+		sa->sqn.outb.raw = sqn;
+	}
 
 	/* overflow */
 	if (sqn > sa->sqn_mask) {
@@ -231,4 +237,107 @@ rsn_size(uint32_t nb_bucket)
 	return sz;
 }
 
+/**
+ * Copy replay window and SQN.
+ */
+static inline void
+rsn_copy(const struct rte_ipsec_sa *sa, uint32_t dst, uint32_t src)
+{
+	uint32_t i, n;
+	struct replay_sqn *d;
+	const struct replay_sqn *s;
+
+	d = sa->sqn.inb.rsn[dst];
+	s = sa->sqn.inb.rsn[src];
+
+	n = sa->replay.nb_bucket;
+
+	d->sqn = s->sqn;
+	for (i = 0; i != n; i++)
+		d->window[i] = s->window[i];
+}
+
+/**
+ * Get RSN for read-only access.
+ */
+static inline struct replay_sqn *
+rsn_acquire(struct rte_ipsec_sa *sa)
+{
+	uint32_t n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.rdidx;
+	rsn = sa->sqn.inb.rsn[n];
+
+	if (!SQN_ATOMIC(sa))
+		return rsn;
+
+	/* check there are no writers */
+	while (rte_rwlock_read_trylock(&rsn->rwl) < 0) {
+		rte_pause();
+		n = sa->sqn.inb.rdidx;
+		rsn = sa->sqn.inb.rsn[n];
+		rte_compiler_barrier();
+	}
+
+	return rsn;
+}
+
+/**
+ * Release read-only access for RSN.
+ */
+static inline void
+rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	if (SQN_ATOMIC(sa))
+		rte_rwlock_read_unlock(&rsn->rwl);
+}
+
+/**
+ * Start RSN update.
+ */
+static inline struct replay_sqn *
+rsn_update_start(struct rte_ipsec_sa *sa)
+{
+	uint32_t k, n;
+	struct replay_sqn *rsn;
+
+	n = sa->sqn.inb.wridx;
+
+	/* no active writers */
+	RTE_ASSERT(n == sa->sqn.inb.rdidx);
+
+	if (!SQN_ATOMIC(sa))
+		return sa->sqn.inb.rsn[n];
+
+	k = REPLAY_SQN_NEXT(n);
+	sa->sqn.inb.wridx = k;
+
+	rsn = sa->sqn.inb.rsn[k];
+	rte_rwlock_write_lock(&rsn->rwl);
+	rsn_copy(sa, k, n);
+
+	return rsn;
+}
+
+/**
+ * Finish RSN update.
+ */
+static inline void
+rsn_update_finish(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
+{
+	uint32_t n;
+
+	if (!SQN_ATOMIC(sa))
+		return;
+
+	n = sa->sqn.inb.wridx;
+	RTE_ASSERT(n != sa->sqn.inb.rdidx);
+	RTE_ASSERT(rsn - sa->sqn.inb.rsn == n);
+
+	rte_rwlock_write_unlock(&rsn->rwl);
+	sa->sqn.inb.rdidx = n;
+}
+
+
 #endif /* _IPSEC_SQN_H_ */
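
The intended reader/writer pairing of the rsn_* helpers above, sketched the
way the sa.c hunks below use them:

static void
rsn_usage_sketch(struct rte_ipsec_sa *sa, uint64_t sqn)
{
	struct replay_sqn *rsn;

	/* data-path reader side (crypto_prepare) */
	rsn = rsn_acquire(sa);
	(void)esn_inb_check_sqn(rsn, sa, sqn);
	rsn_release(sa, rsn);

	/* single writer side (process) */
	rsn = rsn_update_start(sa);
	(void)esn_inb_update_sqn(rsn, sa, sqn);
	rsn_update_finish(sa, rsn);
}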
diff --git a/lib/librte_ipsec/rte_ipsec_sa.h b/lib/librte_ipsec/rte_ipsec_sa.h
index d99028c2c..7802da3b1 100644
--- a/lib/librte_ipsec/rte_ipsec_sa.h
+++ b/lib/librte_ipsec/rte_ipsec_sa.h
@@ -55,6 +55,27 @@ struct rte_ipsec_sa_prm {
 	uint32_t replay_win_sz;
 };
 
+/**
+ * Indicates whether the SA will need 'atomic' access
+ * to the sequence number and replay window.
+ * 'atomic' here means that the functions:
+ *  - rte_ipsec_pkt_crypto_prepare
+ *  - rte_ipsec_pkt_process
+ * can be safely used in an MT environment, as long as the user can guarantee
+ * that they obey the multiple readers/single writer model for
+ * SQN+replay_window operations.
+ * To be more specific:
+ * for an outbound SA there are no restrictions;
+ * for an inbound SA the caller has to guarantee that at any given moment
+ * only one thread is executing rte_ipsec_pkt_process() for the given SA.
+ * Note that it is the caller's responsibility to maintain the correct order
+ * of packets to be processed.
+ * In other words, it is the caller's responsibility to serialize process()
+ * invocations.
+ */
+#define	RTE_IPSEC_SAFLAG_SQN_ATOM	(1ULL << 0)
+
 /**
  * SA type is an 64-bit value that contain the following information:
  * - IP version (IPv4/IPv6)
@@ -62,6 +83,8 @@ struct rte_ipsec_sa_prm {
  * - inbound/outbound
  * - mode (TRANSPORT/TUNNEL)
  * - for TUNNEL outer IP version (IPv4/IPv6)
+ * - are SA SQN operations 'atomic'
+ * - ESN enabled/disabled
  * ...
  */
 
@@ -70,6 +93,8 @@ enum {
 	RTE_SATP_LOG2_PROTO,
 	RTE_SATP_LOG2_DIR,
 	RTE_SATP_LOG2_MODE,
+	RTE_SATP_LOG2_SQN = RTE_SATP_LOG2_MODE + 2,
+	RTE_SATP_LOG2_ESN,
 	RTE_SATP_LOG2_NUM
 };
 
@@ -90,6 +115,14 @@ enum {
 #define RTE_IPSEC_SATP_MODE_TUNLV4	(1ULL << RTE_SATP_LOG2_MODE)
 #define RTE_IPSEC_SATP_MODE_TUNLV6	(2ULL << RTE_SATP_LOG2_MODE)
 
+#define RTE_IPSEC_SATP_SQN_MASK		(1ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_RAW		(0ULL << RTE_SATP_LOG2_SQN)
+#define RTE_IPSEC_SATP_SQN_ATOM		(1ULL << RTE_SATP_LOG2_SQN)
+
+#define RTE_IPSEC_SATP_ESN_MASK		(1ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_DISABLE	(0ULL << RTE_SATP_LOG2_ESN)
+#define RTE_IPSEC_SATP_ESN_ENABLE	(1ULL << RTE_SATP_LOG2_ESN)
+
 /**
  * get type of given SA
  * @return
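
An SA that needs the 'atomic' behaviour described above would request it at
creation time, roughly as below; all other rte_ipsec_sa_prm fields (xforms,
ipsec_xform, etc.) are assumed to be filled elsewhere:

/* Sketch: create an SA with MT-safe SQN/replay-window handling. */
static int
init_mt_safe_sa(struct rte_ipsec_sa *sa, struct rte_ipsec_sa_prm *prm,
	uint32_t size)
{
	prm->flags |= RTE_IPSEC_SAFLAG_SQN_ATOM;
	prm->replay_win_sz = 128;	/* non-zero: enable replay protection */

	return rte_ipsec_sa_init(sa, prm, size);
}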
diff --git a/lib/librte_ipsec/sa.c b/lib/librte_ipsec/sa.c
index d263e7bcf..8d4ce1ac6 100644
--- a/lib/librte_ipsec/sa.c
+++ b/lib/librte_ipsec/sa.c
@@ -80,21 +80,37 @@ rte_ipsec_sa_type(const struct rte_ipsec_sa *sa)
 }
 
 static int32_t
-ipsec_sa_size(uint32_t wsz, uint64_t type, uint32_t *nb_bucket)
+ipsec_sa_size(uint64_t type, uint32_t *wnd_sz, uint32_t *nb_bucket)
 {
-	uint32_t n, sz;
+	uint32_t n, sz, wsz;
 
+	wsz = *wnd_sz;
 	n = 0;
-	if (wsz != 0 && (type & RTE_IPSEC_SATP_DIR_MASK) ==
-			RTE_IPSEC_SATP_DIR_IB)
-		n = replay_num_bucket(wsz);
+
+	if ((type & RTE_IPSEC_SATP_DIR_MASK) == RTE_IPSEC_SATP_DIR_IB) {
+
+		/*
+		 * RFC 4303 recommends 64 as the minimum window size.
+		 * There is no point in using ESN mode without an SQN window,
+		 * so make sure we have at least a 64-entry window when ESN
+		 * is enabled.
+		 */
+		wsz = ((type & RTE_IPSEC_SATP_ESN_MASK) ==
+			RTE_IPSEC_SATP_ESN_DISABLE) ?
+			wsz : RTE_MAX(wsz, (uint32_t)WINDOW_BUCKET_SIZE);
+		if (wsz != 0)
+			n = replay_num_bucket(wsz);
+	}
 
 	if (n > WINDOW_BUCKET_MAX)
 		return -EINVAL;
 
+	*wnd_sz = wsz;
 	*nb_bucket = n;
 
 	sz = rsn_size(n);
+	if ((type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sz *= REPLAY_SQN_NUM;
+
 	sz += sizeof(struct rte_ipsec_sa);
 	return sz;
 }
@@ -158,6 +174,18 @@ fill_sa_type(const struct rte_ipsec_sa_prm *prm, uint64_t *type)
 	} else
 		return -EINVAL;
 
+	/* check for ESN flag */
+	if (prm->ipsec_xform.options.esn == 0)
+		tp |= RTE_IPSEC_SATP_ESN_DISABLE;
+	else
+		tp |= RTE_IPSEC_SATP_ESN_ENABLE;
+
+	/* interpret flags */
+	if (prm->flags & RTE_IPSEC_SAFLAG_SQN_ATOM)
+		tp |= RTE_IPSEC_SATP_SQN_ATOM;
+	else
+		tp |= RTE_IPSEC_SATP_SQN_RAW;
+
 	*type = tp;
 	return 0;
 }
@@ -191,7 +219,7 @@ esp_inb_tun_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm)
 static void
 esp_outb_init(struct rte_ipsec_sa *sa, uint32_t hlen)
 {
-	sa->sqn.outb = 1;
+	sa->sqn.outb.raw = 1;
 
 	/* these params may differ with new algorithms support */
 	sa->ctp.auth.offset = hlen;
@@ -277,11 +305,26 @@ esp_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	return 0;
 }
 
+/*
+ * helper function, init SA replay structure.
+ */
+static void
+fill_sa_replay(struct rte_ipsec_sa *sa, uint32_t wnd_sz, uint32_t nb_bucket)
+{
+	sa->replay.win_sz = wnd_sz;
+	sa->replay.nb_bucket = nb_bucket;
+	sa->replay.bucket_index_mask = nb_bucket - 1;
+	sa->sqn.inb.rsn[0] = (struct replay_sqn *)(sa + 1);
+	if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
+		sa->sqn.inb.rsn[1] = (struct replay_sqn *)
+			((uintptr_t)sa->sqn.inb.rsn[0] + rsn_size(nb_bucket));
+}
+
 int __rte_experimental
 rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 {
 	uint64_t type;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	int32_t rc;
 
 	if (prm == NULL)
@@ -293,7 +336,8 @@ rte_ipsec_sa_size(const struct rte_ipsec_sa_prm *prm)
 		return rc;
 
 	/* determine required size */
-	return ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	return ipsec_sa_size(type, &wsz, &nb);
 }
 
 int __rte_experimental
@@ -301,7 +345,7 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 	uint32_t size)
 {
 	int32_t rc, sz;
-	uint32_t nb;
+	uint32_t nb, wsz;
 	uint64_t type;
 	struct crypto_xform cxf;
 
@@ -314,7 +358,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		return rc;
 
 	/* determine required size */
-	sz = ipsec_sa_size(prm->replay_win_sz, type, &nb);
+	wsz = prm->replay_win_sz;
+	sz = ipsec_sa_size(type, &wsz, &nb);
 	if (sz < 0)
 		return sz;
 	else if (size < (uint32_t)sz)
@@ -347,12 +392,8 @@ rte_ipsec_sa_init(struct rte_ipsec_sa *sa, const struct rte_ipsec_sa_prm *prm,
 		rte_ipsec_sa_fini(sa);
 
 	/* fill replay window related fields */
-	if (nb != 0) {
-		sa->replay.win_sz = prm->replay_win_sz;
-		sa->replay.nb_bucket = nb;
-		sa->replay.bucket_index_mask = sa->replay.nb_bucket - 1;
-		sa->sqn.inb = (struct replay_sqn *)(sa + 1);
-	}
+	if (nb != 0)
+		fill_sa_replay(sa, wsz, nb);
 
 	return sz;
 }
@@ -877,7 +918,7 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	struct rte_mbuf *dr[num];
 
 	sa = ss->sa;
-	rsn = sa->sqn.inb;
+	rsn = rsn_acquire(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -896,6 +937,8 @@ inb_pkt_prepare(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 		}
 	}
 
+	rsn_release(sa, rsn);
+
 	/* update cops */
 	lksd_none_cop_prepare(ss, mb, cop, k);
 
@@ -1058,7 +1101,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 	uint32_t i, k;
 	struct replay_sqn *rsn;
 
-	rsn = sa->sqn.inb;
+	rsn = rsn_update_start(sa);
 
 	k = 0;
 	for (i = 0; i != num; i++) {
@@ -1068,6 +1111,7 @@ esp_inb_rsn_update(struct rte_ipsec_sa *sa, const uint32_t sqn[],
 			dr[i - k] = mb[i];
 	}
 
+	rsn_update_finish(sa, rsn);
 	return k;
 }
 
diff --git a/lib/librte_ipsec/sa.h b/lib/librte_ipsec/sa.h
index 616cf1b9f..392e8fd7b 100644
--- a/lib/librte_ipsec/sa.h
+++ b/lib/librte_ipsec/sa.h
@@ -5,6 +5,8 @@
 #ifndef _SA_H_
 #define _SA_H_
 
+#include <rte_rwlock.h>
+
 #define IPSEC_MAX_HDR_SIZE	64
 #define IPSEC_MAX_IV_SIZE	16
 #define IPSEC_MAX_IV_QWORD	(IPSEC_MAX_IV_SIZE / sizeof(uint64_t))
@@ -36,7 +38,11 @@ union sym_op_data {
 	};
 };
 
+#define REPLAY_SQN_NUM		2
+#define REPLAY_SQN_NEXT(n)	((n) ^ 1)
+
 struct replay_sqn {
+	rte_rwlock_t rwl;
 	uint64_t sqn;
 	__extension__ uint64_t window[0];
 };
@@ -74,10 +80,21 @@ struct rte_ipsec_sa {
 
 	/*
 	 * sqn and replay window
+	 * For an SA handled by multiple threads the *sqn* cacheline
+	 * could be shared by multiple cores.
+	 * To minimize the performance impact, we try to locate it in a
+	 * separate place from other frequently accessed data.
 	 */
 	union {
-		uint64_t outb;
-		struct replay_sqn *inb;
+		union {
+			rte_atomic64_t atom;
+			uint64_t raw;
+		} outb;
+		struct {
+			uint32_t rdidx; /* read index */
+			uint32_t wridx; /* write index */
+			struct replay_sqn *rsn[REPLAY_SQN_NUM];
+		} inb;
 	} sqn;
 
 } __rte_cache_aligned;
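
The sa.c hunks above rely on rsn_acquire()/rsn_release() (and the
rsn_update_start()/rsn_update_finish() pair for the update side), whose
bodies are outside the context shown here. A rough reader-side sketch,
assuming the per-RSN rwlock scheme introduced above (illustrative only;
the actual helpers may differ):

static inline struct replay_sqn *
rsn_acquire(struct rte_ipsec_sa *sa)
{
	struct replay_sqn *rsn;

	/* pick the currently active RSN copy */
	rsn = sa->sqn.inb.rsn[sa->sqn.inb.rdidx];

	/* in SQN_ATOM mode readers lock out a concurrent writer */
	if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
		rte_rwlock_read_lock(&rsn->rwl);

	return rsn;
}

static inline void
rsn_release(struct rte_ipsec_sa *sa, struct replay_sqn *rsn)
{
	if ((sa->type & RTE_IPSEC_SATP_SQN_MASK) == RTE_IPSEC_SATP_SQN_ATOM)
		rte_rwlock_read_unlock(&rsn->rwl);
}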
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v8 7/9] ipsec: helper functions to group completed crypto-ops
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                                 ` (6 preceding siblings ...)
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 6/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 8/9] test/ipsec: introduce functional test Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 9/9] doc: add IPsec library guide Konstantin Ananyev
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev; +Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev

Introduce helper functions to process completed crypto-ops
and group related packets by sessions they belong to.
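
A minimal usage sketch (illustrative caller code; dev_id, qid and
BURST_SZ are placeholders, not part of this patch):

	struct rte_crypto_op *cop[BURST_SZ];
	struct rte_mbuf *mb[BURST_SZ];
	struct rte_ipsec_group grp[BURST_SZ];
	uint16_t i, n, ng;

	n = rte_cryptodev_dequeue_burst(dev_id, qid, cop, BURST_SZ);

	/* sort mbufs from the completed ops into per-session groups */
	ng = rte_ipsec_pkt_crypto_group(
		(const struct rte_crypto_op **)(uintptr_t)cop, mb, grp, n);

	/* hand each group to IPsec processing for its session */
	for (i = 0; i != ng; i++)
		rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);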

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 lib/librte_ipsec/Makefile              |   1 +
 lib/librte_ipsec/meson.build           |   2 +-
 lib/librte_ipsec/rte_ipsec.h           |   2 +
 lib/librte_ipsec/rte_ipsec_group.h     | 151 +++++++++++++++++++++++++
 lib/librte_ipsec/rte_ipsec_version.map |   2 +
 5 files changed, 157 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h

diff --git a/lib/librte_ipsec/Makefile b/lib/librte_ipsec/Makefile
index 71e39df0b..77506d6ad 100644
--- a/lib/librte_ipsec/Makefile
+++ b/lib/librte_ipsec/Makefile
@@ -21,6 +21,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += ses.c
 
 # install header files
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_group.h
 SYMLINK-$(CONFIG_RTE_LIBRTE_IPSEC)-include += rte_ipsec_sa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_ipsec/meson.build b/lib/librte_ipsec/meson.build
index 6e8c6fabe..d2427b809 100644
--- a/lib/librte_ipsec/meson.build
+++ b/lib/librte_ipsec/meson.build
@@ -5,6 +5,6 @@ allow_experimental_apis = true
 
 sources=files('sa.c', 'ses.c')
 
-install_headers = files('rte_ipsec.h', 'rte_ipsec_sa.h')
+install_headers = files('rte_ipsec.h', 'rte_ipsec_group.h', 'rte_ipsec_sa.h')
 
 deps += ['mbuf', 'net', 'cryptodev', 'security']
diff --git a/lib/librte_ipsec/rte_ipsec.h b/lib/librte_ipsec/rte_ipsec.h
index 93e4df1bd..ff1ec801e 100644
--- a/lib/librte_ipsec/rte_ipsec.h
+++ b/lib/librte_ipsec/rte_ipsec.h
@@ -145,6 +145,8 @@ rte_ipsec_pkt_process(const struct rte_ipsec_session *ss, struct rte_mbuf *mb[],
 	return ss->pkt_func.process(ss, mb, num);
 }
 
+#include <rte_ipsec_group.h>
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ipsec/rte_ipsec_group.h b/lib/librte_ipsec/rte_ipsec_group.h
new file mode 100644
index 000000000..696ed277a
--- /dev/null
+++ b/lib/librte_ipsec/rte_ipsec_group.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#ifndef _RTE_IPSEC_GROUP_H_
+#define _RTE_IPSEC_GROUP_H_
+
+/**
+ * @file rte_ipsec_group.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE IPsec support.
+ * It is not recommended to include this file directly,
+ * include <rte_ipsec.h> instead.
+ * Contains helper functions to process completed crypto-ops
+ * and group related packets by sessions they belong to.
+ */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Used to group mbufs by some id.
+ * See below for particular usage.
+ */
+struct rte_ipsec_group {
+	union {
+		uint64_t val;
+		void *ptr;
+	} id; /**< grouped by value */
+	struct rte_mbuf **m;  /**< start of the group */
+	uint32_t cnt;         /**< number of entries in the group */
+	int32_t rc;           /**< status code associated with the group */
+};
+
+/**
+ * Take a crypto-op as input and extract a pointer to the related IPsec session.
+ * @param cop
+ *   The address of an input *rte_crypto_op* structure.
+ * @return
+ *   The pointer to the related *rte_ipsec_session* structure.
+ */
+static inline __rte_experimental struct rte_ipsec_session *
+rte_ipsec_ses_from_crypto(const struct rte_crypto_op *cop)
+{
+	const struct rte_security_session *ss;
+	const struct rte_cryptodev_sym_session *cs;
+
+	if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		ss = cop->sym[0].sec_session;
+		return (void *)(uintptr_t)ss->opaque_data;
+	} else if (cop->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+		cs = cop->sym[0].session;
+		return (void *)(uintptr_t)cs->opaque_data;
+	}
+	return NULL;
+}
+
+/**
+ * Take as input completed crypto ops, extract related mbufs
+ * and group them by the rte_ipsec_session they belong to.
+ * For each mbuf whose crypto-op wasn't completed successfully,
+ * PKT_RX_SEC_OFFLOAD_FAILED will be raised in ol_flags.
+ * Note that mbufs with an undetermined SA (session-less) are not freed
+ * by the function, but are placed beyond the mbufs of the last valid group.
+ * It is the user's responsibility to handle them further.
+ * @param cop
+ *   The address of an array of *num* pointers to the input *rte_crypto_op*
+ *   structures.
+ * @param mb
+ *   The address of an array of *num* pointers to output *rte_mbuf* structures.
+ * @param grp
+ *   The address of an array of *num* output *rte_ipsec_group* structures.
+ * @param num
+ *   The maximum number of crypto-ops to process.
+ * @return
+ *   Number of filled elements in *grp* array.
+ */
+static inline uint16_t __rte_experimental
+rte_ipsec_pkt_crypto_group(const struct rte_crypto_op *cop[],
+	struct rte_mbuf *mb[], struct rte_ipsec_group grp[], uint16_t num)
+{
+	uint32_t i, j, k, n;
+	void *ns, *ps;
+	struct rte_mbuf *m, *dr[num];
+
+	j = 0;
+	k = 0;
+	n = 0;
+	ps = NULL;
+
+	for (i = 0; i != num; i++) {
+
+		m = cop[i]->sym[0].m_src;
+		ns = cop[i]->sym[0].session;
+
+		m->ol_flags |= PKT_RX_SEC_OFFLOAD;
+		if (cop[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+			m->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;
+
+		/* no valid session found */
+		if (ns == NULL) {
+			dr[k++] = m;
+			continue;
+		}
+
+		/* different SA */
+		if (ps != ns) {
+
+			/*
+			 * we already have an open group - finalize it,
+			 * then open a new one.
+			 */
+			if (ps != NULL) {
+				grp[n].id.ptr =
+					rte_ipsec_ses_from_crypto(cop[i - 1]);
+				grp[n].cnt = mb + j - grp[n].m;
+				n++;
+			}
+
+			/* start new group */
+			grp[n].m = mb + j;
+			ps = ns;
+		}
+
+		mb[j++] = m;
+	}
+
+	/* finalize the last group */
+	if (ps != NULL) {
+		grp[n].id.ptr = rte_ipsec_ses_from_crypto(cop[i - 1]);
+		grp[n].cnt = mb + j - grp[n].m;
+		n++;
+	}
+
+	/* copy mbufs with unknown session beyond the recognized ones */
+	if (k != 0 && k != num) {
+		for (i = 0; i != k; i++)
+			mb[j + i] = dr[i];
+	}
+
+	return n;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_IPSEC_GROUP_H_ */
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index 4d4f46e4f..ee9f1961b 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,12 +1,14 @@
 EXPERIMENTAL {
 	global:
 
+	rte_ipsec_pkt_crypto_group;
 	rte_ipsec_pkt_crypto_prepare;
 	rte_ipsec_pkt_process;
 	rte_ipsec_sa_fini;
 	rte_ipsec_sa_init;
 	rte_ipsec_sa_size;
 	rte_ipsec_sa_type;
+	rte_ipsec_ses_from_crypto;
 	rte_ipsec_session_prepare;
 
 	local: *;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v8 8/9] test/ipsec: introduce functional test
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                                 ` (7 preceding siblings ...)
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 7/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 9/9] doc: add IPsec library guide Konstantin Ananyev
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Mohammad Abdul Awal, Bernard Iremonger

Create functional test for librte_ipsec.
Note that the test requires the null crypto PMD to pass successfully.
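
For reference, the suite registers itself as 'ipsec_autotest'; one
illustrative way to run it (exact binary path depends on the build) is
to start the test application with a null crypto vdev, e.g.
'./test --vdev=crypto_null', and then issue 'ipsec_autotest' at the
RTE>> prompt.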

Signed-off-by: Mohammad Abdul Awal <mohammad.abdul.awal@intel.com>
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 test/test/Makefile     |    3 +
 test/test/meson.build  |    3 +
 test/test/test_ipsec.c | 2565 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 2571 insertions(+)
 create mode 100644 test/test/test_ipsec.c

diff --git a/test/test/Makefile b/test/test/Makefile
index ab4fec34a..e7c8108f2 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -207,6 +207,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 SRCS-$(CONFIG_RTE_LIBRTE_BPF) += test_bpf.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_IPSEC) += test_ipsec.c
+LDLIBS += -lrte_ipsec
+
 CFLAGS += -DALLOW_EXPERIMENTAL_API
 
 CFLAGS += -O3
diff --git a/test/test/meson.build b/test/test/meson.build
index 5a4816fed..9e45baf7a 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -50,6 +50,7 @@ test_sources = files('commands.c',
 	'test_hash_perf.c',
 	'test_hash_readwrite_lf.c',
 	'test_interrupts.c',
+	'test_ipsec.c',
 	'test_kni.c',
 	'test_kvargs.c',
 	'test_link_bonding.c',
@@ -117,6 +118,7 @@ test_deps = ['acl',
 	'eventdev',
 	'flow_classify',
 	'hash',
+	'ipsec',
 	'lpm',
 	'member',
 	'metrics',
@@ -182,6 +184,7 @@ test_names = [
 	'hash_readwrite_autotest',
 	'hash_readwrite_lf_autotest',
 	'interrupt_autotest',
+	'ipsec_autotest',
 	'kni_autotest',
 	'kvargs_autotest',
 	'link_bonding_autotest',
diff --git a/test/test/test_ipsec.c b/test/test/test_ipsec.c
new file mode 100644
index 000000000..ff1a1c4be
--- /dev/null
+++ b/test/test/test_ipsec.c
@@ -0,0 +1,2565 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2018 Intel Corporation
+ */
+
+#include <time.h>
+
+#include <netinet/in.h>
+#include <netinet/ip.h>
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_pause.h>
+#include <rte_bus_vdev.h>
+#include <rte_ip.h>
+
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_lcore.h>
+#include <rte_ipsec.h>
+#include <rte_random.h>
+#include <rte_esp.h>
+#include <rte_security_driver.h>
+
+#include "test.h"
+#include "test_cryptodev.h"
+
+#define VDEV_ARGS_SIZE	100
+#define MAX_NB_SESSIONS	100
+#define MAX_NB_SAS		2
+#define REPLAY_WIN_0	0
+#define REPLAY_WIN_32	32
+#define REPLAY_WIN_64	64
+#define REPLAY_WIN_128	128
+#define REPLAY_WIN_256	256
+#define DATA_64_BYTES	64
+#define DATA_80_BYTES	80
+#define DATA_100_BYTES	100
+#define ESN_ENABLED		1
+#define ESN_DISABLED	0
+#define INBOUND_SPI		7
+#define OUTBOUND_SPI	17
+#define BURST_SIZE		32
+#define REORDER_PKTS	1
+
+struct user_params {
+	enum rte_crypto_sym_xform_type auth;
+	enum rte_crypto_sym_xform_type cipher;
+	enum rte_crypto_sym_xform_type aead;
+
+	char auth_algo[128];
+	char cipher_algo[128];
+	char aead_algo[128];
+};
+
+struct ipsec_testsuite_params {
+	struct rte_mempool *mbuf_pool;
+	struct rte_mempool *cop_mpool;
+	struct rte_cryptodev_config conf;
+	struct rte_cryptodev_qp_conf qp_conf;
+
+	uint8_t valid_devs[RTE_CRYPTO_MAX_DEVS];
+	uint8_t valid_dev_count;
+};
+
+struct ipsec_unitest_params {
+	struct rte_crypto_sym_xform cipher_xform;
+	struct rte_crypto_sym_xform auth_xform;
+	struct rte_crypto_sym_xform aead_xform;
+	struct rte_crypto_sym_xform *crypto_xforms;
+
+	struct rte_security_ipsec_xform ipsec_xform;
+
+	struct rte_ipsec_sa_prm sa_prm;
+	struct rte_ipsec_session ss[MAX_NB_SAS];
+
+	struct rte_crypto_op *cop[BURST_SIZE];
+
+	struct rte_mbuf *obuf[BURST_SIZE], *ibuf[BURST_SIZE],
+		*testbuf[BURST_SIZE];
+
+	uint8_t *digest;
+	uint16_t pkt_index;
+};
+
+struct ipsec_test_cfg {
+	uint32_t replay_win_sz;
+	uint32_t esn;
+	uint64_t flags;
+	size_t pkt_sz;
+	uint16_t num_pkts;
+	uint32_t reorder_pkts;
+};
+
+static const struct ipsec_test_cfg test_cfg[] = {
+
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_0, ESN_DISABLED, 0, DATA_80_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, 1, 0},
+	{REPLAY_WIN_32, ESN_ENABLED, 0, DATA_100_BYTES, BURST_SIZE,
+		REORDER_PKTS},
+	{REPLAY_WIN_64, ESN_ENABLED, 0, DATA_64_BYTES, 1, 0},
+	{REPLAY_WIN_128, ESN_ENABLED, RTE_IPSEC_SAFLAG_SQN_ATOM,
+		DATA_80_BYTES, 1, 0},
+	{REPLAY_WIN_256, ESN_DISABLED, 0, DATA_100_BYTES, 1, 0},
+};
+
+static const int num_cfg = RTE_DIM(test_cfg);
+static struct ipsec_testsuite_params testsuite_params = { NULL };
+static struct ipsec_unitest_params unittest_params;
+static struct user_params uparams;
+
+static uint8_t global_key[128] = { 0 };
+
+struct supported_cipher_algo {
+	const char *keyword;
+	enum rte_crypto_cipher_algorithm algo;
+	uint16_t iv_len;
+	uint16_t block_size;
+	uint16_t key_len;
+};
+
+struct supported_auth_algo {
+	const char *keyword;
+	enum rte_crypto_auth_algorithm algo;
+	uint16_t digest_len;
+	uint16_t key_len;
+	uint8_t key_not_req;
+};
+
+const struct supported_cipher_algo cipher_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_CIPHER_NULL,
+		.iv_len = 0,
+		.block_size = 4,
+		.key_len = 0
+	},
+};
+
+const struct supported_auth_algo auth_algos[] = {
+	{
+		.keyword = "null",
+		.algo = RTE_CRYPTO_AUTH_NULL,
+		.digest_len = 0,
+		.key_len = 0,
+		.key_not_req = 1
+	},
+};
+
+static int
+dummy_sec_create(void *device, struct rte_security_session_conf *conf,
+	struct rte_security_session *sess, struct rte_mempool *mp)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(conf);
+	RTE_SET_USED(mp);
+
+	sess->sess_private_data = NULL;
+	return 0;
+}
+
+static int
+dummy_sec_destroy(void *device, struct rte_security_session *sess)
+{
+	RTE_SET_USED(device);
+	RTE_SET_USED(sess);
+	return 0;
+}
+
+static const struct rte_security_ops dummy_sec_ops = {
+	.session_create = dummy_sec_create,
+	.session_destroy = dummy_sec_destroy,
+};
+
+static struct rte_security_ctx dummy_sec_ctx = {
+	.ops = &dummy_sec_ops,
+};
+
+static const struct supported_cipher_algo *
+find_match_cipher_algo(const char *cipher_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(cipher_algos); i++) {
+		const struct supported_cipher_algo *algo =
+			&cipher_algos[i];
+
+		if (strcmp(cipher_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static const struct supported_auth_algo *
+find_match_auth_algo(const char *auth_keyword)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(auth_algos); i++) {
+		const struct supported_auth_algo *algo =
+			&auth_algos[i];
+
+		if (strcmp(auth_keyword, algo->keyword) == 0)
+			return algo;
+	}
+
+	return NULL;
+}
+
+static int
+testsuite_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct rte_cryptodev_info info;
+	uint32_t nb_devs, dev_id;
+	size_t sess_sz;
+
+	memset(ts_params, 0, sizeof(*ts_params));
+
+	ts_params->mbuf_pool = rte_pktmbuf_pool_create(
+			"CRYPTO_MBUFPOOL",
+			NUM_MBUFS, MBUF_CACHE_SIZE, 0, MBUF_SIZE,
+			rte_socket_id());
+	if (ts_params->mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_MBUFPOOL\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->cop_mpool = rte_crypto_op_pool_create(
+			"MBUF_CRYPTO_SYM_OP_POOL",
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			NUM_MBUFS, MBUF_CACHE_SIZE,
+			DEFAULT_NUM_XFORMS *
+			sizeof(struct rte_crypto_sym_xform) +
+			MAXIMUM_IV_LENGTH,
+			rte_socket_id());
+	if (ts_params->cop_mpool == NULL) {
+		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
+		return TEST_FAILED;
+	}
+
+	nb_devs = rte_cryptodev_count();
+	if (nb_devs < 1) {
+		RTE_LOG(ERR, USER1, "No crypto devices found?\n");
+		return TEST_FAILED;
+	}
+
+	ts_params->valid_devs[ts_params->valid_dev_count++] = 0;
+
+	/* Set up all the qps on the first of the valid devices found */
+	dev_id = ts_params->valid_devs[0];
+
+	rte_cryptodev_info_get(dev_id, &info);
+
+	ts_params->conf.nb_queue_pairs = info.max_nb_queue_pairs;
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	sess_sz = rte_cryptodev_sym_get_private_session_size(dev_id);
+	sess_sz = RTE_MAX(sess_sz, sizeof(struct rte_security_session));
+
+	/*
+	 * Create mempools for sessions
+	 */
+	if (info.sym.max_nb_sessions != 0 &&
+			info.sym.max_nb_sessions < MAX_NB_SESSIONS) {
+		RTE_LOG(ERR, USER1, "Device does not support "
+				"at least %u sessions\n",
+				MAX_NB_SESSIONS);
+		return TEST_FAILED;
+	}
+
+	ts_params->qp_conf.mp_session_private = rte_mempool_create(
+				"test_priv_sess_mp",
+				MAX_NB_SESSIONS,
+				sess_sz,
+				0, 0, NULL, NULL, NULL,
+				NULL, SOCKET_ID_ANY,
+				0);
+
+	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session_private,
+			"private session mempool allocation failed");
+
+	ts_params->qp_conf.mp_session =
+		rte_cryptodev_sym_session_pool_create("test_sess_mp",
+			MAX_NB_SESSIONS, 0, 0, 0, SOCKET_ID_ANY);
+
+	TEST_ASSERT_NOT_NULL(ts_params->qp_conf.mp_session,
+			"session mempool allocation failed");
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_configure(dev_id,
+			&ts_params->conf),
+			"Failed to configure cryptodev %u with %u qps",
+			dev_id, ts_params->conf.nb_queue_pairs);
+
+	ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
+
+	TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
+		dev_id, 0, &ts_params->qp_conf,
+		rte_cryptodev_socket_id(dev_id)),
+		"Failed to setup queue pair %u on cryptodev %u",
+		0, dev_id);
+
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+
+	if (ts_params->mbuf_pool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+		rte_mempool_avail_count(ts_params->mbuf_pool));
+		rte_mempool_free(ts_params->mbuf_pool);
+		ts_params->mbuf_pool = NULL;
+	}
+
+	if (ts_params->cop_mpool != NULL) {
+		RTE_LOG(DEBUG, USER1, "CRYPTO_OP_POOL count %u\n",
+		rte_mempool_avail_count(ts_params->cop_mpool));
+		rte_mempool_free(ts_params->cop_mpool);
+		ts_params->cop_mpool = NULL;
+	}
+
+	/* Free session mempools */
+	if (ts_params->qp_conf.mp_session != NULL) {
+		rte_mempool_free(ts_params->qp_conf.mp_session);
+		ts_params->qp_conf.mp_session = NULL;
+	}
+
+	if (ts_params->qp_conf.mp_session_private != NULL) {
+		rte_mempool_free(ts_params->qp_conf.mp_session_private);
+		ts_params->qp_conf.mp_session_private = NULL;
+	}
+}
+
+static int
+ut_setup(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	/* Clear unit test parameters before running test */
+	memset(ut_params, 0, sizeof(*ut_params));
+
+	/* Reconfigure device to default parameters */
+	ts_params->conf.socket_id = SOCKET_ID_ANY;
+
+	/* Start the device */
+	TEST_ASSERT_SUCCESS(rte_cryptodev_start(ts_params->valid_devs[0]),
+			"Failed to start cryptodev %u",
+			ts_params->valid_devs[0]);
+
+	return TEST_SUCCESS;
+}
+
+static void
+ut_teardown(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int i;
+
+	for (i = 0; i < BURST_SIZE; i++) {
+		/* free crypto operation structure */
+		if (ut_params->cop[i])
+			rte_crypto_op_free(ut_params->cop[i]);
+
+		/*
+		 * free mbuf - obuf and ibuf usually point at the same mbuf,
+		 * so a check whether they share the same address is needed
+		 * to avoid freeing the mbuf twice.
+		 */
+		if (ut_params->obuf[i]) {
+			rte_pktmbuf_free(ut_params->obuf[i]);
+			if (ut_params->ibuf[i] == ut_params->obuf[i])
+				ut_params->ibuf[i] = 0;
+			ut_params->obuf[i] = 0;
+		}
+		if (ut_params->ibuf[i]) {
+			rte_pktmbuf_free(ut_params->ibuf[i]);
+			ut_params->ibuf[i] = 0;
+		}
+
+		if (ut_params->testbuf[i]) {
+			rte_pktmbuf_free(ut_params->testbuf[i]);
+			ut_params->testbuf[i] = 0;
+		}
+	}
+
+	if (ts_params->mbuf_pool != NULL)
+		RTE_LOG(DEBUG, USER1, "CRYPTO_MBUFPOOL count %u\n",
+			rte_mempool_avail_count(ts_params->mbuf_pool));
+
+	/* Stop the device */
+	rte_cryptodev_stop(ts_params->valid_devs[0]);
+}
+
+#define IPSEC_MAX_PAD_SIZE	UINT8_MAX
+
+static const uint8_t esp_pad_bytes[IPSEC_MAX_PAD_SIZE] = {
+	1, 2, 3, 4, 5, 6, 7, 8,
+	9, 10, 11, 12, 13, 14, 15, 16,
+	17, 18, 19, 20, 21, 22, 23, 24,
+	25, 26, 27, 28, 29, 30, 31, 32,
+	33, 34, 35, 36, 37, 38, 39, 40,
+	41, 42, 43, 44, 45, 46, 47, 48,
+	49, 50, 51, 52, 53, 54, 55, 56,
+	57, 58, 59, 60, 61, 62, 63, 64,
+	65, 66, 67, 68, 69, 70, 71, 72,
+	73, 74, 75, 76, 77, 78, 79, 80,
+	81, 82, 83, 84, 85, 86, 87, 88,
+	89, 90, 91, 92, 93, 94, 95, 96,
+	97, 98, 99, 100, 101, 102, 103, 104,
+	105, 106, 107, 108, 109, 110, 111, 112,
+	113, 114, 115, 116, 117, 118, 119, 120,
+	121, 122, 123, 124, 125, 126, 127, 128,
+	129, 130, 131, 132, 133, 134, 135, 136,
+	137, 138, 139, 140, 141, 142, 143, 144,
+	145, 146, 147, 148, 149, 150, 151, 152,
+	153, 154, 155, 156, 157, 158, 159, 160,
+	161, 162, 163, 164, 165, 166, 167, 168,
+	169, 170, 171, 172, 173, 174, 175, 176,
+	177, 178, 179, 180, 181, 182, 183, 184,
+	185, 186, 187, 188, 189, 190, 191, 192,
+	193, 194, 195, 196, 197, 198, 199, 200,
+	201, 202, 203, 204, 205, 206, 207, 208,
+	209, 210, 211, 212, 213, 214, 215, 216,
+	217, 218, 219, 220, 221, 222, 223, 224,
+	225, 226, 227, 228, 229, 230, 231, 232,
+	233, 234, 235, 236, 237, 238, 239, 240,
+	241, 242, 243, 244, 245, 246, 247, 248,
+	249, 250, 251, 252, 253, 254, 255,
+};
+
+/* ***** data for tests ***** */
+
+const char null_plain_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+const char null_encrypted_data[] =
+	"Network Security People Have A Strange Sense Of Humor unlike Other "
+	"People who have a normal sense of humour";
+
+struct ipv4_hdr ipv4_outer  = {
+	.version_ihl = IPVERSION << 4 |
+		sizeof(ipv4_outer) / IPV4_IHL_MULTIPLIER,
+	.time_to_live = IPDEFTTL,
+	.next_proto_id = IPPROTO_ESP,
+	.src_addr = IPv4(192, 168, 1, 100),
+	.dst_addr = IPv4(192, 168, 2, 100),
+};
+
+static struct rte_mbuf *
+setup_test_string(struct rte_mempool *mpool,
+		const char *string, size_t len, uint8_t blocksize)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	size_t t_len = len - (blocksize ? (len % blocksize) : 0);
+
+	if (m) {
+		memset(m->buf_addr, 0, m->buf_len);
+		char *dst = rte_pktmbuf_append(m, t_len);
+
+		if (!dst) {
+			rte_pktmbuf_free(m);
+			return NULL;
+		}
+		if (string != NULL)
+			rte_memcpy(dst, string, t_len);
+		else
+			memset(dst, 0, t_len);
+	}
+
+	return m;
+}
+
+static struct rte_mbuf *
+setup_test_string_tunneled(struct rte_mempool *mpool, const char *string,
+	size_t len, uint32_t spi, uint32_t seq)
+{
+	struct rte_mbuf *m = rte_pktmbuf_alloc(mpool);
+	uint32_t hdrlen = sizeof(struct ipv4_hdr) + sizeof(struct esp_hdr);
+	uint32_t taillen = sizeof(struct esp_tail);
+	uint32_t t_len = len + hdrlen + taillen;
+	uint32_t padlen;
+
+	struct esp_hdr esph  = {
+		.spi = rte_cpu_to_be_32(spi),
+		.seq = rte_cpu_to_be_32(seq)
+	};
+
+	padlen = RTE_ALIGN(t_len, 4) - t_len;
+	t_len += padlen;
+
+	struct esp_tail espt  = {
+		.pad_len = padlen,
+		.next_proto = IPPROTO_IPIP,
+	};
+
+	if (m == NULL)
+		return NULL;
+
+	memset(m->buf_addr, 0, m->buf_len);
+	char *dst = rte_pktmbuf_append(m, t_len);
+
+	if (!dst) {
+		rte_pktmbuf_free(m);
+		return NULL;
+	}
+	/* copy outer IP and ESP header */
+	ipv4_outer.total_length = rte_cpu_to_be_16(t_len);
+	ipv4_outer.packet_id = rte_cpu_to_be_16(seq);
+	rte_memcpy(dst, &ipv4_outer, sizeof(ipv4_outer));
+	dst += sizeof(ipv4_outer);
+	m->l3_len = sizeof(ipv4_outer);
+	rte_memcpy(dst, &esph, sizeof(esph));
+	dst += sizeof(esph);
+
+	if (string != NULL) {
+		/* copy payload */
+		rte_memcpy(dst, string, len);
+		dst += len;
+		/* copy pad bytes */
+		rte_memcpy(dst, esp_pad_bytes, padlen);
+		dst += padlen;
+		/* copy ESP tail header */
+		rte_memcpy(dst, &espt, sizeof(espt));
+	} else
+		memset(dst, 0, t_len);
+
+	return m;
+}
+
+static int
+check_cryptodev_capablity(const struct ipsec_unitest_params *ut,
+		uint8_t devid)
+{
+	struct rte_cryptodev_sym_capability_idx cap_idx;
+	const struct rte_cryptodev_symmetric_capability *cap;
+	int rc = -1;
+
+	cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	cap_idx.algo.auth = ut->auth_xform.auth.algo;
+	cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+
+	if (cap != NULL) {
+		rc = rte_cryptodev_sym_capability_check_auth(cap,
+				ut->auth_xform.auth.key.length,
+				ut->auth_xform.auth.digest_length, 0);
+		if (rc == 0) {
+			cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+			cap_idx.algo.cipher = ut->cipher_xform.cipher.algo;
+			cap = rte_cryptodev_sym_capability_get(devid, &cap_idx);
+			if (cap != NULL)
+				rc = rte_cryptodev_sym_capability_check_cipher(
+					cap,
+					ut->cipher_xform.cipher.key.length,
+					ut->cipher_xform.cipher.iv.length);
+		}
+	}
+
+	return rc;
+}
+
+static int
+create_dummy_sec_session(struct ipsec_unitest_params *ut,
+	struct rte_cryptodev_qp_conf *qp, uint32_t j)
+{
+	static struct rte_security_session_conf conf;
+
+	ut->ss[j].security.ses = rte_security_session_create(&dummy_sec_ctx,
+					&conf, qp->mp_session_private);
+
+	if (ut->ss[j].security.ses == NULL)
+		return -ENOMEM;
+
+	ut->ss[j].security.ctx = &dummy_sec_ctx;
+	ut->ss[j].security.ol_flags = 0;
+	return 0;
+}
+
+static int
+create_crypto_session(struct ipsec_unitest_params *ut,
+	struct rte_cryptodev_qp_conf *qp, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	int32_t rc;
+	uint32_t devnum, i;
+	struct rte_cryptodev_sym_session *s;
+	uint8_t devid[RTE_CRYPTO_MAX_DEVS];
+
+	/* check which cryptodevs support SA */
+	devnum = 0;
+	for (i = 0; i < crypto_dev_num; i++) {
+		if (check_cryptodev_capablity(ut, crypto_dev[i]) == 0)
+			devid[devnum++] = crypto_dev[i];
+	}
+
+	if (devnum == 0)
+		return -ENODEV;
+
+	s = rte_cryptodev_sym_session_create(qp->mp_session);
+	if (s == NULL)
+		return -ENOMEM;
+
+	/* initialize SA crypto session for all supported devices */
+	for (i = 0; i != devnum; i++) {
+		rc = rte_cryptodev_sym_session_init(devid[i], s,
+			ut->crypto_xforms, qp->mp_session_private);
+		if (rc != 0)
+			break;
+	}
+
+	if (i == devnum) {
+		ut->ss[j].crypto.ses = s;
+		return 0;
+	}
+
+	/* failure, do cleanup */
+	while (i-- != 0)
+		rte_cryptodev_sym_session_clear(devid[i], s);
+
+	rte_cryptodev_sym_session_free(s);
+	return rc;
+}
+
+static int
+create_session(struct ipsec_unitest_params *ut,
+	struct rte_cryptodev_qp_conf *qp, const uint8_t crypto_dev[],
+	uint32_t crypto_dev_num, uint32_t j)
+{
+	if (ut->ss[j].type == RTE_SECURITY_ACTION_TYPE_NONE)
+		return create_crypto_session(ut, qp, crypto_dev,
+			crypto_dev_num, j);
+	else
+		return create_dummy_sec_session(ut, qp, j);
+}
+
+static void
+fill_crypto_xform(struct ipsec_unitest_params *ut_params,
+	const struct supported_auth_algo *auth_algo,
+	const struct supported_cipher_algo *cipher_algo)
+{
+	ut_params->auth_xform.type = RTE_CRYPTO_SYM_XFORM_AUTH;
+	ut_params->auth_xform.auth.algo = auth_algo->algo;
+	ut_params->auth_xform.auth.key.data = global_key;
+	ut_params->auth_xform.auth.key.length = auth_algo->key_len;
+	ut_params->auth_xform.auth.digest_length = auth_algo->digest_len;
+	ut_params->auth_xform.auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
+
+	ut_params->cipher_xform.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	ut_params->cipher_xform.cipher.algo = cipher_algo->algo;
+	ut_params->cipher_xform.cipher.key.data = global_key;
+	ut_params->cipher_xform.cipher.key.length = cipher_algo->key_len;
+	ut_params->cipher_xform.cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
+	ut_params->cipher_xform.cipher.iv.offset = IV_OFFSET;
+	ut_params->cipher_xform.cipher.iv.length = cipher_algo->iv_len;
+
+	if (ut_params->ipsec_xform.direction ==
+			RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+		ut_params->crypto_xforms = &ut_params->auth_xform;
+		ut_params->auth_xform.next = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = NULL;
+	} else {
+		ut_params->crypto_xforms = &ut_params->cipher_xform;
+		ut_params->cipher_xform.next = &ut_params->auth_xform;
+		ut_params->auth_xform.next = NULL;
+	}
+}
+
+static int
+fill_ipsec_param(uint32_t replay_win_sz, uint64_t flags)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_sa_prm *prm = &ut_params->sa_prm;
+	const struct supported_auth_algo *auth_algo;
+	const struct supported_cipher_algo *cipher_algo;
+
+	memset(prm, 0, sizeof(*prm));
+
+	prm->userdata = 1;
+	prm->flags = flags;
+	prm->replay_win_sz = replay_win_sz;
+
+	/* setup ipsec xform */
+	prm->ipsec_xform = ut_params->ipsec_xform;
+	prm->ipsec_xform.salt = (uint32_t)rte_rand();
+
+	/* setup tunnel related fields */
+	prm->tun.hdr_len = sizeof(ipv4_outer);
+	prm->tun.next_proto = IPPROTO_IPIP;
+	prm->tun.hdr = &ipv4_outer;
+
+	/* setup crypto section */
+	if (uparams.aead != 0) {
+		/* TODO: will need to fill out with other test cases */
+	} else {
+		if (uparams.auth == 0 && uparams.cipher == 0)
+			return TEST_FAILED;
+
+		auth_algo = find_match_auth_algo(uparams.auth_algo);
+		cipher_algo = find_match_cipher_algo(uparams.cipher_algo);
+
+		fill_crypto_xform(ut_params, auth_algo, cipher_algo);
+	}
+
+	prm->crypto_xform = ut_params->crypto_xforms;
+	return TEST_SUCCESS;
+}
+
+static int
+create_sa(enum rte_security_session_action_type action_type,
+		uint32_t replay_win_sz, uint64_t flags, uint32_t j)
+{
+	struct ipsec_testsuite_params *ts = &testsuite_params;
+	struct ipsec_unitest_params *ut = &unittest_params;
+	size_t sz;
+	int rc;
+
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+
+	rc = fill_ipsec_param(replay_win_sz, flags);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	/* create rte_ipsec_sa*/
+	sz = rte_ipsec_sa_size(&ut->sa_prm);
+	TEST_ASSERT(sz > 0, "rte_ipsec_sa_size() failed\n");
+
+	ut->ss[j].sa = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);
+	TEST_ASSERT_NOT_NULL(ut->ss[j].sa,
+		"failed to allocate memory for rte_ipsec_sa\n");
+
+	ut->ss[j].type = action_type;
+	rc = create_session(ut, &ts->qp_conf, ts->valid_devs,
+		ts->valid_dev_count, j);
+	if (rc != 0)
+		return TEST_FAILED;
+
+	rc = rte_ipsec_sa_init(ut->ss[j].sa, &ut->sa_prm, sz);
+	rc = (rc > 0 && (uint32_t)rc <= sz) ? 0 : -EINVAL;
+	if (rc == 0)
+		rc = rte_ipsec_session_prepare(&ut->ss[j]);
+
+	return rc;
+}
+
+static int
+crypto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+	k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_enqueue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+lksd_proto_ipsec(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, k, ng;
+	struct rte_ipsec_group grp[1];
+
+	/* call crypto prepare */
+	k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[0], ut_params->ibuf,
+		ut_params->cop, num_pkts);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_prepare fail\n");
+		return TEST_FAILED;
+	}
+
+	/* check crypto ops */
+	for (i = 0; i != num_pkts; i++) {
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->type,
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+			"%s: invalid crypto op type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->status,
+			RTE_CRYPTO_OP_STATUS_NOT_PROCESSED,
+			"%s: invalid crypto op status for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sess_type,
+			RTE_CRYPTO_OP_SECURITY_SESSION,
+			"%s: invalid crypto op sess_type for %u-th packet\n",
+			__func__, i);
+		TEST_ASSERT_EQUAL(ut_params->cop[i]->sym->m_src,
+			ut_params->ibuf[i],
+			"%s: invalid crypto op m_src for %u-th packet\n",
+			__func__, i);
+	}
+
+	/* update crypto ops, pretend all finished ok */
+	for (i = 0; i != num_pkts; i++)
+		ut_params->cop[i]->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, num_pkts);
+	if (ng != 1 ||
+		grp[0].m[0] != ut_params->obuf[0] ||
+		grp[0].cnt != num_pkts ||
+		grp[0].id.ptr != &ut_params->ss[0]) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail\n");
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	k = rte_ipsec_pkt_process(grp[0].id.ptr, grp[0].m, grp[0].cnt);
+	if (k != num_pkts) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+		return TEST_FAILED;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+crypto_ipsec_2sa(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+
+	uint32_t k, ng, i, r;
+
+	for (i = 0; i < BURST_SIZE; i++) {
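+		/* alternate packets between the two SAs */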
+		r = i % 2;
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[r],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+				ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+#define PKT_4	4
+#define PKT_12	12
+#define PKT_21	21
+
+static uint32_t
+crypto_ipsec_4grp(uint32_t pkt_num)
+{
+	uint32_t sa_ind;
+
+	/* group packets into 4 groups of different sizes, 2 per SA */
+	if (pkt_num < PKT_4)
+		sa_ind = 0;
+	else if (pkt_num < PKT_12)
+		sa_ind = 1;
+	else if (pkt_num < PKT_21)
+		sa_ind = 0;
+	else
+		sa_ind = 1;
+
+	return sa_ind;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_mbufs(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint32_t i, j;
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		for (i = 0, j = 0; i < PKT_4; i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 1) {
+		for (i = 0, j = PKT_4; i < (PKT_12 - PKT_4); i++, j++) {
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+		}
+	} else if (grp_ind == 2) {
+		for (i = 0, j =  PKT_12; i < (PKT_21 - PKT_12); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else if (grp_ind == 3) {
+		for (i = 0, j = PKT_21; i < (BURST_SIZE - PKT_21); i++, j++)
+			if (grp[grp_ind].m[i] != ut_params->obuf[j]) {
+				rc = TEST_FAILED;
+				break;
+			}
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static uint32_t
+crypto_ipsec_4grp_check_cnt(uint32_t grp_ind, struct rte_ipsec_group *grp)
+{
+	uint32_t rc = 0;
+
+	if (grp_ind == 0) {
+		if (grp[grp_ind].cnt != PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 1) {
+		if (grp[grp_ind].cnt != PKT_12 - PKT_4)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 2) {
+		if (grp[grp_ind].cnt != PKT_21 - PKT_12)
+			rc = TEST_FAILED;
+	} else if (grp_ind == 3) {
+		if (grp[grp_ind].cnt != BURST_SIZE - PKT_21)
+			rc = TEST_FAILED;
+	} else
+		rc = TEST_FAILED;
+
+	return rc;
+}
+
+static int
+crypto_ipsec_2sa_4grp(void)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_ipsec_group grp[BURST_SIZE];
+	uint32_t k, ng, i, j;
+	uint32_t rc = 0;
+
+	for (i = 0; i < BURST_SIZE; i++) {
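+		/* pick the SA index so the burst splits into 4 groups */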
+		j = crypto_ipsec_4grp(i);
+
+		/* call crypto prepare */
+		k = rte_ipsec_pkt_crypto_prepare(&ut_params->ss[j],
+				ut_params->ibuf + i, ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_crypto_prepare fail\n");
+			return TEST_FAILED;
+		}
+		k = rte_cryptodev_enqueue_burst(ts_params->valid_devs[0], 0,
+				ut_params->cop + i, 1);
+		if (k != 1) {
+			RTE_LOG(ERR, USER1,
+				"rte_cryptodev_enqueue_burst fail\n");
+			return TEST_FAILED;
+		}
+	}
+
+	k = rte_cryptodev_dequeue_burst(ts_params->valid_devs[0], 0,
+		ut_params->cop, BURST_SIZE);
+	if (k != BURST_SIZE) {
+		RTE_LOG(ERR, USER1, "rte_cryptodev_dequeue_burst fail\n");
+		return TEST_FAILED;
+	}
+
+	ng = rte_ipsec_pkt_crypto_group(
+		(const struct rte_crypto_op **)(uintptr_t)ut_params->cop,
+		ut_params->obuf, grp, BURST_SIZE);
+	if (ng != 4) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_pkt_crypto_group fail ng=%d\n",
+			ng);
+		return TEST_FAILED;
+	}
+
+	/* call crypto process */
+	for (i = 0; i < ng; i++) {
+		k = rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+		if (k != grp[i].cnt) {
+			RTE_LOG(ERR, USER1, "rte_ipsec_pkt_process fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_cnt(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_cnt fail\n");
+			return TEST_FAILED;
+		}
+		rc = crypto_ipsec_4grp_check_mbufs(i, grp);
+		if (rc != 0) {
+			RTE_LOG(ERR, USER1,
+				"crypto_ipsec_4grp_check_mbufs fail\n");
+			return TEST_FAILED;
+		}
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+test_ipsec_reorder_inb_pkt_burst(uint16_t num_pkts)
+{
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	struct rte_mbuf *ibuf_tmp[BURST_SIZE];
+	uint16_t j;
+
+	/* reorder packets and create gaps in sequence numbers */
+	static const uint32_t reorder[BURST_SIZE] = {
+			24, 25, 26, 27, 28, 29, 30, 31,
+			16, 17, 18, 19, 20, 21, 22, 23,
+			8, 9, 10, 11, 12, 13, 14, 15,
+			0, 1, 2, 3, 4, 5, 6, 7,
+	};
+
+	if (num_pkts != BURST_SIZE)
+		return;
+
+	for (j = 0; j != BURST_SIZE; j++)
+		ibuf_tmp[j] = ut_params->ibuf[reorder[j]];
+
+	memcpy(ut_params->ibuf, ibuf_tmp, sizeof(ut_params->ibuf));
+}
+
+static int
+test_ipsec_crypto_op_alloc(uint16_t num_pkts)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc = 0;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->cop[j] = rte_crypto_op_alloc(ts_params->cop_mpool,
+				RTE_CRYPTO_OP_TYPE_SYMMETRIC);
+		if (ut_params->cop[j] == NULL) {
+			RTE_LOG(ERR, USER1,
+				"Failed to allocate symmetric crypto op\n");
+			rc = TEST_FAILED;
+		}
+	}
+
+	return rc;
+}
+
+static void
+test_ipsec_dump_buffers(struct ipsec_unitest_params *ut_params, int i)
+{
+	uint16_t j = ut_params->pkt_index;
+
+	printf("\ntest config: num %d\n", i);
+	printf("	replay_win_sz %u\n", test_cfg[i].replay_win_sz);
+	printf("	esn %u\n", test_cfg[i].esn);
+	printf("	flags 0x%" PRIx64 "\n", test_cfg[i].flags);
+	printf("	pkt_sz %zu\n", test_cfg[i].pkt_sz);
+	printf("	num_pkts %u\n\n", test_cfg[i].num_pkts);
+
+	if (ut_params->ibuf[j]) {
+		printf("ibuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->ibuf[j],
+			ut_params->ibuf[j]->data_len);
+	}
+	if (ut_params->obuf[j]) {
+		printf("obuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->obuf[j],
+			ut_params->obuf[j]->data_len);
+	}
+	if (ut_params->testbuf[j]) {
+		printf("testbuf[%u] data:\n", j);
+		rte_pktmbuf_dump(stdout, ut_params->testbuf[j],
+			ut_params->testbuf[j]->data_len);
+	}
+}
+
+static void
+destroy_sa(uint32_t j)
+{
+	struct ipsec_unitest_params *ut = &unittest_params;
+
+	rte_ipsec_sa_fini(ut->ss[j].sa);
+	rte_free(ut->ss[j].sa);
+	rte_cryptodev_sym_session_free(ut->ss[j].crypto.ses);
+	memset(&ut->ss[j], 0, sizeof(ut->ss[j]));
+}
+
+static int
+crypto_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+		uint16_t num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(
+					ut_params, i, num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *testbuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		testbuf_data = rte_pktmbuf_mtod(ut_params->testbuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(testbuf_data, obuf_data,
+			ut_params->obuf[j]->pkt_len,
+			"test and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->testbuf[j]->data_len,
+			"obuf data_len is not equal to testbuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->pkt_len,
+			ut_params->testbuf[j]->pkt_len,
+			"obuf pkt_len is not equal to testbuf pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate input mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			/* packet with sequence number 0 is invalid */
+			ut_params->testbuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->testbuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_inb_burst_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	uint16_t num_pkts)
+{
+	void *ibuf_data;
+	void *obuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz,
+			INBOUND_SPI, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(
+			ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+		else {
+			/* Generate test mbuf data */
+			ut_params->obuf[j] = setup_test_string(
+				ts_params->mbuf_pool,
+				null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+inline_outb_burst_null_null_check(struct ipsec_unitest_params *ut_params,
+	uint16_t num_pkts)
+{
+	void *obuf_data;
+	void *ibuf_data;
+	uint16_t j;
+
+	for (j = 0; j < num_pkts && num_pkts <= BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the buffer data */
+		ibuf_data = rte_pktmbuf_mtod(ut_params->ibuf[j], void *);
+		obuf_data = rte_pktmbuf_mtod(ut_params->obuf[j], void *);
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(ibuf_data, obuf_data,
+			ut_params->ibuf[j]->data_len,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->data_len,
+			ut_params->obuf[j]->data_len,
+			"ibuf data_len is not equal to obuf data_len");
+		TEST_ASSERT_EQUAL(ut_params->ibuf[j]->pkt_len,
+			ut_params->obuf[j]->pkt_len,
+			"ibuf pkt_len is not equal to obuf pkt_len");
+
+		/* check mbuf ol_flags */
+		TEST_ASSERT(ut_params->ibuf[j]->ol_flags & PKT_TX_SEC_OFFLOAD,
+			"ibuf PKT_TX_SEC_OFFLOAD is not set");
+	}
+	return 0;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string_tunneled(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz,
+					OUTBOUND_SPI, j + 1);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_crypto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_crypto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int32_t rc;
+	uint32_t n;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_plain_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+
+		if (rc == 0) {
+			/* Generate test tunneled mbuf data for comparison */
+			ut_params->obuf[j] = setup_test_string(
+					ts_params->mbuf_pool,
+					null_plain_data, test_cfg[i].pkt_sz, 0);
+			if (ut_params->obuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == 0) {
+		n = rte_ipsec_pkt_process(&ut_params->ss[0], ut_params->ibuf,
+				num_pkts);
+		if (n == num_pkts)
+			rc = inline_outb_burst_null_null_check(ut_params,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1,
+				"rte_ipsec_pkt_process failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_inline_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = OUTBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_inline_proto_outb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		ut_params->ibuf[j] = setup_test_string(ts_params->mbuf_pool,
+			null_encrypted_data, test_cfg[i].pkt_sz, 0);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0) {
+		if (test_cfg[i].reorder_pkts)
+			test_ipsec_reorder_inb_pkt_burst(num_pkts);
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+	}
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = lksd_proto_ipsec(num_pkts);
+		if (rc == 0)
+			rc = crypto_inb_burst_null_null_check(ut_params, i,
+					num_pkts);
+		else {
+			RTE_LOG(ERR, USER1, "%s failed, cfg %d\n",
+				__func__, i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_inb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_lksd_proto_outb_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
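+	/*
+	 * note: the lookaside-proto outbound case reuses the inbound test
+	 * body below; with NULL algorithms only the SA direction differs
+	 */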
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_lksd_proto_inb_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+replay_inb_null_null_check(struct ipsec_unitest_params *ut_params, int i,
+	int num_pkts)
+{
+	uint16_t j;
+
+	for (j = 0; j < num_pkts; j++) {
+		/* compare the buffer data */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number inside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI,
+			test_cfg[i].replay_win_sz);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, 1);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
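+	/*
+	 * first packet: seq = replay_win_sz + 2 moves the replay window
+	 * forward, so the second packet below (seq 1) falls outside it
+	 */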
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI,
+		test_cfg[i].replay_win_sz + 2);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/* generate packet with seq number outside the replay window */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				if (test_cfg[i].esn == 0) {
+					RTE_LOG(ERR, USER1,
+						"packet is not outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+						i,
+						test_cfg[i].replay_win_sz + 2,
+						1);
+					rc = TEST_FAILED;
+				}
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is outside the replay window, cfg %d pkt0_seq %u pkt1_seq %u\n",
+					i, test_cfg[i].replay_win_sz + 2, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_outside_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_outside_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	int rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n", i);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 * generate packet with repeat seq number in the replay
+		 * window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		ut_params->ibuf[0] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+		if (ut_params->ibuf[0] == NULL)
+			rc = TEST_FAILED;
+		else
+			rc = test_ipsec_crypto_op_alloc(1);
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(1);
+			if (rc == 0) {
+				RTE_LOG(ERR, USER1,
+					"packet is not repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = TEST_FAILED;
+			} else {
+				RTE_LOG(ERR, USER1,
+					"packet is repeated in the replay window, cfg %d seq %u\n",
+					i, 1);
+				rc = 0;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_repeat_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_repeat_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	int rc;
+	int j;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa*/
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* Generate inbound mbuf data */
+	ut_params->ibuf[0] = setup_test_string_tunneled(ts_params->mbuf_pool,
+		null_encrypted_data, test_cfg[i].pkt_sz, INBOUND_SPI, 1);
+	if (ut_params->ibuf[0] == NULL)
+		rc = TEST_FAILED;
+	else
+		rc = test_ipsec_crypto_op_alloc(1);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec(1);
+		if (rc == 0)
+			rc = replay_inb_null_null_check(ut_params, i, 1);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+					i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if ((rc == 0) && (test_cfg[i].replay_win_sz != 0)) {
+		/*
+		 *  generate packet(s) with seq number(s) inside the
+		 *  replay window
+		 */
+		if (ut_params->ibuf[0]) {
+			rte_pktmbuf_free(ut_params->ibuf[0]);
+			ut_params->ibuf[0] = NULL;
+		}
+
+		for (j = 0; j < num_pkts && rc == 0; j++) {
+			/* packet with sequence number 1 already processed */
+			ut_params->ibuf[j] = setup_test_string_tunneled(
+				ts_params->mbuf_pool, null_encrypted_data,
+				test_cfg[i].pkt_sz, INBOUND_SPI, j + 2);
+			if (ut_params->ibuf[j] == NULL)
+				rc = TEST_FAILED;
+		}
+
+		if (rc == 0) {
+			if (test_cfg[i].reorder_pkts)
+				test_ipsec_reorder_inb_pkt_burst(num_pkts);
+			rc = test_ipsec_crypto_op_alloc(num_pkts);
+		}
+
+		if (rc == 0) {
+			/* call ipsec library api */
+			rc = crypto_ipsec(num_pkts);
+			if (rc == 0)
+				rc = replay_inb_null_null_check(
+						ut_params, i, num_pkts);
+			else {
+				RTE_LOG(ERR, USER1, "crypto_ipsec failed\n");
+				rc = TEST_FAILED;
+			}
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+
+	return rc;
+}
+
+static int
+test_ipsec_replay_inb_inside_burst_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_replay_inb_inside_burst_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+crypto_inb_burst_2sa_null_null_check(struct ipsec_unitest_params *ut_params,
+		int i)
+{
+	uint16_t j;
+
+	for (j = 0; j < BURST_SIZE; j++) {
+		ut_params->pkt_index = j;
+
+		/* compare the data buffers */
+		TEST_ASSERT_BUFFERS_ARE_EQUAL(null_plain_data,
+			rte_pktmbuf_mtod(ut_params->obuf[j], void *),
+			test_cfg[i].pkt_sz,
+			"input and output data does not match\n");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			ut_params->obuf[j]->pkt_len,
+			"data_len is not equal to pkt_len");
+		TEST_ASSERT_EQUAL(ut_params->obuf[j]->data_len,
+			test_cfg[i].pkt_sz,
+			"data_len is not equal to input data");
+	}
+
+	return 0;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, r;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		r = j % 2;
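+		/* interleave the two SAs: even j -> SPI, odd j -> SPI + 1 */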
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + r, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_null_null(i);
+	}
+
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null(int i)
+{
+	struct ipsec_testsuite_params *ts_params = &testsuite_params;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+	uint16_t num_pkts = test_cfg[i].num_pkts;
+	uint16_t j, k;
+	int rc = 0;
+
+	if (num_pkts != BURST_SIZE)
+		return rc;
+
+	uparams.auth = RTE_CRYPTO_SYM_XFORM_AUTH;
+	uparams.cipher = RTE_CRYPTO_SYM_XFORM_CIPHER;
+	strcpy(uparams.auth_algo, "null");
+	strcpy(uparams.cipher_algo, "null");
+
+	/* create rte_ipsec_sa */
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 0);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		return TEST_FAILED;
+	}
+
+	/* create second rte_ipsec_sa */
+	ut_params->ipsec_xform.spi = INBOUND_SPI + 1;
+	rc = create_sa(RTE_SECURITY_ACTION_TYPE_NONE,
+			test_cfg[i].replay_win_sz, test_cfg[i].flags, 1);
+	if (rc != 0) {
+		RTE_LOG(ERR, USER1, "rte_ipsec_sa_init failed, cfg %d\n",
+			i);
+		destroy_sa(0);
+		return TEST_FAILED;
+	}
+
+	/* Generate test mbuf data */
+	for (j = 0; j < num_pkts && rc == 0; j++) {
+		k = crypto_ipsec_4grp(j);
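+		/* select SA 0 or 1 so the burst forms four per-SA groups */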
+
+		/* packet with sequence number 0 is invalid */
+		ut_params->ibuf[j] = setup_test_string_tunneled(
+			ts_params->mbuf_pool, null_encrypted_data,
+			test_cfg[i].pkt_sz, INBOUND_SPI + k, j + 1);
+		if (ut_params->ibuf[j] == NULL)
+			rc = TEST_FAILED;
+	}
+
+	if (rc == 0)
+		rc = test_ipsec_crypto_op_alloc(num_pkts);
+
+	if (rc == 0) {
+		/* call ipsec library api */
+		rc = crypto_ipsec_2sa_4grp();
+		if (rc == 0)
+			rc = crypto_inb_burst_2sa_null_null_check(
+					ut_params, i);
+		else {
+			RTE_LOG(ERR, USER1, "crypto_ipsec failed, cfg %d\n",
+				i);
+			rc = TEST_FAILED;
+		}
+	}
+
+	if (rc == TEST_FAILED)
+		test_ipsec_dump_buffers(ut_params, i);
+
+	destroy_sa(0);
+	destroy_sa(1);
+	return rc;
+}
+
+static int
+test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper(void)
+{
+	int i;
+	int rc = 0;
+	struct ipsec_unitest_params *ut_params = &unittest_params;
+
+	ut_params->ipsec_xform.spi = INBOUND_SPI;
+	ut_params->ipsec_xform.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+	ut_params->ipsec_xform.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP;
+	ut_params->ipsec_xform.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL;
+	ut_params->ipsec_xform.tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4;
+
+	for (i = 0; i < num_cfg && rc == 0; i++) {
+		ut_params->ipsec_xform.options.esn = test_cfg[i].esn;
+		rc = test_ipsec_crypto_inb_burst_2sa_4grp_null_null(i);
+	}
+
+	return rc;
+}
+
+static struct unit_test_suite ipsec_testsuite  = {
+	.suite_name = "IPsec NULL Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_crypto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_inline_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_inb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_lksd_proto_outb_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_outside_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_repeat_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_replay_inb_inside_burst_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_null_null_wrapper),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_ipsec_crypto_inb_burst_2sa_4grp_null_null_wrapper),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_ipsec(void)
+{
+	return unit_test_suite_runner(&ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(ipsec_autotest, test_ipsec);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* [dpdk-dev] [PATCH v8 9/9] doc: add IPsec library guide
  2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
                                 ` (8 preceding siblings ...)
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 8/9] test/ipsec: introduce functional test Konstantin Ananyev
@ 2019-01-10 21:06               ` Konstantin Ananyev
  9 siblings, 0 replies; 194+ messages in thread
From: Konstantin Ananyev @ 2019-01-10 21:06 UTC (permalink / raw)
  To: dev
  Cc: akhil.goyal, pablo.de.lara.guarch, thomas, Konstantin Ananyev,
	Bernard Iremonger

Add IPsec library guide and update release notes.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 doc/guides/prog_guide/index.rst        |   1 +
 doc/guides/prog_guide/ipsec_lib.rst    | 168 +++++++++++++++++++++++++
 doc/guides/rel_notes/release_19_02.rst |  11 ++
 3 files changed, 180 insertions(+)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst

diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index ba8c1f6ad..6726b1e8d 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -54,6 +54,7 @@ Programmer's Guide
     vhost_lib
     metrics_lib
     bpf_lib
+    ipsec_lib
     source_org
     dev_kit_build_system
     dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
new file mode 100644
index 000000000..992fdf46b
--- /dev/null
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -0,0 +1,168 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright(c) 2018 Intel Corporation.
+
+IPsec Packet Processing Library
+===============================
+
+DPDK provides a library for IPsec data-path processing.
+The library utilizes the existing DPDK crypto-dev and
+security API to provide the application with a transparent and
+high-performance IPsec packet processing API.
+The library concentrates on data-path protocol processing
+(ESP and AH); IKE protocol(s) implementation is out of scope
+for this library.
+
+SA level API
+------------
+
+This API operates on the IPsec Security Association (SA) level.
+It provides functionality that allows the user, for a given SA, to process
+inbound and outbound IPsec packets.
+
+To be more specific:
+
+*  for inbound ESP/AH packets perform decryption, authentication, integrity checking, remove ESP/AH related headers
+*  for outbound packets perform payload encryption, attach ICV, update/add IP headers, add ESP/AH headers/trailers,
+   setup related mbuf fields (ol_flags, tx_offloads, etc.)
+*  initialize/un-initialize given SA based on user provided parameters.
+
+The SA level API is based on top of crypto-dev/security API and relies on
+them to perform actual cipher and integrity checking.
+
+Due to the nature of the crypto-dev API (enqueue/dequeue model) the library
+introduces an asynchronous API for IPsec packets destined to be processed by
+the crypto-device.
+
+The expected API call sequence for data-path processing would be:
+
+.. code-block:: c
+
+    /* enqueue for processing by crypto-device */
+    rte_ipsec_pkt_crypto_prepare(...);
+    rte_cryptodev_enqueue_burst(...);
+    /* dequeue from crypto-device and do final processing (if any) */
+    rte_cryptodev_dequeue_burst(...);
+    rte_ipsec_pkt_crypto_group(...); /* optional */
+    rte_ipsec_pkt_process(...);
+
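+The optional grouping step sorts the dequeued crypto-ops back into
+per-session bursts. An illustrative sketch (``dev_id``, ``qp_id`` and
+``BURST_SZ`` are application-side assumptions, error handling omitted):
+
+.. code-block:: c
+
+    struct rte_crypto_op *cop[BURST_SZ];
+    struct rte_mbuf *mb[BURST_SZ];
+    struct rte_ipsec_group grp[BURST_SZ];
+    uint16_t n, g, i;
+
+    /* collect completed operations and group them per session */
+    n = rte_cryptodev_dequeue_burst(dev_id, qp_id, cop, BURST_SZ);
+    g = rte_ipsec_pkt_crypto_group((const struct rte_crypto_op **)cop,
+            mb, grp, n);
+    for (i = 0; i != g; i++)
+        rte_ipsec_pkt_process(grp[i].id.ptr, grp[i].m, grp[i].cnt);
+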
+For packets destined for inline processing no extra overhead
+is required and the synchronous API call ``rte_ipsec_pkt_process()``
+is sufficient for that case.
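+
+A minimal sketch of the inline case (``ss`` is an already prepared
+session, ``mb`` holds ``num`` packets of that session, and
+``free_pkts()`` is a hypothetical application helper):
+
+.. code-block:: c
+
+    uint16_t k;
+
+    k = rte_ipsec_pkt_process(&ss, mb, num);
+    if (k != num)
+        /* packets that failed processing are left at the tail of mb[] */
+        free_pkts(mb + k, num - k);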
+
+.. note::
+
+    For more details about the IPsec API, please refer to the *DPDK API Reference*.
+
+The current implementation supports all four currently defined
+rte_security types:
+
+RTE_SECURITY_ACTION_TYPE_NONE
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - check SQN
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that the integrity check and decryption performed by the
+    crypto device completed successfully
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
+
+RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - verify that the integrity check and decryption performed by the
+    *rte_security* device completed successfully
+  - check SQN
+  - check padding data
+  - remove outer IP header (tunnel mode) / update IP header (transport mode)
+  - remove ESP header and trailer, padding, IV and ICV data
+  - update SA replay window
+
+* for outbound packets:
+
+  - generate SQN and IV
+  - add outer IP header (tunnel mode) / update IP header (transport mode)
+  - add ESP header and trailer, padding and IV data
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline-crypto processing has to be performed by HW on this packet
+  - invoke the *rte_security* device specific *set_pkt_metadata()* to associate
+    security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - verify that the integrity check and decryption performed by the
+    *rte_security* device completed successfully
+
+* for outbound packets:
+
+  - update *ol_flags* inside *struct rte_mbuf* to indicate that
+    inline-crypto processing has to be performed by HW on this packet
+  - invoke the *rte_security* device specific *set_pkt_metadata()* to associate
+    security device specific data with the packet
+
+RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this mode the library functions perform:
+
+* for inbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that the integrity check and decryption performed by the
+    crypto device completed successfully
+
+* for outbound packets:
+
+  - prepare *rte_crypto_op* structure for each input packet
+  - verify that crypto device operations (encryption, ICV generation)
+    were completed successfully
+
+To accommodate future custom implementations, a function pointer
+model is used for both the *crypto_prepare* and *process* implementations.
+
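+As an illustration only (a conceptual sketch, not the exact library
+definition), the per-session dispatch can be pictured as a small table
+of function pointers resolved once at session preparation time:
+
+.. code-block:: c
+
+    struct pkt_func {
+        /* fill crypto-ops for a burst of this session's packets */
+        uint16_t (*prepare)(const struct rte_ipsec_session *ss,
+                struct rte_mbuf *mb[],
+                struct rte_crypto_op *cop[],
+                uint16_t num);
+        /* finalize a burst of this session's packets */
+        uint16_t (*process)(const struct rte_ipsec_session *ss,
+                struct rte_mbuf *mb[],
+                uint16_t num);
+    };
+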
+
+Supported features
+------------------
+
+*  ESP protocol tunnel mode both IPv4/IPv6.
+
+*  ESP protocol transport mode both IPv4/IPv6.
+
+*  ESN and replay window.
+
+*  algorithms: AES-CBC, AES-GCM, HMAC-SHA1, NULL.
+
+
+Limitations
+-----------
+
+The following features are not properly supported in the current version:
+
+*  ESP transport mode for IPv6 packets with extension headers.
+*  Multi-segment packets.
+*  Updates of the fields in inner IP header for tunnel mode
+   (as described in RFC 4301, section 5.1.2).
+*  Hard/soft limit for SA lifetime (time interval/byte count).
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index 1aebd27c7..fe231152f 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -122,6 +122,17 @@ New Features
   The checks include the session's user data read/write check and the
   session private data referencing status check while freeing a session.
 
+* **Added IPsec Library.**
+
+  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
+  transport support for IPv4 and IPv6 packets.
+
+  At present the library supports only the AES-CBC cipher, AES-CBC with
+  HMAC-SHA1 algorithm chaining, and the AES-GCM and NULL algorithms. More
+  algorithms are planned for future releases.
+
+  See :doc:`../prog_guide/ipsec_lib` for more information.
+
 
 Removed Items
 -------------
-- 
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v8 0/9] ipsec: new library for IPsec data-path processing
  2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2019-01-10 23:59                 ` De Lara Guarch, Pablo
  0 siblings, 0 replies; 194+ messages in thread
From: De Lara Guarch, Pablo @ 2019-01-10 23:59 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev; +Cc: akhil.goyal, thomas



> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, January 10, 2019 9:06 PM
> To: dev@dpdk.org
> Cc: akhil.goyal@nxp.com; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; thomas@monjalon.net; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>
> Subject: [PATCH v8 0/9] ipsec: new library for IPsec data-path processing
> 
> v7 -> v8
> - update release notes with new version for librte_security
> - rebase on top of crypto-next

Series applied to dpdk-next-crypto.

Thanks,
Pablo

^ permalink raw reply	[flat|nested] 194+ messages in thread

* Re: [dpdk-dev] [PATCH v6 00/10] ipsec: new library for IPsec data-path processing
  2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
@ 2019-01-11  1:09             ` Xu, Yanjie
  0 siblings, 0 replies; 194+ messages in thread
From: Xu, Yanjie @ 2019-01-11  1:09 UTC (permalink / raw)
  To: Ananyev, Konstantin, dev, dev; +Cc: akhil.goyal

The latest versions of the patch series were tested by Yanjie Xu; they work for the crypto and inline IPsec cases.

-----Original Message-----
From: Ananyev, Konstantin 
Sent: Friday, January 4, 2019 4:16 AM
To: dev@dpdk.org; dev@dpdk.org
Cc: akhil.goyal@nxp.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>
Subject: [PATCH v6 00/10] ipsec: new library for IPsec data-path processing

v5 -> v6
 - Fix issues reported by Akhil:
     rte_ipsec_session_prepare() fails for lookaside-proto

v4 -> v5
 - Fix issue with SQN overflows
 - Address Akhil comments:
     documentation update
     spell checks, spacing etc.
     fix input crypto_xform check/process
     test cases for lookaside and inline proto

v3 -> v4
 - Changes to address Declan's comments
 - Update docs

v2 -> v3
 - Several fixes for IPv6 support
 - Extra checks for input parameters in public API functions

v1 -> v2
 - Changes to get into account l2_len for outbound transport packets
   (Qi comments)
 - Several bug fixes
 - Some code restructured
 - Update MAINTAINERS file

RFCv2 -> v1
 - Changes per Jerin comments
 - Implement transport mode
 - Several bug fixes
 - UT largely reworked and extended

This patch introduces a new library within DPDK: librte_ipsec.
The aim is to provide a DPDK-native high performance library for IPsec data-path processing.
The library is supposed to utilize the existing DPDK crypto-dev and security API to provide the application with a transparent IPsec processing API.
The library concentrates on data-path protocol processing (ESP and AH); IKE protocol(s) implementation is out of scope for that library.
Current patch introduces SA-level API.

SA (low) level API
==================

API described below operates on SA level.
It provides functionality that allows the user, for a given SA, to process inbound and outbound IPsec packets.
To be more specific:
- for inbound ESP/AH packets perform decryption, authentication,
  integrity checking, remove ESP/AH related headers
- for outbound packets perform payload encryption, attach ICV,
  update/add IP headers, add ESP/AH headers/trailers,
  setup related mbuf fields (ol_flags, tx_offloads, etc.).
- initialize/un-initialize given SA based on user provided parameters.

The following functionality:
  - match inbound/outbound packets to particular SA
  - manage crypto/security devices
  - provide SAD/SPD related functionality
  - determine what crypto/security device has to be used
    for given packet(s)
is out of scope for SA-level API.

SA-level API is based on top of crypto-dev/security API and relies on them to perform actual cipher and integrity checking.
To make it easy to map crypto/security sessions to the related IPsec SA, an opaque userdata field was added into the rte_cryptodev_sym_session and rte_security_session structures.
That implies an ABI change for both librte_cryptodev and librte_security.

Due to the nature of the crypto-dev API (enqueue/dequeue model) we use an asynchronous API for IPsec packets destined to be processed by a crypto-device.
Expected API call sequence would be:
  /* enqueue for processing by crypto-device */
  rte_ipsec_pkt_crypto_prepare(...);
  rte_cryptodev_enqueue_burst(...);
  /* dequeue from crypto-device and do final processing (if any) */
  rte_cryptodev_dequeue_burst(...);
  rte_ipsec_pkt_crypto_group(...); /* optional */
  rte_ipsec_pkt_process(...);

Though for packets destined for inline processing no extra overhead is required and the synchronous API call rte_ipsec_pkt_process() is sufficient for that case.

The current implementation supports all four currently defined rte_security types.
To accommodate future custom implementations, a function pointer model is used for both the *crypto_prepare* and *process* implementations.

Konstantin Ananyev (10):
  cryptodev: add opaque userdata pointer into crypto sym session
  security: add opaque userdata pointer into security session
  net: add ESP trailer structure definition
  lib: introduce ipsec library
  ipsec: add SA data-path API
  ipsec: implement SA data-path API
  ipsec: rework SA replay window/SQN for MT environment
  ipsec: helper functions to group completed crypto-ops
  test/ipsec: introduce functional test
  doc: add IPsec library guide

 MAINTAINERS                            |    8 +-
 config/common_base                     |    5 +
 doc/guides/prog_guide/index.rst        |    1 +
 doc/guides/prog_guide/ipsec_lib.rst    |  168 ++
 doc/guides/rel_notes/release_19_02.rst |   11 +
 lib/Makefile                           |    2 +
 lib/librte_cryptodev/rte_cryptodev.h   |    2 +
 lib/librte_ipsec/Makefile              |   27 +
 lib/librte_ipsec/crypto.h              |  123 ++
 lib/librte_ipsec/iph.h                 |   84 +
 lib/librte_ipsec/ipsec_sqn.h           |  343 ++++
 lib/librte_ipsec/meson.build           |   10 +
 lib/librte_ipsec/pad.h                 |   45 +
 lib/librte_ipsec/rte_ipsec.h           |  154 ++
 lib/librte_ipsec/rte_ipsec_group.h     |  151 ++
 lib/librte_ipsec/rte_ipsec_sa.h        |  174 ++
 lib/librte_ipsec/rte_ipsec_version.map |   15 +
 lib/librte_ipsec/sa.c                  | 1527 ++++++++++++++
 lib/librte_ipsec/sa.h                  |  106 +
 lib/librte_ipsec/ses.c                 |   52 +
 lib/librte_net/rte_esp.h               |   10 +-
 lib/librte_security/rte_security.h     |    2 +
 lib/meson.build                        |    2 +
 mk/rte.app.mk                          |    2 +
 test/test/Makefile                     |    3 +
 test/test/meson.build                  |    3 +
 test/test/test_ipsec.c                 | 2555 ++++++++++++++++++++++++
 27 files changed, 5583 insertions(+), 2 deletions(-)
 create mode 100644 doc/guides/prog_guide/ipsec_lib.rst
 create mode 100644 lib/librte_ipsec/Makefile
 create mode 100644 lib/librte_ipsec/crypto.h
 create mode 100644 lib/librte_ipsec/iph.h
 create mode 100644 lib/librte_ipsec/ipsec_sqn.h
 create mode 100644 lib/librte_ipsec/meson.build
 create mode 100644 lib/librte_ipsec/pad.h
 create mode 100644 lib/librte_ipsec/rte_ipsec.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_group.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_sa.h
 create mode 100644 lib/librte_ipsec/rte_ipsec_version.map
 create mode 100644 lib/librte_ipsec/sa.c
 create mode 100644 lib/librte_ipsec/sa.h
 create mode 100644 lib/librte_ipsec/ses.c
 create mode 100644 test/test/test_ipsec.c

--
2.17.1

^ permalink raw reply	[flat|nested] 194+ messages in thread

end of thread, other threads:[~2019-01-11  1:09 UTC | newest]

Thread overview: 194+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-08-24 16:53 [dpdk-dev] [RFC] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2018-09-03 12:41 ` Joseph, Anoob
2018-09-03 18:21   ` Ananyev, Konstantin
2018-09-05 14:39     ` Joseph, Anoob
     [not found]       ` <2601191342CEEE43887BDE71AB977258EA954BAD@irsmsx105.ger.corp.intel.com>
2018-09-12 18:09         ` Ananyev, Konstantin
2018-09-15 17:06           ` Joseph, Anoob
2018-09-16 10:56             ` Jerin Jacob
2018-09-17 18:12               ` Ananyev, Konstantin
2018-09-18 12:42                 ` Ananyev, Konstantin
2018-09-20 14:26                   ` Akhil Goyal
2018-09-24 10:51                     ` Ananyev, Konstantin
2018-09-25  7:48                       ` Akhil Goyal
2018-09-30 21:00                         ` Ananyev, Konstantin
2018-10-01 12:49                           ` Akhil Goyal
2018-10-02 23:24                             ` Ananyev, Konstantin
2018-09-18 17:54                 ` Jerin Jacob
2018-09-24  8:45                   ` Ananyev, Konstantin
2018-09-26 18:02                     ` Jerin Jacob
2018-10-02 23:56                       ` Ananyev, Konstantin
2018-10-03  9:37                         ` Jerin Jacob
2018-10-09 18:24                           ` Ananyev, Konstantin
2018-09-17 10:36             ` Ananyev, Konstantin
2018-09-17 14:41               ` Joseph, Anoob
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 0/9] " Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 3/9] net: add ESP trailer structure definition Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 4/9] lib: introduce ipsec library Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 5/9] ipsec: add SA data-path API Konstantin Ananyev
2018-10-18 17:37   ` Jerin Jacob
2018-10-21 22:01     ` Ananyev, Konstantin
2018-10-24 12:03       ` Jerin Jacob
2018-10-28 20:37         ` Ananyev, Konstantin
2018-10-29 10:19           ` Jerin Jacob
2018-10-30 13:53             ` Ananyev, Konstantin
2018-10-31  6:37               ` Jerin Jacob
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 6/9] ipsec: implement " Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2018-10-09 18:23 ` [dpdk-dev] [RFC v2 9/9] test/ipsec: introduce functional test Konstantin Ananyev
2018-11-15 23:53 ` [dpdk-dev] [PATCH 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2018-11-15 23:53 ` [dpdk-dev] [PATCH 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
2018-11-16 10:23   ` Mohammad Abdul Awal
2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
2018-12-04 13:13     ` Mohammad Abdul Awal
2018-12-04 15:32       ` Trahe, Fiona
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 1/9] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
2018-12-11 17:24       ` Doherty, Declan
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 01/10] " Konstantin Ananyev
2018-12-19  9:26         ` Akhil Goyal
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2019-01-11  1:09             ` Xu, Yanjie
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
2019-01-04  0:25             ` Stephen Hemminger
2019-01-04  9:29               ` Ananyev, Konstantin
2019-01-09 23:41                 ` Thomas Monjalon
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2019-01-10 14:25               ` Thomas Monjalon
2019-01-10 14:40                 ` De Lara Guarch, Pablo
2019-01-10 14:52                 ` Ananyev, Konstantin
2019-01-10 14:54                   ` Thomas Monjalon
2019-01-10 14:58                     ` Ananyev, Konstantin
2019-01-10 15:00                       ` Akhil Goyal
2019-01-10 15:09                         ` Akhil Goyal
2019-01-10 14:51               ` Akhil Goyal
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 01/10] cryptodev: add opaque userdata pointer into crypto sym session Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 0/9] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2019-01-10 23:59                 ` De Lara Guarch, Pablo
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 1/9] security: add opaque userdata pointer into security session Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 2/9] net: add ESP trailer structure definition Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 3/9] lib: introduce ipsec library Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 4/9] ipsec: add SA data-path API Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 5/9] ipsec: implement " Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 6/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 7/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 8/9] test/ipsec: introduce functional test Konstantin Ananyev
2019-01-10 21:06               ` [dpdk-dev] [PATCH v8 9/9] doc: add IPsec library guide Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 03/10] net: add ESP trailer structure definition Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 04/10] lib: introduce ipsec library Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 05/10] ipsec: add SA data-path API Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 06/10] ipsec: implement " Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 09/10] test/ipsec: introduce functional test Konstantin Ananyev
2019-01-10 14:20             ` [dpdk-dev] [PATCH v7 10/10] doc: add IPsec library guide Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 03/10] net: add ESP trailer structure definition Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 04/10] lib: introduce ipsec library Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 05/10] ipsec: add SA data-path API Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 06/10] ipsec: implement " Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 09/10] test/ipsec: introduce functional test Konstantin Ananyev
2019-01-03 20:16           ` [dpdk-dev] [PATCH v6 10/10] doc: add IPsec library guide Konstantin Ananyev
2019-01-10  8:35             ` Thomas Monjalon
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 03/10] net: add ESP trailer structure definition Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 04/10] lib: introduce ipsec library Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 05/10] ipsec: add SA data-path API Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 06/10] ipsec: implement " Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 09/10] test/ipsec: introduce functional test Konstantin Ananyev
2018-12-28 15:17         ` [dpdk-dev] [PATCH v5 10/10] doc: add IPsec library guide Konstantin Ananyev
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 02/10] security: add opaque userdata pointer into security session Konstantin Ananyev
2018-12-19  9:26         ` Akhil Goyal
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 03/10] net: add ESP trailer structure definition Konstantin Ananyev
2018-12-19  9:32         ` Akhil Goyal
2018-12-27 10:13           ` Olivier Matz
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 04/10] lib: introduce ipsec library Konstantin Ananyev
2018-12-19 12:08         ` Akhil Goyal
2018-12-19 12:39           ` Thomas Monjalon
2018-12-20 14:06           ` Ananyev, Konstantin
2018-12-20 14:14             ` Thomas Monjalon
2018-12-20 14:26               ` Ananyev, Konstantin
2018-12-20 18:17             ` Ananyev, Konstantin
2018-12-21 11:57               ` Akhil Goyal
2018-12-21 11:53             ` Akhil Goyal
2018-12-21 12:41               ` Ananyev, Konstantin
2018-12-21 12:54                 ` Ananyev, Konstantin
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 05/10] ipsec: add SA data-path API Konstantin Ananyev
2018-12-19 13:04         ` Akhil Goyal
2018-12-20 10:17           ` Ananyev, Konstantin
2018-12-21 12:14             ` Akhil Goyal
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 06/10] ipsec: implement " Konstantin Ananyev
2018-12-19 15:32         ` Akhil Goyal
2018-12-20 12:56           ` Ananyev, Konstantin
2018-12-21 12:36             ` Akhil Goyal
2018-12-21 14:27               ` Ananyev, Konstantin
2018-12-21 14:39                 ` Thomas Monjalon
2018-12-21 14:51                 ` Akhil Goyal
2018-12-21 15:16                   ` Ananyev, Konstantin
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 07/10] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 08/10] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2018-12-19 15:46         ` Akhil Goyal
2018-12-20 13:00           ` Ananyev, Konstantin
2018-12-21 12:37             ` Akhil Goyal
2018-12-14 16:23       ` [dpdk-dev] [PATCH v4 09/10] test/ipsec: introduce functional test Konstantin Ananyev
2018-12-19 15:53         ` Akhil Goyal
2018-12-20 13:03           ` Ananyev, Konstantin
2018-12-21 12:41             ` Akhil Goyal
2018-12-14 16:27       ` [dpdk-dev] [PATCH v4 10/10] doc: add IPsec library guide Konstantin Ananyev
2018-12-19  3:46         ` Thomas Monjalon
2018-12-19 16:01         ` Akhil Goyal
2018-12-20 13:06           ` Ananyev, Konstantin
2018-12-21 12:58             ` Akhil Goyal
2018-12-14 16:29       ` [dpdk-dev] [PATCH v4 00/10] ipsec: new library for IPsec data-path processing Konstantin Ananyev
2018-12-21 13:32         ` Akhil Goyal
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
2018-12-11 17:25       ` Doherty, Declan
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 3/9] net: add ESP trailer structure definition Konstantin Ananyev
2018-12-11 17:25       ` Doherty, Declan
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 4/9] lib: introduce ipsec library Konstantin Ananyev
2018-12-11 17:25       ` Doherty, Declan
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 5/9] ipsec: add SA data-path API Konstantin Ananyev
2018-12-11 17:25       ` Doherty, Declan
2018-12-12  7:37         ` Ananyev, Konstantin
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 6/9] ipsec: implement " Konstantin Ananyev
2018-12-12 17:47       ` Doherty, Declan
2018-12-13 11:21         ` Ananyev, Konstantin
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2018-12-13 12:14       ` Doherty, Declan
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2018-12-13 12:14       ` Doherty, Declan
2018-12-06 15:38     ` [dpdk-dev] [PATCH v3 9/9] test/ipsec: introduce functional test Konstantin Ananyev
2018-12-13 12:54       ` Doherty, Declan
2018-11-30 16:45   ` [dpdk-dev] [PATCH v2 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
2018-12-04 13:13     ` Mohammad Abdul Awal
2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 3/9] net: add ESP trailer structure definition Konstantin Ananyev
2018-12-04 13:12     ` Mohammad Abdul Awal
2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 4/9] lib: introduce ipsec library Konstantin Ananyev
2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 5/9] ipsec: add SA data-path API Konstantin Ananyev
2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 6/9] ipsec: implement " Konstantin Ananyev
2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2018-11-30 16:46   ` [dpdk-dev] [PATCH v2 9/9] test/ipsec: introduce functional test Konstantin Ananyev
2018-11-15 23:53 ` [dpdk-dev] [PATCH 2/9] security: add opaque userdata pointer into security session Konstantin Ananyev
2018-11-16 10:24   ` Mohammad Abdul Awal
2018-11-15 23:53 ` [dpdk-dev] [PATCH 3/9] net: add ESP trailer structure definition Konstantin Ananyev
2018-11-16 10:22   ` Mohammad Abdul Awal
2018-11-15 23:53 ` [dpdk-dev] [PATCH 4/9] lib: introduce ipsec library Konstantin Ananyev
2018-11-15 23:53 ` [dpdk-dev] [PATCH 5/9] ipsec: add SA data-path API Konstantin Ananyev
2018-11-15 23:53 ` [dpdk-dev] [PATCH 6/9] ipsec: implement " Konstantin Ananyev
2018-11-20  1:03   ` Zhang, Qi Z
2018-11-20  9:44     ` Ananyev, Konstantin
2018-11-20 10:02       ` Ananyev, Konstantin
2018-11-15 23:53 ` [dpdk-dev] [PATCH 7/9] ipsec: rework SA replay window/SQN for MT environment Konstantin Ananyev
2018-11-15 23:53 ` [dpdk-dev] [PATCH 8/9] ipsec: helper functions to group completed crypto-ops Konstantin Ananyev
2018-11-15 23:53 ` [dpdk-dev] [PATCH 9/9] test/ipsec: introduce functional test Konstantin Ananyev
